text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
MOD_INC_USE_COUNT, MOD_DEC_USE_COUNT − Support reference counting of modules #include <linux/module.h> #define MOD_INC_USE_COUNT #define MOD_DEC_USE_COUNT These macros support reference counting of modules during their lifetime. Each time MOD_INC_USE_COUNT(9) is invoked, the kernel increases the number of counted references to the module. The MOD_DEC_USE_COUNT(9) decreases the reference count. The reference count is used by the kernel to know when a module is no longer in use by the device or application software. A MOD_INC_USE_COUNT(9) is generally placed in the open entry for a driver, and a MOD_DEC_USE_COUNT(9) in the release for the driver, to count the number of times the device has been opened, and thus prevent the module being unloaded when in use. The device driver may increment the use count other times, such as when memory is mapped or the module must remain loaded for external events. If the device driver is not compiled as a module (MODULE is not defined) then the macro reduces to no behavior. These macros take no parameters and return no result. Linux 1.0+ /usr/include/linux/module.h Stephen Williams <steve AT icarus DOT com>
http://man.sourcentral.org/MGA3/9+MOD_DEC_USE_COUNT
CC-MAIN-2018-51
refinedweb
187
53.61
Hi everyone, can somebody help me with my problem. Why I have so low fps on openMV H7? forForum.jpg - Google Drive Please post your code inline on the forums. import sensor, image, time sensor.reset() sensor.set_pixformat(sensor.RGB565) sensor.set_framesize(sensor.QVGA) sensor.skip_frames(time = 2000) clock = time.clock() while(True): clock.tick() img = sensor.snapshot() print(clock.fps()) With this code I have only 18 fps. Maybe it’s overheat because I’ve noticed that when I’ve just started the amount of fps was pretty high. It’s probably the scene. The camera will adjust the exposure on the camera to compensate for the lighting. Try adding more light. It’s probably the scene. The camera will adjust the exposure on the camera to compensate for the lighting. Try adding more light. Hey, I just tried your code on my M7 and I get consistent 18 FPS when I cover the lenses due to the change in exposition time, but I get it up to 60 in a well illuminated scene., so I agree with @kwagyeman, it’s definitely the scene, unless there’s something wrong with your hardware, of course. Thanks, I’ll try to add more light. You can also just force the exposure to be low via sensor.set_auto_exposure()
https://forums.openmv.io/t/fps-problem/1820
CC-MAIN-2021-17
refinedweb
216
69.68
Delaying a function call in Flash AS3. In a number of Flash projects we've done at JMX2 (shameless plug), I've run into the situation where I simply wanted to delay the actual execution of a function. Sometimes it was to tweak an animation so its pacing felt right. Often times I wanted to delay a sound clip by a few beats so that it gave the proper timing to the scene. After writing the same code a number of times, I decided to encapsulate that whole process into a class called delayedFunctionCall. It's not the most elegant name perhaps, but it's descriptive. I've found myself using on nearly every project since I wrote it. Maybe you'll find it useful too. What is delayedFunctionCall? This ActionScript 3 (aka AS3) class is pretty simple. You pass it a function and a number of milliseconds to wait for that function to be executed. How to use delayedFunctionCall First you'll need to download the package. Once you've got it, place it in your Flash project's class path in the com/jmx2/ directory. In your ActionScript file or Flash movie, import the package. Now you've got the housekeeping out of the way, to delay a function, create a new delayedFunctionCall and pass in the name of the function you want to delay as the first parameter. The second parameter is the number of milliseconds you want to wait until the function should execute.Below is an example of how you'd use it. import com.jmx2.delayedFunctionCall; // the time vaule is in milliseconds, 1000 milliseconds = 1 second new delayedFunctionCall(myFunctionToStartLater, 3500); function myFunctionToStartLater():void { trace("This function executed 3 and a half seconds after it was called by using the delayedFuctionCall function."); }Pretty simple. Get the code Need to delay some functions of your own? Don't delay... (Sorry for that!) just click here to download your own copy of delayedFunctionCall. Use it in peace and harmony. Wow, this is great, I’m just starting with AS3 due to a flash game I have to make (yes I know is kind of late to learn AS3) but this saved my life, I’m trying to make a Soccer penalty shootout game and I´m having a really bad time since I’m just a beginner, unfortunately is really hard to find tutorials on how to do a game like that. Thanks a lot for sharing such amazing piece of code. Best, Frank You saved my life :D Any reason you don’t use the ActionScript setTimeOut function or Timer object? No, Dave. This is just a shortcut I use. Instead of delayedFunctionCall (myFunctionToStartLater, 3500); I think you can use the native:- setTimeOut (myFunctionToStartLater, 3500); It’s the same to use (in fact less to type) and you don’t need to import any additional .as libraries Gary, Absolutely right. That’s how I do it now too. :-) Just make sure your “o” is not capitalized in the word “setTimeout”. setTimeout (myFunctionToStartLater, 3500); function myFunctionToStartLater() { trace("Running!"); } Simple and useful. Thanks!
https://supergeekery.com/geekblog/comments/delaying_a_function_call_in_flash_as3
CC-MAIN-2017-13
refinedweb
515
64.51
From: Fernando Cacciola (fernando_cacciola_at_[hidden]) Date: 2003-08-14 09:09:13 Peter Dimov <pdimov_at_[hidden]> wrote in message news:002d01c36263$dc96a390$1d00a8c0_at_pdimov2... >. > There's something I don't understand. 1.30.0, as you said, doesn't has the using declaration at function scope, BUT is has it at namespace scope. See revision 1.9 or prior. The idea was that borken GCC don't like it at function scope but do at namespace scope.. I don't understand the problem with the original code (as of revision 1.19), and I don't understand why 1.30.0 fails. I do understand why HEAD fails, by my 'fix' would be to revert Jens fix, but if I were right, 1.30.0 wouldn't fail as it does. Fernando Cacciola Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/08/51473.php
CC-MAIN-2020-16
refinedweb
154
70.9
0 (Just to preface this, I am a Python noob, so parts may be stupid/less obvious to me) I am trying to compile a Python script to exe, with the following setup.py file: from cx_Freeze import setup, Executable includes = ["re", "PyVMF"] setup( name="Wall Tile Randomizer", version="1.2", description="Wall Tile Randomizer", executables=[Executable("WallRandomizer.py")], ) The first time I ran this, I did not have the includes = ["re", "PyVMF"] line. I ran it, and got the "ImportError: no module named 're'" error when I ran the compiled exe. As suggested elsewhere, I added that line (PyVMF is another module used) I am still, however, getting the same error. What am I doing wrong?
https://www.daniweb.com/programming/software-development/threads/446704/cx-freeze-importerror-no-module-named-re
CC-MAIN-2018-13
refinedweb
117
64.3
Evol - Evolution search optimisation local EV = require 'Math.Evol' xb, sm, fb, lf = evol(xb, sm, function, constrain, tm) -- or xb, sm = select_evol(xb, sm, choose_best, constrain) -- not yet implemented: -- new_text = text_evol(text, choose_best_text, nchoices );. Where the arguments are: xb the initial values of variables, sm the initial values of step sizes, choose_best the function allowing the user to select the best, constrain a function constraining the values, choices. evol minimises an objective funtion; that function accepts a list of values and returns a numerical scalar result. For example, local function minimise(x) -- objective function, to be minimised local sum = 0.0 for k,v in ipairs(x) do sum = sum + v*v; end -- sigma x^2 return sum } xbref, smref, fb, lf = evol (xb, sm, minimise) You may also supply a subroutine constrain(x) which forces the variables to have acceptable values. If you do not wish to constrain the values, just pass 0 instead. constrain(x) should return the list of the acceptable values. For example, local function constrain(x) -- force values into acceptable range if x[1] > 1.0 then x[1]=1.0 -- it's a probability elseif x[1] < 0.0 then x[1]=0.0 end local cost = 3.45*x[2] + 4.56*x[3] + 5.67*x[4] if cost > 1000.0 than -- enforce 1000 dollars maximum cost x[1] = x[2] * 1000/cost x[2] = x[3] * 1000/cost x[3] = x[4] * 1000/cost end if x[5] < 0.0 then x[5] = 0.0; end -- it's a population density x[6] = math.floor(x[6] + 0.5) -- it's an integer return x end xbref, smref, fb, lf = EV.evol(xb, sm, minimise, constrain) This function whose reference is passed to select_evol must accept a list of arrays; the first must be the current array of values, and the others are alternative arrays of values. The user should then judge which of the arrays is best, and choose_best must then return (preference, continue) where $preference is the index of the preferred array (1, 2, etc). The other argument (continue) is set false if the user thinks the optimal result has been arrived at; this is select_evol's only convergence criterion. For example, local function choose_best(choices) io.write("Array 1 is "..table.concat(choices[1]," ").."\n") io.write("Array 2 is "..table.concat(choices[2]," ").."\n") local preference = 0 + choose('Do you prefer 1 or 2 ?','1','2') local continue = confirm('Continue ?') return preference, continue end xb, sm, fb, lf = EV.select_evol(xb, sm, choose_best); This function which (1, 2, etc). The other argument (continue) is set false if the user thinks the optimal result has been arrived at; this is text_evol's only convergence criterion. EV.ec (>0.0) is the convergence test, absolute. The search is terminated if the distance between the best and worst values of the objective function within the last 25 trials is less than or equal to EV.ec. The absolute convergence test is suppressed if EV.ec is undefined. EV.ed (>0.0) is the convergence test, relative. The search is terminated if the difference between the best and worst values of the objective function within the last 25 trials is less than or equal to EV.ed multiplied by the absolute value of the objective function. The relative convergence test is suppressed if EV.ed is undefined. These interact with two other small numbers EV.ea and EV.eb, which are the minimum allowable step-sizes, absolute and relative respectively. These number are set within Evol as follows: EV.ea = 0.00000000000001; -- absolute stepsize EV.eb = 0.000000001; -- relative stepsize EV.ec = 0.0000000000000001; -- absolute error EV.ed = 0.00000000001; -- relative error You can change those settings before invoking the evol subroutine, e.g.: EV.Evol.ea = 0.00000000000099; -- absolute stepsize EV.Evol.eb = 0.000000042; -- relative stepsize EV.Evol.ec = nil -- disable absolute-error-criterion EV.Evol.ec = 0.0000000000000031; -- absolute error EV.Evol.ed = 0.00000000067; -- relative error The most robust criterion is the maximum-cpu-time parameter tm This module is available as a LuaRock in luarocks.org/modules/peterbillam/ so you should be able to install it with the command: luarocks install math-evol Or: luarocks install The test script is in This module is the translation into Lua of the Perl CPAN module Math::Evol, and comes in its ./lua subdirectory. The calling-interfaces are identical in both versions. Peter J Billam, modernised, and the constraining of values has been much simplified. The deterministic optimistation strategies can offer faster convergence on smaller problems (say 50 or 60 variables or less) with fairly smooth functions; see John A.R. Williams CPAN module Amoeba which implements the Simplex strategy of Nelder and Mead; another good algorithm is that of Davidon, Fletcher, Powell and Stewart, see Algorithm 46 and notes, in Comp J. 13, 1 (Feb 1970), pp 111-113; Comp J. 14, 1 (Feb 1971), p 106 and Comp J. 14, 2 (May 1971), pp 214-215. See also:
http://pjb.com.au/comp/lua/Evol.html
CC-MAIN-2018-30
refinedweb
842
59.09
Scott Shorter: why are some of the algorithms mandated as part of interoperability requirements? Eastlake: It's the IETF way, and interoperability testing tends to show up and fix ambiguities. Denis Pinkas: are there equivalents to PKCS-7 Authenticated Attributes? Discussion: place information in SignatureProperties an Object. Eastlake: we should document how similarities between our specification and PKCS-7 on this topic. Pinkas Doubts that this can be done. Tom Gindin: In time, one might try producing an example (Passport check) later. Barbara Fox: KeyInfo will permit the inclusion of certificates, but we don't define them ourselves. Canonicalization open for future standardization. Open issues on location changes. Transforms may include canonicalization. Russ Housley: What hash algorithm is needed for ECDSA? Solo: It is just a placeholder for now. Tony Lewis: Do we handle RFC 2047 (Coded Words)? Eastlake: Optional to implement, since XML has its own method for this sort of thing. Michael Zolotarev: Is there any way to force the signature profile in the document definition? Eastlake and Solo: Not in the core syntax. Zolotarev: Can the signature policy for the document be in the document? Solo: Not in the core syntax, you put policy in an object and reference it from within SignedInfo. Zolotarev (clarification) Request: is for a way in which the document definition can stipulate the sets of data included, the objects included, and any transforms applied. Eric Williams How does s relying party (RP) make an assumption about the default KeyInfo when it is missing? Solo Typical use is protocol with negotiation (e.g. SSLv3) Schemas are more extensible and precise than DTD's Especially name space handling. Reagle: Re Schema, proposes we try replacing DTD in our specification. URNs: Phillip Hallam-Baker: IANA is a more typical place for cryptographic algorithm URN's than W3C, as PEM already used it. Paul Lambert (or Housley): IPSec did too. Reagle: as long as it is registered (or otherwise normative) IANA URNs is fine. Please send me points to such URNs. Paul Lambert: There is a trust ambiguity if multiple certificates have the same key. Barbara Fox: It is just a matter of policy to distinguish multiple certificates with the same key. Zolotarev: Certificate is a legitimate alternative to key. Pinkas: Is a reference to a certificate a legal value of KeyInfo. Schaad: Yes, it's legitimate. Gindin: Is Key Identifier the right name for issuer + serial number? Tom Gindin: if excluding signed location from the base is useful, why don't we define an optional keyword 'excludeLocation' in the object reference which would default to false? [Reagle: we've been through the exclude before and choose to avoid that option.] Fox Yes - the X.509 extension names are not relevant. Schaad: Parameter set is implicitly qualified by algorithm name. Gindin: Is hash algorithm a legal parameter to an RSA key? Schaad: It isn't even a legal parameter to signature method. Solo: We use signature algorithm ID's which include hash algorithm ID's. This would require changing the order. We do not want a variable order, we want a fixed order. [SLIDES] Comment: Russ Housley: when we were doing this in PKCS7 and CMS ... the rule was put them in the order the validator needs them ... this supports pipelines . . Makes processing easier Comment: Hallam-Baker: Im not aware of attack that uses preloading ... not talking about HMACS Answer: [Eastlake?] Yes we are talking about HMACS Comment: Hallam-Baker: we are talking about naive [??] ... asked on list of anyone could cite an example of a credible attack . . the only issue is with MD5; otherwise, to require constraints of re-ordering or a nonce, people should be able to cite a specific attack ... this is not well motivated ... please cite an example Reagle: Can anyone speak to this? Answer: [Someone] We should anticipate what might happen Reagle: Is this something we should worry about? Eastlake: Assuming this is a problem, seemed to me that this wouldnt save the attacker more than 15-20% ... Hallam-Baker: If someone showed that by preloading that there was a 1% advantage, then throw out the algorithm ... we assume cryptographers have their stuff right Eastlake: There is an advantage to putting elements in order then will be used Jim Schaad, Microsoft: Im convinced that even putting them in the order they are used ... is not useful ... you need a lot of this stuff before you get to it anyway . . order is not important ... you cant parse and compute hashes at the same time Russ Housley: It must be two pass processing Eastlake: Could have one pass processing, but would need a way to delare what digesting should be done. Reagle: We also need readability Eastlake: Current ordering is logical Reagle: Need to have default . . Unless someone can cite an example, then we are going to stick with current order ... is everyone Ok with that? Otherwise, we assume the same order if no one can cite an example. NO NONCE Eastlake: Current signature algorithms do not need nonce ... Do you want to do these things optionally ... last time, conclusion was no nonce ... unless someone comes up with something, well confirm this decision No response Eastlake: There are various types of algorithms ... there is consensus that parameters should be labeled for algorithms ... given this, other questions, if it is the case that an element takes only a single parameter, then you dont need the extra element ... two examples SLIDES Decisions Needed (1) Optimize one parameter algorithm (2) type attribute versus namespace (3) generic element name versus integer/real/boolean/string/binary Other question ... Some parameters might be encoded attribute (orthogonal) Eastlake: Any comments? Reagle: I would like us to be consistent; would not like to make special considerations; would like namespaces, like integer . . etc. Dave Solo: agree with first two, but not sure about the third ... optimizing parameters is not a good idea ... I like the namespace approach ... not sure about third approach ... . Eastlake: This gives you an immediate data type up front ... Eastlake ... I dont care on the last question . . depend on how big of type [set??] you have Someone: Dont see the value Eastlake: any other comments ... probably not enough to come to conclusion ... fist two points ... no one seems to be in opposition Eastlake: on the third point . . may lend robustness to parsers ... may help to know the data type Eastlake: might help the parser ... Not sure if this is really correct ... show of hand on last one ... (1) Generic Element (2) specific Element Schaad: want element name to be the comment . .. Dont care about a namespace ... <HMAC length ... namespace> Solo: This is what I thought I was advocating Eastlake: I will write up three examples for tomorrow Four Possibilities Reagle: my opinion: if receiver changes location, then it should make an assertion that the object at old location is the same as the object at the new location Eastlake: but to verify the signature, you do need to get hand on data and digest it Reagle: true ... Eastlake: the signature processor has to process the assertion [SLIDES] Comment [Someone]: one thing I will say, whatever we can say about nested manifest ... this makes it the simplest form, this keeps the core syntax simple, simpler than transforms Eastlake: agree, but not simpler than anything else Question: (Missed it) Question: is it really harder to move an object than to just resign it; dont understand; if it moves, then resign it Solo: What if you cant get hold of the signer? Part of the question is what if the receiver wants to move it? Hallam-Baker: if we use URLs and URIs, and we keep semantics of URLs what they are supposed to be . . if it changes, then you have a problem ... examples, suppose were doing trade . . .. Eterms ... doing things according to ETerm . . .ETerm is revised ... now you come along with signed document and data is changed ... whey you go to verity, the fact that ETerms has change is relevant Solo: if this is a URL, you cant do this . . if this is a URN, then might be OK ... pick the right name for what you are tying to support Hallam-Baker: person who creates the manifest should have control over whether object can move it if want ... instead of calling it resource or location ... call it a hint Solo: I agree Hallam-Baker: table of mappings . .. . Problem: can have multiple ObjectReferences ... why dont we just move location outside and could optionally sign location ... but not as [??] Answer: Scott Laurence: the question as to whether to sign location is a unsolvable problem ... the fundamental problem is that you should not change things whose location has changed ... just shouldnt do it ... URL and URN distinction does not help ... if people change URL then theyll change URN ... . does not help to have an extra level of direction . . .behavioral problem on part of users ... Just shouldnt do this ... if you publish and sign it, then dont change it ... lots of ways to solve this problem ... gone through this same debate in XML schema ... Comment: URN gives idea that URLs are OK to change . . .cant solve with another level of indirection Eastlake: This will be used for variety of situations ... easy to imagine ... should be able to have it the same way Answer: But if you sign it, then it shouldnt change ... if you dont want to sign it, then dont ... if you sign it, then there is an immutable relationship Eastlake: I think I agree with you ... in every situation ... .every case Im showing Reagle: I agree with last speaker ... I signed the content of the dereferenced URL Boyer: ... then dont remove the location Reagle: Unfortunate that this is "location" ... should be "target" . .. Location is inviting has URL semantics thata we are not intending. Thomas Gindin: if excluding signed location from the base . . .. [missed this] Eastlake: it is not different that putting it in different place ... dont want to be able to put it in two different places ... would have to put something in low level hash stuff Comment (Someone): Im uncomfortble having a signature over a hash . . without a target ... ... Crowd: No . . if you can do that . .. Hallam-Baker: A hack that could work ... youve got the resource, what you are really signing is a resource that has been retrieved on a particular date and time, . . .. Would be . . .what was resolved from this URI on a particular date and time . . .XYZ . . data, optionally date ... Pretty gross . . .but perhaps that [??] ... You cant ever retrieve it again Greg Whitehead: one case is where signer moves the resource and the other is the situation where the retriever moves it; in the first case, if the resource moves, then the signature breaks. In the latter, we can handle that the cache has moved something Reagle: This is what I was saying Eastlake: in IOTP . . you sign an element with an ID . . the intent is that parties can cache it over . . messages, organization by . .. Move message forwards ... all Ids globally unique ... this is a case of caching . . Eastlake: Another possibility ... smart URI ... if the transform changes, not clear how you would handle that SOLO: they all seem to relate to ... Im not sure tha complexity of . .. Apply X versus make it X . . the extent of cleverness Eastlake: This was described by John ... dangerous to security, but not clear this is anything worse than what you can do with transforms Reagle: earlier we decided not to go the exclude route for our selections, would be somewhat inconsistent to apply it here now Solo: There is a difference in applying transforms to core syntax than to objects ... Reagle: if someone screws up an object reference, then that is their problem (resource validation), but if we let people mess with signed info that is a larger problem (signature validation) Boyer: I dont agree ... what is the difference between when signed info breaks ... and when the object is changed and breaks Solo: Gives you potential to have a valid result for something that shouldnt be and that is the difference versus having a invalid response for something that should be valid! Question: Terry Hayes: . . .: Is Xpath stuff required as basic capability? Answer: Eastlake: It is recommended, not required Question: Terry: if you rely on this, then basic applications arent going to be able to ... [?? Missed it] Answer: Eastlake: Recommended: you should do it unless you have a good reason not to Person In Front Row: signer might think ... [Missed this ?? ... to fast] Eastlake: We need to decide what to do ... main alternative ... allow location to occur in two places ... same as putting flag on it and not including it in the digest . . not sure what the solution is ... well revisit tomorrow Eastlake: This provides filtering functionality that we need Reagle: This is recommended not required; Im comfortable with this Eastlake: I dont see a problem ... will stay this way Problem with current draft: You cant change the location of an object without breaking the signature. Presented three scenarios brought up by WG members that need the problem solved. SLIDES Question: Phillip Hallem Baker, VeriSign: Im mystified ... if I say the signature should fail if the URL is changed ... and if this [the URL] is not where it actually is, but is where you might causally pick it up ... then we dont need a signature spec ... this is a hint resolution mechanism . . . Answer: John Boyer: Is this a question? What I am hoping for is a situation where we are able to solve this problem ... I want to sign it, but I want to move it around ... cant do that unless location is out entirely, but this is application specific Question: Davo Solo: Can we table this for tomorrow; I agree with most of what Phil is saying; there are security and processing problems if you allow transformations to be applied to signed info; I think this is slightly bad to a very bad idea; I would prefer something different Comment: Eastlake: There are various solutions in the open issues [agenda] Answer: John Boyer: Its an idea, maybe not the best idea ... people want to do this . . propose something different. Comment: Hallam-Baker: direct transform ... I dont like this ... it is a great way to play with the digest value ... if this was a mapping to where it was to where it is, then OK John: I agree this is a security problem (C++ example ... but is a standard nevertheless) ... the more flexible we get, the less secure ... the whole issue of transforms hurts Question: Carl Hewitt, MIT: Why not pull the URL out and have a separate signature ... Answer: John Boyer: We could bring it out into the manifest (one of Dons suggestions) Question: Someone: What we are discussing is how to exclude parts of signed info, might be easier to do by having a multipart ... signature = exclude Answer: John Boyer: sure ... we already have transforms and they do the job ... we have the C18N ... I"m not saying you need the full flexibility of the transforms ... it is one idea . . it also solves the second problem ... Rich Himes pointed out that if you have this simple idea of signing a data record ... want to do a database lookup ... all identified by their ID . . if you put 23 [??] in the same document, then the ID is the same, then identifying by the ID attribute ... have a need to toss out the ID, I want to validate the contents . . this is just a way ... [??] ... need to drop the defendant record ... I want to change it to d record 1 ... 2... 3... this could be solved by the same mechanism John Boyer: Third scenario (Rich Himes also mentioned this and Solo): I want to take a document and transform into base 64 and envelope it in an XML-Signature ... use the signature element as a method of delivery ... suppose it is a Word Processing document ... want to pull it out and store it in binary format, throw the object away and retain the signature ... needs to be stored in binary format to be useful to application ... A person puts the document in base 64 in markup ... transforms ... go get base 64 and decode it and then ... want to take the object and put it somewhere ... change the location . . want to drop the transforms out ... because you no longer need to decode it ... dropping the location and the transforms ... not dropping the digest ... John Boyer: Of course, you can shoot yourself in the foot with this method ... if we did go with something like this, then we wouldnt have to worry about where to put the C18N, because it is in the transforms Question: Joseph Reagle: What is the first set of assertions ... content is content ... when you start moving it around, get scary ... I would prefer ... that the receiver ... the thing I decode and store at location is the object that was yeilded by dereferencing ... . Answer: John Boyer: not a good idea, because the recipient wants to prove the signer signed [??] Comment: Reagle: If this is the case, he could reencode ... he should keep it in his native form ... [people are[ not going to trust all of these manipulations Reagle: The first and third scenarios ... Im not sure this is our problem to solve ... not our problem ... this is an XML packaging problem Eastlake: In these cases where you are modifying something later and there is a transform to drop the thing that changed because you moved something . .. That would have to be there when it was first signed ... Introduction to canonicalization issues, C14N Implementation by W3C presented by Reagle. Currently: Canonicalization over objects; WG agrees it is an option of the signer; Canonicalization over SignedInfo is option of the signer -- but we probably should have a mandatory to implement. Three proposals have been considered for Mandatory: none, simple, XML according to canonical/XML/infoset Emphasizes that equal objects must canonicalize to exactly the same value. Some parsing is DTD dependent. Both DOM and SAX destroy attribute ordering; canonicalization must result in a serial stream in a deterministic manner. Need to have something that designates the type of canonicalization and the algorithm. Currently the CanonicalizationMethod is optional, but if present the Algorithm attribute is mandatory. Suggest that most implementations will use DOM, possibly SAX. Suggest that W3C method be the normal solution for that case and that the mandatory to implement be Minimal canonicalization. Poll on changing (removing optionality) vs. staying the same: audience seems to favor adopting a fixed method if possible. However ... Dave Solo: suggests it is too hard. Asks why not have fixed set. ??: says that this is different from S/MIME because different application areas will not interoperate; need to have assurance that the infrastructure supports the goals, but need to allow application areas to choose the best canonicalization methods for their uses. John Boyer: prefers specification of canonicalization methods because XML itself does not change ordering as does DOM or SAX. Eastlake: notes that there is no default in current spec; that is very minimal. Boyer: the XML spec is clear on minimal input processing ; would like default to be XML 1.0 compliant. Says that some attribute normalization is specified, depending on whether or not processor is a "validating" one. It will resolve entity references immediately; the data after that resolution is what should be signed. Reagle suggests that comments should be written up. Don Walky? (of Mitre?) says that he could see requiring everything in SignedInfo? Algorithm parameters for SignatureMethod Shows three methods, using parameters from different namespaces (for the truncation length and signature value). On The Desire to Have Data Move Around: Boyer: transforms on signed info are not really a big deal. Eastlake: it allows techniques that don't need Transforms. Boyer: notes that "where you get the information" doesn't matter in many cases. Reagle: this can be left to the application; no one is forced to include Location in signatures. Reagle says that Location should be called "Target" or "Reference" and as a URI could be a URL or URN or some other URI scheme. Poll on how many people would like to change; options are the three Eastlake options above: Eastlake: notes that only a few people expressed a preference. If Location is missing, application can make assumptions from its history. Speaker notes that Location is already basically optional (it can be omitted and you could put it somewhere else. Reagle: asks why multiple references must have Location; Dave Solo: this is necessary for consistent processing; suggests changing current wording to "if more than one object, all but one must have Location" from current all must have Location if more than one are present. Eastlake: receives no objection to "multiple locations can have 'something' for Location" and one can omit it Composite/Orthogonal Algorithms A composite name doesn't allow one to discard a component algorithm (e.g. MD5), but orthogonal names are bulky in comparison. Dave Solo: comments that Composite makes sense. Roy, UC Irvine: Composite allows you to forbid certain combinations as in TLS; it a nuisance to have to register each composite. Jim Schaad (Microsoft): the signature algorithm element identifies "everything"; he favors orthogonal because they can be processed sequentially. Rohit Khare: seems to favor composite. Poll shows about 30 for composite, under 10 for orthogonal. Reagle: continue with composite but he'd like to see more discussion and examples on list. Teleconferences have been held on weekly basis. Reagle asks if it is still favored. One person requests that they be an hour later. Another person states that time would conflict. Reagle comments that a goal is to be "European friendly". Result: continue at same time. Discussion of when RSA conference will happen; January 13th or 14th is suggested for an XML DSIG meeting. Overlapping with another meeting (SETCo). Looks like only a few people would be affected. Plans will be settled accordingly. Expect new core syntax and processing draft in week or two.
http://www.w3.org/Signature/Minutes/DC-Minutes/Overview.html
CC-MAIN-2017-30
refinedweb
3,666
66.94
Ray Ozzie, creator of Lotus Notes, founded Groove Networks in 1997 to take groupware in a new direction. His new product, Groove, enables groups of collaborators to form in a decentralized, ad-hoc, administrator-less fashion, within or across corporate or other firewall/NAT-governed realms. Groove is a peer-empowering form of groupware -- what the company likes to call "peerware." In Groove, group members interact in highly-secure shared spaces. These spaces collect all the documents, messages, and applications ("tools") related to a group activity. Everything replicates to each member's computer -- possibly, to several devices per member -- and is available for online or offline use. Exchange of data is very granular, so that Groove-aware applications can support activities like shared editing in real time. Jon Udell, author of Practical Internet Groupware, has been using Groove for several months, and says "it's what I've been waiting for." Jon Udell: Like many of the products in the peer-to-peer space, Groove is really a hybrid of centralized and decentralized strategies. What does Groove centralize, and why? Conversely, what does Groove decentralize, and why? Ray Ozzie: Groove can operate in a purely decentralized manner, but generally that mode of use will only be typical in home network or small office network environments where there's a single LAN segment and peers can be discovered through an efficient broadcast mechanism. More typically, Groove makes use of a variety of centralized services in a pragmatic fashion in order to make the communications experience feel more transparent to the user: PRESENCE - When a computer running Groove connects to the Internet, it registers with a "presence server". Other computers interested in communicating with that device may have "subscribed" with that same server to be notified when that device comes online. This "publish-and-subscribe" awareness mechanism is a highly-efficient (e.g. generally non-polling) means by which computers can find out not only when a computer comes online, but also its IP addresses and whether or not it's behind a firewall or NAT. RELAY - When attempting to communicate with a peer, it may be discovered that that particular peer is off-line. In this case, communications destined for the remote peer are instead routed to a "relay server" designated for this purpose. When the offline user reconnects, Groove retrieves communications from the relay server as it is also taking inbound peer connections. Also, if the users are operating from behind barriers, such as firewalls or NATs, relay servers can be useful as intermediaries to efficiently "switch" communications between them. FANOUT - When one computer has the need to send the same information to multiple peers, it is sometimes more efficient to have a different computer doing retransmission to those peers. For example, if you are connected to the Internet over a 9600-bps GSM modem link, and you are trying to send the same information to five coworkers, it is more efficient to send one copy to a server that redistributes the content to the five others. MAIL - When inviting other peers into shared spaces and activities, a common mechanism for distributing invitations is through standard SMTP-based e-mail. DNS - When connecting to servers, e.g. presence, relay, or mail, Groove uses DNS lookup services in order to translate from server names to IP addresses. Jon: You've said that human error is the Achilles heel of computer and network security. Can you elaborate on the strategies Groove uses to make strong security a mode that's always-on, yet fail-safe? Ray: Based upon my past experience, vulnerabilities present in the systems management process dwarf technical vulnerabilities, from a practical sense. Two specific examples: In systems based upon a PKI, the control of higher-level certificates represents a single point of vulnerability. Unless multiple-person password schemes are used to control the certificates, a rogue or disgruntled employee might create and certify identities, thus weakening trust in the certification authority. In systems that route or authorize information based upon a directory (without cryptographic protections on that directory), rogue or careless employees can cause information to be routed to the wrong destination, can cause the wrong keys to be distributed, can cause access controls to be overridden, and so on. The simplest way to minimize vulnerabilities is to minimize human interaction by removing the "knobs." For example, in Groove, you never have to worry about your information going over the wire unencrypted because there is no option to turn it off. You never have to worry about your information being disclosed when you lose your laptop because everything is encrypted to the disk, and there's no option to turn it off. You never have to worry about a rogue central directory administrator or certifier because there are none. Groove, like products such as PGP, uses a peer-based person-to-person trust model, as opposed to a hierarchical certification model. This works because the product's design center is to support small group interactions, generally with people that we recognize. Jon: If the objective is secure, yet spontaneous, collaboration that can work within and across corporate borders, Groove beats e-mail hands down -- assuming everybody you need to communicate with runs Groove. The aim, of course, is to make Groove ubiquitous. But for the foreseeable future, it's going to continue to be e-mail that makes the world go round. Groove can use e-mail as the vector for an invitation into a shared space, but otherwise doesn't facilitate communication among mixed groups of Groove and non-Groove users. How can Groove best co-exist with the current e-mail habit, while at the same time reforming that habit? Ray: As you are subtly implying, the best co-existence strategy is one of integration. And, as you say, this is specifically why we've embraced e-mail as a key mechanism for invitation into Groove shared spaces. That said, two mechanisms are available -- albeit currently in prototype form -- that will assist in bringing e-mail-based users into collaboration with Groove shared space users. As Groove matures over the upcoming months, we plan on integrating more and more of this level of function into our base tools. First, it's possible to send e-mail directly into a shared space (through a Relay Server) -- provided an appropriate method of addressing the e-mail, and a cooperating tool within the shared space. Thus, e-mail users will be able to, in essence, send or "cc" e-mail directly to a group of users sharing a Groove shared space. Second, if designed to do so, it's a trivial exercise for a tool implementor to send a copy of shared-space activity to one or more external e-mail users, provided that they can format the content and activities in an appropriate way for the medium. Specifically, it's easy to copy messages (e.g., discussion items and documents) to e-mail users. It is a bit more challenging to understand how one might copy sketchpad strokes, changes to outline items, or chess moves to e-mail-based participants. Jon: You've said that unlike NetWare or Notes or NT, Groove does not require an enterprise to deploy new directory or naming infrastructure, but that it can ride on existing infrastructure. How does that work? Ray: When a user downloads and begins to use Groove, s/he enters one or more names by which s/he is commonly known. In my son's case, it might be his real name to his parents or a player name to his Quake friends, and yet another screen name to his EQ friends. Because most of us have multiple personas, Groove enables you to present yourself differently to different groups of people with whom you're working or communicating. But many organizations have spent hundreds of thousands or millions of dollars investing in the assignment of unique or distinguished names for their employees. I might be john.doe@bigcorp.com or CN="John Doe"/OU="Finance"/O="Big Corporation Ltd" within corporate boundaries. For this reason, Groove provides IT the capability of issuing and distributing files containing the official, preferred organizational identity -- the one from its corporate directory -- to its employees. Thus, everyone at BigCorp will know exactly what everyone else is called, and there won't be yet another redundant namespace to deal with. Jon: Groove's security algorithms are plug-replaceable on a per-shared-space basis, so that a user who joins two different shared spaces can use two completely different security regimes. Why does this matter? Ray: In dealing with security issues in Lotus Notes for something more than ten years, I learned a number of important things about cryptographic implementation. First, security technology (algorithms, key widths, protocols) is virtually inseparable from security policy (law enforcement, national security). Just when you think that you've dealt with an export issue, you run into an import issue. Second, there are issues of vulnerability -- both from a systems design perspective as well as the vulnerability of specific algorithms, specific key widths, etc. Nothing is static. In time, almost anything that we have confidence in today will at some point be questioned or questionable. Finally, there are issues of customer preference. For whatever reason, some customers choose to standardize on specific security vendors, algorithms, implementations, and so on. By designing our product to enable a variety of plug-in implementations, and by enabling a number of them to be used concurrently, we feel that the architecture has more than enough flexibility to carry us forward through many years of service for a wide variety of customers worldwide. Jon: Groove generally thinks of peer-to-peer in the sense of person-to-person. But my Groove account can also propagate to many devices -- my desktop PC, my notebook PC, and so on. What's the best way to think about the relationship of people to devices in Groove? Ray: Groove cleanly separates the notion of personal identity and device identity. We did this for a variety of reasons, but perhaps most significantly because individuals are naturally dealing with more and more devices in their work and home environment. Dealing with notions of synchronization -- trying to keep them all up-to-date -- is becoming more and more painful. The notion of just keeping simple contact lists in sync between a PC, a cell phone, and a PDA, has spawned many a new company. In Groove, although you install the software on a given device -- e.g., a PC -- every user of the software has what we refer to as an "account," which contains all of your identities, all of your shared spaces, and so on. Through a procedure involving only a few clicks, you can duplicate your account on any other supported device. After your account has been brought to that device, all of your activities on one device are naturally and automatically kept in sync with things that you do when using your other devices. It's that simple. Thus, we embrace the model that you may (and will likely) have multiple devices: a computer on your desk at work, a notebook computer that you use when traveling, and a computer in your den at home. Furthermore, because Groove allows more than one account to be present on any given device, the computer in your den at home can be used in Groove by you, your spouse, and your children. Groove embraces multiple devices-per-person, and multiple people-per-device. Next: Opportunities for other players, and enhancing the value of the pipe..
http://www.onlamp.com/pub/a/p2p/2000/10/24/ozzie_interview.html
CC-MAIN-2016-18
refinedweb
1,942
50.36
also pertains to AS1, 2 or 3 (mostly the same) let me know how it works for you R3nder Any ideas for that? Thanks! import the shared object Create a variable for the shared object using getLocal pass the username to the data tag then trace the size to show it is bigger - see if that works an example: import flash.net.SharedObject; var thisShadow:SharedObject = SharedObject.getLocal("myd thisShadow.data.username = "ashadow"; trace(thisShadow.size); // 62 I have never run into this issue so I am just spitballing it :) Is it hot in Houston - I am just north of you in Lufkin (Dang hot here) Ready to showcase your work, publish content or promote your business online? With Squarespace’s award-winning templates and 24/7 customer service, getting started is simple. Head to Squarespace.com and use offer code ‘EXPERTS’ to get 10% off your first purchase. The only problem for the moment is that the popup windows don't work properly. In the case of the newest Player, it's as if the popup gets hidden, but based on the debug player, it seems like the user doesn't have permission to modify these variables. I don't know if it's hot here - the AC keeps me nice and cool. *:-] Thanks for the spitball attempt. I've never seen this either, so I'm hoping someone will tell me to tweak the one thing I'm obviously not tweaking. Prompting user to allow saving of shared-objects or to ask for more space: It describes the problem you are having "the popup doesn’t work properly" Bad news is, I was incomplete in my details - the stage size is 1200x700. :-( This is one of the paths I've considered, though. I wondered if somehow the dialog was being hidden by something else on the stage. The behavior for the newest Flash Player acts as if the panel doesn't have room to display. But the behavior with the debug player makes it more likely that the value isn't allowed to be modified. I see other websites, and even others of my own product where the local storage pop up works as intended. It seems that only this specific swf is having this issue. (My project for tonight will be backing out to a version that doesn't access the sharedObject, and seeing if I can get that working. Logic dictates that something in my code is making it fail. I should be able to take out all the pieces and put them back till it works.) Open in new window In this zip file are 4 files. Second, more important question is - what causes the difference between the two flas? At this point, I'm assuming the fla is broken somehow, and about to try simply creating a new fla that looks exactly the same. My guess is that the new fla will work fine. I'd like to know what made fpl.fla go bad. I'm still sure there's a setting somewhere I can tweak. If I knew it, I'd rather do that than reconstruct the app. Of course, we've already spent two days trying to save the corrupted fla - had we realized that's what it was, we could've reconstructed the app much faster and easier. Let me know how the zip file works out... examples.zip fpl.fla First, publishing with hardware acceleration set to anything other than "None" disables the user settings. If you change that, the swf behaves appropriately in standalone mode. Second, setting wmode in the html page that loads it to 'none' is necessary for it to work in a browser. Apparently, wmode affects and overrides the hardware rendering setting. This was a mess, but I think that pretty much fixes it up! Thank you very much!, and it was my pleasure to help you in any way I could. R3nder My posting number 33487128 should be the accepted answer. I'm not sure how I tried to set it up last time, but I would split all the points for an assisted solution between 33481197 and 33482807. r3nder put a lot of time and consideration into helping me solve the problem, and the dialogue helped me. While those postings weren't the actual answer, they did help me get to the answer myself, and I wanted to make sure that r3nder got rewarded for the time and effort. I'd be perfectly happy splitting the 500 points between 33481197 and 33482807 and rewarding an A. Please let me know how to proceed from this point. Thanks.
https://www.experts-exchange.com/questions/26414048/Local-Storage-popup-doesn't-popup.html
CC-MAIN-2018-13
refinedweb
774
72.26
$151.20. Premium members get this course for $18.75. Premium members get this course for $62.50. Premium members get this course for $79.20. Premium members get this course for $24.99. Would a working prototype help someone? If so, just let me know and I can throw one up on some webspace. OK, new policy... I won't split the points. 500 points per valid suggestion (I'll just create a new award question). If this is not acceptable under the ToS of EE, then a moderator may kindly let me know, and I will not do that. Otherwise, I have unlimited question points and I'm itchin' to use them :) There's no glaring holes. Just a couple of comments: 1. You should add a check to see if $user and/or $pass are not specified, otherwise there's no point in proceeding. So before the open, do a: die "No username supplied\n" unless $user; die "No password supplied\n" unless $pass; 2. Also your for loop could probably be written to be a little more readable (although there's nothing really wrong with it). As I can figure, the password file has entries along the lines of: username:xxxxxxxxxxxxzzzzz Where x is the salt and z is the encrypted password? for (@pwfile) { if (/^$user:/) { my ($salt,$encpass) = /:(.{12})(.{22})/; if ($salt$encpass eq unix_md5_crypt($pass,$salt print "Authorized\n"; exit; } } } print "Incorrect username or password\n"; Solve your biggest tech problems alongside global tech experts with 1:1 help. brian:$1$CuRbwNsF$ibVNqwFp $1$CuRbwNsF$ is the salt. The rest is the encoded password. The salt is actually 8 characters, but it always begins with "$1$" and ends with "$". When you encrypt the plaintext password together with the salt value, it returns the salt value as part of the return (odd that it does it that way, but that's just the way it is). When I wrote the for() loop, I knew I was going to ask for advice, and I thought the way I did it would actually be the most readable. Of course, that was just my opinion. But you suspect correctly; I do have other experience. PHP mostly, but I'm a little new to that, also. I originally wrote this script in PHP, but all of a sudden my needs changed, and I needed it in Perl. Funny that I forgot to check for empty values. I was trying to keep it as simple as possible for readability. my ($salt,$encpass) = /:\$1\$(.{8})\$(.{22})/; is more or less useless, unfortunately, but its good to be there anyway ;-) Reason: when perl gets called, the web-server already collected all data As ozo already pointed out: you need to clean all incomming variables ($user, $pass) Don't know what unix_md5_crypt($pass,$1) does, but if it calls system commands somehow, then I'd try to give as password: ';/bin/rm -rf *.* or: |/bin/rm -rf *.* So please check all parameters using white lists, like: $user =~ s#[^a-zA-Z0-9.-]##g; $pass =~ s#[^a-zA-Z0-9.-]##g; take care when adding more characters for the password. Read about it here: "Crypt::PasswdMD5 - Provides interoperable MD5-based crypt() functions" you know for shure, or not? Are you shure that underlaying systems (like Digest::MD5) behave this way in future too? If not, make shure your parameters won't harm. That's my message, you asked for security .. ;-) I know it's better to use "white lists" instead of "black lists." So my list of "approved characters" so far is as follows: a-z A-Z 0-9 ,.<>[]{}?_+ ~!@#$%^&*()_+ This should exclude [space], backtick [`], [/\] forward and backslash, pipe [|], colon and semi-colon [;:], and everything else I can't think of. My user's need to be able to enter in as sophisticated a password as possible, so I don't want to go overkill with this. You know, I'm starting to get the feeling that I shouldn't be using regular expressions at all. If I set up a for() loop to run through each line and used ($uname,$hash) = split(/:/), then I could just match 'if $uname eq $user' ... etc. Might that be safer? Worse? No difference? Remember, efficiency is not a priority. with ! you may access shell commands with & you can combine shell commands with $ you can access variables in shell with @ you may access functions in M$ SQL with %_ you have wild cards in SQL and so on ... My pedantic filter for a username is: a-zA-Z0-9.- for the password, it needs to be more, I agree After your if (regex) { } you have nothing to handle the exception that the format of the password file is invalid (the regex itself is not matched). This is not actually insecure, as this case would still result in failed authorization, but it is confusing for a user, and hard for an admin to debug, if their correctly typed password should fail due to a corrupted file. MastaLlama
https://www.experts-exchange.com/questions/20944860/Securing-a-Perl-login-script.html
CC-MAIN-2018-09
refinedweb
840
73.07
One of the coolest things about the Raspberry Pi is its GPIO pins. They’re just sitting there, waiting to be connected to all kinds of useful peripherals so your Pi can interact with the world around it. Power an LED to signal the user. Place a button in the path of a circuit and detect when a user presses it. Attach sensors to read temperature and humidity, and plug other cards like the Sense HAT over top of the pins. A few months ago, I got a set of 37 sensor modules on Amazon. I knew they wouldn’t directly interface with the Pi, but that it was entirely possible to do it, so I put them aside for later. Well, I finally decided to pull one out, and thought the mini-joystick might offer some interesting… opportunities. :) Materials There are a few things you’ll need on hand before doing this, all of which you can find on Amazon (of course). - Raspberry Pi Starter Kit A decent starter kit includes the Pi, adapter, memory card, case, breadboard and cobbler, wires and LEDs, blah blah blah. If you already have a Pi, obviously you don't need this. - Long Breadboard Some of the kits come with a shorter breadboard. The longer ones let you fit more wires and stuff. - Kuman 37 Sensor Module Kit for Arduino It comes with a joystick control (which I used for this post), and a load of other sensors and input devices. There's no documentation, but I found a link to instructions for each module on Amazon.. it's still sparse though. - Phantom YoYo Jumper wire M/F male to female You’ll need a few of these to connect the joystick to the breadboard. I pulled 5 random wires out of the set to use on this project, and they all worked great. - Adafruit MCP3008 – 8-Channel 10-Bit ADC With SPI Interface [ADA856] A tiny chip that bridges the gap between an analog control and the Pi. It’s cheaper directly from Adafruit, but watch out for shipping. If you’re buying several instead of just one like me, consider Adafruit’s site. Here's the datasheet. Interfacing with Analog Controls The joystick is an analog control, consisting of two potentiometers that send a variable voltage depending on the position of the joystick (here’s a video that shows how they work), and it won’t just connect directly to the GPIO pins on the Pi. If your joystick can be pressed down like mine can, then *that *button just has an on/off state and can be connected directly to any regular GPIO pin. But I’ll wire it up same as the potentiometers, since that’s what the articles linked below do as well. To get it to work, you’ll need to learn about the SPI bus protocol and how to enable it on the Pi, then wire up a small chip that uses SPI to bridge the gap between analog controls and the Pi. Fortunately, there’s a set of very helpful tutorials for doing just this, and you’ll find them on the Raspberry Pi Spy website. - Enabling The SPI Interface On The Raspberry Pi - Analogue Sensors On The Raspberry Pi Using An MCP3008 - Using A Joystick On The Raspberry Pi Using An MCP3008 First, you’ll learn how to enable the Serial Peripheral Interface (SPI) bus on certain GPIO pins. Method 1 worked fine for me – you just open up a config screen in Raspbian and select the SPI option. Then you’ll need to wire up the MCP3008 chip correctly. That’ll provide the bridge between the joystick and your Pi. The “using a joystick” article linked above will walk you through it. You don’t *need *to read the “analogue sensors” article, but it’s got some helpful info in it that’s not in the other one. I suggest reading both. Also, for changing colors on the RGB LED, you may want to learn more about pulse-width modulation (PWM). Here are some pictures and a diagram of my setup, which hopefully will help if you get stuck, although Matt provides a good set of pics in his article too. There’s some additional stuff in my circuit that’s not in his, namely the RGB LED and resistors/wires to make it work. I used a 100Ω resistor for red and 220Ω for green and blue, same as here. Fritzing Diagram If you’d like, you can download the original Fritzing file and play around with it. The Fritzing site also has loads of diagrams that people have shared, which you can check out too. If you’re trying to connect a particular component, you might find something in there to help you out. Reading Input You should’ve already verified that Python Spidev (pi-spydev) was installed after you enabled SPI. We’ll need that for reading input from the analog device. Since I’ve been messing with an RGB LED lately, I thought it’d be interesting to map the position of the joystick to the RGB color wheel and then light up the LED appropriately. Imagine the X-axis running horizontal above Blue and Green, and the Y-axis running vertical through Red. Here’s the code in its entirety, or check it out on GitHub. import math import RPi.GPIO as GPIO import spidev # Open SPI bus spi = spidev.SpiDev() spi.open(0, 0) # Define sensor channels (3 to 7 are unused) mcp3008_switch_channel = 0 mcp3008_x_voltage_channel = 1 mcp3008_y_voltage_channel = 2 # Define RGB channels red_led = 36 green_led = 31 blue_led = 37 def read_spi_data_channel(channel): """ Read in SPI data from the channel and return a coordinate position :param channel: integer, between 0-7 :return: integer, between 0-1023 indicating joystick position """ adc = spi.xfer2([1, (8+channel) << 4, 0]) return ((adc[1] & 3) << 8) + adc[2] def convert_coordinates_to_angle(x, y, center_x_pos, center_y_pos): """ Convert an x,y coordinate pair representing joystick position, and convert it to an angle relative to the joystick center (resting) position : integer, between 0-359 indicating angle in degrees """ dx = x - center_x_pos dy = y - center_y_pos rads = math.atan2(-dy, dx) rads %= 2 * math.pi return math.degrees(rads) def adjust_angle_for_perspective_of_current_led(angle, led): """ Take the current LED into account, and rotate the coordinate plane 360 deg to make PWM calculations easier :param angle: integer, between 0-359 indicating current angle of joystick position :param led: 'R', 'G', 'B', indicating the LED we're interested in :return: integer, between 0-359 indicating new angle relative to the current LED under consideration """ led_peak_angle = 90 if led == 'R' else (210 if led == 'B' else 330) return ((angle - led_peak_angle) + 360) % 360 def calculate_next_pwm_duty_cycle_for_led(angle, led): """ Calculate the next PWM duty cycle value for the current LED and joystick position (angle) :param angle: integer, between 0-359 indicating current angle of joystick position :param led: 'R', 'G', 'B', indicating the LED we're interested in :return: integer, between 0-100 indicating the next PWM duty cycle value for the LED """ angle = adjust_angle_for_perspective_of_current_led(angle, led) if 120 < angle < 240: return 0 elif angle <= 120: return 100 - (angle * (100 / 120.0)) else: return 100 - ((360 - angle) * (100 / 120.0)) def is_joystick_near_center(x, y, center_x_pos, center_y_pos): """ Compare the current joystick position to resting position and decide if it's close enough to be considered "center" : boolean, indicating whether or not the joystick is near the center (resting) position """ dx = math.fabs(x - center_x_pos) dy = math.fabs(y - center_y_pos) return dx < 20 and dy < 20 def main(): """ Initializes GPIO and PWM, then sets up a loop to continually read the joystick position and calculate the next set of PWM value for the RGB LED. When user hits ctrl^c, everything is cleaned up (see 'finally' block) :return: None """ # Center positions when joystick is at rest center_x_pos = 530 center_y_pos = 504 GPIO.setmode(GPIO.BOARD) GPIO.setup([red_led, green_led, blue_led], GPIO.OUT, initial=GPIO.LOW) pwm_r = GPIO.PWM(red_led, 300) pwm_g = GPIO.PWM(green_led, 300) pwm_b = GPIO.PWM(blue_led, 300) pwm_instances = [pwm_r, pwm_g, pwm_b] for p in pwm_instances: p.start(0) try: while True: # If joystick switch is pressed down, turn off LEDs switch = read_spi_data_channel(mcp3008_switch_channel) if switch == 0: for p in pwm_instances: p.ChangeDutyCycle(0) continue # Read the joystick position data x_pos = read_spi_data_channel(mcp3008_x_voltage_channel) y_pos = read_spi_data_channel(mcp3008_y_voltage_channel) # If joystick is at rest in center, turn on all LEDs at max if is_joystick_near_center(x_pos, y_pos, center_x_pos, center_y_pos): for p in pwm_instances: p.ChangeDutyCycle(100) continue # Adjust duty cycle of LEDs based on joystick position angle = convert_coordinates_to_angle(x_pos, y_pos, center_x_pos, center_y_pos) pwm_r.ChangeDutyCycle(calculate_next_pwm_duty_cycle_for_led(angle, 'R')) pwm_g.ChangeDutyCycle(calculate_next_pwm_duty_cycle_for_led(angle, 'G')) pwm_b.ChangeDutyCycle(calculate_next_pwm_duty_cycle_for_led(angle, 'B')) # print("Position : ({},{}) -- Angle : {}".format(x_pos, y_pos, round(angle, 2))) except KeyboardInterrupt: pass finally: for p in pwm_instances: p.stop() spi.close() GPIO.cleanup() if __name__ == '__main__': main() I’ve done more work than I usually do to comment the code, so the inputs, outputs, and purpose of these functions are as clear as possible. Always Write Tests! If you check out my joystick project repo on GitHub, you’ll see a separate file with tests in it. Testing your code is vitally important. I found a bug in calculate_next_pwm_duty_cycle_for_led where I was performing integer division by accident, unintentionally discarding the fractional part of the result, which would’ve thrown everything off and been tough to track down. If you’re an aspiring developer, get in the habit now. To run the tests in this project, you’ll need to install the DDT (Data-Driven Tests) package for Python unit testing via pip. General Comments I wrote the adjust_angle_for_perspective_of_current_led function to make calculations easier. Imagine a 360 degree circle overlaying the color wheel. Each color (red, blue, green) is separated by 120 degrees. So if red is at 90 (the top), then blue is at 210 and green is at 330. That function rotates the imaginary circle, placing the LED we’re concerned about at 0 degrees. The is_joystick_near_center function was necessary because the joystick is not that accurate. Even when it’s sitting still, the readings coming off it fluctuate a bit. That’s not a huge deal when the joystick is positioned far away from the center, but imagine what happens when the position is near center and the X and Y coordinates keep jumping around the vertex of our “angle”. The angle varies wildly, so that when the joystick is “at rest”, the color flickers all over the place on the LED. So instead, I just display white if you’re near center. See it in Action If you have a question about any of the code, leave a comment below and I’ll try to clarify. There’s one tricky piece in the read_spi_data_channel() function, and that’s the call to spi.xfer2(). Suffice to say, that’s the pi-spydev module doing work. If you want, check out the spidev_module.c file and do a search for “xfer2″. It’s roughly 100 lines of C code.
https://grantwinney.com/connecting-an-analog-joystick-to-the-raspberry-pi-and-using-it-with-an-rgb-led-to-simulate-a-color-wheel/
CC-MAIN-2021-43
refinedweb
1,834
61.46
I can't run qt quick Hello guys, I am very new in the qml. I looked for my question in the internet but I counldn't any answer. I can't understand. Because It was a little complicated. My question : I can't run a simple qt quick ui app. Debuger says to me : qmlscene: failed to check version of file What's wrong? My code is : import QtQuick 2.1 Rectangle { width: 360 height: 360 color: "#D8D8D8" MouseArea { anchors.fill: parent onClicked: { Qt.quit(); } } Text { anchors.centerIn: parent text: "Hello World" } } @gurolcay After running this app from QtCreator, in the Application Output" window, what does it show ? is the path for qmlscenematch to that where you have installed Qt ? @p3c0 when I run my code, in the application output window appear this message: Starting C:\Qt\5.4\mingw491_32\bin\qmlscene.exe... qmlscene: failed to check version of file 'C:/Users/MehmetG?rol/Development/yavhehe2/yavhehe2.qml', could not open... C:\Qt\5.4\mingw491_32\bin\qmlscene.exe exited with code 0 @gurolcay It seems it is not able to read that path. I notice some special characters in it MehmetG?rol Can you try creating another folder without those special characters and keeping that file in it and the run it ? OoO! It worked. I change my directory. I can't change current folder name. Because Windows gets my user name because of my e-mail adress.(.) Thank you so much! But I have a questions. Why doesn't support creator UTF-8? C:/Users/MehmetGürolthis is correct path. I have to change directory C:\. I hope I was able to explain my question.
https://forum.qt.io/topic/56260/i-can-t-run-qt-quick
CC-MAIN-2019-04
refinedweb
277
79.36
Not everyone is cheering about RSS integration into Windows Longhorn and Internet Explorer 7. With the event of making RSS a native format for Longhorn, many software developers whom make stand-alone feed readers are crying out that Microsoft is once again shutting down a sector of business which in all respects is somewhat true. With the complete integration of the format within the OS, there is no need of stand-alone feed readers. Flexbeta has a nice write up about what MS is trying to accomplish with RSS integration into the OS. Windows + RSS = Something Good? Submitted by Gsurface 2005-06-26 Windows 50 Comments No need? So there is no need for other media players, browsers, im clients? Please. All this does is make things easier for developers and changes almost nothing else. Well, what are the dominant media players, browsers and IM clients on a Windows platform? Funnily enough, they’re the ones that come with the platform. Bundling things like this doesn’t entirely kill off 3rd-party apps, but it makes it a lot harder to compete. Given the simple equation A+B=C, if is A is true and B is true then C must be true. If A is false and B is true then C is false…… If we work backwards with that equation, and we assert that C is true therefore A and B must also be true are we deluding ourselves? It is only when we presume that C is false because either A is false and or B is false that we do not stand prone to an assumptive error. If we change up the equation so that it reads B+C=A we find that there is no way that it can be a true statement, therefore A+B≠C yep, for certain the calculation breaks… ≠ is supposed to be “does not equal”. Proof the assertion is false!!! Of course the most worrying possibility is that M$ wont really get on all that well with plain old RSS and instead devise their own slightly different version of it, and if longhorn takes off in a big way that’s enough leverage to make people use it. does anyone know one single *commercial* rss reader? so what market is it supposed to destroy. oh come on, if ms doesn’t includes tools it is bad because it doesn’t come packed like a linux distro, if it comes with apps it is bad because it destroys markets. decide please. besides, does anyone realy thinks realplayer would be the standard mediaplayer on windows today if ms would not have included wmp? realplayer has such a tiny marketshare becaus it simply sucks, that won’t change with antitrust fillings. I think its kinda stupid to flame MS for including tools in its OS. Everyone thinks that its great that many mainstream Linux distros install with a tool for practically everything. But if MS does it, its critisized. I *do* believe MS should be critisized, but *not* for including the tool. MS should be critisized for not making tools optional or (easily) removable. There is no reason a browser/firewall/antivirus/media player (and now RSS Reader) shouldn’t be *integrated* into the OS. If I buy windows, I shouldn’t be *FORCED* to install/run MS tools. “does anyone know one single *commercial* rss reader?” and on top of that, it’s incredibly easy to write your own reader if you don’t want to pay for one (of course there are dozens of free ones as well). People just want something to bitch about, anytime MS bundles anything all the tin foil hats reappear. Makes me absolutely sick, don’t these people have better things to do? Actually, WinAMP and iTunes both have a wider user base than WMP, and AIM certainly has more users than Windows/MSN Messenger. IE won the wars because it was flat out blew everything else out of the water at the time, not because it was bundled. “does anyone know one single *commercial* rss reader?” FeedDemon: Anyway, so if MS doesn’t include an RSS tool, then maybe they should remove the browser, IM client, media player, file manager, and text editor too. BTW: Doesn’t Apple include RSS capabilities in OSX? How many people were bitching about that? Okay, IE will support RSS news feeds. This is hardly news. <p> RSS is happening without Microsoft. I grabbed this article through Safari’s built-in RSS reader. That’s how I track most RSS-enable sites these days. <p> So … good to see Redmond catching up. I’m sure Microsoft’s mass will encourage more RSS browsing, which is a good thing. But you’d think Microsoft would almost be embarrassed they’re well behind the adoption curve on an emerging technology. Again.”. People who think it is important to know about specific tools install Linux. People who don’t install Windows. The numbers of who does what ought to tell you something. Nobody uses AIM outside the US so yes, MSN is the most widely used out there. im outside us and i use aim and don’t like msn (don’t trust passport thingie) MSFT can integrate an RSS reader into IE. There is nothing wrong with that. What is wrong is that MSFT is once again Extending a Standard for their own personal use. I mean come on how many times does MSFT have to extend a standard and warp it before people will stop letting MSFT walk all over them. IE comes right to mind. MSN Messenger doesnt come with windows Windows messenger does, and its horribly outdated and pretty useless. Yeah it’s just an msn messenger client, but very few people actually use it. There have been proposals but not official “standard” on RSS yet. Most of the implementations have been compatbivle with each other. Interestingly enough all of the standards that have been proposed document the ability for extensions to be added. I don’t see the problem with what MS is doing myself. If MS has to integrate RSS into one of its products it’s Outlook and Outlook Express, not IE. What does IE, a popular browser, have to do with RSS? RSS should be displayed like e-mails in OE. Opera integrated its RSS reader the same way it integrated Usenet Newsgroups into its popular e-mail client and has had much positive feedback on its forum. Firefox has an integrated RSS reader, but the news are displayed as hidden bookmarks, what’s the point? I never remember to have a look at them. What a mess! It’s not like this is some proprietary extention, it has been licensed with CC-BY-SA, which means you are free to copy, re-distribute, make derivatives of, and make commercial products based on it. Are you saying that Microsoft isn’t aloud to write an extension (which others have done in the case of RSS) to anything? Microsoft isn’t aloud to add to something already available? I don’t think I’ve ever used Thunderbird’s RSS aggregation capability, which is similar to the one you mentioned. Plenty of Mac users use Safari’s RSS reader, which seems to be very similar to the new IE7 one. (At least from looking at the screenshots.) And Firefox’s works nicely too, in a minimal way. I put “Live Bookmarks” into my personal toolbar so I can check headlines by clicking on the source website from the toolbar. Someone correct me if I’m wrong but I don’t see this as the same as what happened with Internet Explorer or any other embrace and extend techniques. They are taking RSS adding stuff to it and then releasing the changes under the same license at RSS is under. So if I understand correctly, they are documenting what makes their version better and if it’s popular and it’s actually used then everyone else is free to duplicate it. What’s the problem? who uses IE anyway it would be just a API for who develop RSS reader other then that just a new IE ability it wont change anything. MS is developing it’s own AntiVirus MS is developing it’s own AntiSpy/Adware MS is trying to change the RSS format to it’s own gain. When MS announced it was making a antivirus product AV companies stock tanked and IIRC is still lower than before the announcement. The same could be said again when MS announced it’s own “Free” adware tools. MS should only be in the OS business. It’s okay to market these other things as addon’s for a price but they’re adding these things for free and intergrating them to the point it makes other products less functional if it had not been there in the first place. 3rd party software makers have reason to be pissed off. They built the RSS market and now MS wants to profit off their backs. A/V & adware apps are directly related to OS’s, so MS has a pretty good reason to get into this market. Nothing is stopping, say…Symantec from writing their own OS and bundling it with their A/V software.. sure it hurts 3rd party apps. Ofcourse it does. But It’s microsoft’s OS. Linux distros come with a set of software that is widely popular regardless if there is something better. All OS;s did it. BeOS had NetPositive, Apple has Safari and MS has IE…. Everyone flipped out when IE came on windows but not for everything else. I agree with those not anti-MS. And for the ‘extending standard’: if websites won’t use it, changes will be useless otherwise, it means they added something useful. Anyone remembers that MS Java extensions allowed more than 8kHz sound for example? Holy Standard Java didn’t, at the time… Btw, my SE T610 mobile extends MIDP1.1 with features of MIDP2. So I can hear sound in apps. Btw, almost every Nokia mobile has (good) extensions that make apps written for them incompatble to every other mobile brand. Anyone of the yelling ppl with a Nokia? i say let them do it… you always have the option to install your own ish.. ( although i must admit it would be a nice change to install windows without all the extra crud ) if MS didnt do anything about the viruses and spyware, people would still throw hissy-fits.. so i say let them do it.. the one problem i see about all those bundled and integrated apps that come with windows isnt so much that they are there but that its a pain in the ass (even with that added defaults tool that ms added after one of those lawsuits) to get another program to take over all the tasks of a bundled program. yes some of the problems are related to the programs but there are still times when the integrated ms apps seems to get priority before my defaulted apps. and i cant say i have realy understood rss yet. yes i have the sage extention for firefox and yes i have a bunch of rss links added. i guess the thing is that it should realy be running in the background, allways updating and then poping up a polite icon when a new item have arrived. its a allways on kinda thing it seems. just like mail is best when you can leave the mail software in the background, allways checking for new stuff. hmm, maybe ill try the rss plugin for miranda and use that to look up my feeds Not that I see a point to RSS, and think they could work on other things, the point still remains, they are doing what they should be doing. Adding more function into their product. As a consumer, I want to see a company like MS continue to add to the product. If they made just an OS and left it at that, it would be a pretty bad product, and go the way of BeOS. They are doing the same as apple is doing. Nothing wrong with that. In a ideal world I would be able to put in my OSX or Windows CD and every app I need/want is there out of the box. Of course some of them aren’t that great on both platforms right now, but others are fine, in time maybe both companies will get things right with all their apps. It’s about being pro consumer. And to who mentioned winamp, winamp died when itunes came out. Itunes might be bigger then WMP on windows, but that’s debatable, lots of people still use WMP cause it’s really not that all bad. Certainly became a lot better then winamp during the Winamp 3 phase. Could someone explain the difference between RSS and that “Pull” thing that MS and Netscape talked so much about around the time for Windows 95? or was it 98… I forget. Ever since I first read about RSS I’ve been thinking that it’s just a modernization of an idea that never took off when it was first attempted. Am I wrong? RSS is an extensible standard. You can easily extend it to your needs. You don’t need to change the basics, but you can add your own fileds, features to it without changing the rss standard itself. You can use namespaces to do it, and that’s all. If they want to do something unique, they can do it with different namespaces. MS putting any kind of technology in its OS, as long as all modifications to the underlying protocols are open. What I’m against modifications covered by patents or trade-secrets, even if they are useful. Otherwise we’ll soon see RSS feeds “designed only for MS RSS reader” akin to IE-only sites, owners of which simply don’t care, because “everybody uses Windows”. We don’t want MS to control RSS, do we? those lamers complaining about ms including IE, wmp and now, perhaps rss should be shot, when people buy an OS, they sure as hell expect to get a browser and a media player.. i could understand if they started bundling large application suites like office, advanced 3d applications and such stuff. Try nntp/rss. It will run a local news-server that you can connect to using any compatible client (even Outlook ExpresS) The problem with Windows adopting any technology is that it will become the target for virus writers and crackers. Nobody uses AIM outside the US so yes, MSN is the most widely used out there. I guess you forgot that all the Mac users using iChat are essentially using AIM servers. BTW: Doesn’t Apple include RSS capabilities in OSX? How many people were bitching about that? No one, because a) Apple is not a Monopoly b) Everyone who used to pay for a good RSS reader (I didn’t because there are other good ones out there for free), will continue to do so, because Safaris RSS implementation really isn’t that great. I think its kinda stupid to flame MS for including tools in its OS. Everyone thinks that its great that many mainstream Linux distros install with a tool for practically everything. But if MS does it, its critisized. The day Microsoft includes office as part of windows for no additional cost I will believe on that argument ant stop critizising them for it. But note that the only tools they add to windows are of technologies they do not/want to control, and once they control the technology they allow the included ‘tools’ to stagnate. They will probably destroy RSS. In Europe, MSN is probably the most used IM protocol. AIM is non-existant here. And in my country, The Netherlands, MSN has a 100% market share. The other two big ones are ICQ and Jabber. I guess you forgot that all the Mac users using iChat are essentially using AIM servers. That’s the exact reason why iChat is a completely useless outside the US. They boast about all the cool features, yet it only features AIM and Jabber. I asked Apple once to include MSN support, but never got a reply back. In Europe, MSN is probably the most used IM protocol. AIM is non-existant here. And in my country, The Netherlands, MSN has a 100% market share. The other two big ones are ICQ and Jabber. The other two big ones *in Europe*, I mean. yeah, like they destroyed html… oh come on, rss is such a fragmented standard that there isn’t anything to spoil anyway. besides, afaik it is obligatory to open modifications to rss. it is not like they could keep em secret anyway. >Nobody uses AIM outside the US so yes, >MSN is the most widely used out there. here in germany the most widely used im clients are icq and miranda. aim is quite known and used as well. ”. ” As much as a Linux fan I am, I still cant disagree with the statement. All those years Linux distros had those RSS readers but now MS will steal the limelight saying we have automatic headlines, stock info, sport scores, blog entries, ready for you to read!! It is all about marketing, seriously! and Good Marketing. Even India’s leading daily carried a report on it, even though RSS is nothing new to report out, but the way MS markets stuff is commendable in some ways. A/V & adware apps are directly related to OS’s, so MS has a pretty good reason to get into this market. Nothing is stopping, say…Symantec from writing their own OS and bundling it with their A/V software. That excuse can be tailor-made to any piece of software and hence invalid, if so, what MS should do is to simplify windows so spyware can be identified and removed easily, there are so many undocumented dark corners on windows that even them have a hard time (rootkits).. MS gave not much to the comunity, XML-RPC, BXXP were already there. If they gave it away it is surely because they did not see a way to use it as a weapon. If MS is so keen on being good they have a good chance to be , they just have to update winXP on SP3 in a way in which IE (and the rest of the internet tools) are made optional and can be uninstalled easily. Me thinks not that MSN is the most used chat… I don’t know if there is one most used chat client. However, I do know that SKYPE has become EXTREMELY popular, and many people use it for chatting. “IE won the wars because it was flat out blew everything else out of the water at the time, not because it was bundled.” *LOL* And it has nothing to do that it came bundled? Funny, because on MacOS, where IE4 normally was offered alongside with Netscape, it never became the leading browser. And how do you explain the market dominance of IE nowadays? According to your superiority theory, IE shouldn’t be present at all because it has lost it’s lead a long time ago. So why is it still the dominant browser? Market dominance of IE is of course related to bundling and tying software to the OS (you can’t call something you can’t remove “bundling”, can you?). It has been proven numerous times that people don’t care about technical superiority. The masses are lazy and most of them will use whatever comes preinstalled especially under Windows where Software installation is a pain in the ass compared to cleanly architectured systems and especially since IE and the IE icon manages to pop up all the time despite changing the default and removing the icon from the desktop. Of course it is! That’s something that MS has understood all along, while open source developers think it is a dirty word. Look at most of the comments here. They’re written from the point of view of open source developers (mostly wannabe’s, I suspect). Old arguments about the evils of “embrace and extend” and laments about the future of 3rd-party developers are trotted out. But, look around, how many people have been paying attention? The only people who would choose a software products based on their concern about such issues are geeky open source clones. And, that’s fine. Just don’t fail to notice that the other 98 percent of the human race doesn’t care and is doing something different. It is the product that counts, not the ideology of the people who built it. Rather than whine about MS and RSS, the Linux community ought to asking itself why it couldn’t get itself organized enough to “integrate” RSS a year ago and score all the PR. (That would’ve required speaking with a single voice and getting the distributions and Gnome and KDE, etc., to all agree to do the same thing at the same time — something Linux is woefully unable to do.) IE WAS much better than everything else at the time it came out. It was one of the fastest browsers you could get, for one. And also, software installation is WAY easier in Windows than in Linux. Macs are the best, hands down.… Microsoft is adding RSS to WINDOWS itself … just like my documents or my music or any of those features. and Microsoft is trying to HELP developers … they are not trying to mononpolize the market. The extensions they are adding are good ones, like list-support. classic rss is time based but things like wishlists are not so they wanted to add support for them. also they didn’t wan’t for every rss developer to have to reinvent the wheel … so what if i want to write an rss app i don’t have to care about 0.91 or 1.0 or 2.0 or 2.0+ms i ca just write my app against their framework … the rss integration is simply a usefull example of how extensible the rss platform in windows will be. channel 9 is great for seeing these things from the developers side. they where ahead of their times, and i guess it was based on prorietary binary solutions (and therefor ie could not access netscape stuff and so on). allso, rss fits nicely into the allways on internet world that dsl and cable have given us. finaly, this is still pull, only that you can now set the software to go out, look for changes and then report back. earlyer versions that did so would have to look at the whole website and would maybe miss something or trigger on the change of a gif banner being replaced.
https://www.osnews.com/story/10977/windows-rss-something-good/
CC-MAIN-2021-31
refinedweb
3,849
71.85
Eclipse Community Forums - RDF feed Eclipse Community Forums Re: b3 - resolver model <![CDATA[Copied from an email: >> On Sep 17, 2009, at 10:12 PM, Thomas Hallgren wrote: >> >> On 09/17/2009 01:57 AM, Henrik Lindberg wrote: >> - In Buckminster, there are flags for mutable, source on readers, and >> regexp pattern matching on component names to control the routing. In >> b3, this is instead moved to a regular OSGi type filter. The OSGi >> filter does not have the pattern matching capabilities of a regexp, but >> wildcard at the end, or middle seems to be what everyone is using, so >> this should be fine. > I've been thinking some more about this. Perhaps the OSGi filter is too > limited in it's pattern matching capabilities after all. Or, if not > limited, then at least somewhat obscure. I think people are used to two > types of pattern matching. The regular shell type matching (similar to > what Ant uses) and the regexp. Agree. >> Experience shows that even very experienced developers tend to make >> mistakes with regexp. Forgetting to escape the dot for instance, is very >> common. The most pragmatic approach to pattern matching is probably to >> do use the same approach as Ant. Developers, especially in the build >> domain, are very familiar with that. > I can see us do this in two ways. > 1. We change the model so that the pattern matching is separate from > other types of filters. > 2. Since we have our own OSGi implementation, we can add pattern > matching operator. > > I would opt for #1 since OSGi is a standard. It doesn't feel kosher to > add our own operators to it. Agree, here as well. I will reintroduce my earlier design that allows a composition of different types of filters. (For newcomers who do not know what I am talking about - there will be models posted soon). >> - I modeled that the IResolver takes more that one Filter. I don't know >> if that is a good idea, but it would allow easy composition of shared >> filters for common namespaces; i.e. just add a filter for (or >> (b3.namspace=osgi.bundle)(b3.namespace=eclipse.feature)) instead of >> having to first compose a string with all constraints. Idea being to >> reduce number of instances. For convenience sake, this can just as >> easily be done in a filter factory (a useful implementation is in >> Buckminster). >> > My preference would be one single filter. Ease of composition is > something that needs to be addressed by tooling one way or another. > Yes, agree - especially since the earlier design allowed composition of filters - no need to have an additional way to compose them. - henrik]]> Henrik Lindberg 2009-09-18T10:54:03-00:00
http://www.eclipse.org/forums/feed.php?mode=m&th=180005&basic=1
CC-MAIN-2015-11
refinedweb
449
66.03
This data2xml is a data to XML converter with a nice interface (for NodeJS). This package hasn't been significantly changed for a while. It is considered stable (it is not stagnant). Please use with all your might and enjoy it.data xml data2xml datatoxml js2xml jstoxml json2xml jsontoxml A very simple library that converts a simple data structure into XML. Doesn't support namespaces or attributes, just very simple output. This allows you to write JSON and XML from the same structure without a middle-man. It's written in Coffeescript but the JS is included.xml json json2xml js2xml data2xml data libxml A complete, bidirectional, JXON (lossless JavaScript XML Object Notation) library. Packed as UMD. Implementation of Mozilla's JXON code. Head over to MDN for Documentation.xml json jxon bidirectional loseless badgerfish parker-convention xml-to-js xml2js xml-to-json xml2json js-to-xml js2xml json-to-xml json2xml We have large collection of open source products. Follow the tags from Tag Cloud >> Open source products are scattered around the web. Please provide information about the open source projects you own / you use. Add Projects.
https://www.findbestopensource.com/tagged/json2xml
CC-MAIN-2019-35
refinedweb
189
59.8
Building a web-based application can be one of the most challenging tasks for a development team. Web-based applications often encompass functionality and data pulled from multiple IT systems. Usually, these systems are built on a wide variety of heterogeneous software and hardware platforms. Hence, the question that the team always faces is how to build web applications that are extensible and maintainable, even as they get more complex. Most development teams attack the complexity by breaking the application into small manageable parts, which can be communicated via a well-defined interface. Generally, this is done by breaking the application logic into three basic tiers: the presentation tier, business logic tier, and data access tier. By layering the code into these three tiers, the developers isolate any changes made in one tier from the other application tiers. However, simply grouping the application logic into three categories is not enough for medium to large projects. When coordinating a web-based project of any significant size, the application architect for the project must ensure that all the developers write their individual pieces to a standard framework that their code will "plug" into. If they do not, the code base for the application will be in absolute chaos, because multiple developers will implement their own pieces using their own development style and design. The solution is to use a generalized development framework, which has specific plug-in points for each of the major pieces of the application. However, building an application development framework from the ground up entails a significant amount of work. It also commits the development team to build and support the framework. Framework support forces the development team to exhaust those resources that could otherwise be used for building applications. The next three chapters of this book will introduce the reader to a readily available alternative for building their own web application development framework, the Apache Jakarta Group's Struts development framework. These chapters do not cover every minute detail associated with the Struts development framework; instead, they guide readers on how to use Struts to build the JavaEdge appication, described in Chapter 1. This chapter is going to focus on installing the Struts framework, configuring it, and building the first screen in the JavaEdge application. We cover the following topics in this chapter: In addition to our brief Struts configuration tutorial, we are going to discuss how Struts can be used to build a flexible and dynamic user interface. We will touch briefly some, but not all, of the customer JSP tag libraries available to the Struts developer. Some of the tag libraries that will be covered in this chapter include: Let's begin our discussion with some of the common problems faced while building an application. The JavaEdge application that we are going to develop, is a very simple WebLog (that is, a Blog) that allows the end users to post their stories and comment on the other stories. We have already discussed the requirements of the JavaEdge application in Chapter 1 in the section called The JavaEdge Application. The application is going to be written completely in Java. In addition, all the technologies used to build this application will be based on technology made available by the Apache Group's Jakarta project. In this section, we'll focus on some of the architectural requirements needed to make this application extensible and maintainable. This application is built by multiple developers. To enforce consistency and promote code reuse, we will use an application development framework that provides plug-in points for the developers to add their individual screens and elements. The framework used should alleviate the need for the individual JavaEdge developer to implement the infrastructure code that is normally associated with building an application. Specifically, the development framework should provide: The chosen development framework must provide the scaffolding in which the application is to be built. Without this scaffolding, AntiPatterns like the Tier Leakage and Hardwired AntiPatterns will manifest themselves. (These two AntiPatterns were introduced in Chapter 1.) We will demonstrate how Struts can be used to refactor these AntiPatterns in this chapter. Now, let's start the discussion on the architectural design of the JavaEdge application. The development team decided to use a Model-View-Controller (MVC) pattern, as the basis for the application architecture. The three core components of the MVC pattern, also know as a Model-2 JSP pattern by Sun Microsystems, are shown below: The numbers shown in the diagram represent the flow in which a user's request is processed. When a user makes a request to an MVC-based application, it is always intercepted by the controller (Step 1). The controller acts as a traffic cop, examining the user's request and then invoking the business logic necessary to carry out the requested action. The business logic for a user request is encapsulated in the model (Step 2). The model executes the business logic and returns the execution control back to the controller. Any data to be displayed to the end user will be returned by the model via a standard interface. The controller will then look up, via some metadata repository, how the data returned from the model is to be displayed to the end user. The code responsible for formatting the data to be displayed to the end user, is called the view (Step 3). View contains only the presentation logic and no business logic. When the view completes formatting the output data returned from the model, it will return execution control to the controller. The controller, in turn, will return control to the end user who made the call. The MVC pattern is a powerful model for building applications. The code for each screen in the application consists of a model and a view. Neither of these components has explicit knowledge of the other's existence. These two pieces are decoupled via the controller, which acts as intermediary between these two components. The controller assembles, at run time, the required business logic and the view associated with a particular user request. This clean decoupling of the business and presentation logic allows the development team to build a pluggable architecture. As a result, new functionality and methods to format end-user data can easily be written, while minimizing the chance of any changes disrupting the rest of the application. New functionality can be introduced into the application by writing a model and view and then registering these items to the controller of the application. Let's assume that you have a web application whose view components are JSP pages generating HTML. If you want to rewrite this application to generate PDF files rather than the HTML for the user's requests, you would only need to modify the view of the application. The changes you make to the view implementation will not have an impact on the other pieces of the application. In a Java-based web application, the technology used to implement an MVC framework might look as shown below: An MVC-based framework offers a very flexible mechanism for building web-based applications. However, building a robust MVC framework infrastructure requires a significant amount of time and energy from your development team. It would be better if you could leverage an already existing implementation of an MVC framework. Fortunately, the Struts development framework is a full-blown implementation of the MVC pattern. In the next section, we are going to walk through the major components of the Struts architecture. While Struts has a wide variety of functionalities available in it, it is still in its most basic form an implementation of a Model-View-Controller pattern. The Struts development framework (and many of the other open source tools used in this book) is developed and managed by the Apache Software Foundation's (ASF) Jakarta group. The ASF has its roots in the Apache Group. The Apache Group was a loose confederation of open source developers, who in 1995, came together and wrote the Apache Web Server. (The Apache Web Server is the most popular web server in use and runs over half of the web applications throughout the world.) Realizing that the group needed a more formalized and legal status, for protecting their open source intellectual property rights, the Apache Group reorganized as a non-profit corporation – the Apache Software Foundation – in 1999. The Jakarta group is a subgroup within the ASF, which is responsible for managing Java open source projects that the ASF is currently sponsoring. The Struts development framework was initially designed by Craig R. McClanahan. Craig is a prolific open source developer, who is also one of the lead developers for another well-known Jakarta project, the Tomcat servlet container. He wrote the Struts framework to provide a solid underpinning for quickly developing JSP-based web applications. He donated the initial release of the Struts framework to the ASF, in May 2002. All of the examples in this book are based on Struts release 1.0.2, which is the latest stable release. It is available for download from. There is currently a new release of Struts in beta testing, Release 1.1b. By the time this book is published, Release 1.1 will have been released, which supports all of the Struts features discussed in this book. When relevant, new features and functionality from Release 1.1b will be highlighted and discussed. With this brief history of Struts, let's walk through how a Struts-based application works. Earlier in this chapter, we discussed the basics of the MVC pattern, on which the Struts development framework is based. Now, let's explore the workflow that occurs when an end user makes a request to a Struts-based application. The diagram overleaf illustrates this workflow: We are going to start our discussion with the end user looking at a web page (Step 1). This web page, be it a static HTML page or a JavaServer Page, contains a variety of actions that the user may ask the application to undertake. These actions may include clicking on a hyperlink or image that takes them to another page or submitting an online form that is to be processed by the application. All actions that are to be processed by the Struts framework will have a unique URL mapping (that is, /execute/*) or file extension (that is, *.do). This URL mapping or file extension is used by the servlet container to map all the requests over to the Struts ActionServlet. The Struts ActionServlet acts as the controller for the Struts MVC implementation. The ActionServlet will take the incoming user request (Step 2) and map it to an action mapping defined in the struts-config.xml file. The struts-config.xml file contains all of the configuration information needed by the Struts framework to process an end- user's request. An is an XML tag defined in the struts-config.xml file that tells the ActionServlet the following information: Once the controller has collected all of the above information from the element for the request, it will process the end user's request. If the element indicates that the end user is posting the form data that needs to be validated, the ActionServlet will direct the request to the defined ActionForm class (Step 3). An ActionForm class contains a method called validate(). (The configuration code examples, given later in this chapter, may help you to understand this discussion better.) The validate() method is overridden by the application developer and holds all of the validation logic that will be applied against the data submitted by the end user. If the validation logic is successfully applied, the user's request will be forwarded by the ActionServlet to the Action class for processing. If the user's data is not valid, an error collection called ActionErrors is populated by the developer and returned to the page where the data was submitted. If the data has been successfully validated by the ActionForm class or the does not define an ActionForm class, the ActionServlet will forward the user's data to the Action class defined by the action mapping (Step 4). The Action class has three public methods and several protected ones. For the purpose of our discussion, we will consider only the perform() method of the Action class. This method, which is overridden by the application developer, contains the entire business logic necessary for carrying out the end-user request. Once the Action has completed processing the request, it will indicate to the ActionServlet where the user is to be forwarded. It does this by providing a key value that is used by the ActionServlet to look up from the action mapping. The actual code used to carry out a forward will be shown in the section called Configuring the homePageSetup Action Element. Most of the times, the user will be forwarded to a JSP page that will display the results of their request (Step 5). The JSP page will render the data returned from the model as an HTML page that is displayed to the end user. (Step 6) In summary, a typical web screen, based on the Struts development framework, will consist of: Now that we have completed a conceptual overview of how a single web page in a Struts application is processed, let's look at how a single page from the JavaEdge Blog is written and plugged into the Struts framework. Before diving into the basics of Struts configuration, we need to enumerate the different pieces of the JavaEdge application's source tree. The JavaEdge Blog is laid out in the following directory structure: The root directory for the project called waf. There are several key directories underneath it, as discussed below: The JavaEdge application is built, tested, and deployed with the following software: Tomcat is an implementation of the Sun Microsystems' Servlet and JSP specification. It is considered by Sun Microsystems to be the reference implementation for its specifications. The JavaEdge application is built and deployed around Tomcat. In Chapter 5, the open source application server bundle, JBoss 3.0.4/Tomcat 4.1.12, is used to run the application. Tomcat is available for download at. JBoss is an open source J2EE application server produced by the JBoss Group. It can be downloaded at. MySQL is chosen, because it is one of the most popular open source databases available today. It is highly scalable and extremely easy to install and configure. It is available for download at. Versions 1.5 of the Apache Software Foundation's Ant build utility. It can be downloaded at. A Java-based Open Source search engine. It can be downloaded at. It is discussed in detail in Chapter 7. An alternative JOS development framework from the Jakarta Apache Group. Both Lucene and Velocity are discussed in greater detail in Chapter 6. We will start the JavaEdge Struts configuration, by configuring our application to recognize the Struts ActionServlet. Configuring the ActionServlet Any application that is going to use Struts must be configured to recognize and use the Struts ActionServlet. Configuring the ActionServlet in the web.xml file involves setting up two XML tag elements: The tag that is used to configure the ActionServlet for the JavaEdge application is shown below: MemberFilter com.wrox.javaedge.common.MemberFilter MemberFilter /execute/* action org.apache.struts.action.ActionServlet application ApplicationResources config /WEB-INF/struts-config.xml debug 2 detail 0 validate true validating true 2 action /execute/* default.jsp Anyone who is familiar with Java Servlet configuration will realize that there is nothing particularly sophisticated going on here. The and tags define a filter, written by the JavaEdge team, that checks if the user has logged into the application. If the user has not logged in yet, they will automatically be logged in as an anonymous user. This filter is called every time the Struts ActionServlet is invoked. The tag defines all the information needed to use the Struts ActionServlet in the JavaEdge application. The provides a name for the servlet. The tag indicates the fully qualified Java class name of the Struts ActionServlet. The Struts ActionServlet is highly configurable. The parameters shown in the above configuration are just some of the initialization parameters that can be used to control the behavior of the ActionServlet. More details about the parameters are provided in the table overleaf: Once the element has been configured in the web.xml file, we need to define how the user requests are going to be mapped to the ActionServlet. This is done by defining a tag in the web.xml file. The mapping can be done in one of two ways: In URL prefix mapping, the servlet container examines the URL coming in and maps it to a servlet. The for the JavaEdge application is shown below: ... action /execute/* This servlet mapping indicates to the servlet container that any request coming into the JavaEdge application that has a URL pattern of /execute/* should be directed to the ActionServlet (defined by the shown above) running under the JavaEdge application. For example, if we wanted to bring up the home page for the JavaEdge application we would point our browser to, where JavaEdge is the application name, execute is the URL prefix, and homePageSetup is the Struts action. The servlet container, upon getting this request, would go through the following steps: The second way to map the user's request to the ActionServlet is to use extension mapping. In this method, the servlet container will take all URLs that map to a specified extension and send them to the ActionServlet for processing. In the example below, all of the URLs that end with a *.st extension will map to the Struts ActionServlet: action *.st If we used extension mapping to map the user's requests to the ActionServlet, the URL to get the JavaEdge home page would be, where JavaEdge is the application name, homePageSetup is the Struts action, and .st is the extension. For the JavaEdge application, being built in the next four chapters, we will be using the URL prefix method (this is the best practice for setting up and pre-populating the screens). Configuring the homePageSetup Action Element As the servlet configuration is completed for the JavaEdge application, let's focus on setting up and implementing our first Struts action, the homePageSetup action. This action sends the user to the JavaEdge home page. However, before the user actually sees the page, the action will retrieve the latest postings from the JavaEdge database. These postings will then be made available to the JSP page, called homepage.jsp. This page displays the latest ten stories in a summarized format and allows the user to log in to JavaEdge and view their personal account information. In addition, the JavaEdge reader is given a link to see the full story and any comments made by the other JavaEdge readers. To set up the homePageSetup action, the following steps must be undertaken: It is important to note that Struts follows all of Sun Microsystems' guidelines for building and deploying a web-based applications. The installation instructions, shown here, can be used to configure and deploy Struts-based application in any J2EE-compliant application server or servlet container. Setting up your first struts-config.xml file is a straightforward process. This file can be located in the WEB-INF directory of the JavaEdge project, downloaded from Wrox web site (). The location of the struts-config.xml file is also specified in the config attribute, in the web.xml entry of the ActionServlet. The struts-config.xml file has a root element, called : ... All actions, for the JavaEdge application, are contained in a tag called . Each action has its own tag. To set up the homeSetupAction, we would add the following information to the struts-config.xml file: An action has a number of different attributes that can be set. In this chapter, we will only be concerned with the path, type, and unknown attributes of the element. The other element attributes are discussed in the Chapter 3. Let's now discuss the above-mentioned attributes briefly. Holds the action name. When an end user request is made to the ActionServlet, it will search all of the actions defined in the struts-config.xml and try to make a match, based on the value of the path attribute. Holds the fully qualified name of the Action class. If the user invokes the URL shown in the above bullet, the ActionServlet will instantiate an Action class of type com.wrox.javaedge.struts.homepage.HomePageSetupAction. This class will contain all of the logic to look up the latest ten stories, which are going to be displayed to the end user. Can be used by only one element in the entire struts-config.xml file. When set to true, this tag tells the ActionServlet to use this element as the default behavior, whenever it cannot find a path attribute that matches the end user's requested action. This prevents the user from entering a wrong URL and, as a result, getting an error screen. Since the JavaEdge home page is the starting point for the entire application, we set the homePageSetup action as the default action for all unmatched requests. If more than one tag has its unknown attribute set to true, the first one encountered in the struts-config.xml will be used and all others will be ignored. If the unknown attribute is not specified in the tag, the Struts ActionServlet will take it as false. The false value simply means that Struts will not treat the action as the default action. An tag can contain one or more tags. A tag is used to indicate where the users are to be directed after their request has been processed. It consists of two attributes, name and path. The name attribute is the name of the forward. The path attribute holds a relative URL, to which the user is directed by the ActionServlet after the action has is completed. The value of the name attribute of the tag is a completely arbitrary name. However, this attribute is going to be used heavily by the Action class defined in the tag. Later in this chapter, when we demonstrate the HomePageSetupAction class, we will find out how an Action class uses the tags for handling the screen navigation. When multiple tags exist in a single action, the Action class carrying out the processing can indicate to the ActionServlet that the user can be sent to multiple locations. Sometimes, you might have to reuse the same across multiple tags. For example, in the JavaEdge application, if an exception is raised in the data-access tier, it is caught and rewrapped as a DataAccessException. The DataAccessException allows all exceptions raised in the data access tier, to be handled uniformly by all of the Action classes in the JavaEdge application. (Refer to Chapter 4 for the exception handling). When a DataAccessException is caught in an Action class, the JavaEdge application will forward the end user to a properly formatted error page. Rather than repeating the same tag in each Struts action defined in the application, you can define it to be global. This is done by adding a tag, at the beginning of the struts-config.xml file: .... The tag has one attribute called type, which defines the ActionForward class that forwards the user to another location. Struts is an extremely pluggable framework, and it is possible for a development team to override the base functionality of the Struts ActionForward class, with their own implementation. If your development team is not going to override the base ActionForward functionality, the type attribute should always be set to org.apache.struts.action.ActionForward. After the tag is added to the struts-config.xml file, any Action class in the JavaEdge application can redirect a user to systemError.jsp, by indicating to the ActionServlet that the user's destination is the system.error forward. Now let's discuss the corresponding Action class of the homePageSetup, that is, HomePageSetupAction.java. Building HomePageSetupAction.java The HomePageSetupAction class, which is located in src/java/com/wrox/javaedge/struts/homepage/HomePageSetupAction.java file, is used to retrieve the top postings made by JavaEdge users. The code for this Action class is shown below: package com.wrox.javaedge.struts.homepage; org.apache.struts.action.ActionErrors; import org.apache.struts.action.ActionError; import com.wrox.javaedge.story.*; import com.wrox.javaedge.story.dao.*; import com.wrox.javaedge.common.*; import java.util.*; /** * Retrieves the ten latest posting on JavaEdge. */ public class HomePageSetupAction extends Action { /** * The perform() method comes from the base Struts Action class. We * override this method and put the logic to carry out the user's * request in the overriding method. * @param mapping An ActionMapping class that will be used by the Action * class to tell the ActionServlet where to send the end user. * * * @param form The ActionForm class that will contain any data submitted * by the end user via a form. * @param request A standard Servlet HttpServletRequest class. * @param response A standard Servlet HttpServletResponse class. * @return An ActionForward class that will be returned to the * ActionServlet indicating where the user is to go next. */ public ActionForward perform(ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response) { try { /** * Create a Story Data Access Object and use it to retrieve * all of the top stories. */ StoryDAO storyDAO = new StoryDAO(); Collection topStories = storyDAO.findTopStory(); //Put the collection containing the top stories into the request request.setAttribute("topStories", topStories); } catch(DataAccessException e) { System.out.println("Data access exception raised in HomePageSetupAction.perform()"); e.printStackTrace(); return (mapping.findForward("system.error")); } return (mapping.findForward("homepage.success")); } } Before we begin the discussion the HomePageSetupAction class, let's have a look at the Command design pattern. The Power of the Command Pattern The Action class is an extremely powerful development metaphor, because it is implemented using a Command design pattern. According to the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides) a Command pattern: A Command pattern lets the developer encapsulate a set of behaviors in an object, and provides a standard interface for executing that behavior. Other objects can also invoke the behavior, but they have no exposure to how the behavior is implemented. This pattern is implemented with a concrete class, abstract class, or interface. This parent class contains a single method (usually named perform() or execute()) that carries out some kind of action. The actual behavior for the requested action is implemented in a child class (which, in our example, is HomePageSetupAction), extending the Command class. The Struts Action class is the parent class in the Command pattern implementation. The diagram below illustrates the relationship between the Action and HomePageSetupAction classes: The use of the Command design pattern is one of reasons why Struts is so flexible. The ActionServlet does not care how a user request is to be executed. It only knows that it has a class that descends from the ActionForm and will have a perform() method. When the end user makes a request, the ActionServlet just executes the perform() method in the class that has been defined in the struts-config.xml. If the development team wants to change the way in which an end user request is processed, it can do it in two ways: either directly modify the logic in the Action class or write a new Action class and modify the struts-config.xml file to point to the new Action class. The ActionServlet never knows that this changed has occurred. Later in this section, we will discuss how Struts' flexible architecture can be used to solve the Hardwired AntiPattern. With this discussion on the Command pattern, let's go back to the HomePageSetupAction class. The first step, in writing the HomeSetupAction class, is to extend the Struts Action class. public class HomePageSetupAction extends Action Next, the perform() method for the class needs to be overidden. (In the Action class source code, there are several perform() methods that can be overridden. Some of these are deprecated. Other methods allow you to make request to Struts from a non-HTTP based call. For the purpose of this book, we will only by dealing with HTTP-based perform() methods.) public ActionForward perform(ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response){ ... } The perform()/execute() method signature takes four parameters: Finds an ActionForward from the struts-config.xml and returns it to the ActionServlet. This ActionForward class contains all the information needed by the ActionServlet to forward the end users to the next page in the application. A helper class that is used to hold any form data submitted by the end user. The ActionForm class is not being used in our HomePageSetupAction class shown earlier. This class will be discussed in greater detail in Chapter 3 and Chapter 4. A standard HttpServletRequest object passed around within the servlet. A standard HttpServletResponse object passed around within the servlet. Now let's look at the actual implementation of the perform() method. StoryDAO storyDAO = new StoryDAO(); Collection topStories = storyDAO.findTopStories(); The first step, carried out by the perform() method, is to instantiate a StoryDAO object and use it to retrieve a Collection object of StoryVO. Each StoryVO object contained within the collection topStories represents a single row of the data retrieved from the story table in the JavaEdge database. The StoryDAO is an implementation of a J2EE Data Access Object (DAO) pattern. This pattern hides all the implementation details of how the data is retrieved and manipulated from the database. The StoryVO object is an implementation of a J2EE Value Object (VO) pattern. A Value Object pattern wraps the data being passed between the different tiers in a simple Java class containing get()or set() methods to access the data. The physical details of the data are abstracted away from the application consuming the data. The DAO and VO design patterns, along with the JavaEdge database tables, will be discussed in greater detail in Chapter 5. After the storyDAO.findTopStories() method is executed, the topStories object will be placed as an attribute of the request object: request.setAttribute("topStories", topStories); The Collection is being placed in the request. As a result, when the ActionServlet forwards this to the homepage.jsp page (as defined in the struts-config.xml file), the homePage.jsp will be able to walk through each item in the topStories Collection and display the data in it to the end-user. Once the story data has been retrieved, an ActionForward will be generated by calling the findForward() method, in the mapping object passed into the perform() method: return (mapping.findForward("homepage.success")); If a DataAccessException is raised, while executing the call to the storyDAO.findTopStories()method, the exception will be caught and processed: try { ... } catch(DataAccessException e) { System.out.println("Data access exception raised in " + "HomePageSetupAction.perform()"); e.printStackTrace(); return (mapping.findForward("system.error")); } You will notice that, when the DataAccessException is caught, the user is redirected to the global forward system.error. We have finished configuring the struts-config.xml file and built an Action class to pre-populate the JavaEdge's home screen with story data. Before we look at the JSP file, homePage.jsp, let's discuss how to refactor the Hardwired AntiPattern. Refactoring the Hardwired AntiPattern The declarative architecture of the Struts development framework provides a powerful tool for avoiding or refactoring a Hardwired antipattern. (Refer to Chapter 1, for the discussion of Hardwired and other antipatterns.) All activities performed by the user, in a Struts-based application should be captured within an tag defined in the struts-config.xml file. Using an tag gives the developer flexibility in the way in which the screen navigation and application of business rules are carried. According to the author's experience, while building a Struts application, elements defined within the application fall into three general categories: Used to perform any activities that take place before the user sees a screen. In the JavaEdge Home Page example, we used the /HomePageSetup action to retrieve the top stories from the JavaEdge database and place them in an attribute in the HttpServletRequest object. Actions that will process the data collected from the end user. Can be invoked after a user's request has been processed. Usually, it carries out any cleanup action, needed after the user's request has been processed. These three types of actions, are purely conceptual. There is no way in the Struts tag, to indicate that the action being defined is a Setup, Form, or Tear-Down action. However, this classification is very useful for your own Struts applications. A Setup action allows you to easily enforce "pre-condition" logic before sending a user to a form. This logic ensures that, before the user even sees the page, certain conditions are met. Setup actions are particularly useful when you have to pre-populate a page with data. In Chapters 3 and Chapter 4, when we discuss how to collect the user data in Struts, we will find several examples of a Setup action used to pre-populate a form. In addition putting a Setup action before a page gives you more flexibility in maneuvering the user around. This Setup action can examine the current application state of the end users, based on which it can navigate them to any number of other Struts actions or JSP pages. A Form action is invoked when the user submits the data entered in an HTML form. It might insert a record into a database or just perform some simple data formatting on the data entered by the user. A Tear-Down action is used to enforce "post-condition" logic. This logic ensures that after the user's request has been processed, the data needed by the application is still in a valid state. Tear-Down actions might also be used to release any resources, previously acquired by the end-user. As you become more comfortable with Struts, you will prefer chaining together the different actions. You will use the Setup action enforce pre-conditions that must exist when the user makes the initial request. The Setup action usually retrieves some data from a database and puts it in one of the different JSP page contexts (that is, page, request, session, or application context). It then forwards the user to a JSP page that will display the retrieved data. If the JSP page contains a form, the user will be forwarded to a Form action that will process the user's request. The Form action will then forward the user to a Tear Down action that will enforce any post-condition rules. If all post-conditions rules are met, the Tear Down action will forward to the next JSP page the user is going to visit. It's important to note that by using the strategies defined above, you can change application behavior by reconfiguring the struts-config.xml file. This is a better approach than to go constantly into the application source code and modify the existing business logic. With this discussion on the Hardwired antipattern, let's have a look at homepage.jsp, which renders the HTML page that the user will see after the request has been processed. Now we are going to look at how many of the Struts custom JSP tag libraries can be used to simplify the development of the presentation tier. With careful design and use of these tag libraries, you can literally write JSP pages without ever writing a single Java scriptlet. The Struts development framework has four sets of custom tag libraries: We will not be discussing the Struts HTML tag libraries in this chapter. Instead, we will discuss these tags in Chapter 3. Before we begin our discussion of the individual tag libraries, the web.xml file for the JavaEdge application has to be modified, to include the following Tag Library Definitions (TLDs): ... With these TLDs added to the web.xml file, we now begin our discussion by looking at the Struts tags. The template tag libraries are used to break a JSP screen into small manageable pieces, which can easily be customized and plugged into the application. All of the screens in the JavaEdge application are going to be broken into four distinct parts: This section will be displayed at the top of every screen. The first step, in setting up the JavaEdge application to use the templates, is to actually define a template that will be used for the pages. The JavaEdge template is defined in a file named template.jsp. The code for this file is shown below: <%@ taglib uri='/WEB-INF/struts-template.tld' prefix='template' %> The template above sets up an HTML page with four different template plug-in points. Each plug-in is going to allow the individual screen implementing the template to define its own content for that particular plug-in point. These plug-in points can be identified by the tags, embedded throughout the template.jsp file. The homePage.jsp file implements the above template, by using the Struts template tags tag is used to indicate the template that will be used to build the page. In the homePage.jsp file, we are telling the tag to use the template.jsp file. The JSP files that are plugged into the template can be set as the absolute paths based on the application's root (that is, /WEB-INF/jsp) or the paths relative to the where the template.jsp file is located. Since all of the JSP files for the JavaEdge application, are located in one directory, we have chosen not to fully define a relative URL in the individual JSP files. Once we have indicated where the template is located, we can begin plugging in the content we want to display to the end user. The content is going to be plugged into the template through the use of the tag. This tag allows the developer to plug either a literal string value or the contents of a file into the template. To use a literal string value, the direct attribute of the tag must be set to true. For example, in the homePage.jsp file above: The above call will cause the following HTML code to be generated, when the user is directed to the homePage.jsp by the ActionServlet. Today's Top Stories To use the contents of a file in the template, we use the tag without the direct attribute or with the direct attribute set to false. In homePage.jsp, the following code will load the contents of the file homePageContent.jsp and process any JSP code or custom tags from that file appropriately: To summarize, the template tag library has three tags: Now, if you have configured the JavaEdge application based on the download instructions on the Wrox web site (), you should be able to bring up the JavaEdge home page (Remember the URL to bring up this page is. Later on, in the chapter, we will discuss ways to redirect the user to /homePageSetup action, when they first come to the JavaEdge application.). The JavaEdge home page should look like: We are now going to show all the header, content, and footer code that is used throughout the JavaEdge application. We are not going to immediately explain what the code does, because this explanation will be explained as we discuss the other Struts tag libraries. The Screen Header: header.jsp The header.jsp file generates the menu bar above each of the JavaEdge application pages: <%@ Home Page Content: homePageContent.jsp The homePageContent.jsp will generate the output that will be displayed in the middle of the application screen. The HTML generated in this middle section is going to be the majority of the JavaEdge application; the end user will interact Screen Footer: footer.jsp The footer.jsp file generates the blue footer bar at the end of each JavaEdge screen: <%@" %> Well-designed JSP pages use JavaBeans to separate the presentation logic, in the application, from the data that is going to be displayed on the screen. A JavaBean is a regular class that can contain the data and logic. In our home page example, the HomePageSetupAction class retrieves a set of StoryVO classes into a collection and puts them into the session. The StoryVO class is a JavaBean that encapsulates all of the data, for a single story, posted in the Java Edge database. Each data element, stored within a StoryVO object, has a get() and set() method for each property. The code for the StoryVO class is shown below: package com.wrox.javaedge.story; import java.util.Vector; import com.wrox.javaedge.common.ValueObject; import com.wrox.javaedge.member.MemberVO; /** * Holds Story data retrieved from the JavaEdge database. * @todo Need to finish documenting this class */ public class StoryVO extends ValueObject { private Long storyId; private String storyTitle; private String storyIntro; private byte[] storyBody; private java.sql.Date submissionDate; private Long memberId; private MemberVO storyAuthor; public Vector comments = new Vector(); // of type StoryCommentVO public Long getStoryId() { return storyId; } public void setStoryId(Long storyId) { this.storyId = storyId; } public String getStoryTitle() { return storyTitle; } public void setStoryTitle(String storyTitle) { this.storyTitle = storyTitle; } public String getStoryIntro() { return storyIntro; } public void setStoryIntro(String storyIntro) { this.storyIntro = storyIntro; } public String getStoryBody() { return new String(storyBody); } public void setStoryBody(String storyBody) { this.storyBody = storyBody.getBytes(); } public java.sql.Date getSubmissionDate() { return submissionDate; } public void setSubmissionDate(java.sql.Date submissionDate) { this.submissionDate = submissionDate; } public Vector getComments() { return comments; } public void setComments(Vector comments) { this.comments=comments; } public MemberVO getStoryAuthor() { return storyAuthor; } public void setStoryAuthor(MemberVO storyAuthor) { this.storyAuthor = storyAuthor; } } // end StoryVO The JSP specification defines a number of tags that give the developer the ability to manipulate the contents of a JavaBean. The Struts bean tag library offers a significant amount of functionality beyond that offered by the standard tag libraries. The functionality provided by the bean tag library can be broken into two broad categories of functionality: We are going to begin with the most common use of the Struts bean tag, the retrieval and display of data from a JavaBean. Bean Output There are two bean tags available for generating output in the Struts bean library. They are: The tag retrieves a value from a JavaBean and writes it to the web page being generated. Examples of this tag can be found throughout the homePageContent.jsp file. For example, the following code will retrieve the value of the property (storyTitle) from a bean, called story, stored in the page context: To achieve the same result via a Java scriptlet would require the following code: <% StoryVO story = (StoryVO) pageContext.getAttribute("story"); if (story!=null){ out.write(story.getStoryTitle()); } %> The tag supports the concept of the nested property values. For instance, the StoryVO class has a property called storyAuthor. This property holds an instance of a MemberVO object. The MemberVO class contains the data about the user who posted the original story. The homePageContent.jsp page retrieves the values from a MemberVO object, by using a nested notation in the tag. For instance, to retrieve the first name of the user who posted one of the stories to be displayed, the following syntax is used: In the above example, the write tag is retrieving the storyAuthor by calling story.getStoryAuthor() and then the firstName property by calling storyAuthor.getFirstName() The tag has the following attributes that can be configured: The second type of tag for generating output is the Struts tag. The tag is used to separate the static content from the JSP page in which it resides. All the contents are stored in a properties file, independent of the application. The properties file consists of a name-value pair, where each piece of the text that is to be externalized is associated with a key. The tag will use this key to look up a particular piece of text from the properties file. To tell the name of the properties file to the ActionServlet, you need to make sure that the application parameter is set in the web.xml file. The properties file, called ApplicationResources.properties, is placed in the classes directory underneath the WEB-INF directory of the deployed application. In the JavaEdge source tree, the ApplicationResources.properties file is located in working directory/waf/src/web/WEB-INF/classes (where the working directory is the one in which you are editing and compiling the application source). For the purpose of the JavaEdge application, an tag must be configured as shown below: ... application ApplicationResources The static content for the JavaEdge application has not been completely externalized using the functionality. Only the header.jsp file has been externalized. The following example, taken directly from header.jsp, will return the complete URL for the JavaEdge login page: When this tag call is processed, it will retrieve the value for the javaedge.header.logout key from the ApplicationResources.properties file. All of the name-value pairs from the ApplicationResources.properties file used in the header.jsp are shown below: javaedge.header.title=The Java Edge javaedge.header.logout=<a href="/JavaEdge/execute/LogoutSetup">Logout</a> javaedge.header.myaccount=<a href="/JavaEdge/execute/MyAccountSetup">My Account</a> javaedge.header.postastory=<a href="/JavaEdge/execute/postStorySetup">Post a Story</a> javaedge.header.viewallstories=<a href="/JavaEdge/execute/ViewAllSetup">View All Stories</a> javaedge.header.signup=<a href="/JavaEdge/execute/signUpSetup">Sign Up</a> javaedge.header.search=<a href="/JavaEdge/execute/SearchSetup">Search</a> If the tag cannot find this key in the ApplicationResources.properties file, the tag will throw a runtime exception. The tag has the following attributes: Let's have an interesting discussion on the Tight Skins antipattern, before moving on to bean creation. The Tight Skins Antipattern Recollecting our discussion in Chapter 1, the Tight Skins antipattern occurs when the development team does not have a presentation tier whose look and feel can be easily customized. The Tight Skins antipattern is formed, when the development team embeds the static content in the JSP pages. Any changes to the static content result in having to hunt through all of the pages in the application and making the required changes. As we saw above, the tag can be used to centralize all the static content, in an application, to a single file called ApplicationResources.properties. However, the real strength of this tag is that it makes it very easy to write internationalized applications that can support multiple languages. The JavaEdge header toolbar is written to support only English. However, if you want the JavaEdge's header toolbar to support French, you need to follow the following steps: HttpSession session = request.getSession(); session.setAttribute(org.apache.struts.action.Action.LOCALE_KEY, new java.util.Locale(LOCALE.FRENCH, LOCALE.FRENCH) ); Struts stores a Locale object, in the session, as the attribute key org.apache.struts.action.Action.LOCALE_KEY. Putting a new Locale object (which is instantiated with the values for French) will cause Struts to reference the ApplicationResources_fr.properties file, for the time for which the user's session is valid. (Or at least, until a new Locale object, containing another region's information is placed in the user's session.) Bean Creation Struts offers a number of helper tags (bean creation tags), for creating the JavaBeans to be used within a JSP page. With these tags, a number of tasks can be carried out within the JSP page, without the need to write Java scriptlet code. These tasks include: The table below gives a brief summary of the different creation tags available: We have not used any of the bean creation tags in the JavaEdge application. There is simply no need to use them for any of the pages in the application. Also, in the author's opinion, most of the bean creation tags can be used in an Action class using Java code. According to the author's experience, the overuse of the bean creation tags can clutter up the presentation code and make it difficult to follow. For full details on the bean creation tags you can visit the following URLs: (which lists all of the bean tags and provides an explanation of all their attributes) and (which shows several working examples for each of the tags). The logic tag library gives the developer the ability to add conditional and iterative control to the JSP page, without having to write Java scriptlets. These tags can be broken into three basic categories: Iteration Tags The logic tag library has a single tag, called , which can be used to cycle through a Collection object in the JSP page context. Recollect that in the HomePageSetupAction class, a collection of StoryVO objects is placed into the request. This collection holds the latest ten stories posted to the JavaEdge site. In the homePageContent.jsp page, we cycle through each of the StoryVO objects in the request, by using the tag: ... In the above code snippet, the tag looks up the topStories collection in the request object of the JSP page. The name attribute defines the name of the collection. The scope attribute defines the scope, in which the tag is going to search for the JavaBean. The type attribute defines the Java class that is going to be pulled out of the collection. In this case, it is StoryVO. The id attribute holds the name of the JavaBean, which holds a reference to the StoryVO pulled out of the collection. When referencing an individual bean in the tag, we will use the tag. The name attribute of the tag must match the id attribute defined in the . Keep in mind the following points, while using the tag: Conditional Tags The Struts development framework also provides a number of tags to perform basic conditional logic. Using these tags, a JSP developer can perform a number of conditional checks on the common servlet container properties. These conditional tags can check for the presence of the value of a piece of data stored as one of the following types: For instance, in the header.jsp, the Struts conditional and tags are used to determine which menu items are available for a JavaEdge end user. If the user has been successfully authenticated, there will be a JavaBean, called memberVO, present in the user's session. (The code that actually authenticates the user and places a memberVO class in session is located in the LoginAction.java class. If you are interested in seeing the code please review it in this class.) This JavaBean contains all of the user's personal information and preferences. Let's look at a code snippet from the header.jsp: In the JSP above code, a column containing a link to the login URL will be rendered only if the JavaEdge user has not yet logged into the application. The checks the user's session to see if there is a valid memberVO object present in the session. The tag in the above code checks if there is a memberVO object in the user's session. If there is one, a column will be rendered containing a link to the logout page. The and tags are extremely useful, but, in terms of applying the conditional logic, are extremely blunt instruments. Fortunately, Struts provides us with a number of other conditional logic tags. Suppose that the user authentication scheme was changed and the JavaEdge application set a flag indicating that the user was authenticated, by placing a value of true or false in a cookie called userloggedin. You could rewrite the above code snippet, as follows, to use the and tags: We can even use the and tag to check a property in a JavaBean. For instance, we could rewrite the authentication piece of the JavaEdge application to set an attribute (called authenticated) in the memberVO object to a hold a string value of true or false. We could then check the property in the memberVO JavaBean using the following code: When applying the conditional logic tags against a property on a JavaBean, keep two things in mind: There are some other conditional logic tags available. These include: Checks if the value retrieved from a JavaBean property, HttpServletRequest parameter, or HTTP header is greater than the value stored in the value attribute of the tag. Checks if the value retrieved from a JavaBean property, HttpServletRequest parameter, or HTTP header value is less than the value stored in the value attribute of the tag. Checks if the value retrieved from a JavaBean property, HttpServletRequest parameter, or HTTP header value is greater than or equal to the value stored in value attribute of the tag. Checks if the value retrieved from a JavaBean property, HttpServletRequest parameter, or HTTP header value is less than or equal to the value stored in the value attribute of the tag. The logic tags shown above will try to convert the value they are retrieving to a float or double and perform a numeric comparison. If the retrieved value cannot be converted to a float or double, these tags will perform the comparisons based on the string values of the items being retrieved. Here's another interesting discussion about the Tight Skins antipattern. The Tight Skins AntiPattern Revisited A common requirement, for many web applications, is to provide a different look and feel for the same screen(s), depending on who the user is. Many development teams will embed the conditional checks in the JSP code of the application to determine which piece of the screen is to be rendered for the user. However, embedding the conditional logic into every page, for each different user role, is very shortsighted solution. In applications with more than two user roles, it becomes very cumbersome to implement role-based presentation. The JSP code has tags spread all over it and becomes a nightmare to maintain. This is a Tight Skin antipattern, because customizing the look and feel of the page to a particular class of the user becomes very difficult. The JSP code for checking the user's role becomes tightly intertwined with the JSP code rendering the HTML page. However, using the Struts and tags, we can simplify role-based presentation. Also, this makes it very easy to have an application that can support different looks and feels for the same screens. Let's look at a simple example of using the and tags for role-based presentation in the JavaEdge application. Suppose, in the JavaEdge website, we want to provide different headers, footers, and content based on whether the visitor of the JavaEdge site is a registered member or an anonymous user. For the register member, we might want to have a different presentation interface, which provides more functionalities and features than that available to a non-registered member. We could rewrite the homePage.jsp file to perform the following logic: <%@ JavaEdge application has two user roles, anonymous and member. In more sophisticated applications, where we might have several different roles, we can customize the look and feel of each screen to a specific user using the above approach. Every time we need to add a role, we modify the base template for each screen to include new plug-in points for the header, screen content, and footer JSP files specific to that role. By using Struts templates and performing the conditional checks in the template file for each page, we partition the presentation code for each role. Each role has its own JSP files for rendering the application screens. This makes maintaining the JSP code for the individual role easier and lessens the risk that modifications made for one role break the user interface for all the other roles. Movement Tags These logic tags, in the Struts tag library, offer the developer the ability to redirect the user to a new URL. The two movement logic tags are: Let's see how these two tags can be used. To bring up the JavaEdge application, the user needs to point the browser to. This forces the users to know they have to do the /homePageSetup action. An easier solution would be to allow them to go to. In a non-Struts based application, this could be accomplished by setting up a tag in the application's web.xml file. This tag allows you to define the default JSP or HTML file, which is presented when the user comes to the application and does not define a specific page. However, this is a problem for Struts applications. The allows the developer to specify only file names and not URLs or Struts actions. However, using the movement logic tags provides you with the ability to work around this shortcoming. First, we will demonstrate a solution using a tag. We still need to set up the tag in web.xml file of the JavaEdge. We are going to set up a file, called default.jsp, for the default file to be executed. ... default.jsp Next, we add a new tag, called default.action, to tag in the struts-config.xml file of the JavaEdge. ... The last step is to write the default.jsp file. This file contains two lines of code: <%@ taglib uri="/WEB-INF/struts-logic.tld" prefix="logic" %> We can perform the same functionality with the tag. If we implement the default.jsp using a tag, we still need to setup default.jsp in the web.xml file. However, we do not need to add another tag to the located in struts-config.xml. Instead, we just need to write the default.jsp in the following manner: <%@ taglib uri="/WEB-INF/struts-logic.tld" prefix="logic" %> The above code will generate a URL relative to the JavaEdge application (). We are not restricted, while using the , to redirect to a relative URL. We can also use a fully qualified URL and even redirect the user to another application. For instance, we could rewrite the default.jsp as follows: <%@ taglib uri="/WEB-INF/struts-logic.tld" prefix="logic" %> Using both the and the is equivalent to calling the redirect() method in the HttpSessionServletResponse class in the Java Servlet API. The difference between the two tags is the that the tag will let you forward only to a defined in the struts-config.xml file. The tag will let you redirect to any URL. The tag has a significant amount of functionality. However, we have had a brief introduction to what the tag can do. A full listing of all the attributes and functionalities of this tag can be found at. In this chapter, we explored the basic elements of a Struts application and how to begin using Struts to build the applications. To build a Struts application we need to know: We also identified some different areas where Struts can be used to refactor the web antipatterns, that might form during the design and implementation of the web-based applications. Refactoring of the following antipatterns was discussed: We looked at how to chain together Struts actions, to perform the pre-condition, form processing, and post-condition logic. This segregation of the business logic into the multiple applications provides a finer control over the application of the business logic and makes it easier to redirect the user to different Struts actions and JSP pages. While examining this antipattern, we looked at how to use the and tags to implement role-based presentation logic. This chapter lays the foundation for the material covered in Chapters 3 and Chapter 4. In the next chapter, we are going to cover how to implement web-based forms using the Struts form tags. We will also look at the Struts HTML tag library and how it simplifies form development. Finally, the next chapter will focus on how to use the Struts ActionForm class, to provide a common mechanism for validating user data and reporting validation errors back to the user. Introduction
https://flylib.com/books/en/2.46.1/creating_a_struts_based_mvc_application.html
CC-MAIN-2018-09
refinedweb
9,998
54.32
How to implement OpenID authentication and integrate it with TurboGears Identity Note from the original author of this article: A few days after I wrote this page, I found this article: Seems a better way, although I have not digged into. Just for info. What is OpenID? OpenID is an authentication mechanism favouring single-sign-on on the web. If your website implements OpenID authentication (as a client), your site doesn't need to store passwords and ask for simple registration information of your users. If someone has an OpenID (anybody can get an OpenID for free by registering at OpenID provider sites like), he can directly login through this, and your site can access his information. More information about OpenID can be found at: Integrating OpenID with TurboGears Identity Here we are going to discuss about how to integrate OpenID (client part) with TurboGears identity management. It is easy to integrate OpenID authentication with the identity framework in a TurboGears application with some tricks. But before we understand how the integration would work, we must understand some TurboGears identity basics. Let's understand what exactly happens when we call a controller method requiring authentication. Say you have a controller method like this: @expose() @identity.require(identity.not_anonymous()) def some_url(self, param1, param2) return "Hi " + param1 + param2 To call this, you need to invoke¶m2=y. Now, if you are not authenticated, you will be redirected to the login page, where you give your user name and password. When you press the "Login" button in the login form, what is actually invoked is:¶m2=y&user_name=your_name& password=your_password&login=Login. (all on one line) You might like to have a look at login.kid to have an understanding on this. Now let's talk about an interesting rule TurboGears follows. Whenever TurboGears sees an url having user_name, password and login parameters, it removes these parameters, after using them if needed. So, if you invoke:¶m2=y&user_name=your_name& password=your_password&login=Login (all on one line again) after the authentication taking place, what actually some_url will see is only param1 and param2. And, if the authentication fails, you will be redirected to the login page again. Having understood this, let's now have a minimal understanding how OpenID authentication works in general. Very briefly, OpenID authentication is done in two steps: - After getting the OpenID of the user, you call his OpenID site with his name and some other parameters. Let's name the method to call the OpenID site login_begin. - From the OpenID site, you receive authentication information and additional data. Let's name the method to receive this data login_finish. Start to integrate So, to integrate TurboGears identity and OpenID authentication, we need to do the following: Change the login form to post to login_begin instead of ${previous_url}. Your login.kid will now have: <form action="/login_begin" method="POST"> Introduce previous_url as a hidden field, so that its value is preserved. Add this line to the login form: <input type="hidden" name="previous_url" value="${previous_url}"/> Change the id and name of the user_name field to openid_url: <input type="text" id="openid_url" name="openid_url"/> Change the type of password field to hidden: <input type="hidden" id="password" name="password"/> - Write the method login_begin. - Write the method login_finish. In login finish, if OpenID authentication succeeds, you need to set a random password for the user. You may not be able to digest all this now, until you see the tutorial and read through the source code given below. Tutorial - Creating a TurboGears Application with OpenID support Follow these steps below to have an OpenID enabled TurboGears application. This tutorial uses SQLAlchemy and sqlite. - Install SQLAlchemy and sqlite (with pysqlite) on your machine if not already installed. - Install the python library for OpenID support from here. Download the combo pack - latest version. (This code was tested using Python OpenID 1.2.0 Combo and works well with leading OpenID servers, although I am not aware which specification of OpenID it implements.) Create a TurboGears application by the command: tg-admin quickstart -i -s -t tgbig Specify project name and package name as tgopenid. In root.py of the controllers package, ensure that the User class is imported from model.py by having the line: from tgopenid.model import User For OpenID support, we need some imports and utility functions. These are described below. Have these just above the Root class in root.py: ######################################################### # Added for OpenID support ######################################################### import turbogears from turbogears import flash from pysqlite2 import dbapi2 as sqlite from openid.consumer import consumer from openid.store import sqlstore from openid.cryptutil import randomString from yadis.discover import DiscoveryFailure from urljr.fetchers import HTTPFetchingError # Utility functions def _flatten(dictionary, inner_dict): """ Given a dictionary like this: {'a':1, 'b':2, 'openid': {'i':1, 'j':2}, 'c': 4}, flattens it to have: {'a':1, 'b':2, 'openid.i':1, 'openid.j':2, 'c':4} """ if dictionary.has_key(inner_dict): d = dictionary.pop(inner_dict) for k in d.iterkeys(): dictionary[inner_dict +'.' + k] = d[k] def _prefix_keys(dictionary, prefix): " Prefixes the keys of dictionary with prefix " d = {} for k, v in dictionary.iteritems(): d[prefix + '.' + k] = v return d def _get_openid_store_connection(): """ Returns a connection to the database used by openid library Is it needed to close the connection? If yes, where to close it? """ return sqlite.connect("openid.db") def _get_openid_consumer(): """ Returns an openid consumer object """ from cherrypy import session con = _get_openid_store_connection() store = sqlstore.SQLiteStore(con) session['openid_tray'] = session.get('openid_tray', {}) return consumer.Consumer(session['openid_tray'], store) def _get_previous_url(**kw): """ if kw is something like {'previous_url' : 'some_controller_url', 'openid_url' : 'an_openid.myopenid.com', 'password' : 'some_password', 'login' : 'Login', 'param1' : 'param1' 'param2' : 'param2' } the value returned is:? user_name=an_openid.myopenid.com& password=some_password&login=Login¶m1=param1¶m2=param2 (on a single line) """ kw['user_name'] = kw.pop('openid_url') previous_url = kw.pop('previous_url') return turbogears.url(previous_url, kw) Inside the Root controller class, at the bottom, write the code for login_begin and login_finish as below:@expose() def login_begin(self, **kw): if len(kw['openid_url']) == 0: # openid_url was not provided by the user flash('Please enter your openid url') raise redirect(_get_previous_url(**kw)) oidconsumer = _get_openid_consumer() try: req = oidconsumer.begin(kw['openid_url']) except HTTPFetchingError, exc: flash('HTTPFetchingError retrieving identity URL (%s): %s' \ % (kw['openid_url'], str(exc.why))) raise redirect(_get_previous_url(**kw)) except DiscoveryFailure, exc: flash('DiscoveryFailure Error retrieving identity URL (%s): %s' \ % (kw['openid_url'], str(exc[0]))) raise redirect(_get_previous_url(**kw)) else: if req is None: flash('No OpenID services found for %s' % \ (kw['openid_url'],)) raise redirect(_get_previous_url(**kw)) else: # Add server.webpath variable # in your configuration file for turbogears.url to # produce full complete urls # e.g. server.webpath="" trust_root = turbogears.url('/') return_to = turbogears.url('/login_finish', _prefix_keys(kw, 'app_data')) # As we want also to fetch nickname and email # of the user from the server, # we have added the line below req.addExtensionArg('sreg', 'optional', 'nickname,email') req.addExtensionArg('sreg', 'policy_url', '') redirect_url = req.redirectURL(trust_root, return_to) raise redirect(redirect_url) @expose() def login_finish(self, **kw): """Handle the redirect from the OpenID server. """ app_data = kw.pop('app_data') # As consumer.complete needs a single flattened dictionery, # we have to flatten kw. See flatten's doc string # for what it exactly does _flatten(kw, 'openid') _flatten(kw, 'openid.sreg') oidconsumer = _get_openid_consumer() info = oidconsumer.complete(kw) if info.status == consumer.FAILURE and info.identity_url: # In the case of failure, if info is non-None, it is the # URL that we were verifying. We include it in the error # message to help the user figure out what happened. flash("Verification of %s failed. %s" % \ (info.identity_url, info.message)) raise redirect(_get_previous_url(**app_data)) elif info.status == consumer.SUCCESS: # Success means that the transaction completed without # error. If info is None, it means that the user cancelled # the verification. # This is a successful verification attempt. # identity url may be like # strip it to yourid.myopenid.com user_name = info.identity_url.rstrip('/').rsplit('/', 1)[-1] # get sreg information about the user user_info = info.extensionResponse('sreg') u = User.get_by(user_name=user_name) if u is None: # new user, not found in database u = User(user_name=user_name) if user_info.has_key('email'): u.email_address = user_info['email'] if user_info.has_key('nickname'): u.display_name = user_info['nickname'] u.password = randomString(8, "abcdefghijklmnopqrstuvwxyz0123456789") try: u.flush() except Exception, e: flash('Error saving user: ' + str(e)) raise redirect(turbogears.url('/')) app_data['openid_url'] = user_name app_data['password'] = u.password raise redirect(_get_previous_url(**app_data)) elif info.status == consumer.CANCEL: # cancelled flash('Verification cancelled') raise redirect(turbogears.url('/')) else: # Either we don't understand the code or there is no # openid_url included with the error. Give a generic # failure message. The library should supply debug # information in a log. flash('Verification failed') raise redirect(turbogears.url('/')) To test your program, add a method as below: @expose() @identity.require(identity.not_anonymous()) def whoami(self, **kw): u = identity.current.user return "\nYour openid_url: " + u.user_name + \ "\nYour email_address: " + u.email_address + \ "\nYour nickname: " + u.display_name + \ "\nThe following parameters were supplied by you: " + str(kw) - Change login.kid as discussed in the previous section. - Add session_filter.on = True under the global section in app.cfg. OpenID implementation needs session support. - Add server.webpath="" under the global section in dev.cfg. It is needed to build full urls in login_begin. You need a database, called openid_store for OpenID to run. This typically should be different from your application database. To create an OpenID database, run the createstore.py script given below in the project root directory (wherever you have dev.cfg): # createstore.py from pysqlite2 import dbapi2 as sqlite from openid.store import sqlstore con = sqlite.connect('openid.db') store = sqlstore.SQLiteStore(con) store.createTables() In model.py, increase the size of the user_name field in users_table from 16 to 255: Column('user_name', Unicode(255), unique=True), - Create the database for your application by tg-admin sql create. - Test your application! An obvious test case is to try Notes - The sample application is at tgopenid.tar.gz - If you develop an application using OpenID, it might be time consuming while testing the authentication with a live OpenID server like. To save time, you may like to run server.py at python-openid-x.x.x/examples folder in the python library you downloaded from and run it using the command python server.py --port 7999. Then, while logging in from your application, you can use OpenID url as. References - - - - - - Past comments: localhost 2007-05-28 08:13:34 Attached a tg-admin command 'createopenidstore' (loooking for a file configured as 'openid_store', and if this does not exist runs the command given as createstore.py in 10. above).
http://docs.turbogears.org/1.0/RoughDocs/OpenIDWithIdentity
crawl-002
refinedweb
1,756
51.65
we have created our own dll file with common operations and a repository. It is meant to be used by other test suites. In the source code of the c# files, I added c# documentation comments to all the public classes and methods like this: I've also activated the generation of doc comments (at least I think so) in the settings of the project. Code: Select all /// <summary> /// Returns an instance of the designer and connects to the zeb//control server defined in the central database. /// Will create (start) a new designer instance if none exists. /// </summary> public void getDesignerWithLogin() { /*...*/ } The resulting xml file is copied along with the dll file when the library is referenced from a test suite. However, when I'm using methods from the dll file, the doc comments are not shown. How can I fix this? Thanks! Best, Matthias
https://www.ranorex.com/forum/how-do-you-include-c-doc-comments-in-dll-files-t6027.html
CC-MAIN-2020-40
refinedweb
145
63.9
I did check with a fresh 2.6 xxI did check with a fresh 2.6 xxJeremy Hylton wrote: What if you used a special object that would produce a useful errorI like this. Make it a singleton, and put it in the global namespace for Scripts, so that we can write: message if the user tries to access the container. if context is Inaccessible: # Do without access to context I've checked in the changes to the 2.6 branch, 2.7 branch and the head to change the binding behavior for 'container' and 'context': - If the user does not have access to the item, the script will bind an UnauthorizedBinding object instead of the real object, rather than throw an exception at binding time. - Any attribute or item access on the UnauthorizedBinding will throw an Unauthorized, including the name of the binding that the user didn't have access to. The result is that if you have scripts where the script container is inaccessible to the users of the script: - If the script does not reference 'container' in its code, things will work without any action on the part of the site admin - If the script *does* reference 'container' then a meaningful Unauthorized error will be raised. Site admins can either give users the appropriate roles on the script container or give appropriate proxy roles to the scripts to fix any problems. Note that I *didn't* put the UnauthorizedBinding in the script globals to implement the Inaccessible idea above, because: - it is kind of 'featurish', at least in that it really should have some associated documentation etc. - I want to make only absolutely necessary changes at this point and get 2.6.4 and 2.7.0 finalized. If any of the Plone folk who have been running into this issue can try the changes from cvs, I'd appreciate it. thx, Brian Lloyd [EMAIL PROTECTED] V.P. Engineering 540.361.1716 Zope Corporation _______________________________________________ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - ) _______________________________________________ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - )
https://www.mail-archive.com/zope-dev@zope.org/msg15000.html
CC-MAIN-2017-39
refinedweb
355
59.74
Next: Integer Overflow, Previous: Varieties of Unportability, Up: Portable C and C++ [Contents][Index] in GNU gnulib, Legacy Functions in GNU gnulib, and Glibc Functions in GNU gnulib. Please help us keep the gnulib list as complete as possible. exit On ancient hosts, exit returned int. This is because exit pred The C standard says a call free (NULL) does nothing, but some old systems don’t support this (e.g., NextStep). isinf isnan In C99 and later, isinf and isnan are. In C99 and later, int isnan_f (float x) { return x != x; } static int isnan_d (double x) { return x != x; } static int isnan_ld (long double x) { return x != x; } #endif #ifndef isinf # define isinf(x) \ (sizeof (x) == sizeof (long double) ? isinf_ld (x) \ : sizeof (x) == sizeof (double) ? isinf_d (x) \ : isinf_f (x)) static int isinf_f (float x) { return !isnan (x) && isnan (x - x); } static int isinf_d (double x) { return !isnan (x) && isnan (x - x); } static int isinf_ld (long double x) { return !isnan (x) && isnan (x - x); } #endif Some optimizing compilers mishandle these definitions, but systems with that bug typically have many other floating point corner-case compliance problems anyway, so it’s probably not worth worrying about. malloc The C standard says a call malloc (0) is implementation dependent. It can return either NULL or a new non-null pointer. The latter is more common (e.g., the GNU C Library) but is by no means universal. AC_FUNC_MALLOC can be used to insist on non- NULL (see Particular Functions). putenv Posix prefers setenv to putenv; among other things, putenv is not required of all Posix implementations, but setenv is. The C standard says a call realloc (NULL, size) is equivalent to malloc (size), but some old systems don’t support this (e.g., NextStep). signalhandler Normally signal takes a handler function with a return type of void, but some old systems required int instead. Any actual int value In C99 and later, if the output array isn’t big enough and if no other errors occur, snprintf and vsnprintf trunc The C standard says sprintf and vsprintf return the number of bytes written. On some ancient systems (SunOS 4 for instance) they return the buffer pointer instead, but these no longer need to be worried about. sscanf On various old systems, e.g., HP-UX 9, sscanf requires that its input string be writable (though it doesn’t actually change it). This can be a problem when using gcc since it normally puts constant strings in read-only memory (see Incompatibilities of GCC in Using and Porting the GNU Compiler Collection). Apparently in some cases even having format strings read-only can be a problem. strerror_r Posix specifies that strerror_r returns an int, but many systems (e.g., GNU C Library version 2.2.4) provide a different version returning a char *. AC_FUNC_STRERROR_R can detect which is in use (see Particular Functions). strnlen AIX 4.3 provides a broken version which produces the following results: is standard, but some older systems (e.g., HP-UX 9) have _SC_PAGE_SIZE instead. This can be tested with #ifdef. unlink The Posix spec says that unlink causes the given file to be removed only after there are no more open file handles for it. Some non-Posix hosts have trouble with this requirement, though, and some DOS variants even corrupt the file system. unsetenv On MinGW, unsetenv is not available, but a variable ‘FOO’ can be removed with a call putenv ("FOO="), as described under putenv above. va_copy C99 and later provide va_copy for copying va_list variables. It may be available in older environments too, though possibly as __va_copy (e.g., gcc in strict pre-C99 mode). These can be tested with #ifdef. A fallback to memcpy (&dst, &src, sizeof (va_list)) gives maximum portability. va_list va_list is not necessarily just a pointer. It can be a struct (e.g., gcc on Alpha), which means NULL is not portable. Or it can be an array (e.g., gcc in some PowerPC configurations), which means as a function parameter it can be effectively call-by-reference and library routines might modify the value back in the caller (e.g., vsnprintf in the GNU C Library 2.1). >> Normally the. / C divides signed integers by truncating their quotient toward zero, yielding the same result as Fortran. However, before C99 the standard allowed C implementations to take the floor or ceiling of the quotient in some cases. Hardly any implementations took advantage of this freedom, though, and it’s probably not worth worrying about this issue nowadays. Next: Integer Overflow, Previous: Varieties of Unportability, Up: Portable C and C++ [Contents][Index]
http://buildsystem-manual.sourceforge.net/Function-Portability.html
CC-MAIN-2017-17
refinedweb
772
66.03
Visual Studio Tools 2.3.1 with precompiled headers - eos pengwern last edited by I'm building a brand new VS2017 project with the VS addin version 2.3.1 installed. My project is set up to use precompiled headers; and my source files are being imported, having previously been successfully compiled in VS2015 and MinGW. In particular, each of my .cpp files which implements a class for which Q_OBJECT is defined contains the line #include "moc_myClassName.cpp" ...at the end, where of course 'myClassName' is a placeholder for whatever the actual name of the file is. The Qt addin correctly generates the custom build line for each header containing Q_OBJECT so that moc is run and the appropriate moc_*.cpp file is generated. Here is where the problems start, though. What should happen is that, when the moc_*.cpp file is included at the end the class's own .cpp file, because the first line of that file is #include "stdafx.h" ...the whole file, including the moc_*.cpp part at the end, should be compiled with the precompiled header included. What actually happens is that the compiler goes ahead and tries to compile the moc_*.cpp file in its own right anyway. Because that file doesn't contain #include "stdafx.h" the result is an entirely predictable error message: fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source? I cannot for the life of me see why the project is trying to compile the moc files in their own right, rather than just relying on the fact that they're included with the classes' .cpp files. The moc files are nowhere to be seen in the Solution Explorer, not even under the 'Generated Files' folder. I have delved in the .vcxproj file and I cannot find these moc_*.cpp files listed anywhere amongst the list of files to be compiled. My question, therefore, is: why is the compiler trying to compile moc files which it hasn't been asked to compile and shouldn't be compiling, and how do I stop it from doing so?
https://forum.qt.io/topic/99671/visual-studio-tools-2-3-1-with-precompiled-headers
CC-MAIN-2022-40
refinedweb
361
75
What about "tolerant:"? let x:Int? = myInts[tolerant: 0] Also "lax:" is short let x:Int? = myInts[lax: 0] What about "tolerant:"? let x:Int? = myInts[tolerant: 0] Also "lax:" is short let x:Int? = myInts[lax: 0] This please: let x:Int? = myInts[probe: 0] What fun it is to bikeshed. Would it be possible to just move forward a proposal without a defined name, and let the core team decide from the suggested options (or come up with a new one)? This feels like the kind of thing where we mostly agree on the functionality, but really struggle on the naming. I've come to the unexpected conclusion over time that I do not want to add a range-checked index. I could not find any use-cases outside of sample code that I need this feature for. If you can provide some real world, real code use-cases, I would probably swing around back to being "for" instead of "against". I would disagree pretty strongly with this approach. It's the role of the community to grapple with the naming, which is a key part of designing the language. I merely say this because of the recent Result proposal. That was one of those cases where the functionality initially presented was revised substantially and immediately put forth for re-review. I think what we need at this point is a champion to put forward a written proposal and an accompanying implementation. The recent Result proposal went through at least three rounds of design before review. It was put forward for review without a consensus design in no small part because the core team felt it was important to have the process completed for Swift 5, for reasons they outlined. Therefore, it is an aberration strenously to be avoided without good reason, not a precedent. There are at least two versions of this proposal already written; the implementation isn't going to be a barrier either. Maybe because if you have some random Index value that you're going to splat on some Collection's subscript without any clue (in advance) if said index value is valid, then your algorithm's design is already fundamentally broken?! (Maybe something like this could be added to the commonly-rejected rationale.) There is the earlier case by @jawbroken where s/he had to check a cell's neighbors in a multi-dimensional grid, and nil-returns help avoid special-casing the (literal) edge and corner cases, but most uses are probably not like this. Just run a loop from startIndex to endIndex, or vice-versa for bi-directional collections, and sprinkle in break and/or continue when you don't need to visit every element. Use zip or Sequence.enumerated() when you need to track counts. Due to array slices, an integer index is not necessarily an offset. Maybe we need an alternate subscript to track offsets. I think that has been proposed before too. Though, thinking more about this now, maybe a subscript(_:default:) on Array, roughly matching the one on Dictionary but being get-only, would cover most use cases and also rescue us all from naming concerns. I need this all the time when parsing files, e.g. migrating legacy state for puzzle games, reading column-based dictionary files, etc. I want my parsers to throw when things go wrong instead of crashing. Checking indices works but is not elegant, and one needs to be very careful to keep everything in sync. Note that in order for this to be useful for me, it would need to work not only for Array but also for Data and include not only at-index but also in-range variant. What do you expect to happen if only some of the input range represents valid indexes? Speaking of the three puzzle apps I mentioned upthread, I expect the following: nil Error Say this happens while the app is launching. When it does, instead of crashing the entire app, the user can play all the unaffected puzzles. Currently, the app shows an alert with a "Contact Support" button, with remote logging (where no user interaction needed) as a possible future improvement. This approach allows for better diagnostics compared to (ofter terrible) crash reports, and the user can always play their puzzles. Win-win! For me, the expected result is rather obvious when requesting a range: Return all elements that are in the input range - so if there is no overlap with the valid indexes, just return an empty slice. The following isn't technically "real world", but I felt a need for it with this test I was trying: import Foundation precedencegroup HighPrecedence { higherThan: BitwiseShiftPrecedence } infix operator .? : HighPrecedence extension Array { static func .? (s: Array, i: Int) -> Element? { return s.indices.contains(i) ? s[i] : nil } } extension FixedWidthInteger { init?(optional s: String?) { guard let str = s, let int = Self(str) else { return nil } self = int } } if let args = CommandLine.arguments .? 1 { switch args { case "e": let rc = Int32(optional: CommandLine.arguments .? 2) ?? 0 print("Exit with exit code \(rc)") exit(rc) case "a": print("Abort exit") abort() case "f": fatalError("Fatal Error exit") default: break } } print("Normal exit") Using the .? operator is just an example. I think [?: index] might make sense if the compiler could support it. So, would it be a [Index: Element], mapping the actually valid indexes to their respective value? (I first thought of a (indices: Range<Index>, elements: Subsequence) tuple, but the valid indexes may not be contiguous.) This would require Index to be Hashable.
https://forums.swift.org/t/add-accessor-with-bounds-check-to-array/16871?page=7
CC-MAIN-2019-09
refinedweb
927
63.59
import uuid unique_key = uuid.uuid4() if you using Python. 2010/4/28 Mark Robson <markxr@gmail.com> > 2010/4/26 Roland Hänel <roland@haenel.me>: >> > Typically, in the SQL world we use things like AUTO_INCREMENT columns >> that >> > let us create a unique key automatically if a row is inserted into a >> table. >> > > auto_increment is an antipattern; it adds an extra key which you don't need > (usually). If your data has a natural candidate key, use that. If not, add > extra parts until you have a unique key. > > If you are using OrderPreservingParitioner, it is really important to use > keys which can give you a lexically sortable range to scan when you need to > find them, so that you can use get_range_slice etc. > > There are other approaches however - in some cases it may be possible to > use columns instead of rows there. But you'd still need keys for the > columns. > > A straightforward way of generating unique IDs for your objects is to add > an incrementing ID (managed locally) to the host name of the (client) node > where they were generated. But this is probably not helpful in most cases. > > Mark > -- Shuge Lee | Lee Li | 李蠡
http://mail-archives.apache.org/mod_mbox/incubator-cassandra-user/201004.mbox/%3Cn2q759d8ad1004271904rea0c12fdkd639a1dec3c6244d@mail.gmail.com%3E
CC-MAIN-2017-13
refinedweb
196
66.54
29 August 2008 21:12 [Source: ICIS news] HOUSTON (ICIS news)--Hurricane Gustav could disrupt the US styrenics market if Total is forced to shut down its facility in Carville, Louisiana, sources said on Friday. The site has a capacity of 1.16m tonnes/year of styrene monomer (SM) and 770,000 tonnes/year of polystyrene (PS), according to global chemical market intelligence service ICIS pricing. The site is located along the Mississippi river about 60 miles (100 km) west of ?xml:namespace> A PS distributor said the complex would be shut down for at least five days if Gustav hit the site directly. “That would put a significant dent in the market,” the source said. Some SM trade participants said the market is viewed as oversupplied, but recent and planned turnarounds in the industry have already tightened supply some. A contract PS buyer, citing experience from hurricanes Katrina and Rita in 2005, said any short term production disruptions would not present a problem to converters, provided they had resin already placed in the railway network. PS buyers with a spot purchasing strategy would likely face greater difficulty in the event of a major hurricane strike in “Spot buyers should be worried every year from August to November - they should be growing lots of gray hairs and aging dramatically,” he added. PS buyers said Gustav could also affect the market by shutting down refineries or oil and gas production and driving up the cost of feedstock benzene. Other styrenics facilities in Along the Dow Chemical operates a 60,000 tonne/year styrene plant in (Additional reporting by Brian Balboa) For more on styrenics plants, visit ICIS plants and projects
http://www.icis.com/Articles/2008/08/29/9152853/hurricane-could-tighten-us-styrenics-spot-market.html
CC-MAIN-2014-15
refinedweb
279
55.07
Tips for creating an iOS Framework If you are looking for a general step-by-step tutorial on how to create a simple Cocoa Touch framework I recommend the following article: Creating a Framework for iOS Update note: This tutorial was updated to iOS 12, Xcode 10, and Swift 4.2 by Lorenzo Boaro. The original tutorial was… In this tutorial, I’d like to show you some quick and useful tips which you can apply when designing and developing your iOS frameworks. I include a use-case of working with Objective-C codebase, interfacing with Swift code, as well as error handling. The approaches mentioned below are based on Apple’s official guidelines, as well as on my own experience building Cocoa Touch Frameworks. I’ve made a quick demo with code examples for both the framework and an app with the framework integrated into it: - - Use Namespaces Namespace prefixes for classes serve a purpose of avoiding pollution of the global namespace. Apple recommends using 3-letter prefixes for third-party framework classes, although they do have a convention to use 2-letter prefixes for their own frameworks (like CL for CoreLocation). For our AwesomeFramework I’m going to use AWE prefix. Nullables Nullables were fairly recently introduced to Objective-C for the purpose of bridging pointers to optionals. Unless you specify explicitly, every pointer would turn into a nullable. For example, NSString * would bridge to String? optional in Swift by default. Nullability specifiers are currently enforced as compiler warnings, and to avoid explicitly specifying nullable/_Nullable or nonnull/_Nonnull for every pointer, you can use audited regions. By using NS_ASSUME_NONNULL_BEGIN you only need to explicitly mark only those pointers that are non-mandatory, except a few notable exceptions, like NSError **, which is always assumed nullable. Preview Bridged Interface One of the neat hacks I discovered recently was previewing your bridged Swift interface without building the framework first, which saves a lot of time. To do that, open an Objective-C header and select the four square icon in the top-left corner, then select Counterparts -> Swift Interface. Unfortunately, Swift interface preview doesn’t seem to work reliably on large projects. In this case try to clean your derived data 🤞. Include you framework headers in the umbrella header and make them public This is commonly overlooked, so I figured I’d mention it here as a reminder. What I mean by this is if say you have a class with a header MyClass.h, you need to select that header and make sure that target membership in the right panel is specified as Public. Besides that, don’t forget to include the header itself in your main framework header like this: #include “MyClass.h” Carthage When deciding whether to use Cocoapods or Carthage, I use the following rule of thumb: use pods if it’s a third-party library, and Carthage when it’s a library I have control over. This comes from following the most simple path to managing dependencies. I find it fairly straight-forward to setup Carthage for my own libraries, whereas most common third-party libraries come with pods only. Besides, I prefer simplicity of Carthage over easiness of Cocoapods. Cocoapods are easier to install, but if there is any issue the complexity of it could easily become a hassle. To prepare a Cartage framework, first you need to make your framework scheme shared. To do that, edit your scheme (⌘<) and select Shared. Don’t forget to commit xcshareddata folder in your source control. Now, to use the framework for local testing in your main app, create a Cartfile with the following content: git "" "branch" To integrate a framework in your main app, you can follow these steps. Don’t forget to link the framework in the General tab, like this: Whenever you make changes to your framework, to make them live in the main app, don’t forget to commit changes to source control and run carthage update in the main app. Error Handling This doesn’t relate directly to frameworks per ce, but is more about general API design. When bridging from Objective-C to Swift, it’s important to keep in mind guidelines for error handling in both. In Objective-C a common error handling practice used in Apple’s own frameworks is C-style, i.e. to have methods which populate NSError**, like this: - (NSData * _Nullable)encrypt:(NSString *)message error:(NSError **)error; When you call such method, you’d check if error is nil, like this: NSError *error = nil; NSData *encryptedMessage = [MyClass encrypt:@”test” error:&error]; XCTAssertNil(error); How does this API translate to Swift? In Swift we want to use try/catch blocks. Good news is that the method signature above automatically translates to func encrypt(_ message: String) throws -> Data Pretty neat, huh? The key here is to have the return pointer ( NSData* in this example) as nullable and have the error parameter trailing in the method signature. Now, the bridged method is throwable, and can be handled like this: guard let encryptedMesaage = try? AWEMainClass.encrypt(“test”) else { return } I find this notation particularly neat, as it combines error handling APIs of try/catch with “golden path” approach of guard-else-return. Error Handling Continued How do we actually define and generate errors within our Objective-C framework? First, we need to define an enum with error constants. I suggest using meaningful comments and naming here to help your API users to distinguish between errors. We can use an enum for that: typedef NS_ENUM(NSInteger, AWEError) { /** General error (please don’t abuse it :). */ AWEErrorFail = 1}; Then, generate an error like this: *error = [NSError errorWithDomain:AWEErrorDomain code:AWEErrorFail userInfo:nil];return nil; Do you know other common pitfalls when building Cocoa Touch Frameworks? Feel free to comment on this article. Thank you for reading!
https://medium.com/@nderkach/tips-for-creating-an-ios-framework-691284cd633a?source=---------3----------------------------
CC-MAIN-2021-04
refinedweb
974
52.09
What is the format of the message going back to gpredict "READ" I am getting "P xxx.xx xx.xx" on the data = connection.recv(24).however, I am not successful in sending the stepping motor data back out to the READ so it can display in Gpredict Rotator Control See under protocol: I have a simple python test script that receives the Rotator Az and EL with no problem All I am doing for testing is getting the data from GPredict and sending right back to Gpredict with lower case "p" I get Read: 0.00 on both Az and EL I am not using the hamlib libary at all in the log file I am seeing 2018/04/23 22:03:08|2|2|gtk-rot-ctrl.c:1189: rotctld returned error 40 with az 340.107770 el 0.000000(p 340.11 0.00)2018/04/23 22:03:09|2|2|gtk-rot-ctrl.c:1189: rotctld returned error 40 with az 340.107770 el 0.000000(p 340.11 0.00)2018/04/23 22:03:10|2|2|gtk-rot-ctrl.c:1189: rotctld returned error 40 with az 340.107770 el 0.000000(p 340.11 0.00)2018/04/23 22:03:11|2|2|gtk-rot-ctrl.c:1189: rotctld returned error 40 with az 340.107770 el 0.000000(p 340.11 0.00)2018/04/23 22:03:12|2|2|gtk-rot-ctrl.c:1189: rotctld returned error 40 with az 340.107770 el 0.000000(p 340.11 0.00) import socketimport sys HOST = None # Symbolic name meaning all available interfacesPORT = 4533 # Arbitrary non-privileged ports = Noneforif s is None: print 'could not open socket' sys.exit(1)conn, addr = s.accept()print 'Connected by', addrwhile 1: data = conn.recv(1024) print 'received: %s' % data outdata = string.lower('%s' % data) print("output;" + outdata) conn.send(outdata) if not data: break conn.close() It doesn't matter whether you use hamlib or not. If you want gpredict to talk to your server your server has to implement the rotctld protocol. This includes acknowledging the commands using the RPRT reply. Please see the protocol description I have linked to above. I managed to get the radio control working in a python script but I lost that script awhile ago. (I went digging thru the GPredict code base to do that one) I installed the hamlib and then used the rotctrld to bridge the connection between my application and Gpredict.I just run echo "+\get_pos" | nc -w 1 localhost 4533to get the positionand echo "+\set_pos 180 0" | nc -w 1 localhost 4533in my code to set the position. It is clugy but it works. I did modify the Dummy code in hamlib to allow for 360 -> 0 rotation. When I get my stepping motors and controller in I will do a complete video kn4kuu.com
https://community.libre.space/t/gpredict-python-rotator-control/1972
CC-MAIN-2018-30
refinedweb
485
75.61
The QPointArray class provides an array of points. More... #include <qpointarray.h> Inherits QMemArray<QPoint>. List of all member functions. A QPointArray is an array of QPoint objects. In addition to the functions provided by QMemArray, QPointArray provides some point-specific functions. For convenient reading and writing of the point data use setPoints(), putPoints(), point(), and setPoint(). For geometry operations use boundingRect() and translate(). There is also the QWMatrix::map() function for more general transformations of QPointArrays. You can also create arcs and ellipses with makeArc() and makeEllipse(). Among others, QPointArray is used by QPainter::drawLineSegments(), QPainter::drawPolyline(), QPainter::drawPolygon() and QPainter::drawCubicBezier(). Note that because this class is a QMemArray, copying an array and modifying the copy modifies the original as well, i.e. a shallow copy. If you need a deep copy use copy() or detach(), for example: void drawGiraffe( const QPointArray & r, QPainter * p ) { QPointArray tmp = r; tmp.detach(); // some code that modifies tmp p->drawPoints( tmp ); } If you forget the tmp.detach(), the const array will be modified. See also QPainter, QWMatrix, QMemArray, Graphics Classes, Image Processing Classes, and Implicitly and Explicitly Shared Classes. Constructs a null point array. See also isNull(). Constructs a point array with room for size points. Makes a null array if size == 0. See also resize() and isNull(). Constructs a shallow copy of the point array a. See also copy() and detach(). If closed is FALSE, then the point array just contains the following four points in the listed order: r.topLeft(), r.topRight(), r.bottomRight() and r.bottomLeft(). If closed is TRUE, then a fifth point is set to r.topLeft(). Destroys the point array. Creates a deep copy of the array. See also detach(). Angles are specified in 16ths of a degree, i.e. a full circle equals 5760 (16*360). Positive values mean counter-clockwise, whereas negative values mean the clockwise direction. Zero degrees is at the 3 o'clock position. See the angle diagram. Sets the points of the array to those describing an arc of an ellipse with width w and height h and position (x, y), starting from angle a1, and spanning angle by a2, and transformed by the matrix xf. The resulting array has sufficient resolution for pixel accuracy. Angles are specified in 16ths of a degree, i.e. a full circle equals 5760 (16*360). Positive values mean counter-clockwise, whereas negative values mean the clockwise direction. Zero degrees is at the 3 o'clock position. See the angle diagram. The returned array has sufficient resolution for use as pixels. Assigns a shallow copy of a to this point array and returns a reference to this point array. Equivalent to assign(a). See also copy() and detach(). Returns the point at position index within the array. Returns TRUE if successful, or FALSE if the array could not be resized (typically due to lack of memory). The example code creates an array with three points (4,5), (6,7) and (8,9), by expanding the array from 1 to 3 points: QPointArray a( 1 ); a[0] = QPoint( 4, 5 ); a.putPoints( 1, 2, 6,7, 8,9 ); // index == 1, points == 2 This has the same result, but here putPoints overwrites rather than extends: QPointArray a( 3 ); a.putPoints( 0, 3, 4,5, 0,0, 8,9 ); a.putPoints( 1, 1, 6,7 ); The points are given as a sequence of integers, starting with firstx then firsty, and so on. See also resize(). This version of the function copies nPoints from from into this array, starting at index in this array and fromIndex in from. fromIndex is 0 by default. QPointArray a; a.putPoints( 0, 3, 1,2, 0,0, 5,6 ); // a is now the three-point array ( 1,2, 0,0, 5,6 ); QPointArray b; b.putPoints( 0, 3, 4,4, 5,5, 6,6 ); // b is now ( 4,4, 5,5, 6,6 ); a.putPoints( 2, 3, b ); // a is now ( 1,2, 0,0, 4,4, 5,5, 6,6 ); Example: themes/wood.cpp. This is an overloaded member function, provided for convenience. It behaves essentially like the above function. Sets the point at array index i to p. Writes the point array, a to the stream s and returns a reference to the stream. See also Format of the QDataStream operators. Reads a point array, a from the stream s and returns a reference to the stream. See also Format of the QDataStream operators. This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
http://vision.lbl.gov/People/qyang/qt_doc/qpointarray.html
CC-MAIN-2014-52
refinedweb
759
69.58
Hello there I hope someone will be able to help me with a project. I want to use digital pin 13 (or any pin for that matter) to replace my power switch on my pc. And then short that pin to gnd when the arduino receives an ir signal from remote conceded to pin 4. I then want to include having all remote buttons do something on the pc. A bit like the tutorial on instructables by HactiCs called “IR remote control your laptop with arduino uno” But it uses vb express. and i dont want to use that. Is there a way to make the code run from within another platform like eventghost or python? I cannot seem to find why its not working probably as I’m a noob. Here is my code #include <IRremote.h> const int RECV_PIN = 4; const int redPin = 13; // Define IR Receiver and Results Objects IRrecv irrecv(RECV_PIN);//irrecv is the receiver object, you can use whatever name you want decode_results results; int buttonState = 0; int togglestate = 0; int incomingByte; int state = HIGH; int reading; int previous = LOW; void setup() { Serial.begin(9600); irrecv.enableIRIn(); // Start the receiver pinMode(redPin, OUTPUT); } void loop(){ if (irrecv.decode(&results)) {// irrecv.decode(&results) :Returns true if a code was receive Serial.println(results.value, HEX); if(results.value==0xFF28D7){ digialWrite(redPin, LOW); delay(500); digitalWrite(redPin, HIGH); } delay(300);// this delay is here to avoid the 0xFFFFFFF irrecv.resume (); // Receive the next value } }
https://forum.arduino.cc/t/test-sketch-for-arduino-controlled-pc-power-switch-not-working/530872
CC-MAIN-2021-25
refinedweb
247
64.81
First, the apology: new to C, and fairly green in coding generally. The problem: trying to create a timestamp in C using strftime(). This works: #include <time.h> ... char timestamp[14]; ...[code to define timestruct] strftime(timestamp,15,"%Y%m%d%H%M%S", timestruct); #include <time.h> ... char *timestamp; ...[code to define timestruct] strftime(timestamp,15,"%Y%m%d%H%M%S", timestruct); timestamp Pointers in C require you to allocate space for them on the heap. In the first case, when you declare char timestamp[14], the 14 in the brackets is telling the compiler to allocate 14 bytes on the stack for timestamp. So you can use it for strings shorted than 14 bytes and you'll be fine. In the second case, you declare char *timestamp but don't allocate any memory to it. Thus when strftime tries to write to it, it is writing to invalid memory that it doesn't have access to so you get a seg fault. To fix this, you need to allocate memory to timestamp by saying char *timestamp = malloc(14). This will point timestamp to 14 bytes allocated on the heap that it can use. The issue with this method is you have to free the memory manually by calling free(timestamp) after you are done using it. The first method is preferred if timestamp is a local variable that doesn't need to exist longer than the scope of the function it is in because it will automatically get cleaned up.
https://codedump.io/share/40KzCl9lFr41/1/using-pointer-destination-in-strftime-c
CC-MAIN-2017-13
refinedweb
253
70.63
Sending an integer as first character of a String through network Tamas Gergely Greenhorn Joined: Oct 21, 2012 Posts: 2 posted Oct 21, 2012 16:42:39 0 Hello, I started to write a client-server application, where the server reads data through a BufferedReader , and client sends data through PrintWriter . The idea was to make the first character of the String a special one. This should be a control character, and so its value shouldn't be treated as a character, instead a control function. During some tests I discovered if I set the value of the char 255 or less, then I can send this through the network, however if I set a greater value to it, then some strange value will be seen on the other side. An int is 4 bytes (signed), a char is 2 bytes (unsigned), I decided to use the 2 last bytes of the int only. To illustrate the problem I wrote a test app: Server: import java.net.*; import java.io.*; public class TestServer { public static void main(String[] args) { ServerSocket serverSocket = null; Socket clientSocket; BufferedReader mIn; String inputLine; try { serverSocket = new ServerSocket(4444); clientSocket = serverSocket.accept(); mIn = new BufferedReader( new InputStreamReader( clientSocket.getInputStream())); for (int i = 0; i < 11; ++i) { inputLine = mIn.readLine(); System.out.println("Character from client: " + inputLine); int chAsInt = inputLine.charAt(0); System.out.println("Character from client as an int: " + chAsInt); System.out.println("Character from client (char->int->char): " + (char)chAsInt); } mIn.close(); serverSocket.close(); } catch (IOException e) { System.err.println("Error."); System.exit(1); } } } Client: import java.io.*; import java.net.*; import java.lang.Character; public class TestClient { public static void main(String[] args) throws IOException { Socket kkSocket = null; PrintWriter out = null; BufferedReader in = null; try { kkSocket = new Socket("localhost", 4444); out = new PrintWriter(kkSocket.getOutputStream(), true); in = new BufferedReader(new InputStreamReader(kkSocket.getInputStream())); } catch (UnknownHostException e) { System.err.println("Don't know about host: taranis."); System.exit(1); } catch (IOException e) { System.err.println("Couldn't get I/O for the connection to server."); System.exit(1); } // 1st out.println('A'); out.flush(); // 2nd int int2 = 0x0041; // decimal 65 = char 'A' out.println( (char)int2 ); out.flush(); // 3rd int int3 = 0x0042; // decimal 66 = char 'B' out.println( (char)int3 ); out.flush(); // 4th int int4 = 0x002B; // char '+' out.println( (char)int4 ); out.flush(); // 5th int int5 = 0x007E; // char '~' out.println( (char)int5 ); out.flush(); // 6th int int6 = 0x007F; // decimal 127 = delete char out.println( (char)int6 ); out.flush(); // 7th int int7 = 0x0101; // decimal 127 = delete char out.println( (char)int7 ); out.flush(); // 8th int int8 = 0x0080; // decimal 128 out.println( (char)int8 ); out.flush(); // 9th int int9 = 0x00FF; // decimal 255 out.println( (char)int9 ); out.flush(); // problem starts below.... // 10th int int10 = 0x0100; // decimal 256 out.println( (char)int10 ); out.flush(); // 11th int int11 = 0x0102; out.println( (char)int11 ); // decimal 258 out.flush(); out.close(); in.close(); kkSocket.close(); } } The output of the Server is: tamas@myhost:~/java/server_client_proba$ java TestServer Character from client: A Character from client as an int: 65 Character from client (char->int->char): A Character from client: A Character from client as an int: 65 Character from client (char->int->char): A Character from client: B Character from client as an int: 66 Character from client (char->int->char): B Character from client: + Character from client as an int: 43 Character from client (char->int->char): + Character from client: ~ Character from client as an int: 126 Character from client (char->int->char): ~ Character from client: Character from client as an int: 127 Character from client (char->int->char): Character from client: ? Character from client as an int: 63 Character from client (char->int->char): ? Character from client: Character from client as an int: 128 Character from client (char->int->char): Character from client: ÿ Character from client as an int: 255 Character from client (char->int->char): ÿ Character from client: ? Character from client as an int: 63 Character from client (char->int->char): ? Character from client: ? Character from client as an int: 63 Character from client (char->int->char): ? When I send char with value 255 or less, then the other side (server) receives it well (first 9 cases in the test aplication), however if I send a char with greater value (last 2 cases) then the server received as an (int)63 (wich is hexa 3F, wich is the question mark.) My questions: 1) Why this strange behavior happens? 2) What do you think of the main idea? (Sending 1 special character before normal characters) Thanks for any help. Paul Clapham Bartender Joined: Oct 14, 2005 Posts: 19115 8 I like... posted Oct 21, 2012 17:43:22 0 It looks like the receiver is converting from bytes to chars. (That would be the InputStreamReader , I think.) And when you send a byte which doesn't normally represent text in whatever the default charset is in the receiver's environment, it converts that byte to "?". So I don't recommend sending binary data and then treating it as text, which is what your current scenario does. Sending an int value as control data is a perfectly normal thing to do when you're sending data over a network, but if you're going to do it, then don't treat everything as text. Use a DataOutputStream to send data and a DataInputStream to receive it. Check out the documentation and you'll see they have methods to send and receive bytes, ints, and so on. Paul Clapham Bartender Joined: Oct 14, 2005 Posts: 19115 8 I like... posted Oct 21, 2012 18:02:27 0 And welcome to the Ranch, Tamas Gergely! Tamas Gergely Greenhorn Joined: Oct 21, 2012 Posts: 2 posted Oct 22, 2012 02:12:59 0 Thanks for your answer! On the server side I am using InputStreamReader , so that may be the problem. I will check out the DataStreams as you have suggested. "And welcome to the Ranch, Tamas Gergely!" Thank you! I agree. Here's the link: subject: Sending an integer as first character of a String through network Similar Threads SocketException is not throwing in Windows Vista while network lost. Socket : MultiClient Server can't get this method to work client / server (communication problem...) When the JVM create a new Wrapper and when the JVM use a living one? ( K&B 5.0) All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/595759/sockets/java/Sending-integer-character-String-network
CC-MAIN-2015-06
refinedweb
1,079
66.23
Check out this quick tour to find the best demos and examples for you, and to see how the Felgo SDK can help you to develop your next app or game! The SlotMachine component allows to easily implement a slot machine with multiple reels and rows. It also provides methods to spin or stop the slot machine and lets you access the visible items in each reel and row. More... The SlotMachine component provides a simple way to create a slot machine for games similar to the popular casino slot games. Just have a look at the games of slotomania, Slotpark or Greentube to see such slot games in action! We also used the SlotMachine component ourselves to create a pirate-themed example slot game called Flask of Rum. Click here for an overview of the game features. You will also find further links to try the demo game or to see the full source code. In addition, we provide a complete step-by-step tutorial how you can create the game here: How to Make a Slot Game with Felgo. In contrast to the demo game or the tutorial, the following sections of this documentation focus on different ways of how you can use the component. The examples guide you from creating a very simple slot machine up to implementing more complex scenarios, that even include multiple different columns or random items. In the last section, we already show you the basics for using the component to make a typical slot game. However, for a detailed description about how to create a slot game, we recommend to read the Flask of Rum slot game tutorial. The component is built upon the concept to separate Models and Views in QtQuick. To specify the items that are displayed on each reel, you can simply state your data as the model and set the appearance of the items using a delegate. The component then automatically uses the correct total width and height depending on the size of the items and reels that you add. By default, it comes with three reels (columns of the slot machine) and three rows. The following code describes a simple slot machine. We keep the default setting of three reels and rows and only set the model and delegate properties: import Felgo 3.0 import QtQuick 2.0 SlotMachine { model: 10 delegate: Text { text: index } } You may use every type of data model that is supported by QML, such as: Furthermore, we also provide the unique SlotMachineModel. This model is made especially for the SlotMachine component and allows you to easily set up a typical slot machine for casino slot games. In the example above, we used an integer value as the model. This fills each slot machine reel with ten items. After that, we set the delegate property to define their appearance. For an integer model, the index of the item is available within the delegate. We use a Text element to display the index, but you could also compose a more complex item or use a custom QML item at this point. The slot machine automatically bases its total size on the size of the text element and the number of reels and rows. You can then use methods like spin, stop or getItem to work with the slot machine. As you see, it is very easy to get a basic slot machine up and running. But it might be interesting to use a more complex model for your slot machine. The following example shows a slot machine that utilizes an array model: import Felgo 3.0 import QtQuick 2.0 SlotMachine { // set different reel and row count rowCount: 2 reelCount: 4 // set width hand height for items defaultReelWidth: 40 defaultItemHeight: 40 // use array as model model: ["red", "green", "blue", "grey"] delegate: Rectangle { color: modelData } } This time, we configure the slot machine to have four reels and only two rows. In contrast to the previous example, we set the defaultReelWidth and defaultItemHeight properties to specify the size of the slot machine items. These values are used if no width or height is set in the delegate element, which is the case with our rectangle. Of course, we could also directly set a width and height for the rectangle. In that case, the slot machine then uses the size of the rectangle as the default width and height, unless a different default value is set additionally. Note: As the Text element in the previous example automatically has a width and height based on the text and font, the defaultReelWidth and defaultItemHeight properties don't apply. The specific width and height of a delegate item will always overwrite the default values - even though the default values are used to calculate the total width and height of slot machine. This behavior is necessary, as it is possible to use independent delegates with different item sizes for each reel of a slot machine. You also find such examples at a later point in the documentation. The example above shows the usage of an array model, which is quite similar to the integer model. In addition to the index, we can now also access the modelData variable within the delegate to work with the value of each array item. In our case, we display rectangles on the slot machine reels and color them differently based on the colors in the array. When you want to create a slot machine for your game, you might face a scenario where it is required to implement multiple, individual reels. For example, you might want the first reel to contain some colors, but the second reel should use different colors. Instead of setting a model and a delegate for the whole slot machine, you have the possibility to manually configure each reel with the SlotMachineReel element to achieve this goal: import Felgo 3.0 import QtQuick 2.0 SlotMachine { defaultReelWidth: 40 defaultItemHeight: 30 // configure first reel to show colors SlotMachineReel { model: ["red", "green", "blue", "grey"] delegate: Rectangle { color: modelData } } // configure second reel with different colors SlotMachineReel { model: ["yellow", "cyan", "magenta", "black"] delegate: Rectangle { color: modelData } } } This code creates a slot machine with two reels - each with four different colors. It is common that the items of a slot machine have the same size on every reel, as they should form a clean row when aligned next to each other. As we do not set a fixed size in our delegates, we can use the defaultReelWidth and defaultItemHeight properties to easily set the size of the items for both reels. If no default width and height is set, the component automatically tries to find an item with a fixed size and then uses these values as the default setting. In addition, it is also possible to mix individual reel definitions with automatically generated reels. The following example puts both of these features into action: import Felgo 3.0 import QtQuick 2.0 SlotMachine { id: slotMachine // configure slot machine to use three reels, as only one SlotMachineReel is // defined manually the slot machine automatically creates two additional reels reelCount: 3 // set up slot machine to show colored rectangles per default model: ["red", "green", "blue", "grey"] delegate: Rectangle { color: modelData } // manually configure first reel to only show borders of the rectangle in different colors SlotMachineReel { model: ["yellow", "cyan", "magenta", "black"] delegate: Rectangle { width: 40 height: 25 border.width: 2 border.color: modelData } } } The important part in this example is the setting of the reelCount property in combination with the SlotMachineReel configuration. Every SlotMachineReel element that you add to the slot machine replaces an automatically generated reel. If there are less SlotMachineReel definitions than the desired reelCount, the component tries to generate the remaining reels based on the default model and delegate properties. In our case, we define the first reel with a SlotMachineReel element. It shows the colors yellow, cyan, magenta and black. The component then automatically creates the remaining two reels that contain the colors red, green, blue and grey based on the default values. We also set the delegate of the first reel to only show the borders of the rectangle, whereas the other reels contain filled rectangles. In this example, we didn't specify a default width and height for the items. The slot machine now tries to find an item with a fixed width and height and uses these values instead. The rectangle of our first reel has a fixed size, which is then also applied to the automatically generated reels. Of course, it would also work the other way round. Pretty nice! One part that we didn't cover yet is how to create individual reels that don't share any width or height settings. Also, we may skip the model and delegate definition for SlotMachineReel elements and use the default setting of the slot machine instead. The next example takes care of these aspects: import Felgo 3.0 import QtQuick 2.0 SlotMachine { // configure slot machine to use three reels, the component creates // one additional reel, because only two reels are defined manually reelCount: 3 // set default sizes defaultReelWidth: 50 defaultItemHeight: 25 // set up slot machine to show colored rectangles per default model: ["red", "green", "blue", "grey"] delegate: Rectangle { color: modelData } // set first reel to show different colors and be wider (but use the default delegate) SlotMachineReel { width: 75 model: ["yellow", "cyan", "magenta", "black"] } // set second reel to only show borders and use a different size SlotMachineReel { width: 75 delegate: Rectangle { width: 50 height: 75 border.width: 2 border.color: modelData anchors.horizontalCenter: parent.horizontalCenter } } } This example perfectly shows the flexibility of the slot machine component. The default size for each item of the slot machine is set to 50 x 25. As we use three reels and three rows, the slot machine has a total width of 150 and a height of 75. The default model and delegate describe rectangles in red, green, blue and grey. We then use a SlotMachineReel definition to change the colors of the first reel to yellow, cyan, magenta and black. The appearance of the rectangles matches the default delegate, which is why we do not set a different delegate. But we want the reel to be bigger, so we set the width to 75. The default delegate, that describes the colored rectangle, hasn't set any fixed width. As a result, the rectangles of the first reel are enlarged to fill the bigger reel width. For the second reel, we do not set a model and use the default colors. But we change the delegate to only show the borders of the rectangle. We also set the size of the rectangle to 50 x 75. The height of 75 matches the slot machines height, so each rectangle covers all three slot machine rows. Also, we add a padding to the reel by defining a bigger reel width and centering the smaller rectangles horizontally. This already shows how easy it is to set up a slot machine with default settings, that you you can then overwrite with very little effort to add exceptions. You can also use methods like getReel, addReel and insertReel to modify your SlotMachineReel definitions at a later point during the game. The most common use-case for a slot machine are slot games like the ones you can find in casinos. Such slot machines consist of multiple reels with different symbols in a random order. You win a certain factor of the amount you bet whenever a line of the same symbol appears on the slot machine. The slot machine is set up in a way, that ensures that every symbol occurs multiple times on a reel. Symbols that only give small rewards are more frequent than symbols with big win factors. If you want to make such a slot machine, you need a model that fills your reels with random symbols, which is quite a stressful task. To easily fill up your reels with a randomly ordered set of symbols based on a frequency setting, we provide a special SlotMachineModel: import Felgo 3.0 import QtQuick 2.0 SlotMachine { // slot machine has five reels and three rows reelCount: 5 rowCount: 3 // the default item size is 60 x 50 defaultReelWidth: 60 defaultItemHeight: 50 // use random model based on symbol types and frequencies for each reel model: SlotMachineModel { // syntax: "type": { [frequency: <int>, ][data: <var>] } symbols: { "red": { frequency: 3 }, // 3 x red "green": { frequency: 3 }, // 3 x green "blue": { frequency: 2 }, // 2 x blue "cyan": { frequency: 2 }, // 2 x cyan "magenta": { frequency: 1 }, // 1 x magenta "yellow": { frequency: 1 } // 1 x yellow } } // define appearance of items delegate: Item { Rectangle { color: modelData.type anchors.fill: parent } Text { text: modelData.type anchors.centerIn: parent } } } This slot machine consists of five reels and three rows with a default item size of 60 x 50. Each item is displayed as a rectangle of a certain color, that has the name of the color written in its center. To fill the reels with items, we only define a single model for all the reels. This SlotMachineModel specifies the available symbols and their frequency on each reel. In our case, each reel is randomly filled with three red blocks, three green blocks, two blue blocks, two cyan blocks, one magenta block and one yellow block. Within the delegate, you can access the data object of the symbol in the same way as you would access the value of an array model: modelData.typecontains the symbol name, that matches the property name of your symbol configuration, e.g. "red", "green", "blue", ... modelData.frequencyholds the frequency value of the symbol. modelData.datais a special property that allows passing custom user data to your delegate. You can use the property for custom user data for every purpose you like. For example, if you want to display the slot machine items as images, you could just pass the filename of the image along with the other symbol data and access it with the modelData.data property: import Felgo 3.0 import QtQuick 2.0 SlotMachine { id: slotMachine // slot machine has four reels and two rows reelCount: 4 rowCount: 2 // the default item size is 50 x 50 defaultReelWidth: 50 defaultItemHeight: 50 // { Image { source: "../assets/"+modelData.data anchors.fill: parent } } } You could also use a more complex object instead of a string at this point, the data property is not modified at all and simply passed to the delegate. As you can see, you can easily implement your own slot machine with random reels by configuring the SlotMachineModel and defining how you want the items to look like. You should be ready to design your own slot machine by now, and by calling the spin or stop method you can already watch the symbols line up in random positions. But maybe you are struggling to check if there are matching symbols aligned next to each other after you stopped the slot machine. The next example extends the previous slot machine with a check for matching symbols when it is stopped. We also want to add an animation to the items when they match each other. Let's replace the previous slot machine with the following implementation: import Felgo 3.0 import QtQuick 2.0 SlotMachine { id: slotMachine anchors.centerIn: parent // slot machine has four reels and two rows reelCount: 4 rowCount: 2 // the default item size is 50 x 50 defaultReelWidth: 75 defaultItemHeight: 75 // { // show image Image { id: image source: "../../assets/"+modelData.data anchors.centerIn: parent anchors.fill: parent scale: 0.8 } // configure animation to enlarge the item and shrink it again SequentialAnimation { id: winAnimation // make image bigger NumberAnimation { target: image property: "scale" duration: 250 to: 1.0 } // shrink it again NumberAnimation { target: image property: "scale" duration: 250 to: 0.8 } } // add a function that starts the animation function startWinAnimation() { winAnimation.start() } } // check for matching symbols when a spin ends and start the animations onSpinEnded: { // check every row of the slot machine for(var rowIndex = 0; rowIndex < slotMachine.rowCount; rowIndex++) { // for every row -> go over all reels and count length of matching symbols var length = 0 var firstSymbol = null for(var reelIndex = 0; reelIndex < slotMachine.reelCount; reelIndex++) { // get model data of currently visible item var modelData = slotMachine.getItemData(reelIndex, rowIndex) // memorize type of first symbol if(firstSymbol == null) firstSymbol = modelData.type // increase length if current symbol type matches first symbol of the row if(modelData.type === firstSymbol) { length++ } // or stop if a different symbol occurs else { break } } // end search for matching symbols on the reels // if we found a match -> animate the images of the symbols that won if(length >= 2) { for(var winIndex = 0; winIndex < length; winIndex++) { // get image item of the row var image = slotMachine.getItem(winIndex, rowIndex) image.startWinAnimation() } } // end animate items } // end check every row } // end spin ended handler } We added an animation to the delegate of our items to let them get bigger and back to normal whenever we call the startWinAnimation function. Note: We start with an initial scale of 0.8 to have some space for making the images bigger. If we would use a scale bigger than 1 during the animation, the images would become larger than the intended item height and width and may move into the area of other aligning items or outside the borders of the slot machine. Except for the changes in the delegate in terms of the animation, we only added a signal handler that is called whenever a slot machine spin is stopped. The algorithm takes care of the following tasks: lengthcounter. To successfully implement this algorithm, it is important that we can access the currently visible items of the slot machine. The two functions that the slot machine component provides for this purpose are: This function returns the model data of the item at a specific reel and row position. We can use this to work with the type, frequency or data properties of our SlotMachineModel configuration. In contrast to the getItemData-function, this function returns the actual item instance of our delegate that we can see in the slot machine. We can use this function to access our image and trigger the animation. Of course, your slot machine design and the logic for finding, checking and animating matching symbols might get more complex in your project. But the slot machine component should be a great asset to help you get started with the basic mechanics for your slot game. Also, the tutorial How to Make a Slot Game with Felgo might help you with problems like: One highly interesting topic is how to control the result of a slot machine. In some cases, it might be cool to set the outcome beforehand and watch the symbols line up just the way you wanted to. For this purpose, we provide a special method stopAt. With this method, you can set the desired item index for each reel. These items then appear on the first row when the slot machine stops. Look at this example to see how you can use it: import Felgo 3.0 import QtQuick 2.0 SlotMachine { reelCount: 12 rowCount: 3 defaultReelWidth: 30 defaultItemHeight: 30 model: ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z","-"] delegate: Item { Text { text: modelData font.pixelSize: 16 } } // stop the slot machine at specific positions function vplayRocks() { stopAt([20, 25 , 14, 10, 26, 23, 25, 16, 13, 1, 9, 17]) } } The slot machine consists of three rows and 12 reels, which contain the letters A to Z and a dash. The function vplayRocks then uses the stopAt function of the SlotMachine to specify the outcome for each reel. The first reel will show the letter "U" on the first row, the second reel the letter "Z", ... and so on. If this method is called after we spin the slot machine, we can watch the letters appear one after the other in the way we configured them to. Just try the component yourself and don't hesitate to visit the support forums if you get stuck somewhere along the road! Specifies the default height for all the items on every slot machine reel. If the delegate definition of a SlotMachineReel element does not set a specific height, the default height is used instead. Also, the total height of the slot machine is automatically based on the rowCount and defaultItemHeight property if no fixed height is set. If no defaultItemHeight is specified, the slot machine tries to use the specific height of a reel item as the default height. Specifies the default width for all the reels of the slot machine. If a SlotMachineReel definition does not set a specific width, the default width is used instead. If no defaultReelWidth is set, the component tries to find a reel that has a fixed width or contains items with a fixed width. The default delegate that defines the appearance of the reel items. Unless another delegate is set within a specific SlotMachineReel definition, the default delegate is used. The default model that is used for all reels, if no other model is specified by a SlotMachineReel definition. To create reels with randomly shuffled items, you can use the special SlotMachineModel. Allows to set the number of slot machine reels. If not enough reels are defined by particular SlotMachineReel definitions, the remaining reels are automatically created based on the model, delegate and defaultReelWidth properties of the slot machine. When a spinning slot machine is stopped, each reel is stopped after the previous one using a fixed delay. This property defines the delay time in milliseconds. The default delay is 250 milliseconds. Allows changing the anchoring of the reels. If the slot machine is set up with a fixed width and height, you can use this property to position the reels within the slot machine component. By default, the reels are horizontally and vertically centered. Allows setting the x-position of the reels. If the slot machine is created with a fixed width, you can use this property to set the horizontal reel position within the slot machine component. By default, the reels are horizontally centered. Allows setting the y-position of the reels. If the slot machine is created with a fixed height, you can use this property to set the vertical reel position within the slot machine component. By default, the reels are vertically centered. Allows to set the number of visible rows that the slot machine shows. By default, a slot machine has three rows. Defines the movement speed of the reels when they are spinning. The amount describes the velocity in pixels per second. By default, it is set to 500. Indicates whether the slot machine reels are currently spinning. Indicates whether the slot machine is in the process of stopping the reels. After the stop method is called, the machine stops all of it's reel and emits the reelStopped signal for each reel that successfully stopped moving. After the stop method is called, the machine stops all of it's reel and emits the spinEnded signal after the last reel has fully stopped moving. This signal is emitted when a slot machine spin is started. Appends a new reel to the slot machine after the current last reel. If you want to add a reel without setting any specific reel properties, simply increase the reelCount to generate a reel based on the model, delegate and defaultReelWidth properties of the SlotMachine. Returns the item instance of a specific reel and row of the slot machine. Returns the model data of an item on a specific reel and row of the slot machine. If the SlotMachineModel is used, each item contains a type, frequency and data property based on your model configuration. Returns the item index of a specific reel and row of the slot machine. Returns the complete model for a specific reel of the slot machine. This can be either the specific model of a SlotMachineReel or the default model of the SlotMachine. If the SlotMachineModel was used, the actual symbol array created by the SlotMachineModel is returned. Returns the SlotMachineReel object for a specific reel in the slot machine. You can then use the object to to access the model, delegate and width settings of the reel. Adds a new reel to the slot machine at a specific position. Sets the index for the first item of a slot machine reel. If multiple rows are defined in the slot machine, the topmost item on the first row is considered the first item of the reel. Spins the reels of the slot machine if it is not currently spinning or in the process of stopping. When a spin is successfully started, the spinStarted signal is emitted. Also, the stopInterval parameter may be used to automatically stop the slot machine after the given time in milliseconds has passed. If the slot machine is currently spinning, the reels are stopped one after another based on the reelStopDelay property. When the last reel has stopped, the spinEnded signal is emitted. If the slot machine is spinning, the reels are stopped at a given item index for each reel. The function expects the stopPositions parameter to be an array that holds a target index for each slot machine reel. Updates the model for a specific reel. The items on the reel are redrawn based on the current model setting. For a SlotMachineModel, new items are generated based on the symbol configuration. Updates the models of all reels. The items on the reels are redrawn if the model has changed. For a SlotMachineModel, new items are generated based on the symbol configuration.
https://felgo.com/doc/felgo-slotmachine/
CC-MAIN-2020-40
refinedweb
4,285
61.36
Using opencv to open a IP Cam http stream on Windows I wrote the following Python code: import cv2 cap = cv2.VideoCapture() cap.open("http://<user>:<password>@<ip>/mjpeg.cgi?user=<user>&password=<password&channel=0") while (cap.isOpened()): ret, frame = cap.read() cv2.imshow('frame', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break The code works fine on Linux, but on Windows, I got the following error: $ python renderer.py warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:808) warning: http://<user>:<password>@<ip>/mjpeg.cgi?user=<user>&password=<password&channel=0 (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:809) I suspect there's something wrong with my Windows opencv build, but couldn't figure out what. Please let me know if you have any ideas. Thanks. make sure, opencv_ffmpeg64.dll is on your PATH I do have opencv_ffmpeg340_64.dll in my path. Also, the code works on Windows if I specify a mp4 file instead of an URL. It also works with the built-in webcam if I specify an index of 0 instead of the URL. Thanks.
http://answers.opencv.org/question/189376/using-opencv-to-open-a-ip-cam-http-stream-on-windows/
CC-MAIN-2018-30
refinedweb
184
61.93
I'm trying to make a scrapper that will print out all the house events on this url: But I get back no results with the above code, any idea why? from bs4 import BeautifulSoup import requests headers = {'User-Agent': 'Mozilla/5.0'} response = requests.get("") soup = BeautifulSoup(response.text, "html.parser") results= soup.find_all('div', {"class": "genre_list"}) for result in results: print(result.find('HOUSE').get_text()) You're not looking for the right elements. You'll need to start with looking for a div that has a class holdevents. You then look for the dl attribute containing House. If found, scrape title and dates and add to a list. from bs4 import BeautifulSoup import requests headers = {'User-Agent': 'Mozilla/5.0'} response = requests.get("") soup = BeautifulSoup(response.text, "html.parser") events = soup.find_all('div', {"class": "holdevent"}) house_events = [] for event in events: genre_list = event.find('dl', {"class": "genre_list"}) if genre_list.find(text='House'): title = event.find('h1', {'class' : 'title'}).a.text date = event.find('h1', {'class' : 'nicedate'}).text house_events.append((title, date)) print(house_events) This will fetch you: [('Tropical Disco fueled by Chandon Passion', 'SAT, 22 Jul 2017'), ('West House Crossover Connection VOL.5 -Zakuro 1st Anniversary', 'SAT, 22 Jul 2017'), ('SUBCULTURE', 'SAT, 22 Jul 2017')]
https://codedump.io/share/Ukzwmiw3xxUF/1/beautifulsoup-does-not-return-results
CC-MAIN-2017-39
refinedweb
206
63.86
Unity3D: JavaScript vs. C# – Part 3 Posted by Dimitri | Dec 4th, 2010 | Filed under Programming This is the third part of a series that show some of the differences between JavaScript and C# when writing scripts for Unity3D game engine. I suggest that you read the first and second post of the series to better understand what is going on here. In this third part, I will point out some differences between JavaScript and C# by writing a script that makes a GameObject move forward. So, let’s start with the programming language that will take the smallest number of lines to make a GameObject move, JavaScript: public var goTransform:Transform; private var vel:int = 2;//how fast the game object is being moved function Awake() { //get this GameObject's Transform goTransform = this.GetComponent(Transform); } // Update is called once per frame function Update() { //moves the containing GameObject forward goTransform.position.z = goTransform.position.z + vel; } This script moves the attached game object forward. Note that it is possible to access and increment the goTransform’s position z property, every update cycle and that causes the GameObject to move forward. Let’s see how the same code is written using C#: using UnityEngine; using System.Collections; public class PawnMover : MonoBehaviour { public Transform goTransform; private int vel = 2;//how fast the game object is being moved void Awake() { //get this GameObject's Transform goTransform = this.GetComponent<Transform>(); } // Update is called once per frame void Update() { //returns a CS1612 error goTransform.position.z=goTransform.position.z + vel;//<=returns a CS1612 error //this is the right way to do it when using C# goTransform.Translate(Vector3.forward * vel);//moves the containing GameObject forward } } Here, C# can’t access the goTransform’s position z property, meaning that it can’t be incremented, like the JavaScript version of the above code. Trying to do it generates a CS1612 error, which means that we are trying to change a value directly and not a reference to that value. To avoid this error when writing scripts with C#, we have to use methods to move GameObjects, such as Translate(), Rotate(), RotateAround(), etc. All these methods are public members of the Transform class. That’s it for this post. I hope I have shed some light in the main differences between these two programming languages, when writing scripts for Unity3D game engine. Don’t forget to check 41 Post for the fourth part of this series, where I will show some differences between JavaScript and C# when pausing code execution (a.k.a. yielding), which is available here: Part 4 – Yielding. Isn’t the Transform type missing on the 12th line of C# version? I mean: goTransform = this.GetComponent(); instead of: goTransform = this.GetComponent(); Anyway, very interesting post! Yes, it is! Thanks! Just fixed that. Fail… I meant goTransform = this.GetComponent(); Oh ok inferior/superior brackets are stripped by the posting algorithm… XD coding of drag and drop the object using mouse control
http://www.41post.com/1665/programming/unity3d-javascript-vs-csharp-3
CC-MAIN-2018-26
refinedweb
494
63.49
[Date Index] [Thread Index] [Author Index] basic namespace question, and access to BinarySearch What is the proper way to get access to a package without adding that package to $ContextPath? If I use Needs, the package name is added to $ContextPath. (Of course I can remove items from $ContextPath.) Motivations: 1. I would like a cleaner namespace, and I don't mind using long names. 2. I would like to avoid the following warning when I want access just to Combinatorica`BinarySearch. General::compat: Combinatorica Graph and Permutations functionality has been superseded by preloaded functionality. The package now being loaded may conflict with this. Please see the Compatibility Guide for details. Follow-up questions: 1. Mathematiaca documentation currently suggests loading Combinatorica to get BinarySearch: However if one does so, the above warning is given. It seems to me that this arrangement is buggy. Is it? 2. I am surprised that BinarySearch is not in the Global context. Am I overlooking similar functionality under another name? The functionality I seek is the insertion point into a sorted list that would maintain insertion order, ideally with left/right option for when the inserted item matches a list item. (See Pythons binary_search, for example.) 3. I see that a BinarySearch function is in the GeometricFunctions package, which in turn is available by default. But I cannot find any related documentation. What do I make of this arrangement? Thank you, Alan Isaac
http://forums.wolfram.com/mathgroup/archive/2013/Sep/msg00035.html
CC-MAIN-2015-32
refinedweb
238
50.53
I have been using the typescript JsonServiceClient with servericestack core version 1.0.40. I am using JWT tokens, but was setting the token expiration to a minute to test out the refresh token. When I call Authenticate, where it does an automatic call to "/access-token", the call to Authenticate continues but an error is thrown. Previous calls to Authenticate work fine. "res.json is not a function"stack:"TypeError: res.json is not a function? at JsonServiceClient.handleError. Here is the code import { JsonServiceClient } from 'servicestack-client'; import { Authenticate } from 'dtos'; var client = new JsonServiceClient(""); client.refreshToken = saved.refreshToken; var request = new Authenticate(); request.provider = "credentials"; request.userName = "my@email.com"; request.password = "password"; client.post(request) .then(response => { // do stuff }) .catch(err => { if (err.responseStatus) { console.log(err.responseStatus.message); } else if (err.response && err.response.status) { console.log(err.response.statusText || err.response.status); } else { // err.message is "res.json is not a function" console.log(err.message || "Unable to login"); } } ); If you need more info I can try and find the exact line it errors, but it's in webpack so couldn't see it tonight. Can you upgrade to the latest version of servicestack-client (0.0.34), if you're using npm you'll need to uninstall then reinstall, as we have several Auth tests that test Authentication. If it's still an issue can you show the raw HTTP Response that causes this error, thx. Thanks. I had 0.0.34 installed anyway. I'll work on getting the raw responses. I was trying to see if I could add some tests into servicestack-client. However, I'm havign trouble getting tests to run. I get this: > servicestack-client@0.0.32 pretest /tmp/servicestack-client > tsc src/index.ts(793,38): error TS2345: Argument of type '{ method: string; mode: string; credentials: string; headers: Headers; compress: boolean; }' is not assign able to parameter of type 'RequestInit'. Types of property 'mode' are incompatible. Type 'string' is not assignable to type 'RequestMode'. tests/client.auth.spec.ts(1,1): error TS6053: File '/tmp/servicestack-client/typings/index.d.ts' not found. tests/client.auth.spec.ts(34,1): error TS2304: Cannot find name 'describe'. tests/client.auth.spec.ts(38,5): error TS2304: Cannot find name 'beforeEach'. tests/client.auth.spec.ts(43,5): error TS2304: Cannot find name 'it'. tests/client.auth.spec.ts(56,5): error TS2304: Cannot find name 'it'. tests/client.auth.spec.ts(75,5): error TS2304: Cannot find name 'it'. tests/client.auth.spec.ts(89,5): error TS2304: Cannot find name 'it'. tests/client.auth.spec.ts(106,5): error TS2304: Cannot find name 'it'. It's a clean Ubuntu 16 / node v6 docker container with a checkout of the v0.0.34 tag. Something obvious I'm missing? edit: found the issue I was missing global typings, had to run "npm install typings && node_modules/.bin/typings install". But also had to change src/index.ts line 793 to add as RequestMode and as RequestCredentials var reqOptions = { method: method, mode: this.mode as RequestMode, credentials: this.credentials as RequestCredentials, headers: this.headers, compress: false }; Do you have the latest version of 2.3.2 TypeScript installed? It uses the W3C Fetch definitions that's embedded in the "es2016" type definitions, see: I do. Completely clean docker container. Node v6.10.0, tsc 2.3.2. I tried changing tsconfig to set it es2016, but that didn't work. ok looks like they changed the type defs in the last release, if you pull latest it should work. I've got a test case that fails. Add this to client.auth.spec.ts it ("Can reauthenticate after an auto refresh access token", async () => { var client = new JsonServiceClient(TEST_URL); var auth = new Authenticate(); auth.provider = "credentials"; auth.userName = "test"; auth.password = "test"; var authResponse = await client.post(auth); var refreshToken = authResponse.refreshToken; let createExpiredJwt = createJwt(); createExpiredJwt.jwtExpiry = "2000-01-01"; const expiredJwt = await client.post(createExpiredJwt); var bearerToken = expiredJwt.token; client = new JsonServiceClient(TEST_URL); client.bearerToken = bearerToken; client.refreshToken = refreshToken; auth.password = "notvalid"; authResponse = await client.post(auth); expect(client.bearerToken).not.eq(expiredJwt.token); }); BTW it's likely your last assertion is wrong, it should be: expect(authResponse .bearerToken).not.eq(expiredJwt.token); Also this works with fetch in the browser, which you can test if you run http-server then go to http-server So the issue would be due to the node-fetch js impl, i'll check to see if there's a workaround. node-fetch Actually the test assertion was wrong, the invalid password request should throw an error, it passed in the browser since it was using the existing session cookies so they needed to be cleared. But it did highlight the issue which should be resolved with this commit. This change is available from v0.0.35 on npm. Thanks I'll try that out. Yes, sorry, I didn't update the assertion once I realised it was the failed login that was causing the "res.json() is not a function".The test should check if a login returns 401. I check it in my main code and see if it's resolved. Updating my main project to 0.0.35 is now failing because it can't find those two new types, RequestMode and RequestCredentials. I find typescript typings all a bit unintuitive but I expected it to pick them up automatically. Do you know if this happening generally? What version of TypeScript are you using? if it's not the latest can you upgrade TypeScript, i.e. by installing/uninstalling it? All working. Thanks, that pointed me at the issue. I had 2.3.2 globally installed and reinstalled, but then saw an old 2.2.2 in node_modules which was being used by webpack ts-loader. The original problem is solved.
https://forums.servicestack.net/t/jsonserviceclient-error-thrown-when-access-token-expires/3968
CC-MAIN-2017-30
refinedweb
982
54.08
In this C++ tutorial, let us see about Templates, its types with an example program in C++ Introduction of Templates Templates define the structure of a function or a class irrespective of the data types used. So, while using templates, the same function can be used with different data types. The template may be defined as the blueprint for functions & classes. Templates don’t define the data types during function or class declaration. The data types for the function are specified only at the time of function call whereas the data types for the class are specified at the time of installation of class. So, templates enable the programmer to do generic programming. Function Templates Function templates provide support for generic functions. Generic functions define the code for the function but don’t specify the datatypes. The datatypes for the variables used in the function are specified when a call is made to the function. Templates are used only when the body of the function is exactly the same for different data types. Syntax template <class T> T func (T arg) { } When the compiler encounters the “template” keyboard, it doesn’t generate any code. it doesn’t generate the machine code as it doesn’t know the data type. It keeps a note of the template so that it can be used later. When a call is made to the function & the data types used in the function are specified, the compiler actually generates the code. Compiler derives the data type from the data type of the argument passed to the function & generates the code for the specific datatypes i.e. if int value is passed, T would be replaced by int in the function template. This is known as an instantiation of the template. Each instance is known as the template function. Benefits Using the function template, the source code file is comparitively small as only a single version of the function is required in the source code for different data types. If ever a change is required in the function, the changes have to be made at one place & would be effective for all the data types used in various function calls. C++ Program The following C++ program is to find the sum of two numbers using function template. #include<iostream.h> #include<conio,h> template <class T> T sum(T a, T b) { return (a + b); } void main() { clrscr(); cout<<"\nSum Of integers : "<<sum(4,9); cout<<"\nSum Of long : "<<sum(234334,34539); cout<<"\nSum Of floats : "<<sum(4.5,9.9); cout<<"\nSum Of doubles : "<<sum(344544.54,92423.45); cout<<"\nSum Of chars : "<<sum('A','K'); } Output sum of integers : 13 sum of long : 268873 sum of floats : 14.4 sum of doubles : 436967.99 sum of chars : 1/4 Class Template A template can also be defined for classes. Sample C++ program for class templates The following code defines a cylinder class that has radius & height variables as members of the class. The class has two member function, area and circumference. #include<iostream.h> #include<conio.h> template <class T> class cyinder { T radius; T height; public: cylinder(T r, T h) { radius = r; height = h; } void area() { cout<<"\nArea of cylinder = "<<3.14f* radius * radius * height <<endl; } void circumference() { cout<<"\nCircumference = "<<2 * 3.14f * radius * height <<endl; } }; void main() { clrscr(); cylinder <int> obj1(12,34); cylinder <float> obj2(23.4,45.6); obj1.area(); obj1.circumference(); obj2.area(); obj2.circumference(); } Output Area of cylinder = 15373.44043 Circumference = 2562.23999 Area of cylinder = 78401.828125 Circumference = 6701.01123 -
https://www.codeatglance.com/cpp-templates/
CC-MAIN-2020-40
refinedweb
594
64.3
Summary Usage The input must be a feature layer, a table view, or a raster layer that has an attribute table; it cannot be a feature class or table. Records from the Join Table can be matched to more than one record in the input layer or table view. For more information on one-to-one, many-to-one, one-to-many, and many-to-many joins, see About joining and relating tables.. The Join Table can be any of the following types of tables: a geodatabase table, a dBASE file, an INFO table, or an OLE DB table. The input layer or table view must have an ObjectID field. The Join Table is not required to contain an ObjectID field. Field properties, such as aliases, visibility, and number formatting, are maintained when a join is added or removed. If a join with the same table name already exists—for example, if the layer A is joined to a table B—running the tool again to join table B will result in a warning that the join already exists.. When saving results to a new feature class or table, you can use the Qualified Field Names environment to control if the joined output field names will be qualified with the name of the table the field came from.. Learn more about performance tips for joining data AddJoin_management (in_layer_or_view, in_field, join_table, join_field, {join_type}) Code sample AddJoin example 1 (Python window) The following Python window script demonstrates how to use the AddJoin function in immediate mode. import arcpy arcpy") AddJoin example 2 (stand-alone script) This stand-alone script shows the AddJoin function as part of a workflow to join a table to a feature class, then extract desired features. # Name: AttributeSelection.py # Purpose: Join a table to a featureclass and select the desired attributes # Import system modules import arcpy as err: print(err.args[0]) Environments Licensing information - ArcGIS Desktop Basic: Yes - ArcGIS Desktop Standard: Yes - ArcGIS Desktop Advanced: Yes
https://desktop.arcgis.com/en/arcmap/10.5/tools/data-management-toolbox/add-join.htm
CC-MAIN-2020-50
refinedweb
329
55.47
I'm currently trying to install fonts across a bunch of servers. I've been able to use a script to copy the fonts over and "install" them onto the server but I need to be able to access the fonts without having to turn off the server or log off the account. I found Windows AddFontResource() Have you tried using the win32api library? It has the SendMessage() function which can be used in conjunction with the windll.gdi32.AddFontResource() in ctypes For example installing a TTF font file: import win32api import ctypes import win32con ctypes.windll.gdi32.AddFontResourceA("C:\\Path\\To\\Font\\font.ttf") win32api.SendMessage(win32con.HWND_BROADCAST, win32con.WM_FONTCHANGE)
https://codedump.io/share/FglIMwOvOlb2/1/python-version-of-addfontresource
CC-MAIN-2017-17
refinedweb
111
60.85
Primers • Python Tips and Tricks - Python Built-in Methods - Strings - Using isinstance()vs. type()for type-checking - Lists - Create a copy of a list using =vs. <list>.copy() - Get counter and value while looping using enumerate() list.append()vs. list.extend()vs. += - Get Elements - Unpacking - Join Iterables - Interaction Between Two Lists - Apply Functions to Elements in a List - Tuple - Dictionaries - Function - Classes - Abstract Classes: Declare Methods without Implementation classmethod: What is it and When to Use it getattr: a Better Way to Get the Attribute of a Class __call__: Call your Class Instance like a Function @staticmethod: use the function without adding the attributes required for a new instance - Property Decorator: A Pythonic Way to Use Getters and Setters __str__and __repr__: Create a String Representation of a Python Object¶ attrs: Bring Back the Joy of Writing Classes! - Datetime - Best Practices - Code Speed - Python Built-in Libraries - Sort the elements in the list by the key - Group elements in the list by the key Python Built-in Methods - This section covers some useful Python built-in methods and libraries. Strings Using isinstance() vs. type() for type-checking isinstance()caters for inheritance (an instance of a derived class is an instance of a base class, too), while checking for equality of type does not (it demands identity of types and rejects instances of subtypes, a.k.a. subclasses). For your code to support inheritance, isinstance()is less bad than checking identity of types because it seamlessly supports inheritance. It’s not that isinstanceis, however, quite a special case—a builtin type that exists only to let you use isinstance()(both strand unicodesubclass basestring). Strings are sequences (you could loop over them, index them, slice them, …), but you generally want to treat them as “scalar” types—it’s somewhat inconvenient (but a reasonably frequent use case) to treat all kinds of strings (and maybe other scalar types, i.e., ones you can’t loop on) one way, all containers (lists, sets, dicts, …) in another way, and basestringplus isinstance()helps you do that—the overall structure of this idiom is something like: s1 = unicode("test") s2 = "test" isinstance(s1, basestring) ## Returns True isinstance(s2, basestring) ## Returns True - A gotcha with isinstance()is that the booldatatype is a subclass of the intdatatype: issubclass(bool, int) ## Returns True Index of a Substring using str.find() or str.index() - To find the index of a substring in a string, use the str.find()method which returns the index of the first occurrence of the substring if found and -1otherwise. sentence = "Today is Saturaday" - Find the index of first occurrence of the substring: sentence.find("day") ## Returns 2 sentence.find("nice") ## Returns -1 - You can also provide the starting and stopping position of the search: ## Start searching for the substring at index 3 sentence.find("day", 3) ## Returns 15 - Note that you can also use str.index()to accomplish the same result. Replace a String with Another String Using Regular Expressions To either replace one string with another string or to change the order of characters in a string, use re.sub(). re.sub()allows you to use a regular expression to specify the pattern of the string you want to swap. In the code below, we replace 3/7/2021 with Sunday and replace 3/7/2021 with 2021/3/7. import re text = "Today is 3/7/2021" match_pattern = r"(\d+)/(\d+)/(\d+)" re.sub(match_pattern, "Sunday", text) ## Returns 'Today is Sunday' re.sub(match_pattern, r"\3-\1-\2", text) ## Returns 'Today is 2021-3-7' Lists Create a copy of a list using = vs. <list>.copy() - When you create a copy of a list using the =operator, a change in the second list will lead to the change in the first list. It is because both lists point to the same object. l1 = [1, 2, 3] l2 = l1 l2.append(4) l2 ## Returns [1, 2, 3, 4] l1 ## Returns [1, 2, 3, 4] l1 is l2 ## Returns True since they are the same object - Instead of using the =operator, use the copy()method. Now any changes to the second list will not reflect in the first list. l1 = [1, 2, 3] l2 = l1.copy() l2.append(4) l2 ## Returns [1, 2, 3, 4] l1 ## Returns [1, 2, 3] Get counter and value while looping using enumerate() - Rather than using for i in range(len(array))to access both the index and the value of the array, use enumerate()instead. It produces the same result but it is much cleaner. arr = ['a', 'b', 'c', 'd', 'e'] ## Instead of this for i in range(len(arr)): print(i, arr[i]) ## Prints ## 0 a ## 1 b ## 2 c ## 3 d ## 4 e ## Use this for i, val in enumerate(arr): print(i, val) ## Prints ## 0 a ## 1 b ## 2 c ## 3 d ## 4 e list.append() vs. list.extend() vs. += - To add a list to another list, use the list.append()method or +=. To add elements of a list to another list, use the list.extend()method. a = [1, 2, 3] a.append([4, 5]) a ## Returns [1, 2, 3, [4, 5]] a = [1, 2, 3] a.extend([4, 5]) a ## Returns [1, 2, 3, 4, 5] a = [1, 2, 3] a += [4, 5] a ## Returns [1, 2, 3, 4, 5] Get Elements random.choice(): Get a Randomly Selected Element from a List - Besides getting a random number, you can also get a random element from a Python list using random. In the code below, “stay at home” was picked randomly from a list of options. import random to_do_tonight = ['stay at home', 'attend party', 'do exercise'] random.choice(to_do_tonight) ## Returns 'attend party' random.sample(): Get Multiple Random Elements from a List - To get nrandom elements from a list, use random.sample. import random random.seed(1) nums = [1, 2, 3, 4, 5] random_nums = random.sample(nums, 2) random_nums ## Returns [2, 1] heapq: Find n Max Values of a List To extract nmax values from a large Python list, using heapqwill speed up the code. In the code below, using heapqis >2x faster than using sorting and indexing. Both methods try to find the max values of a list of 10000 items. import heapq import random from timeit import timeit random.seed(0) l = random.sample(range(0, 10000), 10000) def get_n_max_sorting(l: list, n: int): l = sorted(l, reverse=True) return l[:n] def get_n_max_heapq(l: list, n: int): return heapq.nlargest(n, l) expSize = 1000 n = 100 time_sorting = timeit("get_n_max_sorting(l, n)", number=expSize, globals=globals()) time_heapq = timeit('get_n_max_heapq(l, n)', number=expSize, globals=globals()) ratio = round(time_sorting/time_heapq, 3) print(f'Run {expSize} experiments. Using heapq is {ratio} times' ' faster than using sorting') ## Prints Run 1000 experiments. Using heapq is 2.827 times faster than using sorting Unpacking How to Unpack Iterables - To assign items of a Python iterables (such as list, tuple, string) to different variables, you can unpack the iterable like below. nested_arr = [[1, 2, 3], ["a", "b"], 4] num_arr, char_arr, num = nested_arr num_arr ## Prints [1, 2, 3] char_arr ## Prints ['a', 'b'] Extended Iterable Unpacking: Ignore Multiple Values when Unpacking - To ignore multiple values when unpacking a Python iterable, add * to _ as shown below. - This is called “Extended Iterable Unpacking” and is available in Python 3.x. a, *_, b = [1, 2, 3, 4] print(a) ## Prints 1 b ## Prints 4 _ ## Prints [2, 3] Join Iterables join(): Turn an Iterable into a String - To turn an iterable into a string, use join(). - In the code below, elements are joined in the list fruits using ,. fruits = ['apples', 'oranges', 'grapes'] fruits_str = ', '.join(fruits) print(f"Today, I need to get some {fruits_str} in the grocery store") ## Prints "Today, I need to get some apples, oranges, grapes in the grocery store" zip(): Create Pairs of Elements from Two Iterators - To to create pairs of elements from two lists use the zip()method which aggregates them in a list of tuples. nums = [1, 2, 3, 4] string = "abcd" combinations = zip(nums, string) combinations ## Prints [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')] nums = [1, 2, 3, 4] chars = ['a', 'b', 'c', 'd'] comb = zip(nums, chars) comb ## Returns [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')] - You can also unzip the list of tuples back to it’s original form by using zip(*list_of_tuples): nums_2, chars_2 = zip(*comb) nums_2, chars_2 ## Returns ((1, 2, 3, 4), ('a', 'b', 'c', 'd')) Interaction Between Two Lists set.intersection(): Find the Intersection Between Two Sets - To get the common elements between two iterators, convert them to sets then use set.intersection()(Python 2) or the &operator (Python 3). requirement1 = ['pandas', 'numpy', 'statsmodel'] requirement2 = ['numpy', 'statsmodel', 'sympy', 'matplotlib'] ## Python 2 intersection = set.intersection(set(requirement1), set(requirement2)) list(intersection) ## Returns ['statsmodel', 'numpy'] ## Python 3 intersection = set(requirement1) & set(requirement2) list(intersection) ## Returns ['statsmodel', 'numpy'] <set>.difference(): Find the Difference Between Two Sets - To find the difference between two iterators, convert them to sets then apply <set>.difference()(Python 2) or the -operator (Python 3) to the sets. a = [1, 2, 3, 4] b = [1, 3, 4, 5, 6] ## Python 2 ## Find elements in a but not in b diff = set(a).difference(set(b)) list(diff) ## Returns [2] ## Find elements in b but not in a diff = set(b).difference(set(a)) list(diff) ## Returns [5, 6] ## Python 3 ## Find elements in a but not in b diff = set(a) - set(b) list(diff) ## Returns [2] ## Find elements in b but not in a diff = set(b) - set(a) list(diff) ## Returns [5, 6] set.union(): Find the Union Between Two Sets - To get the union of elements from two sets, use set.union()(Python 2) or the |operator (Python 3). requirement1 = ['pandas', 'numpy', 'statsmodel'] requirement2 = ['numpy', 'statsmodel', 'sympy', 'matplotlib'] ## Python 2 union = set.union(set(requirement1), set(requirement2)) list(union) ## Returns ['sympy', 'statsmodel', 'numpy', 'pandas', 'matplotlib'] ## Python 3 union = set(requirement1) | set(requirement2) list(union) ## Returns ['sympy', 'statsmodel', 'numpy', 'pandas', 'matplotlib'] Apply Functions to Elements in a List any(): Check if Any Element of an Iterable is True - To check if any element of an iterable is True, use any(). In the code below, any()find if any element in the text is in uppercase. text = "abcdE" any(c.isupper() for c in text) ## Returns True all(): Check if All Elements of an Iterable Are Strings - To check if all elements of an iterable are strings, use all()and isinstance(). l = ['a', 'b', 1, 2] all(isinstance(item, str) for item in l) ## Returns False filter(): Get the Elements of an Iterable that a Function Evaluates True To get the elements of an iterable that a function returns true, use filter(). In the code below, the filter method gets items that are fruits: def get_fruit(val: str): fruits = ['apple', 'orange', 'grape'] return val in fruits items = ['chair', 'apple', 'water', 'table', 'orange'] fruits = filter(get_fruit, items) print(list(fruits)) ## Returns ['apple', 'orange'] map(): Apply a Function to Each Item of an Iterable - To apply the given function to each item of a given iterable, use map. nums = [1, 2, 3] list(map(str, nums)) ## Returns ['1', '2', '3'] multiply_by_two = lambda num: num * 2 list(map(multiply_by_two, nums)) ## Returns [2, 4, 6] sort(): Sort a List of Tuples by the First or Second Item - To sort a list of tuples by the first or second item in a tuple, use the sort()method. To specify which item to sort by, use the keyparameter. prices = [('apple', 3), ('orange', 1), ('grape', 3), ('banana', 2)] ## Sort by the first item by_letter = lambda x: x[0] prices.sort(key=by_letter) prices ## Returns [('apple', 3), ('banana', 2), ('grape', 3), ('orange', 1)] ## Sort by the second item in reversed order by_price = lambda x: x[1] prices.sort(key=by_price, reverse=True) prices ## Returns [('apple', 3), ('grape', 3), ('banana', 2), ('orange', 1)] Tuple slice: Make Your Indices More Readable by Naming Your Slice - Have you ever been confused when looking into code that contains hardcoded slice indices? Even if you understand it now, you might forget why you choose specific indices in the future. data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] some_sum = sum(data[:8]) * sum(data[8:]) - If so, name your slice. Python provides a nice built-in function for that purpose called slice. By using names, your code is much easier to understand. JANUARY = slice(0, 8) FEBRUARY = slice(8, len(data)) some_sum = sum(data[JANUARY] * sum(data[FEBRUARY])) print(some_sum) ## Prints 684 Dictionaries Merge two dictionaries - Starting Python 3.5, you can use dictionary unpacking options: {'a': 1, **{'b': 2}} ## Returns {'a': 1, 'b': 2} {'a': 1, **{'a': 2}} ## Returns {'a': 2} Note that if there are overlapping keys in the input dictionaries, the value in the last dictionary for the common key will be stored. You can use this idea to merge two dictionaries: d1 = {'a': 1} d2 = {'b': 2} {**d1, **d2} ## Returns {'a': 1, 'b': 2} - However, Python 3.9 or greater provides the simplest method to merge two dictionaries: d1 = {'a': 1} d2 = {'b': 2} d3 = d1 | d2 ## Returns {'a': 1, 'b': 2} - To merge two dictionaries in Python 3.4 or lower: d1 = {'a': 1} d2 = {'b': 2} d2.update(d1) ## Returns {'a': 1, 'b': 2} max(dict) - Applying maxon a Python dictionary will give you the largest key. To find the key with the largest value in a dictionary, utilize the keyparameter (similar to sort) in the maxmethod in conjunction with lambda functions or itemgetter. from operator import itemgetter birth_year = {"Ben": 1997, "Alex": 2000, "Oliver": 1995} max(birth_year) ## Returns "Oliver" max_val = max(birth_year, key=lambda k: birth_year[k]) max_val ## Returns "Alex" max_val = max(birth_year.items(), key=itemgetter(1)) max_val ## Returns ('Alex', 2000) max_val[0] ## Returns "Alex" dict.get(): Get the Default Value of a Dictionary if a Key Doesn’t Exist - Refer the Python 3 Primer for examples and use-cases on this topic. dict.fromkeys() - To create a dictionary from a list and a value, use dict.fromkeys(). For instance, we can use dict.fromkeys()to create a dictionary of furnitures’ locations: furnitures = ['bed', 'table', 'chair'] loc1 = 'IKEA' furniture_loc = dict.fromkeys(furnitures, loc1) furniture_loc ## Returns {'bed': 'IKEA', 'table': 'IKEA', 'chair': 'IKEA'} … or create a dictionary of food’s locations: food = ['apple', 'pepper', 'onion'] loc2 = 'ALDI' food_loc = dict.fromkeys(food, loc2) food_loc ## Returns {'apple': 'ALDI', 'pepper': 'ALDI', 'onion': 'ALDI'} - These results can be combined into a location dictionary like below: locations = {**food_loc, **furniture_loc} locations {'apple': 'ALDI', 'pepper': 'ALDI', 'onion': 'ALDI', 'bed': 'IKEA', 'table': 'IKEA', 'chair': 'IKEA'} Function **kwargs: Pass Multiple Arguments to a Function Sometimes you might not know the arguments you will pass to a function. If so, use **kwargs. **kwargsallow you to pass multiple arguments to a function using a dictionary. In the example below, passing **{'a':1, 'b':2}to the function is similar to passing a=1, b=1to the function. Once **kwargsargument is passed, you can treat it like a Python dictionary. parameters = {'a': 1, 'b': 2} def example(c, **kwargs): print(kwargs) for val in kwargs.values(): print(c + val) example(c=3, **parameters) ## Prints ## {'a': 1, 'b': 2} ## 4 ## 5 Decorator in Python Do you want to add the same block of code to different functions in Python? If so, use a decorator! In the code below, the decorator tracks the time of the function say_hello: import time def time_func(func): def wrapper(): print("This happens before the function is called") start = time.time() func() print('This happens after the funciton is called') end = time.time() print('The duration is', end - start, 's') return wrapper - Now all we need to do is to add @time_func before the function say_hello. @time_func def say_hello(): print("hello") say_hello() - which outputs: ``` This happens before the function is called hello This happens after the function is called The duration is 0.0002987384796142578 s ``` - Decorator makes the code clean and shortens repetitive code. If we want to track the time of another function, for example, func2(), I can just use: @time_func def func2(): pass func2() - which outputs: ``` This happens before the function is called This happens after the funciton is called The duration is 4.38690185546875e-05 s from typing import List, Dict ``` Classes Abstract Classes: Declare Methods without Implementation Sometimes you might want different classes to use the same attributes and methods. But the implementation of those methods can be slightly different in each class. A good way to implement this is to use abstract classes. An abstract class contains one or more abstract methods. An abstract method is a method that is declared but contains no implementation. The abstract method requires subclasses to provide implementations. from abc import ABC, abstractmethod class Animal(ABC): def __init__(self, name: str): self.name = name super().__init__() @abstractmethod def make_sound(self): pass class Dog(Animal): def make_sound(self): print(f'{self.name} says: Woof') class Cat(Animal): def make_sound(self): print(f'{self.name} says: Meows') Dog('Pepper').make_sound() Cat('Bella').make_sound() ## Prints ## "Pepper says: Woof ## Bella says: Meows" classmethod: What is it and When to Use it When working with a Python class, To create a method that returns that class with new attributes, use classmethod. Classmethod doesn’t depend on the creation of a class instance. In the code below, classmethodinstantiates a new object whose attribute is a list of even numbers. class Solver: def __init__(self, nums: list): self.nums = nums @classmethod def get_even(cls, nums: list): return cls([num for num in nums if num % 2 == 0]) def print_output(self): print("Result:", self.nums) ## Not using class method nums = [1, 2, 3, 4, 5, 6, 7] solver = Solver(nums).print_output() ## Prints Result: [1, 2, 3, 4, 5, 6, 7] solver2 = Solver.get_even(nums) solver2.print_output() ## Prints Result: [2, 4, 6] getattr: a Better Way to Get the Attribute of a Class To get a default value when calling an attribute that is not in a class, use getattr()method. The getattr(class, attribute_name)method simply gets the value of an attribute of a class. However, if the attribute is not found in a class, it returns the default value provided to the function. class Food: def __init__(self, name: str, color: str): self.name = name self.color = color apple = Food("apple", "red") print("The color of apple is", getattr(apple, "color", "yellow")) ## Prints "The color of apple is red" print("The flavor of apple is", getattr(apple, "flavor", "sweet")) ## Prints "The flavor of apple is sweet" print("The flavor of apple is", apple.sweet) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /tmp/ipykernel_337430/3178150741.py in <module> ----> 1 print("The flavor of apple is", apple.sweet) AttributeError: 'Food' object has no attribute 'sweet' __call__: Call your Class Instance like a Function - To call your class instance like a function, add the __call__()method to your class. class DataLoader: def __init__(self, data_dir: str): self.data_dir = data_dir print("Instance is created") def __call__(self): print("Instance is called") data_loader = DataLoader("my_data_dir") ## Instance is created data_loader() ## Instance is called Instance is created Instance is called @staticmethod: use the function without adding the attributes required for a new instance - Have you ever had a function in your class that doesn’t access any properties of a class but fits well in a class? You might find it redundant to instantiate the class to use that function. That is when you can turn your function into a static method. All you need to turn your function into a static method is the decorator @staticmethod. Now you can use the function without adding the attributes required for a new instance. import re class ProcessText: def __init__(self, text_column: str): self.text_column = text_column @staticmethod def remove_URL(sample: str) -> str: """Replace url with empty space""" return re.sub(r"http\S+", "", sample) text = ProcessText.remove_URL("My favorite page is") print(text) ## Prints "My favorite page is " Property Decorator: A Pythonic Way to Use Getters and Setters If you want users to use the right data type for a class attribute or prevent them from changing that attribute, use the property decorator. In the code below, the first color method is used to get the attribute color and the second color method is used to set the value for the attribute color. class Fruit: def __init__(self, name: str, color: str): self._name = name self._color = color @property def color(self): print("The color of the fruit is:") return self._color @color.setter def color(self, value): print("Setting value of color...") if self._color is None: if not isinstance(value, str): raise ValueError("color must be of type string") self.color = value else: raise AttributeError("Sorry, you cannot change a fruit's color!") fruit = Fruit("apple", "red") fruit.color ## Prints The color of the fruit is: #'red' fruit.color = "yellow" Setting value of color... ## --------------------------------------------------------------------------- ## AttributeError Traceback (most recent call last) ## /tmp/ipykernel_337430/2513783301.py in <module> ## ----> 1 fruit.color = "yellow" ## ## /tmp/ipykernel_337430/2891187161.py in color(self, value) ## 17 self.color = value ## 18 else: ## ---> 19 raise AttributeError("Sorry, you cannot change a fruit's color!") ## 20 ## 21 ## AttributeError: Sorry, you cannot change a fruit's color! __str__ and __repr__: Create a String Representation of a Python Object¶ To create a string representation of an object, add __str__and __repr__. __str__shows readable outputs when printing the object. __repr__shows outputs that are useful for displaying and debugging the object. class Food: def __init__(self, name: str, color: str): self.name = name self.color = color def __str__(self): return f"{self.color} {self.name}" def __repr__(self): return f"Food({self.color}, {self.name})" food = Food("apple", "red") ## Invokes __str__() print(food) ## Prints "red apple" ## Invokes __repr__() food ## Prints Food(red, apple) attrs: Bring Back the Joy of Writing Classes! - Do you find it annoying to write an __init__()method every time you want to create a class in Python? class Dog: def __init__(self, age: int, name: str, type_: str = 'Labrador Retriever'): self.age = age self.name = name self.type_ = type_ def describe(self): print(f"{self.name} is a {self.type_}.") - If so, try attrs. With attrs, you can declaratively define the attributes of a class. import attr @attr.s(auto_attribs=True) class Dog: age: int name: str type_: str = "Labrador Retriever" def describe(self): print(f"{self.name} is a {self.type_}.") pepper = Dog(7, "Pepper", "Labrador Retriever") - The instance created using attrshas a nice human-readable __repr__(). pepper ## Returns Dog(age=7, name='Pepper', type_='Labrador Retriever') pepper.describe() Pepper is a Labrador Retriever. - You can also turn the attributes of that instance into a dictionary. attr.asdict(pepper) {'age': 7, 'name': 'Pepper', 'type_': 'Labrador Retriever'} - You can also compare two instances of the same class using the first attribute of that class. bim = Dog(8, 'Bim Bim', 'Dachshund') pepper < bim ## Returns True Datetime datetime + timedelta: Calculate End DateTime Based on Start DateTime and Duration Provided an event starts at a certain time and takes a certain number of minutes to finish, how do you determine when it ends? Taking the sum of datetimeand timedelta(minutes) does the trick! from datetime import date, datetime, timedelta beginning = '2020/01/03 23:59:00' duration_in_minutes = 2500 ## Find the beginning time beginning = datetime.strptime(beginning, '%Y/%m/%d %H:%M:%S') ## Find duration in days days = timedelta(minutes=duration_in_minutes) ## Find end time end = beginning + days end ## Returns datetime.datetime(2020, 1, 5, 17, 39) Use Dates in a Month as the Feature - Have you ever wanted to use dates in a month as the feature in your time series data? You can find the days in a month by using calendar.monthrange(year, month)[1] like below. import calendar calendar.monthrange(2020, 11)[1] ## Returns 30 Best Practices - This section includes some best practices to write Python code. Use _ to Ignore Values - When assigning the values returned from a function, you might want to ignore some values that are not used in future code. If so, assign those values to underscores _. def return_two(): return 1, 2 _, var = return_two() var ## Returns 2 - If you want to repeat a loop a specific number of times but don’t care about the index, you can also use _. for _ in range(5): print('Hello') ## Prints ## Hello ## Hello ## Hello ## Hello ## Hello Python Pass Statement If you want to create code that does a particular thing but don’t know how to write that code yet, put that code in a function then use pass. Once you have finished writing the code in a high level, start to go back to the functions and replace pass with the code for that function. This will prevent your thoughts from being disrupted. def say_hello(): pass def ask_to_sign_in(): pass def main(is_user: bool): if is_user: say_hello() else: ask_to_sign_in() main(is_user=True) Code Speed - This section will show you some ways to speed up or track the performance of your Python code. Concurrently Execute Tasks on Separate CPUs - If you want to concurrently execute tasks on separate CPUs to run faster, consider using joblib.Parallel. It allows you to easily execute several tasks at once, with each task using its own processor. from joblib import Parallel, delayed import multiprocessing def add_three(num: int): return num + 3 num_cores = multiprocessing.cpu_count() results = Parallel(n_jobs=num_cores)(delayed(add_three)(i) for i in range(10)) results ## Returns [3, 4, 5, 6, 7, 8, 9, 10, 11, 12] Compare The Execution Time Between Two Functions - If you want to compare the execution time between two functions, try timeit.timeit(). You can also specify the number of times you want to rerun your function to get a better estimation of the time. import time import timeit def func(): """comprehension""" l = [i for i in range(10_000)] def func2(): """list range""" l = list(range(10_000)) expSize = 1000 time1 = timeit.timeit(func, number=expSize) time2 = timeit.timeit(func2, number=expSize) print(time1/time2) ## Prints 2.6299518653018685 - From the result, we can see that it is faster to use list range than to use list comprehension on average. Python Built-in Libraries - This section covers Python Built-in libraries such as collections, functools, and itertools. Collections collectionsis a built-in Python library to deal with Python dictionary efficiently. This section will show you some useful methods of this module. collections.Counter: Count The Occurrences of Items in a List - Counting the occurrences of each item in a list using a for-loop is slow and inefficient. char_list = ['a', 'b', 'c', 'a', 'd', 'b', 'b'] def custom_counter(list_: list): char_counter = {} for char in list_: if char not in char_counter: char_counter[char] = 1 else: char_counter[char] += 1 return char_counter custom_counter(char_list) ## Returns {'a': 2, 'b': 3, 'c': 1, 'd': 1} - Using collections.Counteris more efficient, and all it takes is one line of code! from collections import Counter Counter(char_list) ## Returns Counter({'a': 2, 'b': 3, 'c': 1, 'd': 1}) - In my experiment, using Counter is >2x times faster than using a custom counter. from timeit import timeit import random random.seed(0) num_list = [random.randint(0, 22) for _ in range(1000)] numExp = 100 custom_time = timeit("custom_counter(num_list)", globals=globals()) counter_time = timeit("Counter(num_list)", globals=globals()) print(custom_time/counter_time) ## Returns 2.6199148843686806 - To get the most frequently occurring element in the list: from collections import Counter a = [1, 2, 3, 5, 4, 2, 3, 1, 5, 4, 5] print(Counter(a).most_common(1)[0][0]) ## Returns 5 print(max(set(a), key = a.count)) 1## Another way; also returns 5 namedtuple: Tuple with Named Fields - If you need to create creating a tuple with named fields, consider using namedtuple: Point = namedtuple('Point', ['x', 'y']) p = Point(11, y=22) ## Instantiate with positional or keyword arguments p[0] + p[1] ## Returns 33; indexable like the plain tuple (11, 22) x, y = p ## Unpack like a regular tuple x, y ## Returns (11, 22) p.x + p.y ## Returns 33; Fields also accessible by name p ## Returns Point(x=11, y=22); readable __repr__ with a name=value style Defaultdict: Return a Default Value When a Key is Not Available - If you want to create a Python dictionary with default value, use defaultdict. When calling a key that is not in the dictionary, the default value is returned. from collections import defaultdict classes = defaultdict(lambda: 'Outside') classes['Math'] = 'B23' classes['Physics'] = 'D24' classes['Math'] ## Returns 'B23' classes['English'] ## Returns 'Outside' Note that the first argument to defaultdictwhich is default_factory, requires a callable, which implies either a class or a function. You could also achieve similar functionality using dict.get()](), however note that this requires specifying the default value at every fetch-item call rather than once when defining the dictionary. classes = {} classes.get("English", "Outside") ## Returns 'Outside' Itertools itertools[] is a built-in Python library that creates iterators for efficient looping. This section will show you some useful methods of itertools. 3.2.1. itertools.combinations: A Better Way to Iterate Through a Pair of Values in a Python List¶ If you want to iterate through a pair of values in a list and the order does not matter ((a,b) is the same as (b, a)), a naive approach is to use two for-loops. num_list = [1, 2, 3] for i in num_list: for j in num_list: if i < j: print((i, j)) (1, 2) (1, 3) (2, 3) However, using two for-loops is lengthy and inefficient. Use itertools.combinations instead: from itertools import combinations comb = combinations(num_list, 2) ## use this for pair in list(comb): print(pair) (1, 2) (1, 3) (2, 3) 3.2.2. itertools.product: Nested For-Loops in a Generator Expression¶ Are you using nested for-loops to experiment with different combinations of parameters? If so, use itertools.product instead. itertools.product is more efficient than nested loop because product(A, B) returns the same as ((x,y) for x in A for y in B). from itertools import product params = { “learning_rate”: [1e-1, 1e-2, 1e-3], “batch_size”: [16, 32, 64], } for vals in product(*params.values()): combination = dict(zip(params.keys(), vals)) print(combination) {‘learning_rate’: 0.1, ‘batch_size’: 16} {‘learning_rate’: 0.1, ‘batch_size’: 32} {‘learning_rate’: 0.1, ‘batch_size’: 64} {‘learning_rate’: 0.01, ‘batch_size’: 16} {‘learning_rate’: 0.01, ‘batch_size’: 32} {‘learning_rate’: 0.01, ‘batch_size’: 64} {‘learning_rate’: 0.001, ‘batch_size’: 16} {‘learning_rate’: 0.001, ‘batch_size’: 32} {‘learning_rate’: 0.001, ‘batch_size’: 64} 3.2.3. itertools.starmap: Apply a Function With More Than 2 Arguments to Elements in a List¶ map is a useful method that allows you to apply a function to elements in a list. However, it can’t apply a function with more than one argument to a list. def multiply(x: float, y: float): return x * y nums = [(1, 2), (4, 2), (2, 5)] list(map(multiply, nums)) ————————————————————————— TypeError Traceback (most recent call last) /tmp/ipykernel_38110/240000324.py in TypeError: multiply() missing 1 required positional argument: ‘y’ To apply a function with more than 2 arguments to elements in a list, use itertools.starmap. With starmap, elements in each tuple of the list nums are used as arguments for the function multiply. from itertools import starmap list(starmap(multiply, nums)) [2, 8, 10] 3.2.4. itertools.compress: Filter a List Using Booleans¶ Normally, you cannot filter a list using a list. fruits = [‘apple’, ‘orange’, ‘banana’, ‘grape’, ‘lemon’] chosen = [1, 0, 0, 1, 1] fruits[chosen] ————————————————————————— TypeError Traceback (most recent call last) /tmp/ipykernel_40588/2755098589.py in TypeError: list indices must be integers or slices, not list To filter a list using a list of booleans, use itertools.compress instead from itertools import compress list(compress(fruits, chosen)) [‘apple’, ‘grape’, ‘lemon’] 3.2.5. itertools.groupby: Group Elements in an Iterable by a Key¶ If you want to group elements in a list by a key, use itertools.groupby. In the example below, I grouped elements in the list by the first element in each tuple. from itertools import groupby prices = [(‘apple’, 3), (‘orange’, 2), (‘apple’, 4), (‘orange’, 1), (‘grape’, 3)] key_func = lambda x: x[0] Sort the elements in the list by the key prices.sort(key=key_func) Group elements in the list by the key for key, group in groupby(prices, key_func): print(key, ‘:’, list(group)) apple : [(‘apple’, 3), (‘apple’, 4)] grape : [(‘grape’, 3)] orange : [(‘orange’, 2), (‘orange’, 1)] 3.2.6. itertools.zip_longest: Zip Iterables of Different Lengths¶ zip allows you to aggregate elements from each of the iterables. However, zip doesn’t show all pairs of elements when iterables have different lengths. fruits = [‘apple’, ‘orange’, ‘grape’] prices = [1, 2] list(zip(fruits, prices)) [(‘apple’, 1), (‘orange’, 2)] To aggregate iterables of different lengths, use itertools.zip_longest. This method will fill missing values with fillvalue. from itertools import zip_longest list(zip_longest(fruits, prices, fillvalue=’-‘)) [(‘apple’, 1), (‘orange’, 2), (‘grape’, ‘-‘)] References - Python Dictionary Tips - Python Implementations of Data Structures - What are the differences between type() and isinstance()? - Quora: What is the height, size, and depth of a binary tree? - What is the time complexity of collections.Counter() in Python? - Python Time Complexity - Big-O of list slicing - What is the time complexity of slicing a list? Citation If you found our work useful, please cite it as: @article{Chadha2020DistilledPython3Tips, title = {Python 3 Tips}, author = {Chadha, Aman}, journal = {Distilled AI}, year = {2020}, note = {\url{}} }
https://aman.ai/code/python-tips/
CC-MAIN-2022-40
refinedweb
5,532
62.17
[homePage] - [docum] - [source] - [testing] - [experience] Tutorial for the Java versionIf you are reading this document for the first time, we recommend you to continue straight through. You can also jump directly to the section of your interest: - Bottom-up design style - Top-down design style - The library of data structures - - adding a new class to the library - - deriving a new data structure from an existing class - - as a wrapper of a Java container - - expanding Java container to a bi-directional association - - slow but easier way to develop new associations Except for a few minor syntax differences, the association libraries in C++ and in Java are quite similar. If you already read the C++ tutorial, you could probably use the Java version without much thinking about it. This tutorial will show you that, just like the C++ version, this library supports two styles of software development: Whether you use the bottom-up or top-down approach, you eventually reach the stage when both the code and the model evolve simultaneously, and the IN_CODE modelling supports that in the most elegant manner.Whether you use the bottom-up or top-down approach, you eventually reach the stage when both the code and the model evolve simultaneously, and the IN_CODE modelling supports that in the most elegant actual coding begins only later when the entire architecture has been well thought through. However, you do not(!) manipulate the UML diagram in a graphical environment. Instead, you edit a short textual UML description (schema), and the diagram is re-drawn automatically. The advantage of using the schema is that, even at the early stages of planning the architecture, it automatically gives you a functional code skeleton which already compiles and which you can instantly evolve. Note that your design is always model driven (MDA=Model Driven Architecture). In contrast to other tools which assume that the model is an independent entity outside of your source, here the model (the schema) is always an integral part of your code. The idea which lead to this tight integration is to implement all data structures and relations as associations and not as containers (collections) supplied with Java or provided by other class libraries. There is a big difference between associations and containers. Associations control mutual cooperation of two or more classes, while in containers just one class controls objects or another class. Associations naturally support intrusive data structures including graphs, many-to-many, and various design patterns which cannot be implemented as containers. Intrusive data structures are important because they are better protected against errors, are generally faster, use less space than containers, and do not trigger heap access when working with the data structures. As you will see in the last section (Adding a new data structure to the library), any container can be treated as a simple association, and entire libraries such as Java collections can be easily included in the association inserting any code related to the relations among them. The name is not treated as an attribute, but rather as a relation, therefore it does not appear in this code yet. Then, separately and usually in one block of code, you declare the relations (associations) among the classes. The association library (jlib) gives you a multitude of choices but, as you will see, the initial choice isn't critical. It is easy to switch and experiment with different choices. Let's implement the data organization with these associations: Note that since the purpose of jlib is to eliminate reference from your application classes, we strongly recommend to use data organization Name for all variable length strings (names): and you declare the associations in a separate file (e.g. called ds.def for 'data structure definitions): // declare the relations (schema) one association per line Association LinkeList1<Department,Employee> empl; Association LinkedList1<Department,Department> dHier; Association SingleLink<Department,Manager> boss; Association Name<Employee> eName; Note that the syntax is identical with how you declare associations in the C++ version (alib). Using the methods which jlib gives you for the associations, you can build and manipulate your data organization. Most associations dealing with multiple objects have iterators. For example, here is a simple program which builds a tree with a few Departments and populates it with Employees. Note how each association (data structure) has a name (here empl, dHier, boss, eName) which is then used as an identifier when operating on the data structure. In this case, the application classes form package test1. They also must import automatically generated jlib classes from package jlibGen. For example, if directory jtut has two subdirectories test1 and jlibGen, where jtut\test1 stores the application classes (Department.java, Employee.java, Manager.java, test1.java) plus file ds.def, while jtut\jlibGen is reserved for the generated association classes, you can compile and run this program from directory jtut like this (file test1\tt.bat): THE FIRST LINE calls the code generator, codegen. Codegen does not mangle your code, it only generates additional *.java files with the customized associations as you requested them in ds.def. Codegen needs 4 parameters: - file which declares all your associations (here ds.def), - path to the library of association templates (always jlib\lib), - directory to deposit the classes for requested associations, - file describing the package/import statements for the generated classes. In this example, file import has only one line:import test1.*; THE SECOND LINE compiles all *.java files in directories jtut\jlibGen and jtut\test1 with the main() in file test1\test1.java. THE THIRD LINE runs the test1 program and deposits the results into file test1\res. If you call codegen with the -u option everything is the same, except that beside generating the requested associations codegen also generates the logic of the corresponding UML diagram and deposits it into file layout.inp (fixed name). Program layout then uses this file as input and generates file display.svg with a properly layed out UML class diagram. File display.svg can be viewed with most Internet browsers or using special utilities. On the accompanying CD, this is all prepared as file test1\ss.bat: The UML class diagram includes not only associations but also inheritance. In order to recover this information, codegen must browse through all the source (all *.java files) and search for the syntax indicating inheritance. For this reason, when you call codegen with the -u option, you must provide a file which provides the location of all the source files (here srcList). You can create this file under DOS or Windows by using: dir *.java or, under UNIX, by: ll *.java. When option -u is used, the last parameter provides the title text for the UML diagram. Program layout is an independent executable so it really does not matter in what language it has been written. The existing version is in C++ but, as a conversion exercise, we are also planning its Java version. Option -s is for generating the svg display file. The program requires two parameters: - File describing the screen size in pixels and the default fonts for your UML diagrams. This file can be either coded manually or generated automatically and then used for all displays on your computer (param.txt). - File describing the logic of the UML diagram, usually file layout.inp generated by codegen. The generation of the UML layout is not a simple task because it does not have a clean, mathematical objective -- the result should be simple and esthetically pleasing -- and we had to combine various tricks from the electronic CAD to achieve this goal which, to a human mind, appears deceivingly simple. In order to demonstrate how associations and intrusive data structures increase the safety of data structures, program test1 includes two typical errors. The first one is caught by the compiler, the second is caught at the run-time. On the first attempt to compile by invoking tt.bat, you get a compiler error indicating that, on line 16, addTail(e,d2) has wrong parameter types. The method has types addTail(Department,Employee) and not addTail(Employee,Department). Note that similar type checking is NOT available for Java language containers which compile and then mysteriously crash under similar circumstances. After you correct the error, the program compiles, but when it runs it printsboss.add() error: already has a linkthen it continues running and eventually crashes. You don't have to search for the reason of the crash. The message tells you that you attempted to add a boss twice to the same department. This type of error checking is NOT available in any existing Java or C++ container library. After correcting the second line marked with the 'error' comment, the program runs and prints correct results:dept=100 manager=C.Black dept=110 manager=A.Green J.Fox K.Doe dept=111 manager=B.Brown S.Winter I.Springer B.Summers dept=112 manager=G.Gray F.Beech H.Oats dept=120 manager=B.White WARNINGS: (1) If the first error were empl.addTail(m,e), a good Java compiler would catch the type incompatibility. Unfortunately, Microsoft J++ does not catch that one and then the program crashes without giving you any clues what is wrong. The problem is in the compiler, not in the jlib library. (2) If you forward the results into a file, for examplejava test1.test1 > test1\resyou do not get the run-time error message. The message is printed into file res which is then scrapped when the program crashes. Note that even if the program does not compile, you already get the UML diagram which you can view by invoking ss.bat and then going to display.svg with your Internet browser:): If you invoke test2\tt.bat, the program compiles and runs with the same results as before. Since the class Company and the hash table of the the Employees are not used, it does not matter that we have not specified the functions required for hashing. The new UML diagram is produced regardless whether the compiler finds errors or not: Let's now evolve our program test2 by connecting the root of all the Departments to a Company object and by storing Employees in the hash table supporting fast searches for employees by their names. We also want to be able to find, for a given employee, all his or her superiors. When writing this code, I discovered that the data must support traversal of Departments and Employees in both directions, which means that the two LinkedList1 data structures must be replaced by Aggregate2 where each child knows its parent. Also, I found that it would be handy to replace the SingleLink 'boss' by a DoubleLink with the same name. Below is the new code as stored in directory test3. The relatively few blue changes are the result of changing the data organization -- they were easy to implement because the compiler told me what had to be modified and why. The green changes implement the new features, in other words they are add-ons to the original program. Note the two new methods for the Employee class, eHash_hash() and eHash_equals() as required for the hash table. Unless you want to use your own hashing algorithm, these funtions just pick up the default from class eHash: The result of running this program they only generate a rough skeleton which you often must modify by hand. The big problem with the existing tools is that they try to match UML associations with container based data structures such as provided by Java language or C++ class libraries such as STL. Since the two concepts - associations and containers - do not match, the tools can generate only a rough code and cannot properly support model evolution. In particular, in some situations, it is impossible to retrieve the UML information automatically. For example, assume that programmers who are implementing the software add 3 pointers and 2 collections to the model. Unless you know their intentions it is impossible to guess whether this is a new, complex association or just 5 simple ones, or perhaps only expansion (change) of some existing associations. For more explanation see the book Next Software Revolution. Here is an example of how IN_CODE modelling eliminates these problems. Let's assume that we have a warehouse which stores parts required for the manufacturing of several different products. The parts are identified by their ID number (their bar code), the products are identified by their names. Right away, we see that we need 3 basic entities which should be represented as classes: Warehouse, Part, Product - with one-to-many relations between Warehouse and Part and between Product and Part. Instead of wasting time on playing with a graphical tool, you simply describe this model in a few lines of text. Note that at this stage of the design we do not care much about how the associations are implemented, and we use general associations such as uni-directional one-to-many" called Uni1toX in jlib. If you have read Part 1 (bottom-up approach), you already know what the various parts of this code mean: Then you invoke this little batch file (tt.bat) Besides compiling the program which so far does nothing, tt.bat generates file layout.inp which can be instantly converted into the UML class diagram by calling ss.bat and then looking at layout.svg by your favourite browser: Note that even if you do not have *.java files for your classes yet, you still can derive the UML just from file ds.def without compiling the software (file test4\ttt.bat): and then you generate the UML diagram by invoking test4\ss.bat (the diagram is the same). If your design involves inheritance, you have to tell codegen about it by supplying *.java files at least for those classes that use inheritance. Without that, codegen does not know about the inheritance and cannot display it in the UML diagram. Let's continue evolving our original design. After you discuss the UML diagram with your client, several issues come up: All this results in only a few changes (shown in red):All this results in only a few changes (shown in red): - The warehouse should keep the count of the parts currently in stock. Class Part should be renamed PartType and have an int member count. - There should not be just parts, but also assemblies which combine parts and other, simpler assemblies. - The current model does not provide access to individual products. - The client forgot to tell you there is not just one warehouse but several of them. - It would make sense to add one more class, Company, in order to encapsulate the entire problem. Invoking ss.bat will give you easily create such data organization and measure the change in the Java free memory. Note that this method of estimating the size of data is different from what you typically do in C++ (see the C++ tutorial) where you can simply multiply sizeof(className) by the number of the objects of the given class. Let's follow the Java approach: The result is 724840 bytes, very close to 717896 bytes required for the C++ version - see the C++ tutorial. Both results are only first rough estimates anyway. The program will evolve and with every new member or data structure objects will grow in size . Let's assume that after you have done all that, the new requirements come in: All this results in the following additions of our code. Note how compact and efficient our textual representation is.All this results in the following additions of our code. Note how compact and efficient our.... public class Supplier { // new class public ZZ_Supplier ZZds; public Supplier(){ZZds=new ZZ_Supplier();} } Association Uni1toX<Company,Supplier> suppliers; Association Uni1toX<Supplier,PartType> supply; Association Hash<Company,PartType> partHash; ...and when you invoke ss.bat in directory jtut\test6, you get the new UML diagram: At this point one of the programmers who is present at the meeting asks: "... and could we calculate the orders to individual suppliers if we want to build n products of type x?" The logic of this calculation is simple but it requires, for a given PartType, to know who is the supplier. This means that the association 'supply' must be re-defined as bi-directional:Association Uni1toX<Supplier,PartType> supply; // old model Association Bi1toX<Supplier,PartType> supply; // new model And the new UML diagram has a small change (no arrow) on the 'supply' link: At this point you may want to proceed with the implementation and at any time, as you evolve the software, you can compile and test the features you just implemented. The progress is fast (rapid development), yet you are getting a solid, production-grade code. The UML diagram always perfectly matches the code, and major changes of the architecture are easily absorbed even on large volume of code. The association classes have been designed so that the compiler tells you where your code requires changes -- usually only in surprisingly few places. Note that, from this moment on, there is no difference between the top-down and bottom-up approaches. You evolve both the code and the architecture. However, when using associations, they both are controlled by the textual UML model which is a part of your code (file ds.def). Gradually, you will replace the generic Associations (Uni1toX, Bi1toX,..) by more specific data structure such as Bag, LinkList2, Aggregate2 etc. Again this will result in no or only a few changes of your code. For example, if you re-define the data organization like this (see directory jtut\test6 files ds.alt and ttt.bat) Association LinkedList1 warehouses; Association LinkedList1 products; Association Uni1toX stored; Association Bag needed; Association Bag assemble; Association Name prodName; Association LinkedList2 suppliers; Association Aggregate2 supply; Association Hash partHash; The Java compiler tells you that you cannot use methods warehouses.add() and products.add() on lines 24 and 33 of test6.java. The reason is that for LinkedList1 you must use either addHead() or addTail() instead. That's all. Regardless of the compiler errors, you still get the new UML diagram. Note that the association stored is still left as generic, without any instruction about its implementation: PART 3: The library of data structures For a detailed description of the currently available classes, see incode\jlib\doc\jClasses.doc. IMPORTANT: Note that this library is well protected against a wide range of errors, and it has been designed for the maximum performance and ease of use. The penalty for all this is a bit of additional work when building the library itself. 3.1 Adding a new data structure to the library This Chapter describes how to design a new library class (data structure or association) and add it either to your own library (myLib) or to the standard jlib library (jlib/lib). The library classes use parameters $$,$0,$1,$2 in a style similar to C++ templates or macros. These parameters allow the code generator (codegen) to create associations which are customized to the participating classes, just like the C++ compiler which expands C++ templates. However, codegen does a bit more -- it also generates special classes that bind together classes that form the association. For example inpublic class Company { public ZZ_Company ZZds; Company(){ZZds=new ZZ_Company();} }class ZZ_Company is one of these transparently generated classes. If you have experience with data structures and C++ templates, you may intuitively understand why parameters $$, $0, $1, $2 are needed and how you should use them. If your background is different and you find these parameters confusing, follow the slow route first. It introduces these parameters in a gradual, more logical way. After this detour. you can then return to this spot and continue reading. For additional help, look at the book "Next Software Revolution" by Jiri Soukup. Note that general association describes a cooperation among several classes. However, the existing code generator (codegen) supports only associations which connect at most two classes. This restriction is not conceptual and will soon be removed. As elsewhere in this tutorial, a uni-directional set. It is based on the doubly-linked list for which the operation remove() is very fast without changing the order of children. We will show how to code this data structure and add it to the library as MyLinkedList2. Note that jlib has associations Ring2 and LinkedList2. Ring2 is a simple ring of children. LinkedList2 is derived from Ring2 and adds a parent to the ring. Association MyLinkedList2 which we are going to code will have the same functionality as LinkedList2, but its internal design will be different. It will not be derived from another class, it will be coded from scratch. Before we start to code, remind yourself how the data structure will be used. Here are several examples.Association MyLinkedList2<Company,Product> products; Association MyLinkedList2<Company,Employee> employees; Association MyLinkedList2<Product,Component> assembly; $1 $2 $$ The last line shows the three parameters which we will need for coding the generic form of the association: - $$ is the name of the association - $1 is the name of the first class participating in the association, usually the more important one, called "parent" or "holder" - $2 is the second class participating in the association, usually called "element" or "child". The Java code for MyLinkedList2 will include 4 classes: - MyLinkedList2, the data structure itself (its controls), - MyLinkedList2Iterator which will help the user to traverse the list, - MyLinkedList2Parent which will provide the data and references needed in the parent class, - MyLinkedList2Child which will provide the data and references needed in the child class. The files with these classes will have the type *.jt and not *.java (for example MyLinkedList2.jt) because they are not regular Java files but special Java templates. After you read the entire PART 3, you may learn more by analyzing other associations from jlib/lib. File registry is in particular useful when browsing through this directory. Let's return to our MyLinkedList2, and start with MyLinkedList2Parent will provides the parent's reference to the tail of the children list. FILE MyLinkedList2Parent.jtThis code is easy to read and understand. The next class, MyLinkedList2Child, has pointers next and prev which implement the doubly linked ring FILE MyLinkedList2Child.jt The previous two classes did not contain any logic, only the references implementing the data structure. On the other hand, class MyLinkedList2 includes no references, only methods which control the association: FILE MyLinkedList2.jt For this associaiton, the iterator may allow to traverse the children in both directions but, for the sake of simplicity, let's implement only the forward traversal:Association MyLinkedList2>Product,Component< assembly; Product p; Component c; assembly_Iterator it=new assembly_Iterator(); ... for(c=it.fromHead(p); c!=null; c=it.next()){ ... } // forward traversal Note that iterators usually store some temporary data and their methods are not(!) static: FILE MyLinkedList2Iterator.jt By now, you probably see the pattern of how the $-parameters are used: You probably agree with me that, with these parameters, the code is still quite readable and definitely less cluttered than C++ templates. Prefix $$ prevents the collision of names when two classes are connected by several associations of the same type:You probably agree with me that, with these parameters, the code is still quite readable and definitely less cluttered than C++ templates. Prefix $$ prevents the collision of names when two classes are connected by several associations of the same type: - $$ is used only as a prefix for all the classes, - S1 is used as the parent or the first participating class, - $2 is used as the child or the second participating class. - $0 is used as a prefix to all references in the parent or child classes.Association Aggregate2<Product,Component> assembly1; Association Aggregate2<Product,Component> assembly2; Association Aggregate2<Product,Component> assembly3; This is similar to using multiple Java containers. For example:public class Product { private Vector assembly1; // used as a vector of Components private Vector assembly2; // used as a vector of Components private Vector assembly3; // used as a vector of Components // ... } This situation causes no complications for simple containers but becomes an issue in a library like jlib which is type protected and supports bi-directional and intrusive associations. The next step is to move the new classes to our new library, directory myLib. If you wanted to add the new association to the jlib library, you would add these files to directory jlib\lib. This would be rather pointless though since, as we explained, jlib/lib already contains association LinkedList2 which is more sophisticated than MyLinkedList2 in this simple example. Note that, just like C++, Java will soon have templates. These templates will allow parametrization by type and could potentially replace jlib parameters $1 and $2. However, they will not replace parameters $$ and $0, and will not eliminate the need for the code generator (codegen). After you move the four files to myLib, you also have to register the new association by creating file myLib\registry with the following line. (If you already have other classes in myLib, simply add this line anywhere in myLib\registry which is already there -- the order of the lines is not important): jlib Ver.2.0, the FSM class is not included yet. This example is based on class FSM from the Pattern Template Library (PTL), the library with classes that are generally easy to port to jlib. A special registration code is used for commonly occuring many-to-many associations. from directory doc\jtut\test7): NOTE: The recommended way to deal with variable length names is to use the jlib association Name and not the String references (as used here, see cName and pName). The objective of jlib is to eliminate all user controlled references from the application classes. As an exercise, you may try to replace these two members by the association Name. This code assumes that the application classes Product.java, Component.java, and test7.java are in directory jtut\test7 as package test7, while your new library with classes representing MyLinkedList2 are in jtut\myLib. and codegen is called to deposit the requested files to jtut\jlibGen. The following batch file compiles and runs this test from jtut (see jtut\test7\tt.bat): If your new library class had an error -- and who can claim that all his/her code always works without debugging -- the compiler message will refer to generated *.java class in directory jlibGen and not to its generic *.jt form from directory myLib. For example, if file MyLinkedList2Iterator.jt variable ret has a wrong type The compiler tells you that in file jlibGen\assembly_MyLinkedList2Iterator.java on line 15 there is a mismatch of types between ret and next. Without looking at directory jlibGen, you can go straight to myLib\MyLinkedList2Iterator.jt, and you will see that 'ret' should be declared as $2 and not as $1. If you want to avoid such indirect debugging, you can take the slow route. You first fully design and debug the association with specific classes, then replace them with parameters $$,$0,$1,$2, and the remaining errors are only a few and easy to find. 3.2 Deriving a new association from a simpler existing associationYou guessed it correctly, now we are going to expand LinkedList2 into something else, and it will be Aggregate2, which is LinkedList2 where each child knows its parent. This is exactly how it is done in jlib\lib, and the code listings shown below are taken from there. (If you want to expand MyLinkedList2 to MyAggregate2, just change the names accordingly.) We will have again four new files: Aggregate2Parent.jt, Aggregate2Child.jt, Aggregate2.jt, and Aggregate2Iterator.jt, plus one more file Aggregate2ParentAggregate2Child.jt for the situations when the same class is used both as the parent and as the child. We did not discuss this situation for MyLinkedList2, but LinkedList2 from jlib\lib uses such a class. Note that LinkedList2 methods which work without a change do not have to be re-coded or even listed in Aggregate2.jt. This applies in particular to Aggregate2Iterator which inherits all its methods from LinkedList2. Some methods of Aggregate2, for example remove(), have fewer calling parameters because each child knows its parent now. FILE Aggregate2Parent.jt: FILE Aggregate2Child.jt: FILE Aggregate2ParentAggregate2Child.jt: FILE Aggregate2.jt: The line which we have to add to the registry file is more complicated than it was for the association coded from scratch. It has to describe how Aggregate2 (and its parameters) are derived from the LinkedList2 (and its parameters). Character ':' is used to record inheritance, two examples from the existing library (jlib/lib/registry): Here the Array refers only to ArrayElements, no data or references are injected into class ArrayElements. Bag inherits from Array but its second parameter, class BagElement is still passive (no injected data). 3.3 Converting a Java container to an association Since containers are only special (simple) associations, converting a Java container to an association is only a matter of re-writing the interface. For example, class Vector1 from jlib/lib is just the Java Vector class with a slightly different interface and additional type protection: The three critical parts of the push_back() call are: a,b,vec. Their order reflects your way of thinking depending on which approach you use. When using the traditional Java style, a is first on your mind, then you think about vec as a part of A and, finally, you mentally add b to it. When treating the relation as an association, you first think about the model and the association vec and only then you decide which a and b will be involved. Here are the two classes which represent Vector1 in jlib\lib: FILE Vector1Parent.jt: FILE Vector1.jt: WARNING: The jlib association Vector1 was mechanically derived from Java Vector without properly re-testing individual methods. The conversion was so simple and straightforward that the resulting class can be considered reasonably safe. 3.4 Expanding the Java Vector to a bi-directional association Since we have converted Java Vector to an association (Vector1), we can derive the bi-directional association (Vector2) from Vector1. The main addition is that each element of the vector array must keep a reference to to the object which holds the array. Because of its intrusive nature, this data organization cannot be implemented as a Java container. FILE Vector2Parent.jt: FILE Vector2Child.jt: FILE Vector2.jt:
http://www.codefarms.com/docs/incode/jtutor.htm
CC-MAIN-2017-39
refinedweb
5,065
53.61
> Il giorno 28 dic 2018, alle ore 00:41, Tejun Heo <tj@kernel.org> ha scritto:> > Hello, Paolo.> > On Sun, Dec 23, 2018 at 12:00:14PM +0100, Paolo Valente wrote:>> 4.21 is coming ... and the legacy proportional share interface will>> be gone with cfq. This will break legacy code using the>> proportional-share interface on top of bfq. This code may just fail>> when trying to use interface files that don't exist any longer.> > Sounds like inheriting .cfq namespace would be the easiest. Would> that work?> For bfq, yes, but what will, e.g., Josef do when he adds his newproportional-share implementation? Will he add a new set of names notused by any legacy piece of code?What's the benefit of throwing away months of work, on which we agreedbefore starting it, and that solves a problem already acknowledged byinterested parties?Thanks,Paolo> Thanks.> > -- > tejun
https://lkml.org/lkml/2018/12/30/25
CC-MAIN-2019-18
refinedweb
150
78.55
C Programming Preprocessor and Macros Preprocessor extends the power of C programming language. Line that begin with # are called preprocessing directives. Use of #include Let us consider very common preprocessing directive as below: #include <stdio.h> Here, "stdio.h" is a header file and the preprocessor replace the above line with the contents of header file. Use of #define Preprocessing directive #define has two forms. The first form is: #define identifier token_string token_string part is optional but, are used almost every time in program. Example of #define #define c 299792458 /*speed of light in m/s */ The token string in above line 2299792458 is replaced in every occurance of symbolic constant c. C Program to find area of a cricle. [Area of circle=πr2] #include <stdio.h> #define PI 3.1415 int main(){ int radius; float area; printf("Enter the radius: "); scanf("%d",&radius); area=PI*radius*radius; printf("Area=%.2f",area); return 0; } Output Enter the radius: 3 Area=28.27 Syntactic Sugar Syntactic sugar is the alteration of programming syntax according to the will of programmer. For example: #define LT < Every time the program encounters LT, it will be replaced by <. Second form of preprocessing directive with #define is: Macros with argument Preprocessing directive #define can be used to write macro definitions with parameters as well in the form below: #define identifier(identifier 1,.....identifier n) token_string Again, the token string is optional but, are used in almost every case. Let us consider an example of macro definition with argument. #define area(r) (3.1415*(r)*(r)) Here, the argument passed is r. Every time the program encounters area(argument), it will be replace by (3.1415*(argument)*(argument)). Suppose, we passed (r1+5) as argument then, it expands as below: area(r1+5) expands to (3.1415*(r1+5)*(r1+5)) C Program to find area of a circle, passing arguments to macros. [Area of circle=πr2] #include <stdio.h> #define PI 3.1415 #define area(r) (PI*(r)*(r)) int main(){ int radius; float area; printf("Enter the radius: "); scanf("%d",&radius); area=area(radius); printf("Area=%.2f",area); return 0; } Predefined Macros in C language How to use predefined Macros? C Program to find the current time #include <stdio.h> int main(){ printf("Current time: %s",__TIME__); //calculate the current time } Output Current time: 19:54:39
http://www.programiz.com/c-programming/c-preprocessor-macros
CC-MAIN-2015-18
refinedweb
392
59.6
This article will guide you through building your first Photino Desktop app (built on top of .NET Core) which will run on all of the Big 3 Platforms (Linux, Mac, Windows). The future you've always dreamed of is finally here: Build Your Desktop App Once, Run It Anywhere. Yes, this future does come with HTML5 (HTML, JavaScript, CSS) but it is fine, my seasoned Desktop Developer Friend. It's fine because now you have the power of the .NET Core Framework behind you. Build your User Interface one time (using HTML5, JavaScript & CSS) while leveraging all the power of .NET Core to get to the Desktop API functionality (read/write files, Cryptographic APIs, everything that is exposed via .NET Core). I've written a Password Generator (windows store link[^] FOSS (Fully Open Source Software) so you can get the source at my Github link[^]. If you're going to write a Password Generator that people are going to use, it is going to have to run on every known platform so no matter where a user needs her password, it will be available. The original version is written using ElectronJS (Chrome engine) and runs on all the major platforms also. Now that Photino has arrived, I am going to convert the app to .NET Core and it has an easy path to do so. By the way, Photino is backed by the good people at CODE Magazine and you can see all the documentation at tryphotino.io. Also, as I said, it's all Open Source & you can get all the code at github. Here's a quick, example of a FileViewer I'm working on. Remember, the UI is built on HTML5, JavaScript & CSS, but it is able to call local "desktop" APIs via .NET Core -- Directory.GetFiles(), etc. FileViewer Directory.GetFiles() But, to see what Photino can do for you, let's write our first program using the library. After cloning the code to a fresh system that didn't have .NET Core installed that when I installed .NET Core 6.x the project wouldn't build. .NET Core 6.x is the new standard so it would be a pain to have to also install an old version. Instead of doing that, you can simply update the HelloPhotino.NET.csproj* file to reference .net6.0. HelloPhotino.NET.csproj* *This name is the default project name that the Photino template gives your project. I should've renamed it. 😖 Just open the .csproj file in your editor and change the following line: .csproj <TargetFramework>net5.0</TargetFramework> Just change the 5 to a 6 and then you'll be able to build. <TargetFramework>net6.0</TargetFramework> I'll start off assuming that you do have .NET Core 5 or 6 installed. You can determine which version you have with the following command: $ dotnet --version Open up a command line prompt and run the following: $ dotnet new -i TryPhotino.VSCode.Project.Templates That will simply add a list of project templates which will be available to the dotnet new command dotnet new You can run the following command to see the list of all Project Templates (you will see the new ones included in the list): $ dotnet new -l // that's a lowercase L for list You'll see a list of all project templates which looks something like the following: style="width: 640px; height: 514px" alt="Image 2" data-src="/KB/Articles/5333548/photino_001.png" class="lazyload" data-sizes="auto" data-> Now that we have the Photino project templates installed, we can go to development directory (I name mine dev/dotnet/photino/ to contain all my photino projects) and then issue the following command. photino ~/dev/dotnet/photino $ dotnet new photinoapp -o FirstOne Running that command will: -o Once you create the boilerplate project, you can run it immediately. Just move into the new directory and run: $ dotnet run // compiles & runs the app The app will start up & a popup dialog will appear in the middle of it to demonstrate that you can do things via JavaScript. style="width: 600px; height: 439px" alt="Image 3" data-src="/KB/Articles/5333548/photino_002.png" class="lazyload" data-sizes="auto" data-> Click the [Close] button so you can see the main interface. width="603" alt="Image 4" data-src="/KB/Articles/5333548/photino_003.png" class="lazyload" data-sizes="auto" data-> Click the [Call .NET] button and you'll see the following: width="602" alt="Image 5" data-src="/KB/Articles/5333548/photino_004.png" class="lazyload" data-sizes="auto" data-> Nothing too amazing so far. Let's take a look at the files and code which are included in the project so we can get an idea of what is really happening. After that, we'll make a "Desktop API" call via C# which would never work in a Web App, to prove that this application really is quite amazing. Here's one big snapshot of the project from within Visual Studio Code which shows a lot of detail. style="width: 640px; height: 440px" alt="Image 6" data-src="/KB/Articles/5333548/photino_005.png" class="lazyload" data-sizes="auto" data-> On the top right side inside the Program.cs file, you can see that we have our normal Main() method that we've come to know and love. Main() This is an actual C# .NET Program. The magic is in the fact that it auto-loads the WebView2 (Microsoft docs) as the main Form interface and then loads your target HTML inside that WebView2 control. Form WebView2 If we scrolled a bit further down in the code, you would see that the last call that the Main() method makes is the following Photino library call: Photino .Load("wwwroot/index.html"); Of course, as you can see over on the left, that index.html file is located in the wwwroot folder. The index.html file looks like the following: style="width: 640px; height: 526px" alt="Image 7" data-src="/KB/Articles/5333548/photino_006.png" class="lazyload" data-sizes="auto" data-> It's all just simple HTML but that file makes up the entire User Interface for this app. That's pretty amazing. That means you can now take any HTML5 (web-based) app and wrap it inside of Photino and turn it into a desktop app which will run on any Mac, Linux or Windows machine natively. As an experiment, I created a template Photino project, took my web-based C'YaPass app (Password Generator), dropped in the HTML (index.html), JavaScript and CSS files and ran the Photino app and got the following with no code changes. style="width: 640px; height: 366px" alt="Image 8" data-src="/KB/Articles/5333548/photino_007.png" class="lazyload" data-sizes="auto" data-> That app uses HTML5 Canvas, localStorage and various other HTML technology but runs perfectly on any desktop. Canvas localStorage That app also generates SHA-256 hash codes (for use as passwords) via a JavaScript function. Now, with Photino, I can remove the JavaScript and use the .NET Core Cryptopgraphy libraries to make everything a bit cleaner. I can do that because I can make calls to the desktop APIs via C# within the Photino framework. Let's see how we can make a simple call to a .NET API. To prove this out, we really do need to make a call to the Desktop API via C#. Desktop To do this work, we will: I'll add the completed code at the top of this article so you can try it out easily. The code that does that auto-popup is annoying so I removed it. To keep this simple, I am going to add a new button right under the existing one (from the project template): <button id="callApiButton" onclick="callApi()">Call API</button> FYI - Yes, I know that many people don't like having the event-handler (onclick) right on the HTML element, but this is simplified for our example. onclick After adding it, you can run and see the button exists, but does nothing. If you're following along to run the app, just go to your project command line and type: $ dotnet run width="600" alt="Image 9" data-src="/KB/Articles/5333548/photino_008.png" class="lazyload" data-sizes="auto" data-> Now, let's go make the button do something. I'm going to add a new JavaScript file (api.js) and include it at the top of the index.html file. The api.js file will include the code to handle the callApi() function. callApi() I'm copying the boilerplate code out of the index.html which is used to send a message to the app when the first button is clicked: window.external.sendMessage('Hi .NET! 🤖'); That is JavaScript code which is used to interact the Photino library which handles the message sending. The message the template project sends is very naive because it is just a string. In reality, we'll probably want / need to send some kind of structure which contains: string I'm going to create a JavaScript object, then use JSON.stringify (create perfect JSON) to send the string across to the C# side which will then deserialize it and get the command out. JSON.stringify Here's the entire code list of api.js: function callApi(){ let message = {}; // create basic object message.command = "getUserProfile"; message.parameters = ""; let sMessage = JSON.stringify(message); console.log(sMessage); window.external.sendMessage(sMessage); } In this case, I'm not using any other parameters but I'm passing them in anyways. Also, I didn't have to create a separate sMessage variable but I'm doing that so you can take a look at the actual string (JSON) that we are passing across. sMessage If you're following along, don't forget to add the reference to our new api.js at the top of index.html. After you've got it all set up, run the app ($ dotnet run) and click the new button. You will see some logging in your console window (from Photino.net) and you'll see the received message popup in the app. style="width: 640px; height: 235px" alt="Image 10" data-src="/KB/Articles/5333548/photino_009.png" class="lazyload" data-sizes="auto" data-> This isn't complete yet though, because we want it to capture the message.Command and act accordingly (call a specific desktop API). message.Command To do that work, we need to change the Program.cs to parse out the JSON we sent into an appropriate object. We need to do that work on the C# side of things. I've added a new folder named Model (for Domain Model objects) and I've created the new DTO class file named WindowMessage.cs. (You'll see this all in the final code attached to this article.) Here's the simple code that will now make it extremely easy to use the C# JSON serializer/deserializer in our code. using System; class WindowMessage{ public WindowMessage(String command, String parameters) { this.Command = command; this.Parameters = parameters; this.AllParameters = parameters.Split(',',StringSplitOptions.RemoveEmptyEntries); } public String Command{get;set;} public String[] AllParameters{get;set;} public String Parameters{get;set;} } The incoming parameters will be a comma-delimited string and then the class will automatically split on it and create an array of String that are the parameters we may want to use. String Let's go use this code now. In Program.cs, the main Message Handler (from project template) is a simplified method which looks like the following: .RegisterWebMessageReceivedHandler((object sender, string message) => { var window = (PhotinoWindow)sender; // The message argument is coming in from sendMessage. // "window.external.sendMessage(message: string)" string response = $"Received message: \"{message}\""; // Send a message back the to JavaScript event handler. // "window.external.receiveMessage(callback: Function)" window.SendWebMessage(response); }) You can see that the incoming message is just a string. Of course, in our new code, we are guaranteeing that we send a WindowMessage object (via JSON). WindowMessage Because C# makes JSON deserialization so easy, we can add the following code to deserialize into our DTO (WindowMessage) and handle the Command value. Command I added using statements at the top of Program.cs: using using System.Text.Json; using System.Text.Json.Serialization; Now I can add the following code at the top of the .RegisterWebMessageReceivedHandler() function call: .RegisterWebMessageReceivedHandler() WindowMessage wm = JsonSerializer.Deserialize<WindowMessage>(message); This will parse the incoming message String into our target DTO. message String Now, our code in the .RegisterWebMessageRecievedHandler() looks like: .RegisterWebMessageRecievedHandler() .RegisterWebMessageReceivedHandler((object sender, string message) => { var window = (PhotinoWindow)sender; WindowMessage wm = JsonSerializer.Deserialize<WindowMessage>(message); switch(wm.Command){ case "getUserProfile":{ window.SendWebMessage($"I got : {wm.Command}"); break; } default :{ // The message argument is coming in from sendMessage. // "window.external.sendMessage(message: string)" string response = $"Received message: \"{wm.Parameters}\""; // Send a message back the to JavaScript event handler. // "window.external.receiveMessage(callback: Function)" window.SendWebMessage(response); break; } } }) We simply deserialize the JSON into our DTO and then switch on the wm.Command value. wm.Command NOTE: I made a change to the original Button JavaScript so it'll pass a valid WindowMessage object too, but you can take a look at that code on your own. Button Here's what a run looks like when you click the new button. width="602" alt="Image 11" data-src="/KB/Articles/5333548/photino_010.png" class="lazyload" data-sizes="auto" data-> We can successfully run various C# code now, dependent upon what our Command in our WindowMessage is. Seasoned Devs: Isn't it interesting how this all harkens back to the original Windows Message loop (of Windows API programming) and handling messages? Well, this was supposed to be a fast introduction to Photino, so let's add a call to a .NET API and call it a day. However, wrap this up properly, we also need to show you how to use the value that is returned back to the User Interface side (HTML). To get the value back, we need to register a Message Receiver on the User Interface side when the app loads. We'll do two things: onload initApi() Here's the code (in api.js)which will be initialized when the app starts (on HTML load). function initApi(){ window.external.receiveMessage(response => { response = JSON.parse(response); switch (response.Command){ case "getUserProfile":{ alert(`user home is: ${response.Parameters}`); document.querySelector("#output").innerHTML = `${response.Parameters}`; break; } default:{ alert(response.Parameters); break; } } }); } This code will get a response (sent from the C# side) after the Desktop API is called. It will contain the value of the User's Home directory (retrieved via C# with Environment.GetFolderPath(Environment.SpecialFolder.UserProfile)). Environment.GetFolderPath(Environment.SpecialFolder.UserProfile) Once this code (JavaScript) receives the value, it will display it using an alert() and write it into the main HTML using document.querySelector("#output").innerHTML. alert() document.querySelector("#output").innerHTML Here's the final C# code. .RegisterWebMessageReceivedHandler((object sender, string message) => { var window = (PhotinoWindow)sender; WindowMessage wm = JsonSerializer.Deserialize<WindowMessage>(message); switch(wm.Command){ case "getUserProfile":{ wm.Parameters = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile); window.SendWebMessage(JsonSerializer.Serialize(wm)); break; } default :{ // The message argument is coming in from sendMessage. // "window.external.sendMessage(message: string)" wm.Parameters = $"Received message: \"{wm.Parameters}\""; // Send a message back the to JavaScript event handler. // "window.external.receiveMessage(callback: Function)" window.SendWebMessage(JsonSerializer.Serialize(wm)); break; } } }) Here's a snapshot after I click the new button. width="605" alt="Image 12" data-src="/KB/Articles/5333548/photino_011.png" class="lazyload" data-sizes="auto" data-> Now, you go and try it and make some of your own apps. Remember, you can now take this code & build it and deploy it to any OS and it will run properly. Amazing! Is this the new way to build desktop apps? I think it is a pretty cool way to build a User Interface that will run on any platform. I think it's amazing and I will continue to pursue further.
https://codeproject.freetls.fastly.net/Articles/5333548/Photino-Open-Source-for-Building-Cross-Platform-De
CC-MAIN-2022-40
refinedweb
2,659
57.57
I know this is a homework question, but I have reached the end of my rope and really need guidance. I have had four years of Java programming then walked into my CS 3 class and was given an assignment in python. I have never worked in Python before but tried writing some sample programs that seemed to work OK, but this project just won't cooperate. We have a text file(the Gettysburg Address) and we have to read it from sys.stdin into a dictionary which is to be sorted and have each word only once as the key with the word count for the value, then print out the dictionary. What I have is: import sys word_dictionary = {} l = [] m = [] for s in sys.stdin: l = s.split() #split the line into a list of words for i in l: m.append(i) #add elements of l to m since l will be overwritten for j in m: j.strip(' ,.?!') j.lower() m.sort() for k in m: if k in word_dictionary: word_dictionary[k]=word_dictionary[k]+1 else: word_dictionary[k]=1 for word in word_dictionary: print word,word_dictionary[word] However when I run this(python lab1.py getty.txt) nothing happens. If I put a print statement at the top where I initialize the lists and dictionary, it gets printed, but no other print statements are reached. Is there something wrong with my code? Or is my computer just really slow at reading the text file? Any help for an absolute novice?
https://www.daniweb.com/programming/software-development/threads/143690/help-for-a-novice-python-coder
CC-MAIN-2017-22
refinedweb
254
81.12
Hi, I have been struggling to figure this out and would really appreciate any help anyone can give: I have a config file: setenv test_host somehost setenv test_port 12345 setenv test_sub RRR:MMM before I compile and run my configreader.c file, I type "source config" to set the environments. Once that is done I compile and run my configreader.c file. The configreader is supposed to cout the "somehost" value. the "12345" value, and "RRR:MMM" value. I use the getenv() command. THe configreader is also supposed to parse out the RRR:MMM and put RRR and MMM into a char array or maybe a string so that I can refer to it somewhere else. I have tried to use sprintf and strtok, but I very confused as to what to do to be able to parse "RRR:MMM" into 2 char arrrays or string. Here is the code for configreader.c: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <iostream> #include <string.h> main() { char *env; int port = 0; char *env2; char buffer[50]; char *p; char one[10]; char two[10]; /*this works*/ env = getenv("test_host"); cout<<"this is the host "<<env<<endl; port = atoi(env); cout<<"this is the port number "<< port<<endl; /* QUESTION HERE???? */ env2 = getenv("test_sub"); cout<<"this is the test_sub"<<env2<<endl; int n = sprintf(buffer, "%s"); env2 = strtok (buffer, ":"); do { p = strtok(NULL, ":"); cout<<p<<endl; one = p; }while(p); /* ?? want to be able to retrieve one and two which should contain RRR and MMM */ }//end main THank you for any help anyone can provide to help me make this program work![CODE]
http://cboard.cprogramming.com/cplusplus-programming/43321-question-about-getenv-etc.html
CC-MAIN-2014-49
refinedweb
273
72.97
1. Priority queue 1.1 concept Queue is a first in first out data structure, but in some cases, the data operated may have priority. Generally, when leaving the queue, elements with high priority may be required to leave the queue. In this scenario, it is obviously inappropriate to use the queue. For example, when playing games on the mobile phone, if a call comes in, the system should process the queue first. In this case, our data structure should provide two basic operations, one is to return the highest priority object, and the other is to add a new object. This data structure is called priority queue. 1.2 introduction to common interfaces 1.2.1 characteristics of PriorityQueue Note about the use of PriorityQueue: - When using, you must import the package where PriorityQueue is located, that is: import java.util.PriorityQueue; - The elements placed in the PriorityQueue must be able to compare the size, and cannot insert objects that cannot compare the size, otherwise classcastexpectation exception will be thrown. - Cannot insert null object, otherwise NullPointerExpection will be thrown. - There is no capacity limit. You can insert any number of elements, and its internal capacity can be expanded automatically. - The time complexity of inserting and deleting elements is O ( l o g 2 N ) O(log_2N) O(log2N). - The underlying PriorityQueue uses a heap data structure (described later). - PriorityQueue is a small heap by default - that is, the smallest element is obtained every time. 1.2.2 introduction to common interfaces of PriorityQueue - Construction of priority queue Only several construction methods are listed here. For other methods, please refer to the help document. public class Priority { static void TestPriorityQueue(){ //Create an empty priority queue PriorityQueue<Integer> p1=new PriorityQueue<>();//The default capacity is 11 //Create an empty priority queue with the underlying capacity of initialCapacity PriorityQueue<Integer> p2=new PriorityQueue<>(100); ArrayList<Integer> list=new ArrayList<>(); list.add(4); list.add(3); list.add(2); list.add(1); PriorityQueue<Integer> p3=new PriorityQueue<>(list); System.out.println(p3.size()); System.out.println(p3.peek()); } public static void main(String[] args) { TestPriorityQueue(); } } Note: by default, the PriorityQueue queue is a small heap. If a large heap is required, the user needs to provide a comparator. - User provided comparator public class Card { String rank; String suit; public Card(String rank,String suit){ this.rank=rank; this.suit=suit; } } public class CardCmp implements Comparator<Card>{ @Override public int compare(Card o1, Card o2) { o1.rank.compareTo(o2.rank) ; return 0; } } Verification: the elements placed in the PriorityQueue must be able to compare sizes, otherwise classcastexpectation will be thrown public static void method3(){ PriorityQueue<Card> p=new PriorityQueue<>(new CardCmp()); p.offer(new Card("A","♠")); p.offer(new Card("K","♠")); } - By default, it is a small heap. How to create a large heap---- The starting element is the largest public static void method4(){ PriorityQueue<Integer> p=new PriorityQueue<>(new Comparator<Integer>() { @Override public int compare(Integer o1, Integer o2) { return o2-o1;//o2-o1 is a large pile and o1-o2 is a small pile } }); p.offer(5); p.offer(1); p.offer(4); p.offer(2); p.offer(3); } Insert / delete / get the highest priority element public static void method2(){ PriorityQueue<Integer> p=new PriorityQueue<>(); p.offer(5); p.offer(1); p.offer(4); p.offer(2); p.offer(3); System.out.println(p.size()); p.offer(null);//Null pointer exception System.out.println(p.peek());//Get the top of heap element -- that is, the element with the highest or lowest priority p.poll(); p.poll(); p.poll(); System.out.println(p.peek()); p.clear(); if(p.isEmpty()){ System.out.println("p is empty"); }else{ System.out.println("p is not empty"); } } The following is the capacity expansion method of PriorityQueue: public class Test { private static final int MAX_ARRAY_SIZE=Integer.MAX_VALUE-8; private void grow(int minCapacity){ int oldCapacity= queue.length; int newCapacity=oldCapacity+((oldCapacity<64)?(oldCapacity+2):(oldCapacity>>1)); if(newCapacity-MAX_ARRAY_SIZE>0){ newCapacity=hugeCapacity(minCapacity); queue= Arrays.copyOf(queue,newCapacity); } } private static int hugeCapacity(int minCapacity){ if(minCapacity<0){ throw new OutOfMemoryError(); return (minCapacity>MAX_ARRAY_SIZE)?Integer.MAX_VALUE:MAX_ARRAY_SIZE; } } } Description of priority queue expansion: - If the capacity is less than 64, it is expanded by twice the oldCapacity. - If the capacity is greater than 64, it is expanded by 1.5 times the oldCapacity. - If the capacity exceeds MAX_ARRAY_SIZE, according to MAX_ARRAY_SIZE for capacity expansion. 1.3 application of priority queue top-k: the largest or smallest top k data. top-k problem: the minimum number of K class Solution { public int[] smallestK(int[] arr, int k) { if(arr==null){ return new int[0]; } //Construct a small heap with all the elements in the array PriorityQueue<Integer> p=new PriorityQueue<>(); for(int i=0;i<arr.length;++i){ p.offer(arr[i]); } //Get the first k elements in the heap int[] ret =new int[k]; for(int i=0;i<k;++i){ ret[i]=p.poll(); } return ret; } } 2. Simulation implementation of priority queue The underlying PriorityQueue uses the data structure of the heap, and the heap actually adjusts some elements based on the complete binary tree. 2.1 concept of reactor If there is a key set K={ k 0 , k 1 , k − 2 , . . . , k n − 1 k_0,k_1,k-2,...,k_n-1 k0, k1, K − 2,..., kn − 1}, store all its elements in the order of a complete binary tree. In a one-dimensional array, if ki < = k2i + 2 (KI > = k2i + 2) I = 0,1,2... Is satisfied, it is called small heap (or large heap). The heap with the largest root node is called the maximum heap or large root heap, and the heap with the smallest root node is called the minimum heap or small root heap. Nature of heap: - The value of a node in the heap is always not greater than or less than the value of its parent node; - There is always a complete binary tree in the heap; 2.2 storage mode of heap According to the concept of heap, heap is a complete binary tree, so it can be stored in sequence according to the rules of sequence. Note: sequential storage is not suitable for incomplete binary trees. Reason: in order to restore the binary tree, empty nodes must be stored in the space, which will lead to low space utilization. After the elements are stored in the array, the tree can be restored according to the nature 5 of the binary tree part. Assuming i is the subscript of the node in the array, there are: - If I is 0, the node represented by I is the root node, otherwise the parent node of I node is (i-1)/2 - If 2i+1 is less than the number of nodes, the left child subscript of node i is 2i+1, otherwise there is no left child - If 2i+2 is less than the number of nodes, the subscript of the right child of node i is 2i+2, otherwise there is no right child 2.3 creation of heap 2.3.1 reactor downward adjustment How to create a heap of data in the set {27,15,19,18,28,34,65,49,25,37}? It is found from the above figure that the left and right subtrees of the root node have fully satisfied the nature of the heap, so you only need to adjust the root node downward. Downward adjustment process: - Let the parent mark the node to be adjusted, and the child mark the left child of the parent. - If the left child of the parent exists, that is, child < size, perform the following operations to know that the left child of the parent does not exist - parent whether the right child exists. Find the smallest child among the left and right children and let the child mark - Compare the parent with the child. If the parent is larger than the child, exchange the two. If the large element in the parent moves downward, the subtree may not meet the properties, so it needs to continue to adjust downward // Function: adjust the binary tree with parent as the root // Premise: it must be ensured that the left and right subtrees of the parent have met the characteristics of the heap // Time complexity: O(logN) private void shiftDown(int parent){ // By default, let the child mark the left child first - because: the parent may have left or not right int child = parent*2 + 1; // The while loop condition can ensure that the left child of the parent must exist // However, there is no guarantee that the right child of the parent exists while(child < size){ // 1. Find the younger of the left and right children if(child+1 < size && array[child+1] < array[child]){ child += 1; } // 2. The younger child has been found // Test whether parents and children meet the characteristics of heap if(array[parent] > array[child]){ swap(parent, child); // If large parents go down, the subtree may not meet the characteristics of heap // Therefore, it needs to continue to adjust downward parent = child; child = parent*2 + 1; }else{ // A binary tree with a parent as the root is already a heap return; } } } Note: when adjusting the binary tree with parent as the root, the left and right subtrees of the parent must be heap before downward adjustment. Time complexity: from the root node to the leaf node, the number of comparisons is the height of the complete binary tree, i.e O ( l o g 2 N ) O(log_2N) O(log2N) 2.3.2 creation of heap So how to adjust the ordinary sequence {1,5,3,8,7,6}, that is, the left and right subtrees of the root node do not meet the characteristics of the heap? public MyPriorityQueue(Integer[] arr){ // 1. Copy the elements in arr to the array array = new Integer[arr.length]; for(int i = 0; i < arr.length; ++i){ array[i] = arr[i]; } size = arr.length; // 2. Find the penultimate leaf node in the current complete binary tree // Note: the penultimate leaf node is just the parent of the last node // The number of the last node is size-1, and the index of the penultimate non leaf node is (size-1-1)/2 int lastLeafParent = (size-2)/2; // 3. From the position of the penultimate leaf node to the position of the root node, use downward adjustment for(int root = lastLeafParent; root >= 0; root--){ shiftDown(root); } } 2.3.3 time complexity of reactor construction Because the heap is a complete binary tree, and the full binary tree is also a complete binary tree, the full binary tree is used here for simplification. Suppose the height of the tree is h: Note: leaf nodes do not need to be adjusted because they are adjusted from the penultimate non leaf node. Level 1, 2 0 2^0 20 nodes, need to move down the h-1 layer; Level 2, 2 1 2^1 21 nodes, need to move down the h-2 layer; Level 3, 2 2 2^2 22 nodes, need to move down the h-3 layer; Level 4, 2 3 2^3 23 nodes, need to move down the h-4 layer; ... Layer h-1, 2 h − 2 2^{h-2} 2h − 2 nodes, need to move down 1 layer; The total number of steps to move the node: T ( n ) = 2 0 ∗ ( h − 1 ) + 2 1 ∗ ( h − 2 ) + 2 2 ∗ ( h − 3 ) + 2 3 ∗ ( h − 4 ) + . . . + 2 h − 3 ∗ ( 2 ) + 2 h − 2 ∗ ( 1 ) T(n)=2^0*(h-1)+2^1*(h-2)+2^2*(h-3)+2^3*(h-4)+...+2^{h-3}*(2)+2^{h-2}*(1) T(n)=20∗(h−1)+21∗(h−2)+22∗(h−3)+23∗(h−4)+...+2h−3∗(2)+2h−2∗(1) ------------ ① 2 T ( n ) = 2 1 ∗ ( h − 1 ) + 2 2 ∗ ( h − 2 ) + 2 3 ∗ ( h − 3 ) + 2 4 ∗ ( h − 4 ) + . . . + 2 h − 2 ∗ ( 2 ) + 2 h − 1 ∗ ( 1 ) 2T(n)=2^1*(h-1)+2^2*(h-2)+2^3*(h-3)+2^4*(h-4)+...+2^{h-2}*(2)+2^{h-1}*(1) 2T(n)=21∗(h−1)+22∗(h−2)+23∗(h−3)+24∗(h−4)+...+2h−2∗(2)+2h−1∗(1) -------------② ② - ① offset subtraction: T ( n ) = 1 − h + 2 1 + 2 2 + 2 3 + 2 4 + . . . + 2 h − 2 + 2 h − 1 T(n)=1-h+2^1+2^2+2^3+2^4+...+2^{h-2}+2^{h-1} T(n)=1−h+21+22+23+24+...+2h−2+2h−1 = 2 0 + 2 1 + 2 2 + 2 3 + 2 4 + . . . + 2 h − 2 + 2 h − 1 − h =2^0+2^1+2^2+2^3+2^4+...+2^{h-2}+2^{h-1}-h =20+21+22+23+24+...+2h−2+2h−1−h = 2 h − 1 − h =2^h-1-h =2h−1−h And because n= 2 h − 1 2^h-1 2h−1, h = l o g 2 ( n + 1 ) h=log_2(n+1) h=log2(n+1) T ( n ) = n − l o g 2 ( n + 1 ) ≈ n T(n)=n-log_2(n+1)≈n T(n)=n−log2(n+1)≈n 2.4 heap insertion and deletion 2.4.1 insertion of reactor The insertion of the heap requires a total of two steps: - First put the elements into the underlying space (Note: when the space is insufficient, it needs to be expanded) - Adjust the last newly inserted node upward until it meets the nature of the heap private void shiftUp(int child){ //Find child's parents int parent = (child-1)/2; while(child != 0){ if(array[child] < array[parent]){ swap(child, parent); child = parent; parent = (child-1)/2; }else{ return; } } } 2.4.2 deletion of heap Note: the deletion of the heap must be the top element of the heap. - Swap the top element with the last element - Reduce the number of valid elements in the heap by one - Adjust the heap top element downward 2.5 implementation of priority queue with heap simulation public class MyPriorityQueue { Integer[] array; int size; // Number of valid elements boolean offer(Integer e){ if(e == null){ throw new NullPointerException("When inserting, the element is null"); } ensureCapacity(); array[size++] = e; // Note: when a new element is inserted, it may destroy the nature of the heap - it needs to be adjusted upward shiftUp(size-1); return true; } // Delete the elements at the top of the heap public Integer poll(){ if(isEmpty()){ return null; } Integer ret = array[0]; // 1. Exchange the top element of the heap with the last element in the heap swap(0, size-1); // 2. Reduce the number of effective elements in the heap by one size--; // size -= 1; // 3. Adjust the top element down to the proper position shiftDown(0); return ret; } public int size(){ return size; } public boolean isEmpty(){ return size == 0; } public void clear(){ size = 0; } public int peek(){ return array[0]; } } 3. Application of reactor 3.1 implementation of PriorityQueue Encapsulating priority queues with heap as the underlying structure 3.2 heap sorting Heap sorting is sorting with the idea of heap, which is divided into two steps: - Build pile Ascending order: build a pile Descending order: build small piles - Use the idea of heap deletion to sort public static void swap(int [] array,int left,int right){ int temp=array[right]; array[right]=array[left]; array[left]=array[right]; } public static void shiftDown(int[] array,int size, int parent){ int child =parent*2+1; while(child<size){ //Find the older of the left and right children if(child+1<size&&array[child+1]>array[child]){ child+=1; } //Parents are smaller than older children if(array[parent]<array[child]){ swap(array,parent,child); } } } //Hypothesis: ascending order public static void heapSort(int[] array){ //1. Build pile --- build a large pile in ascending order and a small pile in descending order for(int root=(array.length-2)>>1;root>=0;root--){ shiftDown(array,array.length,root); } //2. Use the idea of heap deletion to sort - adjust down int end=array.length-1;//Mark the last element with end while(end!=0){ swap(array,0,end); shiftDown(array,end,0); end--; } } 3.3 Top-k problem Top-k problem: find the first K largest or smallest elements in the data set. Generally, the amount of data is relatively large. For the top-k problem, the easiest way to think of is sorting, but if the amount of data is very large, sorting is not very desirable. The best way is to solve it with heap. The basic idea is as follows: - Use the first K elements in the data set to build the heap The first K largest elements build a small heap The first K smallest elements are built in a pile - Compare the remaining N-K elements with the heap top elements in turn. If not, replace the heap top elements After comparing the remaining N-K elements with the top elements in turn, the remaining K elements in the heap are the first K minimum or maximum elements.
https://programmer.group/619bb6bfd5387.html
CC-MAIN-2021-49
refinedweb
2,852
50.26
So many preludes! January 2, 2013 Michael Snoyman I'm happy to announce new versions of basic-prelude, classy-prelude, and classy-prelude-conduit, as well as the addition of the new package classy-prelude-yesod. This is the first release of these packages which I consider stable enough for general use, and I encourage people to have a look. You can also check out the basic-prelude and classy-prelude repos on Github. Since it's been a while since I discussed classy-prelude, let's start with a quick recap, beginning with the motivation: I think that the standard Prelude is lacking in a few ways: - It encourages some bad practices (e.g., the partial headfunction). - It promotes certain datatypes (e.g., String) over better alternatives (e.g., Text). - Some commonly used functions are not exported (e.g., mapMaybe). - Since it sticks to concrete types in many cases (usually lists), it uses up the common namespace for function names (e.g., length), thereby requiring programmers to use qualified imports on a regular basis. I think the first point stands on its own: there's a basic question we need to ask ourselves about what we consider idiomatic Haskell code, and in my opinion, partial functions is not a part of it. While that's an important point to discuss, it's relatively straight-forward, so I won't be dwelling on it any further. The other three points rest around a central philosophy I have: programmers are inherently lazy (in a good way), and will often choose the path of least resistance. To demonstrate, consider the difference between using String and Text for some simple concatenation: -- String version name = firstName ++ " " ++ lastName -- Text version import qualified Data.Text as T name = firstName `T.append` " " `T.append` lastName (Without OverloadedStrings, this would be even longer.) It's not that the second version is really that much worse than the first, it's just slightly less convenient. And due to that extra bit of work, Text gets used less often. You can see the same thing with Map versus associated lists, Vector and lists, and so on. Note: As Herbert pointed out to me, with GHC 7.4 and up, you could just use the <> operator provided by Data.Monoid instance of `T.append`. So consider the case where you need to use some Prelude functions that require a String. -- String version main = putStrLn $ "Invalid name: " ++ name -- Text version import qualified Data.Text as T import qualified Data.Text.IO as TIO import Data.Monoid ((<>)) main = TIO.putStrLn $ "Invalid name: " <> name By comparison, in classy-prelude this becomes: import ClassyPrelude main = putStrLn $ "Invalid name: " ++ name If you think that my assessment so far doesn't warrant any changes to our tooling, turn back now, this blog post isn't for you. If you are interested in some kind of a solution to this issue, I have two options for you. basic-prelude Points 1-3 above can actually be solved pretty easily: just create a new prelude module that has better defaults. BasicPrelude, provided by the basic-prelude package, does just this. It exports more helper functions, avoids partial functions, and exports a bunch of the common datatypes, like ByteString and HashMap. Another important premise of BasicPrelude is that it doesn't replace any existing types or type classes. It reuses the exact same Monad typeclass that is exported by Prelude. That means that code using BasicPrelude is completely compatible with "normal" Haskell code. Another way to put it is that BasicPrelude is a non-revolutionary approach to improving the Haskell programming experience. basic-prelude is actually split up into two modules: BasicPrelude and CorePrelude. BasicPrelude simply re-exports everything provided by CorePrelude, and then adds in some missing components. CorePrelude is intended to be a foundation for the creation of other preludes. It was originally part of classy-prelude, but was then separated out by Dan Burton, who now maintains basic-prelude with me. CorePrelude tries to export components that would be usable by all Prelude replacements. For now, our simple barometer of this is "would both BasicPrelude and ClassyPrelude use this?" BasicPrelude sticks to monomorphic functions for the most part, with a strong bias towards lists (just like standard Prelude). It doesn't really do much that's controversial, and should be a great approach to try out for people experimenting with an alternate prelude. And if you don't like something about it, you can either file an issue, or just create your own fork. Due to the design of basic-prelude, forking does not create incompatible code, so it's not a high-impact move. classy-prelude classy-prelude is the more radical prelude. As mentioned, it builds on top of CorePrelude, just like BasicPrelude does. The distinction, however, is that instead of providing monomorphic, list-biased functions, it creates a slew of typeclasses and provides polymorphic functions. Unlike many common typeclasses, these typeclasses are not intended to be used in a polymorphic context themselves, but rather to avoid the need to use qualified imports to disambiguate names. In other words, we're using typeclasses for namespacing purposes only. (Despite this, we actually have a fairly thorough test suite covering the behavioral laws of these type classes. So you could theoretically write polymorphic code with classy-prelude, it's just not what I originally intended.) This namespacing approach was fairly uncommon (perhaps classy-prelude was the first usage of it?) when I first started classy-prelude, and as a result I was unsure how well it would turn out in practice. At this point, I've been using classy-prelude for a number of my projects (both personal and at work), and the approach is certainly viable. I personally greatly prefer it to the non-classy approach, and will almost certainly be using it for the foreseeable future- likely until we get a better namespacing solution in GHC itself. There are of course some downsides, some of which can be worked around: Error messages become more confusing. I have no good solution to that right now. I don't think the messages are too daunting for experienced Haskellers, but I would not recommend classy-preludeto beginners. In some cases it is impossible for the compiler to figure out which type you mean. For example, the following monomorphic code is completely unambiguous: import qualified Data.Map as Map foo :: Text -> Int foo name = Map.lookup name people where people = Map.singleton "Michael" 28 However, the equivalent classycode is problematic: foo :: Text -> Int foo name = lookup name people where people = singleton "Michael" 28 The problem is that both singletonand lookupare polymorphic, and are not used in the result, so there's no way to know which container to use. Fortunately, there's an easy workaround: the as* functions. In our case, we just replace the last line with: people = asMap $ singleton "Michael" 28 Overall, the code is still shorter. As an added bonus, you can now simply switch asMapto asHashMapto swap which container structure you use. In some cases, keeping the same name can go beyond the capabilities of the type system. For example, when working on classy-prelude-yesod, overloading insertfor both its usage in Persistent and containers like Mapproved to be a bit problematic, specifically when not using the return value. For example, the following code doesn't compile: _ <- runDB $ insert $ Person "Michael" 28 The options in this case are either to not use the overloaded name and instead use a separately named function (in this case, insertDB), or to use a disambiguating helper function ( voidKey) that fixes the types- similar to the asMapfunction described above. Those two solutions look like: voidKey $ runDB $ insert $ Person "Michael" 28 runDB $ insertDB $ Person "Michael" 28 I'm not sure yet how I feel about these two approaches, but it definitely stresses the point that we're using the typeclass system in an extreme manner. classy-prelude-conduit and classy-prelude-yesod These two packages build on top of classy-prelude to provide even more common functionality. The former provides conduit and xml-conduit functions, while the latter adds on top of that yesod, persistent, http-conduit, and a few other things. classy-prelude-yesod has not been as thoroughly exercised as the other packages discussed here, so you're more likely to run into issues with it. Conclusion These alternate preludes are not for everyone, but I think they offer a lot to certain audiences. If you want to try out a smaller move, I'd recommend BasicPrelude. If you want to be a bit more experimental, go for classy-prelude. If you decide to drop either one at any point, it should not be overly onerous to switch back to normal, monomorphic, qualified imports. I'm definitely interested to hear people's experience with these packages. There are still lots of improvements to be made, more common functionality to be added, and documentation to be written. Now's a great time to get involved!
http://www.yesodweb.com/blog/2013/01/so-many-preludes
CC-MAIN-2015-32
refinedweb
1,506
62.68
PIC Interface keypad with PIC and transmit the data. On the receiver side, we will receive the data wirelessly and show which key is pressed on the LCD. - We will use encoder and decoder IC to transmit 4 bit data. - Reception Frequency will be 433Mhz using cheap RF TX-RX module available in the market. Before going into the schematics and codes, let’s understand the workings of RF module with Encoder-Decoder ICs. Also go through below two articles to learn how to interface LCD and Keypad with PIC Microcontroller: - LCD Interfacing with PIC Microcontroller using MPLABX and XC8 - 4×4 Matrix Keypad Interfacing with PIC Microcontroller 433MHz RF Transmitter and Receiver Module:. Need of Encoder and Decoders: This RF Sensor has few drawbacks:- - One way communication. - Only One channel - Very Noise interference. Due to this drawback we have used encoder and decoder ICs, HT12D and HT12E. D stands for decoder which will be used in Receiver side and E stands for Encoder which will be used in Transmitter side. This ICs provide 4 channels. Also due to encoding and decoding the noise level is very low. In the above image, left one is HT12D the decoder and right one is HT12E, the encoder one. Both ICs are identical. A0 to A7 is used for special encoding. We can use microcontroller pins to control those pins and set configurations. The same configurations need to be matched on the other side. If both configurations are accurate and matched, we can receive data. These 8 pins can be connected to Gnd or VCC or left open. Whatever configurations we do in the encoder, we need to match the connection on the decoder. In this project we will left open those 8 pins for both encoder and decoder. 9 and 18 pin is VSS and VDD respectively. We can use the VT pin in HT12D as notification purposes. For this project we did not used it. The TE pin is for transmission enable or disable pin. The important part is the OSC pin where we need to connect resistors is to provide oscillation to the encoder and decoder. The decoder needs higher oscillation than the decoder. Typically the Encoder resistor value will be 1Meg and the Decoder value is 33k. We will use those resistors for our project. DOUT pin is the RF Transmitter data pin on HT12E and DIN pin in the HT12D is used to connect the RF module data pin. In HT12E, AD8 to AD11 is four channel input which gets converted and serially transmitted through RF module and the exact reverse thing happens in HT12D, the serial data received and decoded, and we get 4 bit parallel output across the 4 pins D8 to D11. Components Required: - 2 – Bread board - 1 – LCD 16×2 - 1 – Keypad - HT12D and HT12E pair - RX-TX RF Module - 1- 10K preset - 2 – 4.7k resistor - 1- 1M Resistor - 1- 33k resistor - 2- 33pF ceramic capacitors - 1 – 20Mhz crystal - Bergsticks - Few single strand wires. - PIC16F877A MCU - PIC18F4520 MCU - A screw driver for controlling the frequency pot, need to be insulated from human body. Circuit Diagram: Circuit Diagram for Transmitter side (PIC16F877A): We have used PIC16F877A for Transmitting purpose. The Hex keypad connected across the PORTB and the 4 channels connected across last 4 bits of PORTD. Pin out as follows- 1.AD11 = RD7 2.AD10 = RD6 3.AD9 = RD5 4.AD8 = RD4 Circuit Diagram for Receiver Side (PIC18F4520): In the above image, the Receiver circuit is shown. The LCD is connected across PORTB. We used internal oscillator of PIC18F4520 for this project. The 4 channels are connected the same way as we did before in the transmitter circuit. This is the Transmitter side– And receiver side in separate breadboard– Code Explanation: There are two parts of the code, one is for the Transmitter and one is for the Receiver. You can download complete code from here. PIC16F877A code for RF Transmitter: As always first, we need to set the configuration bits in the pic microcontroller, define some macros, including libraries and crystal frequency. The AD8-AD11 port of the Encoder ic is defined as RF_TX at PORTD. You can check code for all those in the complete code given at the end. We used two functions, void system_init(void) and void encode_rf_sender (char data). The system_init is used for pin initialization and keyboard initializations. The keyboard initialization is called from the keypad library. The keypad port is also defined in the keypad.h. We made the PORTD as output using TRISD = 0x00, and made the RF_TX port as 0x00 as default state. void system_init(void){ TRISD = 0x00; RF_TX = 0x00; keyboard_initialization(); } void encode_rf_sender (char data){ if(data=='1') RF_TX=0x10; if(data=='2') RF_TX=0x20; if(data=='3') ……… …. .. …. …. In the main function we first receive the keyboard button pressed data using switch_press_scan() function and store the data in key variable. After that we have encoded the data using encode_rf_sender() function and changing the PORTD status. PIC18F4520 code for RF Receiver: As always, we first set the configuration bits in PIC18f4520. Its little bit different from PIC16F877A, you can check the code in the attached zip file. We included the LCD header file. Defined the D8-D11 port connection of Decoder IC across PORTD using #define RF_RX PORTD line, connection is same as used in the Encoder section. The LCD port declaration is also done in the lcd.c file. #include <xc.h> #include "supporing_cfile\lcd.h" #define RF_RX PORTD As stated before we are using internal oscillator for the 18F4520, we have used system_init function where we configured the OSCON register of the 18F4520 to set the internal oscillator for 8 MHz. We also set the TRIS bit for both LCD pins and the Decoder pins. As HT–12D provides output at D8-D11 ports, we need to configure the PORTD as input to receive the output. void system_init (void){ OSCCON = 0b01111110; // 8Mhz, , intosc //OSCTUNE = 0b01001111; // PLL enable, Max prescaler 8x4 = 32Mhz TRISB = 0x00; TRISD = 0xFF; // Last 4 bit as input bit. } We configured the OSCON register at 8 MHz, also made port B as output and port D as input. Below function is made using the exact reverse logic used in the previous transmitter section. Here we get the same hex value from the port D and by that hex value we identify which switch was pressed in the transmitter section. We can identify each key press and submit the correspondent character to the LCD. void rf_analysis (unsigned char recived_byte){ if(recived_byte==0x10) lcd_data('1'); if(recived_byte==0x20) lcd_data('2'); if(recived_byte==0x30) ……. ….. … … ……….. The lcd_data is called from the lcd.c file. In the main function we first initialize the system and LCD. We took a variable byte, and stored the hex value received from port D. Then by the function rf_analysis we can print the character on LCD. void main(void) { unsigned char byte = 0; system_init(); lcd_init(); while(1){ lcd_com(0x80); lcd_puts("CircuitDigest"); lcd_com (0xC0); byte = RF_RX; rf_analysis(byte); lcd_com (0xC0); } return; } Before running it, we have tuned the circuit. First we have pressed the ‘D’ button in the keypad. So, the 0xF0 is being continuously transmitted by the RF transmitter. We then tuned the receiver circuit until the LCD shows the character ‘D’. Sometimes the module is tuned properly from the manufacturer, sometimes it is not. If everything is properly connected and not getting the button pressed value in the LCD then there are possibilities that the RF Receiver is not tuned. We have used the Insulated screwdriver for decreasing wrong tuning possibilities due to our body inductance. This is how you can interface the RF Module to the PIC Microcontroller and communicate between two PIC microcontrollers wirelessly using RF Sensor. You can download the complete code for Transmitter and Receiver from here, also check the demonstration Videobelow. PIC code for Transmitter Side: /* */Keypad.h” #define RF_TX PORTD /* Hardware related definition */ #define _XTAL_FREQ 200000000 //Crystal Frequency, used in delay /* Other Specific definition */ void system_init(void); void encode_rf_sender (char data); void main(void){ system_init(); char Key = ‘n’; while(1){ Key = switch_press_scan(); encode_rf_sender(Key); } } /* * System Init */ void system_init(void){ TRISD = 0x00; RF_TX = 0x00; keyboard_initialization(); } void encode_rf_sender (char data){ if(data==’1′) RF_TX=0x10; if(data==’2′) RF_TX=0x20; if(data==’3′) RF_TX=0x30; if(data==’4′) RF_TX=0x40; if(data==’5′) RF_TX=0x50; if(data==’6′) RF_TX=0x60; if(data==’7′) RF_TX=0x70; if(data==’8′) RF_TX=0x80; if(data==’9′) RF_TX=0x90; if(data==’0′) RF_TX=0x00; if(data==’*’) RF_TX=0xa0; if(data==’#’) RF_TX=0xb0; if(data==’A’) RF_TX=0xc0; if(data==’B’) RF_TX=0xd0; if(data==’C’) RF_TX=0xe0; if(data==’D’) RF_TX=0xf0; } PIC code for Receiver Side: /* * File: main.c * Author: Sourav Gupta *CircuitDigest.com * Created on 17 May 2018, 12:18 */ // PIC18F4520 Configuration Bit Settings // ‘C’ source line config statements // CONFIG1H #pragma config OSC = INTIO7 // Oscillator Selection bits (Internal oscillator block, CLKO function on RA6, port function on RA7) #pragma config FCMEN = OFF // Fail-Safe Clock Monitor Enable bit (Fail-Safe Clock Monitor disabled) #pragma config IESO = OFF // Internal/External Oscillator Switchover bit (Oscillator Switchover mode disabled) // CONFIG2L #pragma config PWRT = OFF // = ON // Watchdog Timer Enable bit (WDT enabled) #pragma config WDTPS = 32768 // Watchdog Timer Postscale Select bits (1:32768) // CONFIG3H #pragma config CCP2MX = PORTC // CCP2 MUX bit (CCP2 input/output is multiplexed with RC1) #pragma config PBADEN = OFF // PORTB A/D Enable bit (PORTB<4:0> pins are configured as analog input channels on Reset) #pragma config LPT1OSC = OFF // Low-Power Timer1 Oscillator Enable bit (Timer1 configured for higher power operation) #pragma config MCLRE = ON // MCLR Pin Enable bit (MCLR pin enabled; RE) // #pragma config statements should precede project file includes. // Use project enums instead of #define for ON and OFF. #include <xc.h> #include “supporing_cfile\lcd.h” #define RF_RX PORTD void system_init (void); void rf_analysis (unsigned char recived_byte); void main(void) { unsigned char byte = 0; system_init(); lcd_init(); while(1){ lcd_com(0x80); lcd_puts(“CircuitDigest”); lcd_com (0xC0); byte = RF_RX; rf_analysis(byte); lcd_com (0xC0); } return; } void system_init (void){ OSCCON = 0b01111110; // 8Mhz, , intosc //OSCTUNE = 0b01001111; // PLL enable, Max prescaler 8×4 = 32Mhz TRISB = 0x00; TRISD = 0xFF; // Last 4 bit as input bit. } void rf_analysis (unsigned char recived_byte){ if(recived_byte==0x10) lcd_data(‘1’); if(recived_byte==0x20) lcd_data(‘2’); if(recived_byte==0x30) lcd_data(‘3’); if(recived_byte==0x40) lcd_data(‘4’); if(recived_byte==0x50) lcd_data(‘5’); if(recived_byte==0x60) lcd_data(‘6’); if(recived_byte==0x70) lcd_data(‘7’); if(recived_byte==0x80) lcd_data(‘8’); if(recived_byte==0x90) lcd_data(‘9’); if(recived_byte==0x00) lcd_data(‘0’); if(recived_byte==0xa0) lcd_data(‘*’); if(recived_byte==0xb0) lcd_data(‘#’); if(recived_byte==0xc0) lcd_data(‘A’); if(recived_byte==0xd0) lcd_data(‘B’); if(recived_byte==0xe0) lcd_data(‘C’); if(recived_byte==0xf0) lcd_data(‘D’); } Read more detail:PIC to PIC Communication using RF Module JLCPCB – Prototype 10 PCBs for $2 (For Any Color) China’s Largest PCB Prototype Enterprise, 600,000+ Customers & 10,000+ Online Orders Daily See Why JLCPCB Is So Popular:
https://pic-microcontroller.com/pic-to-pic-communication-using-rf-module/
CC-MAIN-2019-18
refinedweb
1,814
61.97
Details Description The current implementation expects to be given a ClassExpression for each class to Newify (for the Rubyesque style). Classes defined within the same script won't yet be processed into a ClassExpression but can be found in the source unit. The goal is to make the following work: import groovy.transform.Immutable abstract class Tree {} @Immutable class Branch extends Tree { Tree left, right } @Immutable class Leaf extends Tree { int val } @Newify([Branch, Leaf]) def t = Branch(Leaf(1), Branch(Branch(Leaf(2), Leaf(3)), Leaf(4))) assert t.toString() == 'Branch(Leaf(1), Branch(Branch(Leaf(2), Leaf(3)), Leaf(4)))' Activity - All - Work Log - History - Activity - Transitions
https://issues.apache.org/jira/browse/GROOVY-4876
CC-MAIN-2017-34
refinedweb
109
53.21
The other day I played with DYP-A01 ultrasonic distance sensor on my Nerves-powered Raspberry Pi. Here is my note. Long story short - I wrote an Elixir program that enables us to communicate with an ultrasonic sensor through serial ports on Raspberry Pi. - I was able to measure distance connecting an ultrasonic sensor to: - UART Rx pin on Raspberry Pi Zero - UART Rx pin on Raspberry Pi 4 - USB port on Raspberry Pi 4 Nerves firmware I use Nerves to build firmware for my Raspberry Pi. I am not going to talk about Nerves firmware here, it is a platform for building firmware in Elixir programming language. There are good resources out there. I personally started Nerves by reading Nerves official documentation. Then I asked questions on #nerves channel in Elixir Slack when I got stuck. People are nice and kind in the community. Here are some example firmware projects that you can do experiment with. In the following YouTube videos, Frank Hunleth, a co-author of the Nerves Project, talks about Nerves for beginners. - Elixir in Embedded Systems using Nerves Livebook (YouTube) - Elixir Wizards Live: Frank and the Wizards (YouTube) Hardware used Here is a list of what I used in my experiment. Wiring Wiring can be done through either USB or GPIO pins. USB I purchased DYP-A01 ultrasonic distance sensor and USB to TTL Serial Cable based on what I read in Adafruit's catalog.. At first I had no idea how I am supposed to connect them together. While the USB cable has four wires coming our of it, at the end of the sensor's cable is one connector. After a while, I learned that I could use jumper wires to connect them. In my case, thankfully the wires are nicely color-coded, which makes the wiring easy. For this particular sensor, one wire is unneeded because we transmit no signal from our Raspberry Pi to the sensor. We just receive the signals that is sent periodically from the sensor, which is called "UART auto output" in the sensor's data sheet. GPIO pins Alternatively we can connect the sensor through GPIO pins on Raspberry Pi. Elixir program I package the necessary code as Elixir library dypa01. It can be installed by adding dypa01 to your list of dependencies in your firmware's mix.exs file: def deps do [ {:dypa01, "~> 0.1"} ] end Find serial port name First shell into your Nerves-powered Raspberry Pi. You can list all serial ports that are currently attached by running: iex> Circuits.UART.enumerate %{ "ttyAMA0" => %{}, "ttyS0" => %{}, "ttyUSB0" => %{ description: "CP2102 USB to UART Bridge Controller", manufacturer: "Silicon Labs", product_id: 60000, serial_number: "0001", vendor_id: 4292 } } You can find the default Nerves serial port for UART in /boot/config.txt. iex> cmd "cat /boot/config.txt | grep tty" # Enable the UART (/dev/ttyS0) 0 In my case, I found out that my Rasperry Pi Zero uses ttyAMA0 and my Rasperry Pi 4 uses ttyS0 for UART. Measure distance Once the serial port name is found, it is easy to read distance data from the ultrasonic distance sensor. # Start a gen server for interacting with a DYP-A01 sensor on port ttyAMA0 iex> {:ok, pid} = DYPA01.start_link(port_name: "ttyAMA0") {:ok, #PID<0.1407.0>} # Measure the current distance iex> DYPA01.measure(pid) {:ok, %DYPA01.Measurement{distance_mm: 1680, timestamp_ms: 321793}} Discussion (0)
https://dev.to/mnishiguchi/use-dyp-a01-ultrasonic-distance-sensor-in-elixir-bp4
CC-MAIN-2021-43
refinedweb
560
64.51
According to Matthias Urlichs:> In dist.linux.kernel, article <908456202.25458@noris.de>,> Chip Salzenberg <chip@perlsupport.com> writes:> > Experience indicates otherwise. And consider how inconvenient it is> > now to have two ifdef blocks -- one for the conditional code itself> > and another at the top of the function for the variables that the code> > needs. Bleh.> > Whatever happened to the idea of> #ifdef FOOBAR> {> int foo_now, bar_later;> [...]> }> #endif> ?That's great unless you have this: #ifdef FOOBAR int foo_var = foo_init(); #endif ... #ifdef FOOBAR foo_use(foo_var); #endif ... #ifdef FOOBAR foo_use_again(foo_var); #endifThe C++ approach also makes ifdef'ing existing code easier -- you oftendon't have to introduce new scopes that the C approach would require.--
https://lkml.org/lkml/1998/10/21/135
CC-MAIN-2015-11
refinedweb
114
60.01
Custom ValidationKristaps Dzonsons The most common validations for uploaded data seem to be JPG and PNG—just enough to make sure that the image really is one of the two, and won't crash horribly in the main application when being parsed. Given that kvalid_string(3) only covers common types, how do we handle custom validation? Let's consider all three types, using the common libgd library to abstract image handling. Source Code We could parse directly using libpng and so on, but for the sake of our example, this is a bit easier. Our mission is to make sure that a JPG or PNG file is readable. In our example, we'll create validation functions, register them with a named input field, then access the parsed data in our web application. To do so, we override the validator as described in khttp_parse(3) (scan down to valid). It notes that the KPAIR_DOUBLE, KPAIR_STRING, and KPAIR_DOUBLE are provided for validators that set the parsed field of struct kpair. However, if our validator sets KPAIR__MAX, we don't use the parsed field at all. The return code of the function will tell whether to bucket the pair in fieldmap (success) or fieldnmap (failure). The web application will then need to know to use the val and valsz if the validated pair instead of the parsed fields. To wit, we'll need to create the following functions, with the given header file: #include <gd.h> int kvalid_png(struct kpair *kp) { gdImagePtr im; int rc; im = gdImageCreateFromPngPtr(kp->valsz, kp->val); if ((rc = (im != NULL))) gdImageDestroy(im); kp->type = KPAIR__MAX; return rc; } int kvalid_jpeg(struct kpair *kp) { gdImagePtr im; int rc; im = gdImageCreateFromJpegPtr(kp->valsz, kp->val); if ((rc = (im != NULL))) gdImageDestroy(im); kp->type = KPAIR__MAX; return rc; } Now we need to hook these into validations. Let's assume that our HTML inputs are called jpeg and png, for simplicity. enum key { KEY_JPEG, KEY_PNG, KEY__MAX }; static const struct kvalid keys[KEY__MAX] = { { kvalid_jpeg, "jpeg" }, /* KEY_JPEG */ { kvalid_png, "png" }, /* KEY_PNG */ }; That's it! Now, our application logic can simply check for the existence of KEY_JPEG or KEY_PNG in the fieldmap table of struct kreq, and be guaranteed that the results will be usable (at least by libgd). The valid interface can do all sorts of more complicated things—for example, we could have converted JPEGs, TIFFs, and other formats all into PNG files during validation by reading into a gdImagePtr, then writing the results of gdImagePngPtr into the val and valsz members. These would then be written into the validated data, and all of our images would then be PNG.
https://kristaps.bsd.lv/kcgi/tutorial3.html
CC-MAIN-2021-21
refinedweb
434
59.13
The upcoming 0.9.9 version of the Profiler will partially expose the use of custom views. These views are used internally by the Profiler to create complex graphical UIs using short XML strings. While at the moment extensions can use PySide to create complex UIs, it’s better to avoid it if possible, since it involves an extra dependency and also because PySide might not be ported to Qt 5 in the future. But let’s see a code snippet: from Pro.UI import * ctx = proContext() v = ctx.createView(ProView.Type_Custom, "Debug Directory") v.setup("<ui><vsplitter><table id='0'/><hex id='1'/></vsplitter></ui>") ctx.addView(v) These few lines will display the following view: Controls can be organized in layouts (hlayout/vlayout), splitters (hsplitter/vsplitter) and tabs (tab). These elements are called containers. Available controls are: label, pie, plot, table, tree, hex, text and media. More controls will be available in the future and not all of the current ones can be used as it is. Some controls make sense only in combination with a callback to be notified about changes of the state of the control. The notification system will be made available to Python as well in the future, but it made sense to release a partial solution in the meantime, because many views don’t require notifications and only need a way to display information at the end of an operation. Let’s see for example how to make use of the UI above to display information. This code replicates the Debug Directory UI in Portable Executables. from Pro.UI import * ctx = proContext() obj = ctx.currentScanProvider().getObject() dbgdir = obj.DebugDirectory().MakeSingle() dbgdata = ctx.currentScanProvider().getObjectStream() dbgdata.setRange(*obj.DebugDirectoryData(dbgdir)) v = ctx.createView(ProView.Type_Custom, "Debug Directory") v.setup("<ui><vsplitter><table id='0'/><hex id='1'/></vsplitter></ui>") v.getView(0).setStruct(dbgdir) v.getView(1).setData(dbgdata) ctx.addView(v) Elements in a view can have attributes. We’ve only seen the id attribute used to identify the embedded controls. There are two kind of attributes: shared attributes and individual ones. Only controls have these shared attributes: width, height, min-width, max-width, fixed-width and fixed-height. If a c is prefixed to the width/height word, then the size can be expressed in characters. e.g.: fixed-cwidth=’10’. Additionally, since version 1.3, there’s also wfixed and hfixed. Both are booleans which, if true, set the fixed size policy. Here’s a list of individual attributes for controls and containers. - ui - bgcolor (e.g. ffffff) - hlayout/vlayout (hl/vl) - margin - spacing - align (hcenter, vcenter, center, top, left, bottom, right) - hsplitter/vsplitter (hs/vs) - sizes/csizes (separated by -) - tab - index - titles (separated by 😉 - label - bgcolor (e.g. ffffff) - select (bool) - margin - text - readonly (bool) - linenr (bool, show line number) - hline (bool, highlight current line) - hword (bool, highlight current word) - wrap (bool) - combo (since version 1.3) - edit (bool) - text (string, only if editable) - btn (since version 1.3) - text (string, only if editable) - check (since version 1.3) - checked (bool) - text (string, only if editable) - tline (text-line, since version 2.5) While this post doesn’t present many usage examples, we’ll try to show additional ones in future posts.
https://cerbero-blog.com/?p=1242
CC-MAIN-2022-40
refinedweb
542
50.73
A small pythonic alternative to discord.py Project description 🤖 An ultra light library to develop discord bots with Python Get lightdiscord To install the library, you can just run the following command: # Linux/macOS python3 -m pip install -U lightdiscord # Windows py -3 -m pip install -U lightdiscord Key features :warning: If the size of the library and the proximity with the discord api is not absolutely necessary for you, may be a better option. - Easy to use and quick to learn - Currently the smallest alternative to discord.py - Support custom listeners - Support multiple bot instances - Full support for Bot and User accounts - Support proxies - Customizable user agent - Low level: directly interact with the discord api and manage cache as you want How to use? First, you need to import lightdiscord. You can then create a bot object, specify a token and optional features: - user: A boolean (True for user accounts, False by default) - listeners: A dictionnary containing your events listeners and the API endpoint - proxy: A proxy (None for no proxies, None by default) - user_agent: The user agent sent to discord bot = lightdiscord.Bot( "YOUR_TOKEN", listeners={"READY": on_ready} ) To start the bot, you need to use an async function. Here is an example with asyncio import asyncio async def main(loop): await bot.start() if __name__ == "__main__": loop = asyncio.get_event_loop() loop.run_until_complete(main(loop)) loop.run_forever() You can check Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages. Source Distribution lightdiscord-1.2.1.tar.gz (3.7 kB view hashes)
https://pypi.org/project/lightdiscord/1.2.1/
CC-MAIN-2022-05
refinedweb
267
52.09
IRC log of ws-desc on 2006-07-06 Timestamps are in UTC. 13:55:46 [RRSAgent] RRSAgent has joined #ws-desc 13:55:46 [RRSAgent] logging to 13:55:58 [Jonathan] RRSAgent, set log world 13:56:10 [Arthur] Arthur has joined #ws-desc 13:56:13 [chathura] chathura has joined #ws-desc 13:56:29 [lmandel_] lmandel_ has joined #ws-desc 13:56:42 [Arthur] ACTION: Lawrence - violate operation style assertions 13:57:09 [ibm] q+ 13:57:33 [Arthur] ACTION: Youenn - define documents for stub generation 13:57:48 [Arthur] ACTION: Jonathan - create validation-report stylesheet 13:58:03 [Arthur] ACTION: John - resolve Woden component model interchange 13:58:03 [Arthur] format errors 13:58:29 [Arthur] ACTION: Arthur - add xpaths for soap and http to document coverage report 13:58:44 [Arthur] ACTION: Philippe - violate http binding assertions 13:59:01 [Arthur] ACTION: Chathura - will do interop tests with Youenn and Lawrence 14:02:03 [chathura] oneway EPR 14:05:21 [lmandel_];%20charset=utf-8#styles 14:06:05 [lmandel_];%20charset=utf-8#RPCStyle-5016-summary 14:06:14 [lmandel_];%20charset=utf-8#RPCStyle-5018-summary 14:06:28 [lmandel_];%20charset=utf-8#RPCStyle-5014-summary 14:12:27 [youenn] youenn has joined #ws-desc 14:13:03 [youenn] ping epr is: 14:13:36 [youenn] echo axis epr is: 14:41:24 [Bozhong] Bozhong has joined #ws-desc 14:48:10 [chathura] 14:56:25 [TonyR] TonyR has joined #ws-desc 14:58:01 [TonyR] zakim, what is the code? 14:58:04 [Zakim] sorry, TonyR, I don't know what conference this is 14:58:15 [pauld] zakim, this will be ws_desc 14:58:15 [Zakim] ok, pauld, I see WS_DescWG()11:00AM already started 14:58:25 [TonyR] zakim, what is the code? 14:58:25 [Zakim] the conference code is 97394 (tel:+1.617.761.6200), TonyR 14:58:51 [pauld] zakim, read agenda from 14:58:51 [Zakim] working on it, pauld 14:58:54 [Zakim] agenda+ Dial in information (members only) [.1]: 14:58:56 [Zakim] agendum 1 added 14:58:57 [Zakim] agenda+ Issue CR052: Serializing only part of the body with HTTP binding 14:58:59 [Zakim] agendum 2 added 14:59:04 [Allen] Allen has joined #ws-desc 14:59:06 [Zakim] agenda+ Issue CR053: Allow absolute URI in {location} [.1] 14:59:13 [Zakim] agendum 3 added 14:59:17 [Zakim] agenda+ Issue CR054: URIPath feedback [.1] 14:59:19 [Zakim] agendum 4 added 14:59:22 [Zakim] agenda+ Issue CR055: Clarification needed on HTTP Transfer Coding [.1] 14:59:25 [Zakim] agendum 5 added 14:59:27 [Zakim] agenda+ Issue CR056: Questions on {http method} and {safety} extension 14:59:29 [Zakim] agendum 6 added 14:59:31 [Zakim] agenda+ Issue CR058: Suggestion to change {safety} to {safe} [.1] 14:59:34 [Zakim] agendum 7 added 14:59:35 [Zakim] agenda+ Issue CR059: Editorial: Where does {http location ignore uncited} 14:59:37 [Zakim] agendum 8 added 14:59:39 [Zakim] agenda+ Issue CR060: Why don't whttp:authenticationType and {http 14:59:41 [Zakim] agendum 9 added 14:59:43 [Zakim] agenda+ Issue CR061: service and binding name shown as QNames in 14:59:46 [Zakim] agendum 10 added 14:59:47 [Zakim] agenda+ Issue CR062: Re:service and binding name shown as QNames in 14:59:49 [Zakim] agendum 11 added 14:59:51 [Zakim] agenda+ Welcome new member: Eran Chinthaka 14:59:55 [Zakim] agendum 12 added 14:59:59 [Zakim] agenda+ Issue CR063: Prefix declarations in inlined schema [.1] 15:00:03 [Zakim] agendum 13 added 15:00:07 [Zakim] agenda+ Features "at risk" 15:00:15 [Zakim] agendum 14 added 15:00:23 [Zakim] agenda+ Approval of minutes: (Youenn was also present.) 15:00:25 [Zakim] agendum 15 added 15:00:27 [Zakim] agenda+ Review of Action items [.1]. 15:00:29 [Zakim] agendum 16 added 15:00:31 [Zakim] agenda+ Administrivia 15:00:33 [Zakim] agendum 17 added 15:00:35 [Zakim] agenda+ Issue CR022: Component Values Must Be Context Independent [.1] 15:00:37 [Zakim] agendum 18 added 15:00:39 [Zakim] agenda+ Issue CR037: Comments on Part 2, Chapter 6 [.1] 15:00:41 [Zakim] agendum 19 added 15:00:43 [Zakim] agenda+ Issue CR044: Parts 1 and 2 Treat Defaults Inconsistently with each 15:00:45 [Zakim] agendum 20 added 15:00:47 [Zakim] agenda+ Issue CR045: Inline schemas with no target namespace [.1] 15:00:51 [Zakim] agendum 21 added 15:00:53 [Zakim] done reading agenda, pauld 15:00:57 [pauld] zakim, agenda-1 15:01:01 [Zakim] agendum 1, Dial in information (members only) [.1]:, dropped 15:01:09 [TonyR] zakim, who is on the phone? 15:01:09 [Zakim] On the phone I see ??P2 15:01:19 [TonyR] zakim, ??p2 is me 15:01:23 [Zakim] +TonyR; got it 15:01:30 [Zakim] +??P39 15:01:33 [JacekK] JacekK has joined #ws-desc 15:01:34 [Allen] zakim, ??p39 is Allen 15:01:34 [Zakim] +Allen; got it 15:01:48 [Zakim] +??P37 15:01:50 [Zakim] +JacekK 15:02:19 [ibm] 0 15:02:26 [Zakim] + +1.905.413.aaaa 15:02:45 [Jonathan] Zakim, aaaa is interopevent 15:02:47 [Zakim] +Glen 15:02:48 [GlenD] GlenD has joined #ws-desc 15:02:49 [Zakim] +interopevent; got it 15:03:05 [JacekK] JacekK has changed the topic to: agenda: 15:03:06 [Roberto] Roberto has joined #ws-desc 15:03:17 [plh] plh has joined #ws-desc 15:03:41 [Zakim] + +1.585.377.aabb 15:03:58 [Jonathan] Zakim, interopevent holds Jonathan, Arthur, plh, youenn, Jkaputin lmandel_, chathura 15:03:58 [Zakim] +Jonathan, Arthur, plh, youenn, Jkaputin, lmandel_, chathura; got it 15:05:11 [Zakim] +Roberto 15:05:28 [Bozhong] +bozhong 15:06:47 [Jonathan] zakim, aabb is zehler 15:06:47 [Zakim] +zehler; got it 15:07:13 [chathura] hey Glen seeing you after some time 15:08:32 [Arthur] scribe: Arthur 15:08:58 [Arthur] meeting: WSD WG Telecon 15:09:07 [chathura] yah!:) 15:09:16 [Zakim] +Paul_Downey 15:09:25 [PeteZ] PeteZ has joined #ws-desc 15:09:45 [Arthur] agenda: 15:10:16 [pauld] zakim, next agendum 15:10:16 [Zakim] agendum 2. "Issue CR052: Serializing only part of the body with HTTP binding" taken up 15:10:23 [pauld] zakim, agenda? 15:10:23 [Zakim] I see 20 items remaining on the agenda: 15:10:24 [Zakim] 2. Issue CR052: Serializing only part of the body with HTTP binding 15:10:26 [Zakim] 3. Issue CR053: Allow absolute URI in {location} [from .1] 15:10:28 [Zakim] 4. Issue CR054: URIPath feedback [from .1] 15:10:29 [Zakim] 5. Issue CR055: Clarification needed on HTTP Transfer Coding [from .1] 15:10:32 [Zakim] 6. Issue CR056: Questions on {http method} and {safety} extension 15:10:35 [Zakim] 7. Issue CR058: Suggestion to change {safety} to {safe} [from .1] 15:10:37 [Zakim] 8. Issue CR059: Editorial: Where does {http location ignore uncited} 15:10:39 [Zakim] 9. Issue CR060: Why don't whttp:authenticationType and {http 15:10:42 [Zakim] 10. Issue CR061: service and binding name shown as QNames in 15:10:44 [Zakim] 11. Issue CR062: Re:service and binding name shown as QNames in 15:10:48 [Zakim] 12. Welcome new member: Eran Chinthaka 15:10:51 [Zakim] 13. Issue CR063: Prefix declarations in inlined schema [from .1] 15:10:54 [Zakim] 14. Features "at risk" 15:10:55 [Zakim] 15. Approval of minutes: (Youenn was also present.) 15:10:58 [Zakim] 16. Review of Action items [from .1] 15:10:59 [Zakim] 17. Administrivia 15:11:02 [Zakim] 18. Issue CR022: Component Values Must Be Context Independent [from .1] 15:11:03 [Zakim] 19. Issue CR037: Comments on Part 2, Chapter 6 [from .1] 15:11:04 [Zakim] 20. Issue CR044: Parts 1 and 2 Treat Defaults Inconsistently with each 15:11:06 [Zakim] 21. Issue CR045: Inline schemas with no target namespace [from .1] 15:11:08 [Zakim] +Anish 15:11:09 [pauld] zakim, whuptish 15:11:09 [Zakim] I don't understand 'whuptish', pauld 15:11:13 [Arthur] chair: Jonathan 15:11:46 [Arthur] Date: July 6, 2006 15:12:17 [PeteZ] PeteZ has joined #ws-desc 15:12:31 [Arthur] topic: action item review 15:13:50 [Arthur] topic: administrivia 15:14:14 [Arthur] jonathan: there will be a telecon next two weeks to clear up open issues 15:14:40 [Arthur] jonathan: assume august holiday from telecons - but still hold implementor telecons 15:15:47 [Arthur] jonathan: maybe no telecon July 27 due to potential absence of Jonathan and Tony 15:16:22 [plh] Arthur: we have indeed shown interop. Canon and Apache implementation are talking to each other. We have recorded message logs. 15:16:38 [plh] ... we now have reports that show all the covered assertions. 15:17:05 [plh];%20charset=utf-8 15:17:35 [plh] Arthur: lots of red in the assertion report. 15:17:50 [plh] ... please contribute that violate the assertions. 15:18:40 [plh] ... Jonathan will check the results produced by the implementations with the expected ones. 15:19:03 [plh] ... we only have two MEPs implemented. Anybody contemplating doing the others? 15:19:30 [plh] ... for the remainder of the events, we'll keep focusing on test coverage 15:19:51 [Arthur] arthur: red = shame 15:21:35 [plh] Tony: why 2 or 3 is yellow? 15:21:56 [Zakim] -??P37 15:22:07 [plh] Jonathan: we could put green if >= 1. no yellow 15:23:40 [Arthur] glen: any cases for extensions and features/property? 15:24:36 [Arthur] arthur: one for Feature, 0 for Property, all the Part 2 extensions 15:24:51 [Arthur] glen: will contribute some later 15:25:07 [Arthur] ACTION: Glen to contribute some extension test cases 15:25:31 [Arthur] topic: CR050 15:26:32 [GlenD] +1, sounds right 15:27:01 [GlenD] oh right, this discussion 15:27:18 [GlenD] +1 add to Interchange Format, -1 add to Component Model, I think 15:27:34 [Arthur] jonathan: recall that we resolved CR050 with no action since wsdlx is a "characteristic" of the implementation so the wsdlx:safety property gets added to all Operation components even if there is no markup in the documents 15:27:35 [PeteZ] PeteZ has joined #ws-desc 15:27:36 [Roberto] I agree with Glen 15:27:52 [JacekK] me too agrees with Glen 15:28:10 [Arthur] jonathan: there was a proposal to add {extensions} to Description component 15:28:26 [Jonathan] 15:29:31 [Arthur] Jonathan: make this CR069 15:29:58 [Bozhong] Bozhong has joined #ws-desc 15:32:11 [Arthur] glen: agree to putting {extensions} in the component model interchange format put not in the component model 15:32:16 [JacekK] q+ 15:32:24 [Roberto] q+ 15:32:25 [Jonathan] q- ibm 15:32:33 [Jonathan] ack jacek 15:33:02 [Jonathan] ack roberto 15:33:12 [Arthur] jacek: nice to have {extensions} in component model since it is api guidance 15:33:32 [Arthur] roberto: agree with glen re interchange format 15:33:40 [Bozhong] Bozhong has joined #ws-desc 15:33:43 [GlenD] +q 15:33:45 [GlenD] q+ 15:33:50 [Arthur] tony: {extension} useful for caching documents too 15:34:10 [Zakim] +??P0 15:34:12 [Arthur] roberto: {extensions} not an intrinsic part of the component model 15:34:16 [Arthur] q+ 15:34:35 [Jonathan] ack glen 15:35:19 [Jonathan] q+ 15:35:22 [Arthur] glen: +1 to roberto, BUT make be useful if a spec used that property, i.e. it needed to refer to that extension 15:35:25 [Zakim] -Anish 15:36:07 [JacekK] q+ 15:36:17 [Jonathan] ack arthur 15:36:38 [JacekK] q+ to suggest trying to ask for objections one way or the other and just do what doesn't get objections 15:36:40 [Jonathan] ack jon 15:37:07 [Arthur] arthur: need to know what extensions are in effect in order to decide validity of the component model 15:37:41 [Arthur] jonathan: a spec and just refer to another spec, doesn't need an explicit property 15:38:17 [Arthur] jonathan: the infoset spec started as a library of terms, not a data model 15:38:41 [Arthur] q+ 15:39:05 [Jonathan] ack jacek 15:39:05 [Zakim] JacekK, you wanted to suggest trying to ask for objections one way or the other and just do what doesn't get objections 15:39:25 [Arthur] jacek: this is a minor issue so let's vote 15:39:32 [Jonathan] ack arthur 15:41:30 [Arthur] tony: we need to be able to state component model validity so need to know extensions 15:41:48 [Arthur] jonathan: any objections to the status quo? 15:41:58 [Arthur] jonathan: no objections 15:42:09 [Arthur] jonathan: any objections to adding it? 15:42:16 [Arthur] glen: yes 15:42:20 [Arthur] roberto: yes 15:42:55 [Arthur] jonathan: RESOLUTION: closed with no action 15:47:09 [Arthur] topic: cr056 15:48:58 [Zakim] -Glen 15:49:13 [Arthur] arthur: {safety} may not be present if the wsdlx is not "engaged" 15:49:34 [TonyR] zakim, who is making noise? 15:49:37 [GlenD] Incidentally, I think this issue is fairly clear cut, and the problem is that we don't have a notion of "processor" and "engaged extensions". 15:49:44 [Zakim] TonyR, listening for 10 seconds I heard sound from the following: TonyR (4%) 15:49:50 [JacekK] q+ to suggest moving to CR044 discussion, will have to leave in 10 min 15:50:02 [JacekK] q+ to suggest moving to CR044 discussion after this item, will have to leave in 10 min 15:50:14 [GlenD] Any extension that you choose to have "engaged" in your environment may well add any properties it wants to the component model, regardless of what is or is not present in the actual markup. IMHO, of course. 15:50:53 [GlenD] It's up to the extension writers to do a good job at specifying how and when properties should be added/mutated. 15:52:08 [Jonathan] ack jacek 15:52:08 [Zakim] JacekK, you wanted to suggest moving to CR044 discussion, will have to leave in 10 min and to suggest moving to CR044 discussion after this item, will have to leave in 10 min 15:52:11 [Arthur] RESOLUION: close with no action 15:53:01 [Arthur] topic: CR044 15:53:11 [JacekK] my version of CR44 writeup: 15:53:42 [Arthur] arthur: by "engaged" I mean supported in the component model 15:59:18 [Arthur] Jonathan: can be close this as editorial or do we need to see final text? 15:59:27 [Arthur] tony: i'd like to see the text 15:59:50 [Zakim] -JacekK 15:59:57 [Arthur] ACTION: Roberto to propose text for CR044 and related interface-less binding text 16:00:07 [Arthur] topic: CR067 16:00:40 [Jonathan] 16:02:09 [Arthur] q+ 16:02:47 [Jonathan] ack arthur 16:03:18 [Zakim] -??P0 16:06:09 [Arthur] Arthur: we should split the definition of the properties and their occurance, like XSD particles 16:06:59 [Arthur] Jonathan: I propose we just clarify this in the SOAP binding, i.e. say that the HTTP properties occur when the transport is HTTP 16:14:15 [Arthur] RESOLUTION: Accept Jonathan's proposal 16:16:17 [Arthur] ACTION: Arthur to update cm interchange schema to make http cookies optional in the soap extension 16:19:06 [sanjiva] sanjiva has joined #ws-desc 16:22:44 [Arthur] topic: CR037 16:23:22 [Arthur] Arthur: the spec only defines a few serializations but doesn't prevent others so I suggest we close this with no action 16:23:33 [Arthur] RESOLUTION: close with no action 16:24:34 [Arthur] topic: CR055 16:25:08 [Jonathan] jacek's proposal: 16:33:07 [Zakim] -Allen 16:33:08 [Zakim] -zehler 16:33:09 [Zakim] -Roberto 16:33:11 [Zakim] -Paul_Downey 16:33:13 [Zakim] -TonyR 16:33:17 [Jonathan] zakim, kick ibm 16:33:17 [Zakim] I don't understand 'kick ibm', Jonathan 16:33:22 [Roberto] Roberto has left #ws-desc 16:33:22 [TonyR] TonyR has left #ws-desc 16:33:22 [Zakim] -interopevent 16:33:24 [Zakim] WS_DescWG()11:00AM has ended 16:33:25 [Zakim] Attendees were TonyR, Allen, JacekK, +1.905.413.aaaa, Glen, +1.585.377.aabb, Jonathan, Arthur, plh, youenn, Jkaputin, lmandel_, chathura, Roberto, zehler, Paul_Downey, Anish 16:33:37 [Arthur] ACTION: John to write proposal for CR055 based on discussion and Jacek's email 16:33:40 [Jonathan] rrsagent, draft minutes 16:33:40 [RRSAgent] I have made the request to generate Jonathan 18:35:51 [Zakim] Zakim has left #ws-desc 19:03:23 [plh] plh has joined #ws-desc 19:40:22 [lmandel] lmandel has joined #ws-desc 19:41:18 [lmandel_] lmandel_ has joined #ws-desc 20:01:16 [Arthur] ?
http://www.w3.org/2006/07/06-ws-desc-irc
CC-MAIN-2021-43
refinedweb
2,868
61.5
Member Since 4 Years Ago 325. cbj4074 left a reply on Rental Management DB Schema My inclination is that you should in fact sum the negative values when calculating the cost of maintenance. Anything that is an expense should be stated as a negative value, in my opinion, even when displayed on a report. If for whatever reason you want to display it as a positive value, then just multiply it by -1 for display purposes, on a per-report basis, as needed. In any case, positive-vs-negative values is not a particularly important consideration, given the use-case you describe, and you could certainly store the type ( Income or Expense), but taking that approach will complicate your queries unnecessarily, in my opinion. My advice is to start building this. You can stare at the schema all day, but until you scaffold-out some models, pencil in the relationships, and play with it in Tinker, it'll be difficult to see any shortcomings in your logic. cbj4074 left a reply on Rental Management DB Schema The polymorphic relationship seems okay upon first glance, although, I'm not sure you even need the type to track Income vs. Expense. Why not use positive values to represent income and negative values to represent expense? That would make the computations a lot simpler, too. cbj4074 left a reply on Testing Delete Method In Phpunit, "Expected Status Code 200 But Received 419." @sutherland is correct in that the VerifyCsrfToken middleware is disabled automatically when running unit tests (and has been since Laravel 5.2). So, if you receive a TokenMismatchException when conducting HTTP tests, the likely explanation is that Laravel's runningUnitTests() method is returning false, which will happen if your application environment is not set to testing. In my case, I had set the env to testing using a phpunit.xml file, and then run a test directly, e.g., by right-clicking the test class in an IDE, and the wrong environment was used. cbj4074 left a reply on How Best To Copy A Row Of Data From One Table To Another? Eloquent and events are two different things. I know that. My point was that using Eloquent to "copy" the row from one table to another provides the ability to take advantage of Eloquent-specific events. Suppose I need to perform some other arbitrary task any time I copy a row in this manner. That functionality would come standard if I used Eloquent, whereas I would have to build it manually if I use a raw query (or Query Builder). Then eloquent wouldn't work since eloquent converts to normal sql at runtime. It's a shortcut language. Again, I know this. My point was that Laravel's SQL grammar implementation will evolve over time to accommodate changes in database-specific SQL syntax. It provides a level of abstraction that prevents me from having to worry about quoting style or other database-specific nuances that may evolve over time. Speaking of quoting syntax, what if I later need to switch from MySQL to PostgreSQL (or any other DB)? That's another good reason not to use raw SQL. I'm guessing you've not used PostgreSQL in your projects, or are never likely to switch to it, because quoting style is one of the most significant departures from MySQL, and presents a real problem for raw queries. In any case, I'll mark my own reply as Accepted, because, clearly, there's no "better" method for this than those already discussed. Why Taylor saw it fit to include a replicate() method but not the cross-table equivalent is anyone's guess. cbj4074 left a reply on How Best To Copy A Row Of Data From One Table To Another? @Cronix I "figured it out" before I started the thread. I wanted to know if there is a more "appropriate" method with respect to my specific needs, similar to replicate(), but that works across tables. Apparently, there wasn't then, and there isn't now. Sure, a raw SQL query might be more "efficient", but it lacks the benefits of a DBAL (which we get with Query Builder), and the benefits of Events (which we get with Eloquent). I know I'm boring you here, but to illustrate to other less knowledgeable users the reasons for which Query Builder or Eloquent might be preferable in certain scenarios as compared to raw SQL: userstable (and my deleted_userstable)? I'd have to hunt-down and edit any raw queries, whereas had I used a model-driven approach, I wouldn't have to do anything. I'm sure there are other drawbacks to using raw SQL for this. Regarding the Eloquent approach vs raw SQL, the lack of support for Events is self-explanatory. Tangentially, what's the rub here? That I commented on a two-year-old thread? Or that I haven't chosen an Answer? If it's resurrecting an old thread, the passage of time doesn't make something any less relevant, especially given that there still isn't a move() or clone() method (or whatever name makes sense) for copying or cloning a model from one table to another. Closing or otherwise discouraging posts in "old threads" is one of the most annoying practices on the Internet. It forces users to duplicate topics that have been discussed at length, often with valuable (and still relevant) contributions that are sidelined when the existing thread is closed. And that's to say nothing of automatic notifications that contributors to the "old" thread may have received had the new/duplicate thread not been created. If "resurrecting" bothers people, then Jeff should add a "Don't bump thread with my reply" checkbox. @jlrdw Thanks for the useful contribution... that's exactly what I intend to do. :) I'll keep you posted. cbj4074 left a reply on How Best To Copy A Row Of Data From One Table To Another? @mehedi101 But won't that work only when duplicating rows in the same table? I want to retrieve a model and save it to a different table. So far, the only way I've found to do this is (as @chileno suggested) by retrieving the model instance and passing it into Query Builder via DB::table('users_deleted')->insert($user->toArray()) or into Eloquent using Mass Assignment, via DeletedUser::create($user->toArray()). To be clear, neither of these is a "bad approach". I can live with either one. :) cbj4074 left a reply on Accessing HTTPS Homestead Sites With Laravel Dusk I've reduced the issue noted in my previous post to the fact that the HOME=/home/vagrant environment variable appears not to be set when chromedriver is started using the startChromeDriver() method. I had hoped that adding the following to the relevant phpunit.xml would fix the issue: <php> <env name="HOME" value="/home/vagrant"/> </php> But no such luck. And I don't like having to hard-code that path in several places. I ended-up leaving that line commented-out, installing chromedriver globally, and then adding it to the Supervisor config, in /etc/supervisor/conf.d/chromedriver: [program:chromedriver] environment=HOME=/home/vagrant process_name=%(program_name)s_%(process_num)02d command=/usr/local/bin/chromedriver autostart=true startretries=99 autorestart=true user=vagrant numprocs=1 redirect_stderr=true Note the environment=HOME=/home/vagrant. This is the special sauce that enables chromedriver to find the user-trusted certificates in $HOME/.pki/nssdb. With all of this in place, my Dusk tests pass with HTTPS enabled, even on a freshly-provisioned Homestead instance. :) A couple points of note: I have no idea why it is insufficient simply to add the self-signed Homestead CA cert ( /etc/nginx/ssl/ca.homestead.homestead.crt) to the operating system's trusted CA store. Chrome seems not to look there... so, where from is it getting its "built-in" list of trusted CAs? Similarly, adding the aforementioned file to the user-specific certificate store doesn't do the job, either; it is necessary to add each individual certificate to be trusted, even though they are all issued by the Homestead CA. My understanding, based on , is that a CA can be trusted with the C type (in contrast to P). For example: certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n /etc/nginx/ssl/ca.homestead.homestead.crt -i /etc/nginx/ssl/ca.homestead.homestead.crt But that doesn't seem to do the job. If either of these measures could be made to work, it would be simpler to trust every Homestead-CA-issued certificate on the box. In the meantime, I've added the following to my Vagrant provisioning process: #!/bin/sh # Install the latest Chrome version to prevent "Chrome version must be >= X" # when running Dusk tests. sudo curl -sS -o - | sudo apt-key add sudo sh -c 'echo "deb stable main" >> /etc/apt/sources.list.d/google-chrome.list' sudo apt-get -y update sudo apt-get -y install google-chrome-stable # Install the latest chromedriver globally; using the Composer-installed version # from any given project is less reliable if using a mounted filesystem. CHROME_DRIVER_VERSION=`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE` wget -N -nv -P ~/ unzip -qq ~ # Add all Homestead-generated certificates to the "vagrant" user's trusted # certificate store for Chrome. For more info: # sudo apt-get -y install libnss3-tools # We don't want a password on the certificate database; see: # if [ -d "$HOME/.pki/nssdb" ]; then rm -rf $HOME/.pki/nssdb fi mkdir -p $HOME/.pki/nssdb certutil -N -d $HOME/.pki/nssdb --empty-password # Iterate through all Homestead-generated certificates and add them to the trusted # store. Many of the certificates may be identical (because Homestead generates # wildcards), in which case "certutil -d sql:$HOME/.pki/nssdb -L" command will # show only one instance of each identical cert. for file in /etc/nginx/ssl/*.crt; do [ -e "$file" ] || continue certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n "$file" -i "$file" done cbj4074 left a reply on Accessing HTTPS Homestead Sites With Laravel Dusk Hmm, I'm not completely satisfied with this solution just yet. For whatever reason, the certificate overrides are effective only when chromedriver is started from the command line, manually, e.g. /home/vagrant/code/laravel/vendor/laravel/dusk/bin/chromedriver-linux, and not when it's started from within Laravel via this method: /** * Prepare for Dusk test execution. * * @beforeClass * @return void */ public static function prepare() { static::startChromeDriver(); } As yet, I have no idea why this is, given that PHP is running as the vagrant user when the Dusk test is invoked, and should therefore have access to the certificate database. To workaround this bizarre limitation, I've commented-out the above line and am starting chromedriver in the background before running my Dusk tests. I'd love to nail-down this odd anomaly! cbj4074 left a reply on Accessing HTTPS Homestead Sites With Laravel Dusk cbj4074 left a reply on Accessing HTTPS Homestead Sites With Laravel Dusk I finally figured this out. Apparently, on Ubuntu, Chrome looks to an obscure and completely non-obvious location for TLS certificate authorization overrides. Basically, the trick here was to determine how to replicate trusting a self-signed certificate in the GUI version of Chrome, but via the CLI. (See this helpful article for more information: ) For self-signed certificates, such as those that Homestead generates during provisioning, it is necessary to add them to this arcane sqlite database in the effective user's (in this case, the vagrant user's) Home directory before Chrome (and in turn, the standalone chromedriver) will trust the certificates. Chromium's (and therefore Chrome's) certificate handling is described in moderate detail at . Firstly, certutil must be installed: $ sudo apt-get install libnss3-tools The following commands must be executed as the vagrant user, assuming a Homestead environment (otherwise, as whomever the chromedriver will be running). On a new system, it's possible that the certificate database does not yet exist, in which case it is necessary to create it before performing the subsequent steps: $ mkdir -p $HOME/.pki/nssdb $ certutil -N -d $HOME/.pki/nssdb --empty-password (the --empty-password switch is necessary to automate this process) To add a given certificate: $ certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n /etc/nginx/ssl/site.example.com.crt -i /etc/nginx/ssl/site.example.com.crt To confirm the addition (list all "accepted" overrides): $ certutil -d sql:$HOME/.pki/nssdb -L And if for any reason one desires to remove an override: $ certutil -D -d sql:$HOME/.pki/nssdb -n /etc/nginx/ssl/site.example.com.crt Now, my tests pass when the APP_URL is! In your case, given that you have a valid wildcard that is not self-signed, it must be that your certificate does not contain the SubjectAltName field, which is now required as of Chrome 58, or you're missing an intermediate CA or two in the trust chain (can occur with less common CAs). If your certificate does include SubjectAltName, then you should use curl from the CLI in Homestead (or whatever environment you're running Dusk from within) to determine which intermediate CA certificate is missing from the trust chain. cbj4074 left a reply on Accessing HTTPS Homestead Sites With Laravel Dusk Thanks for the reply! I really appreciate it. After further testing, I concur. It seems that the standalone "chromedriver" that ships with Dusk either lacks support for TLS entirely or lacks awareness of Ubuntu's CA certificate store (it may be using a separate Java store). It's also possible that the binary includes only common CA certificates, which would, of course, preclude the use of a self-signed certificate. My hunch is that it will be necessary to install Selenium and chromedriver manually, or perhaps even build one or both from source, to add support for custom CAs. I have no choice but to figure it out, so I'll post back whenever that may be. cbj4074 left a reply on Accessing HTTPS Homestead Sites With Laravel Dusk @ORRD Were you ever able to resolve this? I'm having the exact same problem and it's proving rather frustrating. I'm surprised that more people don't have this problem, given that Homestead uses TLS for all sites by default currently. I, too, am using a wildcard certificate, and while it is self-signed, it is trusted at the operating system level in Homestead (I add the CA to the trusted store upon provisioning). cbj4074 left a reply on Laracasts Forum Does Not Handle CSRF Token Expiry Gracefully; Login Fails Silently cbj4074 left a reply on Laracasts Forum Does Not Handle CSRF Token Expiry Gracefully; Login Fails Silently @somnathsah That won't solve the problem, because in the use-case I describe, I had never logged-in to begin with. I had come to the site, left it open overnight, and then tried to login the next day without refreshing the page first. It should not be the user's responsibility to "always remember to refresh the page before you try to login!" Furthermore, the lack of an error message when login fails due to an expired CSRF token is unacceptable. This seems obvious, but some kind of error message should accompany any form submission failure in a web application. This is easy to fix. And there is no need even to show an error message. As I said in the OP, when the user submits the login form, check the response in JS for a token expiry error. If such an error is present, make another AJAX request to the server to request a new CSRF token, and then in the callback, update the login form with that new token, and re-submit the form automatically. This fixes the issue, in all scenarios, and it's completely transparent to the user. cbj4074 left a reply on Laracasts Forum Does Not Handle CSRF Token Expiry Gracefully; Login Fails Silently @jlrdw This is a usability problem; the fact that I hit "Login" and "nothing happens" is inexcusable, in my opinion. Whatever actions I may or may not have taken on the site prior to attempting to login are irrelevant. Whether or not login functions correctly should not be contingent upon my browsing habits and for how long my browser has been open during any given session. If you had built this website for a client, and the client told you, "Hey, people are trying to login and when they click the Login button, nothing happens," would you tell your client, "Oh, yeah, don't worry about that, just tell people to close their browsers every time they finish using their computers and this won't happen." I hope not. @somnathsah That advice is sound, in general, but using redirect()->back()->withInput() in this instance would be inappropriate, and provide a poor user experience, because the login form is presented in a modal dialog. A proper fix for this issue must utilize AJAX. cbj4074 started a new conversation Laracasts Forum Does Not Handle CSRF Token Expiry Gracefully; Login Fails Silently Suppose that I'm visiting the laracasts.com website and I leave it open after reading an article. The next day, I come back to the computer and decide I'd like to login and participate in the thread I'm viewing. When I click Login, and enter my credentials, the login form submission fails silently, with no visual feedback to the user whatsoever. I had to open the browser's DevTools console to see that the underlying cause is an expired CSRF token. I find this puzzling, especially on an otherwise well-built website, presumably built and maintained by Laravel experts. :) For what it's worth, I would avoid using something like because it will keep a tremendous number of sessions alive for no good reason. While it may be tempting to say "Your token has expired, please refresh the page", it would be a lot more elegant to modify the JS that processes the authentication response such that if the token is expired, a new one is requested and used in a subsequent request that is sent automatically, and completely transparently where the user is concerned. This has been a problem for quite some time (maybe since launch); it would be nice to see it fixed. cbj4074 left a reply on Group Forge Changed To Www-data, Can't Fix Okay, so, php-fpm is running as the www-datauser, and nginx is running as the forge user. You didn't specify whether either of these users belongs to a group, and if so, which one. ;) Given your comments, however, it seems that the forge user likely belongs to a group of the same name, as does the www-data user. Please confirm. Also, it is important to note that both of these users must have appropriate access to the filesystem for the site to function as intended, because both php-fpm and nginx need a certain level of access in this configuration. Try this (and do the same to any other directories, as necessary): $ sudo chown -R forge:www-data ./storage $ sudo chmod -R 770 ./storage What is the result? The best way to troubleshoot this further is to create a shell script that you can edit and refine between executions, until you have it "just right". Something like this, from within the top-level Laravel application directory: $ vim ./set-perms.sh And then paste-in the following: #!/bin/sh chown -R forge:www-data ./storage chmod -R 770 ./storage To execute it: $ sudo ./set-perms.sh You can then edit the contents of the script, re-run it, see if it works as you require, and if not, edit and run it again. Rinse and repeat until it's dialed-in! cbj4074 left a reply on Group Forge Changed To Www-data, Can't Fix 1.) As which user does php-fpm run, and does this user belong to a group? If so, which group? 2.) As which user does nginx run, and does this user belong to a group? If so, which group? Generally speaking, your entire project tree, on the filesystem, should be owned by the same user and group. For example, my storage directory looks like this (I'm not using Forge, but the point still stands): drwxr-x--- 6 web1 client1 4.0K Jan 5 15:31 storage In my case, php-fpm runs as the web1 user, and nginx runs as the www-data user. web1 is in the client1 group, and client1 (which, again, is a group) is in the www-data group. (I'm running Ubuntu 16.04 LTS in a relatively "vanilla" configuration.) As you can see from the permissions set on my storage directory, 0750 should be sufficient, if you have configured everything appropriately. The subdirectories look the same: drwxr-x--- 6 web1 client1 4.0K Jan 5 15:31 . drwxr-x--- 13 web1 client1 4.0K Jan 5 15:32 .. drwxr-x--- 3 web1 client1 4.0K Jan 5 15:31 app drwxr-x--- 2 web1 client1 4.0K Jan 5 15:31 files drwxr-x--- 5 web1 client1 4.0K Jan 5 15:31 framework drwxr-x--- 2 web1 client1 4.0K Jan 5 15:31 logs My advice is to untangle this problem once and for all, determine which ownership and permissions are necessary, and create a simple shell script that is capable of "fixing" both at any time, should this occur again in the future. If you're able to answer the two questions I asked, above, I'm happy to provide additional guidance! cbj4074 left a reply on Failed To Load Resource: The Server Responded With A Status Of 500 (HTTP/2.0 500) Those permissions don't necessarily look problematic. You should be able to run composer commands in production (or any other environment), provided you run them as an appropriate user; that is, the user under whom PHP runs for the host in question (and not "root"!). I wouldn't change the permissions until you have concrete evidence to suggest that they are problematic as is. If you are seeing "Access denied" or similar in a log, then please post the specific message. I concur with @lostdreamer_nl in that you should see evidence of the 500 error in your web-server logs. You can tail the log while you hit the site with your iPhone and see what activity is logged for the IP address in question: $ sudo tail -f -n 80 /var/log/nginx/error.log (adjust the path to the log as necessary) cbj4074 left a reply on Trouble With Blade Section Inheritance Thank you, @poppabear (who helped me through a side channel)! I should have known to go back to the Laravel 4.2 documentation, as it explains the @overwrite directive, which is exactly what I was missing: template-2.blade.php @extends('template-1') @section('myMarkup') OVERRIDE markup goes here. @overwrite This produces the following, as expected: <div> DEFAULT markup goes here. OVERRIDE markup goes here. </div> For whatever reason, this information was expunged from the documentation between versions 5.0 and 5.1. This is a recurring pattern in the Laravel documentation. Crucial information is expunged between versions, and repeated attempts to have it re-added (i.e., PRs on GitHub) are ignored. Extremely frustrating. And, aside from that, the current documentation is flatly incorrect in that it states the following at the bottom of : In this example, the sidebar section is utilizing the @parent directive to append (rather than overwriting) content to the layout's sidebar. The implication is that the default behavior, when extending a template and defining a section with the same name as a parent template, is to overwrite the section completely. That assertion is provably untrue, as evidenced above. cbj4074 started a new conversation Trouble With Blade Section Inheritance Hello! Blade template inheritance seems so straightforward, yet when I cobble-together the most basic scenario, inheritance doesn't behave the way I had expected. Take, for example, these three Blade templates: template-1.blade.php @section('myMarkup') DEFAULT markup goes here. @endsection @yield('myMarkup') template-2.blade.php @extends('template-1') @section('myMarkup') OVERRIDE markup goes here. @endsection template-3.blade.php <div> @include('template-1') @include('template-2') </div> Route: Route::get('/test', function () \\{ return view('template-3'); }); Requesting this route produces the following output: <div> DEFAULT markup goes here. DEFAULT markup goes here. </div> I had expected the following instead: <div> DEFAULT markup goes here. OVERRIDE markup goes here. </div> What might I be missing here? Thanks for any help! cbj4074 left a reply on Class Log Does Not Exist I just encountered this in Laravel 5.1. As @jbloomstrom and @jhoff suggest earlier in the thread, modifying Illuminate\Container\Container.php temporarily enables one to catch the underlying error that occurs prior to the logging error. In my case, the underlying problem turned-out to be: Symfony\Component\Debug\Exception\FatalThrowableError: Call to undefined method Closure::__set_state() in /var/www/example.com/laravel/bootstrap/cache/config.php:66 The problem seemed to occur suddenly, but I came to realize that it began when I executed php artisan config:cache, which creates the file bootstrap/cache/config.php, and this is the file in which the offending call, to an apparently-undefined function, Closure::__set_state(), is made. I still don't know which library is at fault, but will update this post should I discover the culprit. UPDATE: The underlying problem in this instance is that one or more configuration files defines a closure. Laravel does not allow for this (see: ), but the problem manifests only after the configuration is cached. To fix the issue, hunt-down the offending key in bootstrap/cache/config.php, determine which configuration file the line is associated with, and remove any closures from the source file. cbj4074 left a reply on Set Password On Uploaded Files I'm not familiar with any of those libraries, in particular, but why not simply use 7-zip to create the ZIP archive with encryption and a password? Provided you have the ability to install 7-zip, or request that it be installed for you, it should be as simple as calling the command-line executable and passing it appropriate arguments. In 7-zip, the -p switch is used to add a password to the archive. You can find examples of the specific syntax about halfway down this page (search for the word "secret"): cbj4074 left a reply on Disable Xdebug For those running PHP 7 on Debian or Ubuntu, the commands vary slightly: $ sudo phpdismod xdebug $ sudo service php7.0-fpm restart cbj4074 left a reply on Cartalyst Platform Still Maintained? Hello, everyone! I am pleased and honored to introduce myself as Cartalyst's Community Liaison, effective today. I am grateful that my eagerness and enthusiasm towards Cartalyst's products attracted the company's attention and lead it to reach-out to me in this capacity. As a brief introduction, I've been using Cartalyst's products in a professional capacity for about 18 months. A little more than halfway through that period, I felt as though my familiarity with the products was sufficient to offer my candid opinion in a similar thread. As a then-newish-user, my hope is that this post will hearten and encourage others who may be interested but still "on the fence": Since writing that post, my appreciation for Cartalyst's work has only grown stronger. The more familiar I become with the code-base, the more I appreciate the effort and expertise that belies the Cartalyst Arsenal. Similarly, the more I interact with the amazing user community and the talented developers who reach for the Cartalyst toolbox every day, the more I enjoy being a part of the group. It's a true pleasure to learn from these individuals -- both the Cartalyst developers and the ever-growing group of subscribers -- and give-back wherever I'm able. I recognize several usernames in the above-mentioned thread as individuals who later subscribed to the Cartalyst Arsenal, which I mention only as testament to the fact that Cartalyst's products speak for themselves. Perhaps some of those individuals would be willing to provide an update as to their respective experiences. As noted above, one of the challenges that Cartalyst has faced, historically, is keeping the user-community up-to-date and apprised of all the goings-on behind the scenes. As so many growing businesses are, Cartalyst is torn between staying afloat with work that earns immediate returns and work that strives to satisfy future needs. Rest assured that development is as active as ever and that, in my humble opinion, Cartalyst sits atop a veritable gold-mine of awesome, yet-to-be-seen products that promise to reshape the ways in which many of us build applications on the Web. To offer a bit about myself, I've been working in PHP professionally since 2003. In the spirit of Einsteinian wisdom, the more I learn, the more I realize just how little I know. And Cartalyst's developers' code has been rather influential in cementing that bit of age-old wisdom for me. :) Truly, I am humbled. The code is super-clean, well-organized, and impressive in every respect. As someone who has written a framework himself, of perhaps nearly equal scope, I can say that these developers are well-qualified. I look forward to providing everyone with regular updates in the future and assisting Cartalyst in all of the areas noted in this thread. Please don't hesitate to reach-out to me directly or to visit me on Cartalyst's Gitter channel, where I spend most of my life! :) Your replies and questions are welcome, and I, like @drsii, will do my best to respond quickly and candidly. Thanks so much! cbj4074 left a reply on Bug? Changing A Column Of Table That Has An Enum Type. This bit me today, too. The fix is simple. (Finding it, much less so.) Credit to . In the database migration file, add a constructor method, like so: public function __construct() { DB::getDoctrineSchemaManager()->getDatabasePlatform()->registerDoctrineTypeMapping('enum', 'string'); } Problem solved! At least in Laravel 5.1 (I haven't tried this in 5.2). I should note that, in my specific case, I'm attempting to rename a different column on the table (not one that is of the ENUM() type), and I don't see any unwanted fallout from this approach. That's why this "bug" is so weird; it prevents one from modifying any column on a table that contains one or more ENUM() columns. For anyone who is trying to modify an actual ENUM() column, this approach will likely change the column type to VARCHAR() or similar and the column will then accept any string. cbj4074 left a reply on How Best To Copy A Row Of Data From One Table To Another? @JoeDawson Hello, and thanks for taking a look! I'm not sure I understand your question. Are you asking how I am adding the row to be copied to the source table in the first place? To explain a bit more, I have two tables: users and users_deleted. When a user is deleted, I need to copy the user record from users to users_deleted. (I'm aware of Eloquent's soft-delete capability, but it's not appropriate for this specific situation.) Currently, the tables share identical structures, but that may not be the case in the future. It is possible that users_deleted may contain a different number of columns, in which case the model-driven approach may not be viable longer-term (unless it's possible to add/remove properties [columns] on the model dynamically, before calling save() against it). I'm not opposed to doing this in two steps, e.g., select the data and then insert it with Query Builder. I'm just trying to understand what options are available before settling on an approach. Thanks again! Happy to answer any other questions (and please do let me know if I didn't answer your previous question adequately). cbj4074 started a new conversation How Best To Copy A Row Of Data From One Table To Another? I have a need to copy a row of data from one database table to another. What is the best means by which to accomplish this in Laravel? The question at is very similar, but the accepted answer seems to assume that some version of the record already exists in both tables (whereas in my case, the record exists only in the first table). I found the /Illuminate/Database/Eloquent/Model::setTable() method, and it seems to "work" in that when I inspect the model after calling it, the #table property reflects the new value, but when I call save() on the model, the data is not written to the DB. Yet, the save() call returns true. Further, I notice that if I pass an invalid/non-existent table name, e.g., setTable('table_does_not_exist'), the call still returns true. Any assistance in this regard would be much appreciated! Thanks in advance! cbj4074 left a reply on Migrate To Laravel With Old Hashed User Passwords. There is already some sound advice here (and cheers for the actual code-snippet, @MikeHopley ), but I'd like to underscore a couple of crucial points that are not Laravel-specific (even if that means they are slightly off-topic). 1.) At no time should the old hash be stored in a separate column (even if the plan is to destroy it within some reasonable time-frame); doing so introduces significant risk without justification. This is exactly how Ashley Madison passwords were cracked ( ): The recommended approach is to store the algorithm along with the hash e.g. MD5:hash:salt or bcrypt12:hash:salt. This allows you to easily identify what algorithm to use on a per-user basis. When you deem an encryption strategy obsolete you can still protect your existing users by wrapping their existing hash in a new algorithm; in this case that would be bcrypt-ing the existing md5 hashes and storing something like md52bcrypt:hash:salt. 2.) In an ideal implementation (and I haven't done the research required to know if Laravel qualifies), there is no need to have a separate has_migrated field, or similar. This is for two reasons: a) this type of migration should be an ongoing process that happens every time that significant risk against a given hashing algorithm emerges publicly, thereby rendering such a column illogical, and b) it is redundant, because in an ideal implementation, the algorithms that have been used to compute a hash are embedded in the stored value. 3.) There is no need to send out an email blast at any point, because in an ideal implementation, every user's password is wrapped in the newest/strongest hashing algorithm, even if there are several hashing "layers" for any given user's password. One might visualize this strategy with the following pseudo-code: scrypt(bcrypt(sha1(md5('theuserspassword')))). One of the more comprehensive road-maps that I've seen regarding this subject may be found at: @uther_bendragon/sustainable-password-hashing-8c6bd5de3844" target="_blank"> Stay safe out there! ;) cbj4074 left a reply on How Does URL::setRootControllerNamespace() Work In Versions >= 5.1? Thanks to both of you. That's an excellent point; I see no reason not to use route() instead of action(), given that we assign a name to every route. I'm content with that approach; it more or less obviates the need to use URL::setRootControllerNamespace(), and keeps the syntax brief. Much appreciated! cbj4074 left a reply on How Does URL::setRootControllerNamespace() Work In Versions >= 5.1? Thanks for taking a look, @martinbean . I appreciate it. You make a worthwhile point about the ambiguity that URL::setRootControllerNamespace() may introduce. I am already using route group namespaces in the same way you demonstrate. But doing so still does not make it possible to abbreviate the controller namespace that is passed to the action() helper. For example, if I define one of my package's routes as such Route::group([ 'namespace' => 'MyVendor\MyPackage\Http\Controllers' ], function () { Route::get('/locations', array('as'=>'locations','uses'=>'LocationController@index')); } }); and then I try to do this in a Blade template {{action('LocationController@index', ['locationId' => $location->id])}} then an ErrorException is thrown. I would really prefer to avoid the need to do this instead: {{action('MyVendor\MyPackage\Http\Controllers\LocationController@index', ['locationId' => $location->id])}} Despite the potential ambiguity that modifying the root controller namespace introduces, its benefits can be significant. Is my assessment accurate? Or have I missed your point entirely? Thanks again for your help! cbj4074 started a new conversation How Does URL::setRootControllerNamespace() Work In Versions >= 5.1? I noticed that Taylor scrubbed the section of the Controller documentation that describes the URL::setRootControllerNamespace() method. He also scrubbed the documentation for the action() helper function. I requested that he restore the documentation ( ) and he agreed to do so. Shortly thereafter, he removed it again ( ). I've been relying on this method, in conjunction with the action() helper function, to simplify the generation of URLs to controller actions. These two functions seem rather useful, and I'm at a loss as to why there is this push to expunge them from the documentation. One possible explanation is that calling URL::setRootControllerNamespace() explicitly simply isn't necessary anymore. Were this to be the case, I'd accept it and move on with life. However, when I comment-out the call to URL::setRootControllerNamespace('MyVendor\MyPackage\Http\Controllers'); in my custom service provider, some controller-routes continue to resolve correctly, while others do not; they fail with something like: ErrorException in UrlGenerator.php line 590: Action App\Http\Controllers\MyController@myMethod not defined. (that "automatically" generated namespace is wrong; it should be MyVendor\MyPackage\Http\Controllers). I find this behavior curious because all of the my application's controllers utilize the same namespace, and they all extend the same base controller. Ultimately, my question is this: Does anybody know how to generate a URL to a controller action while using only the portion of the class name relative to one's controller namespace, in Laravel >= 5.1, without calling URL::setRootControllerNamespace() explicitly first? Taylor states in that this method is "called automatically for you in 5.1", but refuses to explain how it works. Thanks for any assistance! cbj4074 left a reply on Where Do You Set The Public Directory In Laravel 5? With this "new"/"better" method, you don't put that in /public/index.php. In fact, you don't put that function anywhere. In the simplest terms possible (basically, duplicating the correct [though, not accepted] answer on StackOverflow): 1.) In /bootstrap/app.php (not /public/index.php!), change the $app variable's declaration to this: $app = new Illuminate\Foundation\Application( realpath(__DIR__.'/../') ); To be clear, in an unmodified installation, the lines to be replaced with the above snippet are here: 2.) Create the file /app/Application.php. Populate it with the following contents: <?php namespace App; class Application extends \Illuminate\Foundation\Application { public function publicPath() { return $this->basePath . '/../../web'; } } Be sure to change the return value in the above method to reflect the actual path to the desired public directory on your system. That's all there is to it! Happy to answer any questions if you're still stuck. cbj4074 left a reply on Where Do You Set The Public Directory In Laravel 5? @shanecp Thank you for sharing this tip! I'm delighted to find that this does, in fact, do the job! Very simple and elegant as compared to some of the alternatives posited (including my own). I still had to modify index.php because my relative directory structure (between Laravel's "base path" and its "public path") deviates from the default, but this approach is definitely the best I've seen to date. Thanks again for taking the time to let us know! cbj4074 left a reply on How Can I Disable The Xdebug? @EliyaCohen Bashy's solution is rather helpful because it is more generic, but for the sake of posterity, the preferred means by which to disable xdebug on Debian (or Ubuntu), which you appear to be using, is: php5dismod xdebug This simply deletes the symbolic links to xdebug.ini (most commonly in /etc/php5/fpm/conf.d and /etc/php5/cli/conf.d). To renable xdebug, it's the opposite: php5enmod xdebug Conversely, this creates symbolic links for xdebug.ini in the effective conf.d directories (for FPM, CLI, etc.). It's necessary to restart the PHP daemon (if using one) to render the changes effective. cbj4074 left a reply on Where Do You Set The Public Directory In Laravel 5? @inyansuta My best guess is that Laravel is using PHP's __DIR__ constant, so as long as your relative directory structure is the same, everything "just works". It's when the relative structure changes that employing the measures discussed in this thread becomes necessary. cbj4074 left a reply on [L5] Nginx Error Instead Of Json Response What's the URI that you're requesting? Are you sure that you are, in fact, hitting the associated route? In other words, if you put a dd(); just before if(!$article){, is it printed? cbj4074 left a reply on Laracasts Painfully Slow? Getting the occasional 504 Gateway Timeout, but I'm sure Jeffrey knows that all too well. And I'm helping by creating frivolous traffic. :) cbj4074 left a reply on Where Do You Set The Public Directory In Laravel 5? @ferrolho , I apologize for the long-delayed reply. I could have sworn that I had enabled notifications for this thread, but, apparently, I had not. I don't use PHP's built-in HTTP server via Laravel, so I had not tried php artisan serve, but I just tried it on my development server and the HTTP server at least starts: # php artisan serve Laravel development server started on I receive a 500 error when I try to browse to, but that could be for so many different reasons. I have a particularly complex stack configuration (out of necessity), so I'm not at all surprised that PHP's built-in HTTP server doesn't work for me out-of-the-box. I probably need to pass it a path, at a minimum, but, unfortunately, I can't spare the time to troubleshoot it further. I won't belabor the issues that Taylor (and others) raise in this discussion, but I urge you to consider using Homestead, plain-old VirtualBox, or something better suited to the task: But let's try to work through your issue, nonetheless. Firstly, this bit should be placed at the top of index.php, not the bottom: function public_path($path = '') { return realpath(__DIR__); } But in looking back at my notes regarding this subject, I appear to have gone a slightly different direction some time after my initial post in this thread. I removed that function in favor of what seems to be a slightly more involved, although less error-prone, approach. I've prepared a GitHub Gist of the changes that I typically make to index.php: All one should have to do is change lines 23 and 24 to suit his environment. The default values are intended to work with a "stock" Laravel configuration. Regarding the Service Provider, your code looks okay to me. You remembered to add the Service Provider to the 'providers' array in /config/app.php, correct? If the revisions to your index.php don't solve the problem, I'd need additional details regarding the error that you receive: [ErrorException] chdir(): No such file or directory (errno 2) I suspect that this is related to using PHP's built-in HTTP server without specifying a path to the document root. Because I don't use it, I'm not sure how php artisan serve sets the web-server's document root, but you might experiment with starting the server manually and see if you can get it to behave. Something like this: php -S localhost:8000 -t public/ where public/ represents the correct path on your particular system. If you manage to get that to work, then we may have yet one more place within the Laravel source to override the public path definition. Please do let us know what you find! Again, sorry for the late reply! cbj4074 left a reply on Where Do You Set The Public Directory In Laravel 5? @mtvs_dev Good point about not using App::bind() in index.php. And this reason is in addition to the reason I describe in my first post within this thread (item 4). @iboinas There are problems with the approach you describe above. Most notably, the problem that I address in item 3 above: this approach is incompatible with environment-detection via the .env file. cbj4074 left a reply on Is A Subscription To Cartalyst Worth It? My predecessor found Cartalyst, so I can't take credit for the sound decision to subscribe. We've had a multi-developer subscription for about a year and I must agree that it's worth every penny. I'm not in any way affiliated; I'm just a very satisfied subscriber. In our particular case, the most compelling arguments to be made in its favor are: 1.) The product is excellent. I've learned more about Laravel from reading Cartalyst's code than I have from reading any book or following any tutorial. The code is as well-written as any I've seen. Many large, popular products have horrible, unsightly code hidden under the hood. After nearly a year with Cartalyst, I haven't seen a single line of code that made me shake my head. I have the most experience with Platform and Sentinel (the non-FOSS successor to Sentry), which I consider to be Cartalyst's "flagship" packages. These two packages, coupled with the auxiliary components that contribute to their make-up, have thus far been sufficient for all of our application development needs (and we are a sizable corporate enterprise). These two packages alone provide content management, robust authentication, role-based authorization, a theme engine with asset-queuing and complete template and asset inheritance (with fallbacks), and more. Extremely powerful. 2.) The entire product-line is designed with extensibility at the forefront. It's of utmost importance that we're able to override functionality to suit our specific needs. Platform is especially powerful in this regard because of its Extension system. Platform provides a GUI-driven tool for spawning new Extensions, which makes them effortless to create and configure. Each Extension is a mini-Laravel installation, which is portable, self-contained, and features dependency-management (one Extension can require another, for example), and allows the developer to override quite literally any aspect of the base application's behavior -- including logic, templates, assets, localization strings... they've thought of everything. 3.) The support is top-notch. Cartalyst is extremely responsive to support inquiries, bugs/issues, and Pull Requests on GitHub. Support is not limited to clarifying finer points of the documentation; they never let themselves off easy. If the team can't solve your problem off-the-cuff (and they usually do), the developers will roll-up their sleeves and dig into your code, which is unprecedented in my experience. Even when it's "operator error", Cartalyst will show you where you've gone wrong, make the necessary changes, and return your code in working condition. Now that's support. As somebody who has opened many bug reports of varying severity, I've been rather impressed with Cartalyst's ability to fix issues and tag the corresponding releases quickly. 4.) Cartalyst worries about semantic versioning, release tagging, release cycles, etc., so I don't have to. Anybody who has ever had to manage a large, non-private code-base knows all too well the monumental effort required. Laravel's refusal to follow semantic versioning thus far has made the job even more cumbersome, yet the Cartalyst team ensures that its code is released in lockstep with changes in Laravel and that there are no "upstream surprises". Offloading this responsibility to Cartalyst has been worth the cost alone. 5.) The cost is incredibly reasonable and effortless to justify to management. Cartalyst should (and probably will) charge more in the future. I have to assume that the current rate is "introductory" and designed to build the user-base. Even if the cost doubled, we wouldn't hesitate. Given that the subscription costs as little as an hour of one's billable time each month, the support alone will pay for the subscription in a matter of days. It's all too easy to waste several non-billable hours struggling with some snag that the Cartalyst team would be able to address quickly and expertly. 6.) The private Gitter chat channel. I've learned as much from the Gitter chat channel as I have from any other learning aid. There is a social aspect to the chat channel that makes learning Cartalyst fun. There is a sense of companionship, camaraderie, and mutual best-interest that fosters creativity and productivity. Inevitably, somebody in that chat channel has the answer (and oftentimes it's another subscriber, not even a Cartalyst staff-member). 7.) The license is fair and reasonable. The ability to continue operating websites that are built upon Cartalyst components in the event that we terminate the relationship for any reason is imperative to our company. The licensing terms allow for this and are entirely reasonable in every other respect. Many organizations take this ability for granted when dealing with proprietary software licenses. All of that said, the other assessments in this thread are fair and accurate. The documentation is good, but it does lag behind the code-base. But, I would rather have mature code with decent documentation than young, poorly-vetted code with exceptional documentation. My understanding is that they are fully-aware of the need for more tutorials, more walk-throughs, and more examples in the manual, and are working diligently to meet that need. Knowledge worth acquiring is difficult to obtain. The learning curve is quite steep, and as davernz noted, trying to learn Laravel and Cartalyst's packages at the same time is a considerable undertaking. As someone who did precisely that, it took me several months of 8-hour days to feel like I could more or less build anything I might need (and I'm a senior developer with over 10 years of full-time development experience, mostly in PHP). But once your skills are dialed-in, the sky is the limit. cbj4074 left a reply on Ho To Access Config Variables In Laravel 5? 1.) There's no need to use the Config facade to get configuration values; there is now an equivalent helper-method, config(). 2.) In Laravel 5, configurations are no longer "namespaced", which is critical to understand when creating a configuration file (and particularly when merging configuration files). In Laravel 4, the configuration options for a specific package were nested under a namespaced array, like so: return [ 'myextension' => [ 'host' => 'localhost', 'port' => 443, ], ]; In Laravel 5, the equivalent is: return [ 'host' => 'localhost', 'port' => 443, ]; 3.) Rather than store your custom configuration values in /config/app.php, a cleaner approach is to store them in your own configuration file, e.g., /config/myname-mypackage.php. You would then retrieve all values with something like: $config = config('myname-mypackage'); To retrieve only an individual value, simply use dot-notation: $config = config('myname-mypackage.host'); It is possible to nest configuration files within subdirectories (within /config/), too, which can be necessary in order to prevent conflicts ( @sdebacker asked about this -- great question ). If you create your configuration file at, for example, /config/myname/mypackage.php, you would access the values using dots in place of the directory separators: $config = config('myname.mypackage'); 4.) It's simple to merge configuration files using a service provider's register() method: public function register() { $this->mergeConfigFrom( __DIR__.'/../config/config.php', 'myname-mypackage' ); } (note that we omit the .php extension in the second argument) 5.) If you're building your own packages, you can specify any number of configuration files to be published when the new vendor:publish artisan command is executed: public function boot() { $this->publishes([ __DIR__.'/../config/config.php' => config_path('myname-mypackage.php'), ]); } For more information, refer to the relevant doc at . cbj4074 left a reply on Where Do You Set The Public Directory In Laravel 5? As a new Laravel user, I find it staggering that there is no simple, elegant, centralized, and environment-aware means by which to define both the "private" and the "public" filesystem paths that are referenced throughout the application. While it may make sense to nest the public directory beneath the application root for source-control and packaging/distribution purposes, that is an unrealistic structure in a real-world deployment scenario. The scenario that the OP describes is far more realistic, wherein the "public" directory is actually the web-server's "document root", and the rest of the application resides above and outside of the document root. The default directory structure inspires less capable users to dump the entire application (including everything outside of the public directory) into the web-server's document root. This practice should be discouraged, as it exposes a broad attack-vector: the ability to access application resources via HTTP. One might argue that a properly-configured server does not expose the application to unnecessary risk when Laravel is deployed as-is, but there is no guarantee as to the environment's viability. If PHP is upgraded and the process goes awry, thereby causing the web-server to present .php files as raw file downloads, an active attacker could conceivably "scrape" most or all source code. This type of mis-configuration has the potential to expose application details, ranging from filesystem paths to various credentials (cringe). My research on this subject has yielded five possible approaches to using a more sensible "public" path (and I'm sure there are others): 1.) Using a symbolic link (or junction point, on NTFS volumes) This method is not portable. While it may be an acceptable solution for a solo developer, in a collaborative development environment, it's a non-option because it doesn't travel with the application source-code and is therefore cumbersome to implement consistently. 2.) Using IoC container in /app/Providers/AppServiceProvider.php This is not a bad approach, but as @patrickbrouwers notes, it suffers from a considerable flaw: the public_path() helper function remains unaffected in the context of configuration files, which causes the public path to be defined incorrectly in many third-party packages. Also, if I were going to implement this approach, I would create a new Service Provider, rather than modify the included AppServiceProvider class (only because doing so reduces the likelihood of having to merge-in upstream changes). 3.) Using IoC container in /bootstrap/app.php This works well enough -- until environment-detection via the .env file is necessary. Any attempt to access $app->environment() at this stage yields, Uncaught exception 'ReflectionException' with message 'Class env does not exist'. Furthermore, in order to minimize the impact of upstream changes, my preference is to limit customization to as few files as possible, which brings me to the next method. 4.) Using IoC container in /index.php Given that /public/index.php already requires modifications to function out-of-the-box on many systems (the fact that realpath() is not used in the require and require_once statements causes the application to fail fatally in environments in which PHP enforces open_basedir restrictions, which do not allow nor resolve relative path notations, such as ../), my preference would be to make this change (to the public path definition) in the same file. But, as with the above method, attempts to perform environment-detection cause a fatal error in this context, because the required resources have not yet been loaded. 5.) Using custom IoC container in /app/Providers/*.php, coupled with overriding public_path() helper function I settled on this method because it is the only method that solves the third-party package configuration problem (mentioned in method #2) and accounts for environment-detection. The public_path() helper function (defined in /vendor/laravel/framework/src/Illuminate/Foundation/helpers.php) is wrapped in if ( ! function_exists('public_path')), so, if a function by the same name is defined before this instance is referenced, it becomes possible to override its functionality. Given that I have already had to modify index.php (due to the open_basedir problems that I explained in #4), I elected to make this change in index.php, too. //Defining this function here causes the helper function by the same name to be //skipped, thereby allowing its functionality to be overridden. This is //required to use a "non-standard" location for Laravel's "public" directory. function public_path($path = '') { return realpath(__DIR__); } Next, I created a custom Service Provider (instead of using AppServiceProvider.php, the reasons for which I explain in #2): php artisan make:provider MyCustomServiceProvider The file is created at /app/Providers/MyCustomServiceProvider.php. The register() method can then be populated with something like the following: /** * Register the application services. * * @return void */ public function register() { if (env('PUBLIC_PATH') !== NULL) { //An example that demonstrates setting Laravel's public path. $this->app['path.public'] = env('PUBLIC_PATH'); //An example that demonstrates setting a third-party config value. $this->app['config']->set('cartalyst.themes.paths', array(env('PUBLIC_PATH') . DIRECTORY_SEPARATOR . 'themes')); } //An example that demonstrates environment-detection. if ($this->app->environment() === 'local') { } elseif ($this->app->environment() === 'development') { } elseif ($this->app->environment() === 'test') { } elseif ($this->app->environment() === 'production') { } } At this point, everything seems to be working over HTTP. And while all of the artisan commands that I've tried have worked as expected, I have not exhausted all possible CLI scenarios. I welcome any comments, improvements, or other feedback regarding this approach. Thanks to everyone above for contributing to this solution!
https://laracasts.com/@cbj4074
CC-MAIN-2019-35
refinedweb
9,729
54.02
Hi folks, rather than discussing about which operator symbol to use for record access, which is really a question of personal taste we should try to seriously discuss the proposals and get to a solutions! We all seem to agree that records are broken in Haskell. In order to fix that we need a new and most probably incompatible solution. However, I think the new solution should go in a new version of the Haskell standard (among other things :-) ). I would strongly disadvice to try to stick with the old system and improve it. Just because there are lots of different opinions we should still try to find a reasonable solution soon. Desite the minor problem of '.' that dominated the discussion so far, what are the REAL offences against Simons proposal [1] (denoted as SR in the following)? To me it sounds like a very reasonable starting point. Which other proposals exist? I quote David Roundy's list of problems [2] with a short annotation whether SR solves them: 1. The field namespace issue. solved by not sharing the same namespace with functions 2. Multi-constructor getters, ideally as a function. not solved. only possible by hand - As stated by Wolfgang Jeltsch [3] another datatype design might be better - I can image a solution is within SR example: > data Decl = DeclType { name :: String, ... } > | DeclData { name :: String, ... } > | ... > d :: Decl in addition to > d.DeclType.name > d.DeclData.name we provide (only if save, see 3.) > d.name 3. "Safe" getters for multi-constructor data types. not supported as it is - with the above suggestion it could be possible (don't know if desireable) 4. Getters for multiple data types with a common field. - solved with contrains > getx :: (r <: { x :: a }) => r -> a 5. Setters as functions. doesn't seem to be supported, or I don't see it right now. 6. Anonymous records. Supported 7. Unordered records. I don' t understand it. points added from me: 8. Subtyping Supported, quite nicely 9. higher order versions for selecting, updateing ... [4] not supported seems important to me, any solutions? Regards Georg [1] [2] [3] [4] Am Donnerstag, 17. November 2005 19:08 schrieb Dimitry Golubovsky: > > > I found it useful to use (mainly for debugging purposes) > > mapM (putStrLn . show) <some list> > > if I want to print its elements each on a new line. > > -- > Dimitry Golubovsky > > Anywhere on the Web -- ---- Georg Martius, Tel: (+49 34297) 89434 ---- ------- --------- -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url :
http://www.haskell.org/pipermail/haskell/2005-November/016931.html
CC-MAIN-2014-23
refinedweb
423
67.76
Contents JavaScript for Trac and plugin development JavaScript is used in Trac to add dynamics to web interface elements on the browser side: - expanding/folding in the TracBrowser - providing keyboard shortcuts - and many other features This page lists some of the uses of this language in Trac. We also adhere to a JavaScript coding style. Third party libraries jQuery Trac makes heavy use of jQuery library. Access to this library contents is provided through the main function named $, which is just a shortcut for the jQuery namespace and it is possible to use the full name instead. However, other libraries may use $ too, which may cause conflicts. To avoid these conflicts, switch jQuery into non-conflict mode with jQuery.noConflict() call. This is well explained in. You will see many blocks in Trac that use $ for jQuery. They do it in their local scope defined by nameless function or closure: (function($) { /* some code that uses $ */ })(jQuery) There is a good description of closures and (function(){})() construct at. $(document).ready() To execute and modify the DOM tree the JavaScript function should wait until a page fully loads. With jQuery it looks like: $(document).ready(function_name); In non-conflicting mode, code that executed in Trac on page startup is enveloped in closure and looks like: jQuery(document).ready(function($) { ... }); Upgrade Minified versions of a given release x.y.z of jQuery can be found in URLs with the following pattern: Don't forget to update the default value for the jquery_location setting. See for example r16094. jQuery UI Since Trac version 1.0, we also bundle jQuery UI, a set of standard user interface elements (UI). Upgrade A specific x.y.z version of the minified Javascript code can be downloaded from URLs using the following pattern: This correspond to the minified version of the whole of jQuery UI, ie we don't use a subset of what is currently used by Trac on purpose, so that plugins can assume they have access to all jQuery UI components. We use a custom theme for the CSS, built using the jQuery UI Theme Roller. The jquery-ui.css file contains a link to the theme roller with preselected values. Themeroller link for Trac 1.0.0. jQuery UI Timepicker Addon We use the Timepicker extension for the standard jQuery UI datepicker. Get the release that matches the jQuery UI version: The jquery-ui-timepicker-addon.js file is usually compressed using the Google Closure Compiler, which gives good results. The minified file is saved in trac/htdocs/js/jquery-ui-addons.js. Finally, the jquery-ui-timepicker-addon.css file is merged with the trac/htdocs/css/jquery-ui-addons.css file. TracIni settings After an upgrade, don't forget to update the default values for the jquery_ui_location and jquery_ui_theme_location settings. See for example r16094.
https://trac.edgewall.org/wiki/TracDev/JavaScript
CC-MAIN-2019-26
refinedweb
472
65.42
UI5Lab is a community driven project to establish a repository for UI5 Custom Control Libraries. It is meant to gather forces throughout the community and enable everyone to use UI5 Custom Controls easily. More information about UI5Lab can be found here: So far we have collected more than 20 community libraries and the project has matured quite a bit over the last months. Our initial prototype used grunt and bower to load the UI5 dependencies. Recently, we have switched all UI5Lab repositories to npm modules and the new UI5 tooling. Thanks @mlauffer, @Stermi, @WittmerJ and @Randombyte for the help! While the old library structure still works ok, we recommend switching to the new tooling for all community libraries. Bower is deprecated and the new UI5 tools are much simpler to use than grunt. In this guide, I will explain the simple steps to make it work based on my UI5Lab library openui5.parallax. The same code base is also used for the ui5lab.geometry template library. Assuming you have followed our UI5Lab documentation to create your library only a few steps are required: - Switch from bower to OpenUI5 npm modules - Switch to the new UI5 build and development tooling - Test the new setup and check your resource paths - Publish the new library version to npm or GitHub - Update the library dependency in UI5Lab-central Let’s go do it step by step… 1. Switch from bower to OpenUI5 npm modules This step is quite simple: - Delete the bower.json file - Modify your package.json file to load the npm modules instead of the deprecated bower modules. - Update your .gitignore and .npmignore and throw out those bower paths there, too. - Finally, don’t forget to update your documentation – bower install is not needed anymore in the development setup steps. Package.json Add the new @openui5 “dependencies” and remove the bower “devDependencies” "dependencies": { "@openui5/sap.f": "^1.60.0", "@openui5/sap.m": "^1.60.0", "@openui5/sap.ui.core": "^1.60.0", "@openui5/sap.ui.layout": "^1.60.0", "@openui5/themelib_sap_belize": "^1.60.0", "parallax-js": "^3.1.0" } Note: If you wish to continue using Grunt, you have to adjust your Gruntfile.js a bit. You can find instructions in this commit. We don’t to it here as we switch to the new UI5 tooling immediately. 2. Switch to the new UI5 build and development tooling Now it is time to switch the server and build tools stack as well. We need add the dev dependency to the new UI5 tooling and exchange the start and build scripts. We also add some new ui5lab metadata to the package.json. Finally, we add a ui5.yaml configuration file to the project and adjust the ignore files. Gruntfile.js Delete it, we don’t need it anymore Package.json - Replace the scripts with the commands “ui5 serve” and “ui5 build”, add the “prepare” script - Increase your library version and the ui5lab-browser dependency to 1.0.0 - Add a section “ui5lab” and fill in namespace, icon, and display name (for future innovations) - add the “@ui5/cli” dev dependency { "name": "openui5-parallax", "version": "1.0.0", "description": "A library wrapping parallax.js in UI5 controls to create stunning layered effects", "private": false, "scripts": { "test": "echo \"Error: no test specified. Please add some.\" && exit 1", "prepare": "node prepare", "start": "ui5 serve -o", "build": "ui5 build" }, "ui5lab": { "namespace": "openui5.parallax", "icon": "dimension", "displayName": "Parallax" }, "dependencies": { "@openui5/sap.ui.core": "^1.60.0", "@openui5/sap.m": "^1.60.0", "@openui5/themelib_sap_belize": "^1.60.0", "@openui5/sap.ui.layout": "^1.60.0", "@openui5/sap.f": "^1.60.0", "parallax-js": "^3.1.0" }, "devDependencies": { "ui5lab-browser": "^1.0.0", "@ui5/cli": "^0.2.3", "fs-extra": "^3.0.0" } } ui5.yaml Define a library project for the UI5 tooling, apps would use type “application”. This will instruct the tooling to serve resources properly and build a library preload accordingly. specVersion: '0.1' metadata: name: openui5-parallax type: library Note: There is additional documentation available on GitHub if you like to do something fancy or need special treatment for your lib. For example, in my libary openui5-parallax i had to define a shim for loading the thirdparty library resources properly. Try the standard way first, if it does not work for your thirdparty dependencies, have a look at my library as a reference. prepare.js Add a “prepare.js” file to your root folder – this will enable the new way of loading the UI5Lab browser and fill the libraries.json file automatically with your library namespace once after installation. var fs = require('fs-extra'); // copy ui5lab browser to the right location for local testing fs.copySync('./node_modules/ui5lab-browser/dist/', './test/ui5lab/browser'); // read library namespace from package.json var oPackage = require('./package.json'); var sNamespace = oPackage.ui5lab.namespace || "ui5lab.library"; // add library namespace to browser library list var sBrowserLibraryFile = './test/ui5lab/browser/libraries.json'; fs.readFile(sBrowserLibraryFile, 'utf8', function (err, data) { if (err) { return console.log(err); } var result = data.replace(/\[((\r)?\n\t)*\]/m, '[\r\n\t\t"' + sNamespace + '"\r\n\t]'); fs.writeFile(sBrowserLibraryFile, result, 'utf8', function (err) { if (err) return console.log(err); }); }); .gitignore Add the line to ignore reference to the UI5Lab browser in your project. If you deliver your library via npm, we recommend to also ignore the dist folder to not clutter your GitHub repo with duplicate files: # only for testing, these resources should not be checked in test/ui5lab/browser/ .npmignore Add the lines to not accidentially publish the browser to your npm module # only for testing, these resources should not be delivered test/ui5lab/browser/ dist/test-resources/ui5lab/browser/ 3. Test the new setup and check your resource paths - Delete the node_modules folder and the package-lock.json if they exist - Run “npm install” to get all new modules installed - Run “npm start” A browser should open with the root folder of your library. Be sure to check the library is shown in the local UI5Lab browser and the samples are working fine by checking all entry points Browser: Sample: index.json Make sure this file is located in a folder named as your library namespace, in my case “test-resources/openui5/parallax”, otherwise the library cannot be displayed in the browser. Later on, we plan to get rid of this metadata file and switch complelety to package.json – but thats another story. ParallaxScene.html Make sure the base href is correct. By default, this is 4 times up the folder tree, but it could be different in your library depending on your namespace. This is essential as the central UI5Lab browser switches your samples to load UI5 from the CDN and defines an additional resource path for your library on the fly. If you don’t define a proper <base href=”…”>, the sample will fail in the UI5Lab-central project. <!DOCTYPE HTML> <html> <head> <meta http- <meta charset="UTF-8"> <base href="../../../../"> <title>Sample - openui5.parallax.ParallaxScene</title> <script id="sap-ui-bootstrap" src="../../../../resources/sap-ui-core.js" data- </script> 4. Publish the new library version Increase version in the package.json (1.0.x to signalize that you switched to the new tooling and conventions) and run the build on your library to create a library preload and minified files in the “dist” folder ui5 build Publish your migrated library to npm npm publish Note: if you wish to check what will be published, run “npm pack” and check the created archive first. Don’t forget deleting the generated tgz file before publishing, otherwise the tgz file will also be published 🙂 Push your changes to GitHub. If you chose to not publish to npm don’t forget to also push the “dist” folder so that the optimized resources can be loaded in the UI5Lab-central project. 5. Update the library dependency in UI5Lab-central If you publish to npm, you need to edit the package.json in the project UI5Lab-central and update your library to your new version. If you publish to GitHub, just let us know or wait until the next build runs. package.json "dependencies": { ... "openui5-parallax": "^1.0.0" }, If you want to make sure everything works fine, you can test all changes locally by following the instructions in the README.md in UI5Lab-central. That’s it! Create a pull request, sit back and relax while we review your change. I hope this guide will help migrating your libraries. If you have an questions, hop onto Slack channel #ui5lab and let us know. If you would rather see the delta in the code, check these three commits: Have fun! Michael Awesome Michael! I’m glad I could contribute to the project with some suggestions and pull requests. Keep pushing the community forward! Very nice….good to see we are now trying to keep things simple and what mostly world wide web is following. My experience in Dealing with bower/grunt has always been not good sadly. #npm is something which i have always liked. Thanks Nabheet
https://blogs.sap.com/2018/11/02/migrate-your-ui5lab-library-to-the-new-ui5-tooling/
CC-MAIN-2020-50
refinedweb
1,507
65.52
September was our best month yet We've been growing pretty consistently since the site launched in 2005, but September was notable in that we smashed a number of our records. We had the greatest number of people come to the site (5.6 M), the greatest number of people return (2.9 M), the greatest number of new pro memberships, and our first and second highest traffic days where over a quarter of a million people visited Instructables in a single day. We didn't break any records for new Instructables (the last month of the Craftsman contest from last year still holds that one), but September wasn't too far behind. And all this in a month with only 30 days! October is my favorite month. Here's to it bringing in some amazing costumes and smashing even more records. October is my favorite month. Here's to it bringing in some amazing costumes and smashing even more records. Any chance you (or Ed, or Claude, or somebody) could add some time history plots so we can put those numbers into context? WE WANT GRAPHS (That's as loud as I get without forking over any dough) That is nothing but good news!
http://www.instructables.com/community/September-was-our-best-month-yet/
CC-MAIN-2017-47
refinedweb
205
80.21
The standard way for a parent component to interact with its child elements is via props, e.g., to modify a child, you’d re-render it with new props. That’s not news. However, what happens when you need to access DOM nodes or React elements created in the render method of your components? Props don’t exactly help you out here. Well, this is where refs come in handy. Jump ahead: - When to use refs - Don’t overuse refs - Creating refs - Accessing refs - Exposing DOM refs to parent components - Callback refs - Legacy API: String refs When to use refs The best use cases for refs arise when you’re trying to perform imperative actions such as: - Managing focus, text selection, or media playback - Working with imperative animation - Integrating with third-party DOM libraries To build more consistent React apps, you should avoid using refs for anything that can be done declaratively, i.e., via the standard React data flow with state and props. For example, instead of exposing open() and close() methods on a Modal component, pass an isOpen prop and have that managed internally via React state. Don’t overuse refs When you first encounter refs or are new to React, you may fall into the trap of thinking refs are an easy way to “get things done” in your app. They certainly fit the imperative model you may be used to working with. If you find yourself thinking this, take a moment to re-evaluate how such data flow could be resolved via the typical React data flow i.e via state and props. Oftentimes, the problem at hand may be solved by just lifting state to a parent component higher in the component tree and passing that state value down to the needed component/element. Check out our tutorial on how to lift a component’s state using React Suspense. N.B., the next sections show examples of how to work with Refs. These examples use React.createRef, the API introduced in React 16.3. If you’re using an earlier version of React, see the section below on using callback refs. Creating refs There are two steps to creating ref objects. 1.) Create a ref object using React.createRef: class App extends React.Component { // see this this.myRef = React.createRef() } You create a ref by invoking the createRef object and typically assign it to an instance property, e.g., this.myRef as seen in the example above. This is done so that the ref can be referenced throughout the component. 2.) Reference the created ref in the render method: class App extends React.Component { myRef = React.createRef() render() { //look here return <div ref={this.myRef}>I am a div</div> } } After creating the ref object, you pass it on to the required element via the special ref attribute. Accessing refs After passing a ref to an element in the component render method, a reference to the DOM node becomes accessible, as seen below: const domNode = this.myRef.current Note that the current property is referenced on the created ref object, where this.myRef represents the ref passed to a DOM node via the ref attribute. Look at the code block above. What value does the variable domNode contain? Well, that depends on the type of node the ref attribute is passed to — it’s always different. Here are the different options: 1.) When the ref attribute is passed to an HTML element, the ref object receives the underlying DOM element as its current property. //render <div ref={this.myRef} /> //node contains HTMLElement (div) const node = this.myRef.current 2.) When the ref attribute is passed to a custom class component, the ref object receives the mounted instance of the component. //render <MyClassComponent ref={this.myRef} /> //node contains MyClassComponent class instance const node = this.myRef.current 3.) Refs can’t be passed to a function component because they don’t have instances. //render: don't do this. <MyFunctionalComponent ref={this.myRef} The next sections will demonstrate examples of the aforementioned node types. Adding a ref to a DOM element Consider an example of focusing a text input on clicking a button: To do this, create a ref object that holds the text input DOM node and access the DOM API to focus the input, as show below: class App extends React.Component { // create a ref to hold the textInput DOM element textInput = React.createRef(); focusTextInput = () => { // get the input dom node from the ref object const inputDOMNode = this.textInput.current; // Use the browser imperative api: call the focus method inputDOMNode.focus(); }; render() { return ( <div> {/* pass a ref attribute to the input element */} <input type="text" ref={this.textInput} /> <input type="button" value="Focus input" onClick={this.focusTextInput} /> </div> ); } } Note that when the component mounts, React assigns the current property of the ref object with the input DOM element. When the component is unmounted, this is assigned to null. The ref updates are also guaranteed to happen before the componentDidMount or componentDidUpdate lifecycle methods. See a demo below. Adding a ref to a class component What if we decided to refactor the previous example to pass a ref attribute to a class component, e.g., MyTextInput? How much changes? Take a look below: class CustomTextInput extends React.Component { myInputRef = React.createRef(); focusTextInput = () => { // get dom node from ref props const inputDOMNode = this.myInputRef.current; inputDOMNode.focus(); }; render() { // pass ref on from props return <input type="text" ref={this.myInputRef} />; } } We have a class component, CustomTextInput, with a focusTextInput function. Note that the CustomTextInput component doesn’t invoke this function but creates and accesses the text input DOM node. So, if you wanted to trigger this function handler from another component, e.g., to focus the input on mount or when a button is clicked, the following works perfectly: class App extends React.Component { // create a ref to hold the textInput DOM element textInputClassInstance = React.createRef(); focus = () => { // call the child handler // this is possible because the ref object received the child class instance this.textInputClassInstance.current.focusTextInput(); }; componentDidMount() { this.focus(); } render() { return ( <div> {/* pass a ref attribute to the class component. Ref receives the class instance */} <CustomTextInput ref={this.textInputClassInstance} /> <input type="button" value="Focus input" onClick={this.focus} /> </div> ); } } See a demo below: Refs and function components Remember that you should not use the ref attribute on function components because they don’t have instances: class Parent extends React.Component { textInput = React.createRef(); render() { // This will NOT work! return ( <MyFunctionComponent ref={this.textInput} /> ); } } If you want to pass a ref to a function component, consider using forwardRef, perhaps in conjunction with useImperativeHandle. You could also consider converting said component to a class component. See useImperativeHandle in action in our React Hooks reference guide. As stated earlier, you shouldn’t pass a ref to a functional component, but you can create and use refs within functional components, as seen below: function CustomTextInput(props) { // create ref object using useRef const textInput = useRef(null); function handleClick() { textInput.current.focus(); } return ( <div> <input type="text" ref={textInput} /> <input type="button" value="Focus the text input" onClick={handleClick} /> </div> ); } Exposing DOM refs to parent components Occasionally, you may want to access a child’s DOM node from a parent component. This is far from ideal as it breaks component encapsulation. However, there are legitimate use cases that could warrant this technique, e.g., triggering focus or measuring the size or position of a child DOM node. So, how should you approach this problem? Firstly, you could add a ref to the child component, as we did in an earlier example. This is not a perfect solution since you get a component instance and not the child DOM node. Also, this doesn’t work with functional components. With React 16.3 or higher, consider using ref forwarding for such cases. Ref forwarding allows you to expose a child’s DOM node to a parent component. Ready to go deeper? Read our advanced tutorial on forwardRef. With React 16.2 or lower, ref forwarding isn’t supported. So what do you do? Consider explicitly passing a ref as a differently named prop. function CustomTextInput(props) { return ( <div> {/* Pass ref to input node*/} <input ref={props.inputRef} /> </div> ); } class Parent extends React.Component { // create ref object inputElement = React.createRef(); render() { return ( {/* pass ref object as a differently named prop: inputRef*/} <CustomTextInput inputRef={this.inputElement} /> ); } } Note that the approach above requires some code to be added to the child component. In more stringent use cases, you may have no control over the child component implementation. Your best bet is to use findDOMNode(), but note that this is discouraged and deprecated in StrictMode. It is highly recommended to avoid exposing DOM nodes if you don’t need do. Callback refs In all the examples we’ve discussed, the ref object has been directly passed on via the ref attribute. However, React supports another way to set refs; these are called callback refs. Instead of passing a ref object via the ref attribute: myRefObject = React.createRef() ... <div ref={myRefObject} /> Pass a function: myRefCallback = (node) => {} ... <div ref={myRefCallback} /> React will pass the component instance or DOM element as an argument to the function. This can then be stored and accessed elsewhere! Callback refs give you a lot more control over actions to be performed when refs are set and unset. Remember that the ref callback is called with the DOM element when the component mounts, and it is called with null when it unmounts. Consider the example below, which creates a ref callback and stores a DOM node in an instance property — a commonly implemented pattern. class App extends React.Component { //create instance variable textInput = null; setTextInputRef = (element) => { // save DOM element received in the instance variable: textInput this.textInput = element; }; return ( <div> <input type="text" ref={this.setTextInputRef} /> <input type="button" value="Focus input" onClick={this.focusTextInput} /> </div> ); } } See a demo below: As with object refs, you can pass callback refs between components, e.g., parent and child. See the example below: function CustomTextInput(props) { return ( <div> {/*child component passes ref to callback*/} <input ref={props.inputRef} /> </div> ); } class Parent extends React.Component { render() { return ( {/* pass a callback ref to the child component*/} <CustomTextInput inputRef={el => this.inputElement = el} /> ); } } In the example above, the Parent component holds the DOM value corresponding to the child text input in the instance variable inputElement. Note that the ref callback is passed to the child component, CustomTextInput via a differently named prop: inputRef. CustomTextInput then passes the callback on to the ref attribute set on the <input> element. Caveats with callback refs If you define a ref callback as an inline function, a new instance of the function will be created with each render. The callback is then called twice: first will null, and then again with the DOM element. Essentially, React has to clear the old ref and set up a new one. This shouldn’t matter so much in most cases, but to avoid this, define the ref method as a bound method on the class (for class components). Legacy API: String refs It is worth mentioning that an older ref API supported ref attributes as plain strings, e.g., myTextInput and the DOM node accessed as this.refs.myTextInput. Don’t do this anymore. String refs are now considered legacy, they have some issues, and they are likely going to be removed in a future release of React..
https://blog.logrocket.com/react-reference-guide-refs-dom/
CC-MAIN-2021-04
refinedweb
1,907
58.08
So here is what I have: I have a shared Calendar, called "Site Visits". That I have written a macro to add appointment items to the calendar. The macro is ran via a button on the toolbar, and it will ultimately be installed on several employees' computers. The purpose of the code is to create a User Form from which the user can select a number of employees, input a date and site(s), and then Log the trip. When the user logs the trip it sends out a group email to everyone and saves the date as an appointment to the shared calendar. Currently, the code runs like a dream on my system. However, when I share the calendar with another user and import the module and useform to his system it bugs out when setting the Calendar Folder if I specify the calendar using the .Folders(CALENDAR NAME AS STRING) function. When I delete this identifier, it will set the calendar but when I try to add the appointment item to the calendar using the Object.Items.Add("IPM.Appointments"), it bugs out. Any help would be appreciated. Private Sub CommandButton3_Click() Dim objOutlook As Outlook.Application Dim objOutlookMsg As Outlook.MailItem Dim objOutlookRecip As Outlook.Recipient Dim objOutlookAttach As Outlook.Attachment Dim grp As Outlook.DistListItem Dim myolApp As Outlook.Application Dim mynamespace As NameSpace Dim mycalendar As Object Dim myrecipient As Outlook.Recipient Set myolApp = CreateObject("Outlook.Application") Set mynamespace = myolApp.GetNamespace("MAPI") Set myrecipient = mynamespace.CreateRecipient("Site Visits") myrecipient.Resolve Set mycalendar = mynamespace.GetSharedDefaultFolder(myrecipient, Outlook.OlDefaultFolders.olFolderCalendar).Folders("Site Visits") Set olapt = mycalendar.Items.Add(olAppointmentItem) I've only included the code that pertains to the issue. If you need to see more code to get a better understanding of what is going on please let me know. Cheers. Hello, > it will ultimately be installed on several employees' computers. VBA macros are not designed for distributing on multiple PCs. I'd recommend developing a COM add-in instead. See Managed add-ins built with Visual Studio (VSTO) for more information in MSDN. The Outlook Solutions section provides all the required walkthroughs. > However, when I share the calendar with another user and import the module and useform to his system it bugs out when setting the Calendar Folder Did you try to debug the code? What error message/code do you get? I'd recommend breaking the chain of calls, declare each property and method call on a separate line. Thus, you will be able to find a problematic property or method easily. You can try to use OpenSharedFolder method of the Namespace class. Do you get a valid folder using the GetSharedDefaultFolder method of the Namespace class? Did you check out the return value? > it bugs out Could you please be more specific? Error code/message? Does the user have required permissions for adding new appointments?
https://social.msdn.microsoft.com/Forums/en-US/5bd9c4ef-38de-4a46-88e9-55bc530f5c06/add-appointment-item-to-shared-calendar-from-multiple-users?forum=outlookdev
CC-MAIN-2020-50
refinedweb
481
51.75
I'm writing a simple program to calculate power of 2. A user would enter the number of times they want to calculate the power of 2, let say the user enter 4, my program would returns 2,4,8,16.Here is the code import java.util.Scanner; public class PowersOf2 { public static void main(String[] args) { int numPowersOf2; //How many powers of 2 to compute int nextPowerOf2 = 1; //Current power of 2 int exponent = 0; //Exponent for current power of 2 -- this //also serves as a counter for the loop Scanner scan = new Scanner(System.in); System.out.println("How many powers of 2 would you like printed?"); numPowersOf2 = scan.nextInt(); //print a message saying how many powers of 2 will be printed //initialize exponent -- the first thing printed is 2 to the what? System.out.println("Here are the first " + numPowersOf2 + " power of 2"); while (exponent<numPowersOf2) { //print out current power of 2 nextPowerOf2=nextPowerOf2*2; exponent++; System.out.println(nextPowerOf2); //find next power of 2 -- how do you get this from the last one? //increment exponent } } } Just print nextPowerOf2 before you change it. while (exponent<numPowersOf2) { System.out.println(nextPowerOf2); nextPowerOf2=nextPowerOf2*2; exponent++; }
https://codedump.io/share/q4RQ8UgklZfm/1/program-calculating-power-of-2
CC-MAIN-2017-13
refinedweb
198
55.84
I have a small question about how to set stop condition for "while". For example, I have a list and in "while" I treat this list in certain condition (add an element or remove an element according to different condition), when the list doesn't change. That is to say when the list of N's == list of N-1's, "while" stops. How to write the stop condition for this question? Thanks a lot in advance! I put the code here but I'm sorry if the code is not easy to understand. - Code: Select all from numpy import matrix import numpy as np names = [ "Bloemfontein", "Cape Town", "Durban", "East London", "George", "Johannesburg", "Kimberley", "Mmabatho", "Graskop", "Oudtshoorn", "Port Elizabeth", "Umtata"] #distance matrix d = [[]] lit=[q for q in range(len(names))] lt=[[q] for q in range(len(names))] #Merge the names by average linkage min_distance = float('inf') min_pair = (-1, -1) #Find the firt two nearest neighbor to merge for i in range(len(lit)): for j in range(i+1, len(lit)): distance = d[lit[i]][lit[j]] if distance < min_distance: min_distance = distance min_pair = (i, j) i, j = min_pair if min_distance>500: print 'distance value is too big' else: lt[i]=lt[i]+lt[j] del lt[j] #del lit[j] while len(lt)>1: min_distance3 = float('inf') min_pair3 = (-1, -1) #find the nearest neighbors to each group for f in range(len(lt)): for g in range(f+1,len(lt)): dis=0 if len(lt[f])!=0 and len(lt[g])!=0: for n in range(len(lt[f])): for m in range(len(lt[g])): dis = dis+d[lt[f][n]][lt[g][m]] mind=dis/(len(lt[f])*len(lt[g])) if mind < min_distance3: min_distance3 = mind min_pair3 = (f, g) r,t=min_pair3 md=min_distance3 #if the distance is smaller than the predefined value if md<=500: lt[r]=lt[r]+lt[t] del lt[t] print lt,len(lt) Here, I put condition "len(lt)>1" to "while", but as the condition "if md<=500" is not always satisfied (but if I set 100000 which is bigger than any distance value in the matrix instead of 500, the condition works), the list "lt" becomes stable after some time. So the condition "len(lt)>1" does not work. Here, the real stop condition is that, "if lt of N's == lt of N-1's", but I don't know how to do this.
http://www.python-forum.org/viewtopic.php?p=6875
CC-MAIN-2015-32
refinedweb
409
58.55
If you're like me, early on in the OO language world, you didn't hear much about OO concepts and how they apply to good application design. It may be why nearly all early, large OO applications developed in the late ‘90s to early 2000s were poorly designed, even if using any design conventions at all. If these apps haven't been “reworked” to utilize modern-day web-app concepts, they are difficult to work with and hard to maintain, which means just keeping things at the status quo. A well-designed app can grow in usage and is always easy to use and easy to maintain and extend. For businesses creating web-based applications, this means increased efficiency and getting faster to market with new features. Overall, it means saving money and growing your business. In this article, you see how OO concepts apply to good application design. If you are new to these concepts, hopefully you can appreciate how effective they are in understanding how these concepts make OO languages more popular over procedural languages. If you are already familiar with these concepts, maybe there will be some new things you didn't know about them. Core OO Concepts Encapsulation The idea behind this concept is that your OO classes are essentially a black box. Users of the class should not know how it works internally and neither should other classes. An example would be using a Calculator class. A user simply types in the equation and then gets the answer. How the calculator arrived at the answer is hidden from the user. (Although the user in this case likely has a good idea.) Also important is that other classes that use the Calculator class do not know how the answer was obtained. The calculator's internal logic is not visible and as such, the class is encapsulated. To encapsulate functionality of a class in an OO language, an Interface class is used. You can think of an Interface class as the declarations to a set of methods in a class. The Interface is all the user and other classes have access to. The actual implementation of the class is hidden. For example, the interface to a Calculator class could be add(X, Y) (returns a String) subtract (X, Y) (returns a String) divide(X,Y) (returns a String) multiply(X,Y) (returns a String) To use the interface, another method simply calls the operation with some numbers, that is, add(4,5). The answer is returned as a string to the class that invoked the interface: ICalculator calculator = new ICalculator(); String result = calculator.add(4,5); Something else the interface does is enable the functionality of a class to be changed without having to change this code anywhere else. The methods that use the interface do not need to be changed in any way. This is great for testing with different implementations or changing and extending functionality. Another good reason for using interfaces is that they are contracts on how a method should be implemented. By specifying what method declarations can be used in an interface, this determines how the method should be coded. A good example of interfaces acting as contracts is the Java specification. Java specifications (that is, JPAs) define a contract as to what methods can be coded and how (what to pass in as variables, and so on). Interfaces are an important part of many popular design patterns. Are there any disadvantages to using interfaces? Yes, but very few. A disadvantage to using an interface is that users of the interface must implement all methods defined in the interface. Although this enforces the contract part of the interface, many methods an interface defines are not necessary. For example, large business applications often have large interfaces used by all clients; although, only some of the operations apply or are relevant. This leads you to the Interface Segregation Principal. The principal states that any interfaces that are large and do not apply to all clients should be broken down into smaller interfaces. Breaking large interfaces down to smaller interfaces ensures that only some interfaces will be used and not others, depending on their relevance for users of the interface. These smaller interfaces are often referred to as role interfaces. Inheritance Probably the most discussed OO concept is Inheritance. Several design patterns also use inheritance. The concept of Inheritance is that one class inherits the methods of another class. Often, the classes inherited are a parent class of an object. For example, a Circle class would inherit the parent class methods of a class or interface called Shape. Circle would then override the methods defined in Shape. In Java, the code to inherit an interface would look like class Circle implements Shape If Shape is an interface, then other objects who share the same attributes (that is, color, height, and width) can also use Shape. For example, Square could also implement (inherit) the attributes Shape provides. The advantage to inheritance is that you can abstract out common attributes that are similar to a set of objects. In this example, the Shape class has methods and attributes that other objects need to implement, along with their own methods. A Circle would implement method operations and attributes that are exclusive only to a circle (that is, radius), along with those inherited from Shape. Can a class inherit multiple other classes? Yes, though in Java, you can do so only with interfaces and abstract classes. With Java, by extending multiple interfaces, you are essentially doing the same thing as inheriting from multiple classes. The caveat, though, is that with interfaces, you are required to implement all method declarations of said interfaces. With abstract classes, however, you do not have to implement all methods as with interfaces. You can think of abstract classes as partial classes. The advantage to inheriting from abstract classes is that you do not have to implement/override all methods of the abstract class. There are three ways for subclasses to inherit and override/implement methods from an abstract (parent) class: - If a base class method is abstract, the subclass can override this method. - If a base class method has a constructor with a concrete implementation, a subclass must override this method of the base class. - If a base class has a public, static, and final method, no subclass can (or should) override this method of this base class. Composition Before wrapping up inheritance, you should also know there are basically two ways a subclass can inherit from a parent class. Composition is the term used to describe the relationship between the parent and child objects (or base and subclass). There are two types of compositions: association and aggregation. An aggregation composition is an object composed of other objects forming a complex component. An example would be a car. A car has an engine, pistons, and so on. The relationship between the car and its parts is an aggregation. An association composition is a relationship that defines a service for the child object provided by the parent object. For example, a car has a garage. The garage is the service component because it complements the car but is not part of a car. Polymorphism Polymorphism means that an interface or abstract class has the capacity to take on different forms by representing different objects when accessed by different methods. A good example of polymorphism in Java is your factory classes. A factory class returns different object types based on what was passed into the factory from a calling method. A simple example of this would be an abstract class called Car acting as the base class used by a factory class: public abstract class Car{ public abstract String make(); } Some subclasses of Car could be Oldsmobile and Tesla: public class Oldsmobile extends Car { @Override public String make() { return "Oldsmobile" } } public class Tesla extends Car { @Override public String make() { return "Tesla" } } You can get different responses using the same abstract class to determine the vehicle make when passing in an attribute specific for that make to a factory class: public class CarFactory { public Car getCar(String type) { if ("electric".equals(type)) { return new Tesla(); if ("cutless".equals(type)) { return new Oldsmobile(); } } } Testing this factory with a driver class, you have public class Demo { public static void main(String[] args) { CarFactory carFactory = new CarFactory(); Car c1 = carFactory.getCar("electric"); System.out.println("c1 Make: " + c1.make()); Car c2 = carFactory.getCar("cutless"); System.out.println("c2 Make: " + c2.make()); } } By using the same abstract class and returning different types, the definition for polymorphism is supported by a factory class. You could easily replace the abstract class with an interface. Conclusion This article was a primer for those who may need a refresher on OO concepts to aid in better application design. By revisiting or learning these concepts for the first time, you can benefit by providing more robust applications while reducing maintenance. You also learned how factory classes can be good examples of polymorphism.
http://www.informit.com/articles/article.aspx?p=2102828
CC-MAIN-2019-51
refinedweb
1,506
53.92
snd_pcm_playback_flush() Play out all pending data in a PCM playback channel's queue and stop the channel Synopsis: #include <sys/asoundlib.h> int snd_pcm_playback_flush( snd_pcm_t *handle); Arguments: - handle - The handle for the PCM device, which you must have opened by calling snd_pcm_open() or snd_pcm_open_preferred() . Library: libasound.so Use the -l asound option to qcc to link against this library. Description: The snd_pcm_playback_flush() function blocks until all unprocessed data in the driver queue has been played._PLAYBACK). Make sure that you don't mix and match plugin- and nonplugin-aware functions in your application, or you may get undefined behavior and misleading results. Returns: Zero on success, or a negative error code. Errors: - -EBADFD - The pcm device state isn't ready. - -EINTR - The driver isn't processing the data (Internal Error). - -EINVAL - Invalid handle. - -EIO - An invalid channel was specified, or the data wasn't all flushed. Classification: QNX Neutrino
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.audio/topic/libs/snd_pcm_playback_flush.html
CC-MAIN-2020-34
refinedweb
149
58.69
Shouldn't you use all_month during the review by your seniors? I was told, so I was curious and examined it. Since it was defined in DateAndTime :: Calculations of ActiveSupport, read the surrounding methods as well. I took a look at the internal implementation of all_month. It's an internal implementation of rails / rails, but you can see it in the readable code. def all_month beginning_of_month..end_of_month end ref: activesupport/lib/active_support/core_ext/date_and_time/calculations.rb Looking inside the same file, there were all_xxx methods other than all_month. There are many methods that can be used just by changing the target range. There is a method to judge whether it is the future or the past! You can use what is rails / rails without redefining it! !! Noticed. Date.yesterday.past? => true Date.yesterday.future? => false Date.tomorrow.future? => true Date.tomorrow.past? => false next_xxx and prev_xxx are defined. prev_xxx is aliased to last_xxx. Date.current => Sun, 11 Jul 2020 Date.current.next_week => Mon, 13 Jul 2020 Date.current.prev_week => Mon, 29 Jun 2020 Date.current.last_week => Mon, 29 Jun 2020 Recommended Posts
https://linuxtut.com/i-was-curious-about-all_month-and-read-activesupport-dateandtime-calculations-d2d84/
CC-MAIN-2020-50
refinedweb
182
53.58
Ok so, i'm currently trying to make a application that performs similar to a game of hangman. i currently have it set up to have the leader of the game choose a word, and when he/she types it in it stores it as a variable, and also stores the length to then tell the players later. this is what i have right now: what i have works just as i want it, but now i need to get to a point to where i can look directly to the string defined and find individual characters.what i have works just as i want it, but now i need to get to a point to where i can look directly to the string defined and find individual characters.Code:#include <iostream> #include <cstring> using namespace std; int main() { int strikes; // to tell the plyers how many wrong guesses int wordlength; //to tell plyers how long the word is char guess; //what the players will be defining to play the game char gameword[30]; //the designated word from the leader cout<<"For Game Leader:\nEnter Game Word Here:"; cin.getline(gameword, 25); //leader of game enters the word of the game here wordlength=strlen ( gameword ); //stores the length of the word into "wordlength" if( wordlength>25){ cout<<"The Word You Entered Was Too Long For This Game\n"; cout<<"Please Try Another Word."; //checks to see if word will fit in the string allowed } else { cout<<"CONGRATULATIONS! You Have Chosen A Valid Word\n"; cout<<"Now You Are Ready To Play The Game\n\n\n"; //tells the leader he/she has chosen a valid word, and it's time to play } cout<<"For Players:\nThe Game Word Has A Length Of "<<wordlength; //reveals the length of the game word to the players cin.get(); } as the title says, i'm fairly new to this. i understand the logic, of most things. SO am i able to analyze the string made by the leader of the game and have, when the player types a single character as their guess to tell the player tht yes or no the letter is there, and there if the letter is there where as in: leader chooses hangman as the gameword player chooses a say yes, and position 2 then player chooses x i say no and give them 1 strike(u have certain number left)
https://cboard.cprogramming.com/cplusplus-programming/102920-im-new-=.html
CC-MAIN-2017-30
refinedweb
402
62.35
I'm not an expert on singletons, and I only have one bit of code that uses it. It's part of a debugging system from one of NVIDIA's SDKs, and I find their code to be useful in my engine. Now, this singleton class works fine in Windows, but not for UNIX based OSes like MacOSX. This is what it looks like: #include <cassert> template <typename T> class Singleton { static T* ms_Singleton; public: Singleton( void ) { assert( !ms_Singleton ); int offset = *(int*)(T*)1 - *(int*)(Singleton <T>*)(T*)1; ms_Singleton = (T*)(*(int*)this + offset); } virtual ~Singleton( void ) { assert( ms_Singleton ); ms_Singleton = 0; } static T& GetSingleton( void ) { assert( ms_Singleton ); return ( *ms_Singleton ); } static T* GetSingletonPtr( void ) { return ( ms_Singleton ); } }; template <typename T> T* Singleton <T>::ms_Singleton = 0; So, it crashes in the constructor, starting at the second line. The windows version had (int) instead of *(int*). I just changed that so XCode would stop complaining about it. It could be something simple. Right now, I'm so tired, I can barely stay awake. So, I was hoping someone who isn't as braindead as I am right now could point me in the right direction. Thanks. Shogun.
http://www.gamedev.net/topic/659087-this-singleton-keeps-crashing/
CC-MAIN-2016-30
refinedweb
193
53.21
25 July 2012 06:32 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> Sales for the first six months of the year surged 64.6% year on year to Swfr1.96bn, while earnings before interest, tax, depreciation and amortisation (EBITDA) increased by 23.4%, the company said in a statement. But EBITDA margins for the period at 16.6% was 5.6 percentage points lower compared with the same period in 2011, it said. Lonza’s operating profit for January-June 2012 increased 23.5% to Swfr168m, with margins at 8.6%, down from 11.4% in the previous corresponding period, the company said. Taking out one-off items in the financial results, Lonza said its core net profit increased 16.8% year on year to Swfr125m, with core operating profit up 39.2% at Swfr199m. The company hopes to deliver 10-15% growth in operating profit this year. “However, it goes without saying that the volatility of the current macroeconomic situation in some parts of the world can always have a negative effect on all strategic and operational efforts,” Lonza said. ($1 = Swfr0.9
http://www.icis.com/Articles/2012/07/25/9580724/swiss-lonza-h1-net-profit-slips-3.1-on-weak-margins.html
CC-MAIN-2014-35
refinedweb
184
69.99
bin/kc.[sh|bat] export --help In this guide, you are going to understand the different approaches for importing and exporting realms using JSON files. To export a realm, you can use the export command. Your Keycloak server instance must not be started when invoking this command. bin/kc.[sh|bat] export --help To export a realm to a directory, you can use the --dir <dir> option. bin/kc.[sh|bat] export --dir <dir> When exporting realms to a directory, the server is going to create separate files for each realm being exported. You are also able to configure how users are going to be exported by setting the --users <strategy> option. The values available for this option are: different_files: Users export into different json files, depending on the maximum number of users per file set by --users-per-file. This is the default value. skip: Skips exporting users. realm_file: Users will be exported to the same file as the realm settings. For a realm named "foo", this would be "foo-realm.json" with realm data and users. same_file: All users are exported to one explicit file. So you will get two json files for a realm, one with realm data and one with users. If you are exporting users using the different_files strategy, you can set how many users per file you want by setting the --users-per-file option. The default value is 50. bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100 To export a realm to a file, you can use the --file <file> option. bin/kc.[sh|bat] export --file <file> When exporting realms to a file, the server is going to use the same file to store the configuration for all the realms being exported. If you do not specify a specific realm to export, all realms are exported. To export a single realm, you can use the --realm option as follows: bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm To import a realm, you can use the import command. Your Keycloak server instance must not be started when invoking this command. bin/kc.[sh|bat] import --help After exporting a realm to a directory, you can use the --dir <dir> option to import the realm back to the server as follows: bin/kc.[sh|bat] import --dir <dir> When importing realms using the import command, you are able to set if existing realms should be skipped, or if they should be overridden with the new configuration. For that, you can set the --override option as follows: bin/kc.[sh|bat] import --dir <dir> --override false By default, the --override option is set to true so that realms are always overridden with the new configuration. To import a realm previously exported in a single file, you can use the --file <file> option as follows: bin/kc.[sh|bat] import --file <file> You are also able to import realms when the server is starting by using the --import-realm option. bin/kc.[sh|bat] start --import-realm When you set the --import-realm option, the server is going to try to import any realm configuration file from the data/import directory. Each file in this directory should contain a single realm configuration. If a realm already exists in the server, the import operation is skipped. When importing a realm, you are able to use placeholders to resolve values from environment variables for any realm configuration. { "realm": "${MY_REALM_NAME}", "enabled": true, ... } In the example above, the value set to the MY_REALM_NAME environment variable is going to be used to set the realm property.
https://www.keycloak.org/server/importExport
CC-MAIN-2022-21
refinedweb
603
63.29
Hibernate Hard Facts In the third article of this series, I will show how to tweak Hibernate so as to convert any database data types to and from any Java type and thus decouple your database model from your object model. Custom type mapping Hibernate is a very powerful asset in any application needing to persist data. As an example, I was tasked this week to generate the Object-Oriented model for a legacy database. It seemed simple enough, at first glance. Then I discovered a big legacy design flaw: for historical reasons, dates were stored as number in the YYYYMMDD format. For example, 11th december 2009 was 20091211. I couldn’t or rather wouldn’t change the database and yet, I didn’t want to pollute my neat little OO model with Integer instead of java.util.Date. After browsing through Hibernate documentation, I was confident it made this possible in a very simple way. Creating a custom type mapper The first step, that is also the biggest, consist in creating a custom type. This type is not a real “type” but a mapper that knows how to convert from the database type to the Java type and vice-versa. In order to do so, all you have is create a class that implements org.hibernate.usertype.UserType. Let’s have a look at each implemented method in detail. The following method gives away what class will be returned at the end of read process. Since I want a Date. In my case, I read from a single numeric column, so I should return a single object array filled with Types.INTEGER. public int[] sqlTypes() { return new int[] {Types.INTEGER}; } This method will check whether returned class instances are immutable (like any normal Java types save primitive types and Strings) or mutable (like the rest). This is very important because if false is returned, the field using this custom type won’t be checked to see whether an update should be performed or not. It will be of course if the field is replaced, in all cases (mutable or immutable). Though there’s is a big controversy in the Java API, the Date is mutable, so the method should return true. public boolean isMutable() { return true; } I can’t guess how the following method is used but the API states: Return a deep copy of the persistent state, stopping at entities and at collections. It is not necessary to copy immutable objects, or null values, in which case it is safe to simply return the argument. Since we just said Date instances were mutable, we cannot just return the object but we have to return a clone instead: that’s made possible because Date’s clone() method is public. public Object deepCopy(Object value) throws HibernateException { return ((Date) value).clone(); } The next two methods do the real work to respectively read from and to the database. Notice how the API exposes ResultSet object to read from and PreparedStatement object to write to. public Object nullSafeGet(ResultSet rs, String[] names, Object owner) throws HibernateException, SQLException { Date result = null; if (!rs.wasNull()) { Integer integer = rs.getInt(names[0]); if (integer != null) { try { result = new SimpleDateFormat("yyyyMMdd").parse(String.valueOf(integer)); } catch (ParseException e) { throw new HibernateException(e); } } } return result; } public void nullSafeSet(PreparedStatement statement, Object value, int index) throws HibernateException, SQLException { if (value == null) { statement.setNull(index, Types.INTEGER); } else { Integer integer = Integer.valueOf(new SimpleDateFormat("yyyyMMdd").format((String) value)); statement.setInt(index, integer); } } The next two methods are implementations of equals() and hasCode() from a persistence point-of-view. public int hashCode(Object x) throws HibernateException { return x == null ? 0 : x.hashCode(); } public boolean equals(Object x, Object y) throws HibernateException { if (x == null) { return y == null; } return x.equals(y); } For equals(), since Date is mutable, we couldn’t just check for object equality since the same object could have been changed. The replace() method is used for merging purpose. It couldn’t be simpler. public Object replace(Object original, Object target, Object owner) throws HibernateException { Owner o = (Owner) owner; o.setDate(target); return ((Date) original).clone(); } My implementation of the replace() method is not reusable: both the owning type and the name of the setter method should be known, making reusing the custom type a bit hard. If I wished to reuse it, the method’s body would need to use the lang.reflect package and to make guesses about the method names used. Thus, the algorithm for creating a reusable user type would be along the lines of: - list all the methods that are setter and take the target class as an argument - if no method matches, throw an error - if a single method matches, call it with the target argument - if more than one method matches, call the associated getter to check which one returns the original object The next two methods are used in the caching process, respectively in serialization and deserialization. Since Date instances are serializable, it is easy to implement them. public Serializable disassemble(Object value) throws HibernateException return (Date) ((Date) value).clone(); } public Object assemble(Serializable cached, Object owner) throws HibernateException { return ((Date) cached).clone(); } Declare the type on entity Once the custom UserType is implemented, you need to make it accessible it for the entity. @TypeDef(name="dateInt", typeClass = DateIntegerType.class) public class Owner { ... } Use the type The last step is to annotate the field. @TypeDef(name="dateInt", typeClass = DateIntegerType.class) public class Owner { private Date date; @Type(type="dateInt") public Date getDate() { return date; } public void setDate(Date date) { this.date = date; } } You can download the sources for this article here. To go further: - Hibernate UserType’s Javadoc - Other UserType implementation examples (for Hibernate v2): note that some implementations are just plain wrong but it gives a good starting point (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) Gary Hirschhorn replied on Thu, 2010/02/18 - 6:23pm Thanks for the great article. We sometimes store our Java Dates as database type Long in order to keep the millisecond precision (MySql native date type only has precision to the second), so we modified the class appropriately and it worked great. We found one error though. In nullSafeGet(), you make the call to ResultSet.wasNull() before reading the column value. However, the JavaDoc API for wasNull() specifies "Note that you must first call one of the getXXX methods on a column to try to read its value and then call the method wasNull to see if the value read was SQL NULL." In our case, our column allows nulls but we were getting Longs of 0 instead. Simply moving the rs.getXXX() call before the wasNull() call solved the problem. Thanks again for a great article. Nicolas Frankel replied on Fri, 2010/02/19 - 2:32am in response to: Gary Hirschhorn
http://java.dzone.com/articles/hibernate-hard-facts
CC-MAIN-2014-23
refinedweb
1,154
55.24
Good debts have the power to make you rich, bad debts can ruin you Loans are like a double-edged sword. They can help you financially but they can also ruin you totally on the money front. We are told from the time we are young that it is a bad idea to be indebted to others for money — that “borrowing” is a bad word and we should try and manage within our means. At one level this is good advice but whether a loan has a positive outcome or a negative one, depends on how well you handle the situation. If you invest correctly, if you put your funds into the right place — by taking a loan — then the loan becomes an asset for you. Take on debts that are “good debts” Any time that you take a loan, it will have to be paid back with interest. So plan your finances carefully. Take on debts that are “good debts” and avoid “bad debts”. The good debts have the power to grow your money, if handled and invested properly — this type of loan will make you rich. The bad debts are money down the drain. When you buy on credit — this is also a type of loan that you are taking — you are deferring payment for your expenses. But if you can’t pay it back in time, you are charged a heavy penalty and this will end up decreasing your funds. If you borrow money, ensure that you can afford to pay it back — if not then do not borrow money. If money is invested and you are getting a good rate of interest on it, and a loan has a lower rate of interest for repayment, it makes sense for you to stay invested and take a loan for whatever else you need. If you take a loan for investment purposes — like buying a house, land, stocks and shares etc — these are instruments that will make you money in the long run by increasing in value and give you a regular return (rent, dividend etc). As long as your payout is less than your inflow — this type of loan can make your rich. Loans that will make you poorer Taking a loan for your daughter’s wedding or to go on an expensive holiday, or to buy your son a high-end luxury car, or to buy capital goods and jewelry that strictly speaking you do not need is not a good idea. These expenses have no returns so your loan here will just make you poorer because you have to pay back what you have borrowed. There is nothing intrinsically wrong in taking loans — if you can find the right channel for parking the funds that you have borrowed then you will gain from loan you have taken. If the value of what you using the loan for does not go up and / or does not generate income, then you should definitely not take a loan to buy it. But if you anticipate that it will and you know where and how to invest it, then go ahead and take that loan. And start your efforts at becoming rich. There is no magic to becoming rich with loans — all that you have to do is manage your loan sensibly so that you can make money from it and increase your net worth.
https://medium.com/@madaboutmoney/good-debts-have-the-power-to-make-you-rich-bad-debts-can-ruin-you-32a41b27777c
CC-MAIN-2018-26
refinedweb
566
72.09
The moc reads a C++ source file. If it finds one or more class declarations that contain the "Q_OBJECT" macro, it produces another C++ source file which contains the meta object code for this class. Among other things, meta object code is required for the signal/slot mechanism, runtime type information and the dynamic property system. The C++ source file generated by the moc must be compiled and linked with the implementation of the class (or it can be #included into the class' source file). Using the moc is introduced in chapter 7 of the Qt Tutorial. Chapter 7 includes a simple Makefile that uses the moc and of course source code that uses signals and slots. The moc is typically used with an input file containing class declarations like this skeleton: skeleton. to actually be used as sets. Another macro "Q_CLASSINFO" can be used to attach additional name/value-pairs to the classes' meta object: class MyClass : public QObject { Q_OBJECT Q_CLASSINFO( "Author", "Oscar Peterson") Q_CLASSINFO( "Status", "Very nice class") public: MyClass( QObject * parent=0, const char * name=0 ); ~MyClass(); }; The three concepts, signals and slots, properties and class informations, can be combined. The output produced by the moc must be compiled and linked, just as the other C++ code of your program; otherwise the building of your program will fail in the final link phase. By convention, this is done in one of the following two ways: Method A is the normal method. Method B can be used in cases where one for some reason wants the implementation file to be self-contained, or in cases where the Q_OBJECT class is implementation-internal and thus should not be visible in the header file. For anything but the simplest test programs, it is recommended to automate the running of the moc. By adding some rules to the Makefile of your program, make can take care of running moc when necessary and handling the moc output. We recommend using Trolltech's free makefile generation tool tmake for building your Makefiles. This tool recognizes both Method A and B style source files, and generates a Makefile that does all necessary moc handling. tmake is available from. If, on the other hand, you want to build of the following form: moc_NAME.cpp: NAME.h moc $< -o $@ You must also remember to add moc_NAME.cpp to your SOURCES (substitute your favorite name) variable and moc_NAME.o or moc_NAME.objto your OBJECTS variable. (While we prefer to name our C++ source files .cpp, the moc doesn't know that, to forgot to compile or #include the moc-generated C++ code, or (in the former case) include that object file in the link command. is a QObject.) You can not use virtual inheritance in the QObject branch of the inheritance tree. The following example shows one wrong and one correct class declaration: class Wrong : virtual public QObject, virtual public OtherClass { [...] }; class Right : public QObject, virtual public OtherClass { [...] }; This problem occurs if you are using multiple inheritance. If you reimplement a virtual function as a slot and that function was originally declared in a class that does not inherit QObject, your program may crash when a signal triggers the slot. (This only happens with some compilers.) The following example shows one wrong and two correct slot definitions. class BaseClass { [...] virtual void setValue( int ); }; class SubClass : public QObject, public BaseClass { [...] public slots: void setValue( int ); //virtual from BaseClass, error. void slotSetValue( int i ) { setValue(i); } //new function, ok. void setName( const char* ); // virtual from QObject, ok. }; (For those interested in C++ internals: The cause of this problem is that a slot is internally represented as a function pointer, and invoked on a QObject pointer. ) In most cases where you would consider that, we think inheritance is a better alternative. Here is an example of illegal syntax: class someClass : public QObject { Q_OBJECT [...] public slots: void apply(void (*applyFunction)(QList*, void*), char*); // illegal }; You can work around this restriction like this: typedef void (*ApplyFunctionType)(QList*, void*); class someClass : public QObject { Q_OBJECT [...] public slots: void apply( ApplyFunctionType, char *); }; (It may sometimes be even better to replace the function pointer with inheritance and virtual functions, signals or slots.) Sometimes it will work, but in general, friend declarations can not be placed in signals or slots sections. Put them in the good old private, protected or public sections instead. Here is an example of the illegal syntax: class someClass : public QObject { Q_OBJECT [...] signals: friend class ClassTemplate<char>; // illegal }; The C++ feature of upgrading an inherited member function to public status is not extended to cover signals and slots. Here is an illegal example: class Whatever : public QButtonGroup { [...] public slots: void QButtonGroup::buttonPressed; // illegal [...] };(a) ); [...] }; A #define without arguments will work as expected. Here's an example: class A { Q_OBJECT public: class B { public slots: // illegal void b(); [....] }; signals: class B { // illegal void b(); [....] }: }; It is a mystery to me why anyone would put a constructor on either the signals or slots sections. You can [...] }; Since signal->slot binding occurs at run-time, it is conceptually difficult to use default parameters, which are a compile-time phenomenon. This will fail: class SomeClass : public QObject { Q_OBJECT public slots: void someSlot(int x=100); // illegal }; Declaring signals and slots with template-type parameters will not work as expected, even though the moc will not complain. Connecting the signal to the slot in the following example, the slot will not get executed when the signal is emitted: [...] public slots: void MyWidget::setLocation (pair<int,int> location); [...] public signals: void MyObject::moved (pair<int,int> location); However, you can work around this limitation by explicitly typedef'ing the parameter types, like this: typedef pair<int,int> IntPair; [...] public slots: void MyWidget::setLocation (IntPair location); [...] public signals: void MyObject::moved (IntPair location); This will work as expected. In the following example, classes x::A and x::B are defined: namespace x { class A : public QObject { Q_OBJECT public: ... }; } namespace x { class B : public A { Q_OBJECT public: ... }; } Unfortunately, moc will not understand the class B : public A { line. You have either to write class B : public x::A { or define classes A and B in the same namespace block. This limitation will disappear with Qt 3.0. ) // illegal Q_ENUMS( Priority ) // illegal; [...] };
http://doc.trolltech.com/2.3/moc.html
crawl-001
refinedweb
1,040
55.64
Conditional Constraints in Python: better error messagesAnswered It appears that MVars are not allowed in conditional constraints in the Python interface, but the error messages are rather mystifying. For example import gurobipy as gp model = gp.Model() avars = model.addMVar((10, 10), vtype=gp.GRB.BINARY, name='a') svars = model.addMVar((10, 10), vtype=gp.GRB.BINARY, name='s') model.addConstr((avars[0, 0] == 1) >> (svars[0, 0] <= 0)) gives the error message AttributeError: 'MLinExpr' object has no attribute 'size' If I then do import numpy as np tempa = np.array(avars.tolist()) model.addConstr((tempa[0, 0] == 1) >> (svars[0, 0] <= 0)) I get the error TypeError: must be a real number, not MLinExpr But if I then do temps = np.array(svars.tolist()) model.addConstr(tempa[0, 0] == 1) >> (temps[0, 0] <= 0)) everything works ok. I now undertand the current restriction that every variable involved in a conditional constraint must be a Var and not part of an MVar, but it would be really helpful if the error messages were not so cryptic. This issue has been addressed in the latest Gurobi version. Which Gurobi version are you using? Could you please update to the latest version 9.5.0? Best regards, Jaromił0 Please sign in to leave a comment.
https://support.gurobi.com/hc/en-us/community/posts/4417232674193-Conditional-Constraints-in-Python-better-error-messages?page=1#community_comment_4417297221521
CC-MAIN-2022-21
refinedweb
213
59.3
Tell us what you think of the site. EDIT: Nevermind it was a simple mistake. I can’t use cd and pc together because they conflict, so using pc alone worked. I just started messing around with PyMel in Maya 2011 today so I’m sure I’m doing something wrong but I can’t figure it out. In the code below the polyCut is applied to the center of the object and not at the origin like I want. It seems the cutPlaneCenter (pc) flag/argument is being ignored. import pymel.core as pm pm.polyCut( cd="X", pc=(0, 0, 0) )
http://area.autodesk.com/forum/autodesk-maya/python/cutplanecenter-being-ignored-edit-solved/page-last/
crawl-003
refinedweb
103
76.42
Build WordPress App with React Native #8: building Single Post Screen kris Originally published at kriss.io on ・5 min read Build Wordpress Client App with React Native (29 Part Series) This. Here, the most prominent features talked about in the book are the dark theme , offline mode, infinite scroll and many more. You can discover much more in this series.this inspiration to do this tutorial series came from the React Native Mobile Templates In case of wanting to learn from the beginning, all the previous parts for this tutorial series are available below: - Build WordPress Client App with React Native #1: Overview - Build WordPress Client App with React Native #2: Setting Up Your Environment - Build WordPress Client App with React Native #3: Handle Navigation with React navigation - Build WordPress Client App with React Native #4: Add Font Icon - Build WordPress Client App with React Native #5 : Home Screen with React native paper - Build WordPress Client App with React Native #6 : Using Html renderer and Moment - Build WordPress Client App with React Native #7: Add pull to refresh and Infinite scroll Since we have the list of articles in the Home Screen, we need to display full articles as well. For that, we are going to create the SinglePost screen which will display the overall article. Here, we will learn how to fetch a single article from the WordPress API. First, we need to add the SinglePost screen to our navigator in the App.js file. For that, we need to import the SinglePost screen and add it to the stack navigator as shown in the code snippet below: import SinglePost from './src/screens/SinglePost'; const StackNavigator = createStackNavigator({ DashboardTabNavigator: DashboardTabNavigator, SinglePost: SinglePost, }); export default createAppContainer(StackNavigator); Then, we need to add the navigation from Home Screen to the SinglePost screen. The idea is to navigate to the SinglePost screen when we click on any article card in the Home screen article list. As a result, the full information in the article is displayed in the SinglePost screen. For that, we need to wrap the FlatList template by using TouchableOpacity component and add the navigation configuration to its onPress event as shown in the code snippet below: <TouchableOpacity onPress={() => this.props.navigation.navigate('SinglePost', { post_id: item.id, }) }> <Card style={{ shadowOffset: {width: 5, height: 5}, width: '90%', borderRadius: 12, alignSelf: 'center', marginBottom: 10, }}> .................................. // other code </Card> </TouchableOpacity> Implementing SinglePost Screen Now, we are going to implement the UI of the SinglePost screen. For that, we need to create a new file called SinglePost.js in the ‘./screens/’ folder. In the SinglePost screen, we want to display the cover image of the article and full content of the article along with the author avatar, name and published date of the article. The overall implementation of the SinglePost screen is provided in the code snippet below: import React from 'react'; import { Avatar, Button, Card, Title, Paragraph, List, } from 'react-native-paper'; import HTML from 'react-native-render-html'; import { View, ScrollView, ActivityIndicator, Dimensions } from 'react-native'; import moment from 'moment'; export default class SinglePost extends React.Component { constructor(props) { super(props); this.state = { isloading: true, post: [], }; } Here, we have imported the necessary packages which are required to build the SinglePost screen. We have also defined the required states that are isloading and Now, we need to fetch the respective post from the WordPress API using fetch method. First, we need to get the post id that we sent from the Home Screen page as a prop. And then, use it to fetch the particular post from the API as shown in the code snippet below: componentDidMount() { this.fetchPost(); } async fetchPost() { let post_id = this.props.navigation.getParam('post_id') const response = await fetch( `{post_id}` ); const post = await response.json(); this.setState({ post: post, isloading: false, }); } Here, we have fetched the single post using the API and parsed it in JSON format as well. Then, we have set the fetched post to the post state variable and changed the isloading state to false. Now, we need to show the post template based on the isloading state. If the isloading state is true then the ActivityIndicator template is displayed which displays the spinner loader. After the data is successfully fetched the post template is shown and the loader disappears. The overall implementation is provided in the code snippet below: render() { let post = this.state.post; if (this.state.isloading) { return ( <View style={{ paddingVertical: 20, borderTopWidth: 1, borderColor: '#CED0CE', }}> <ActivityIndicator animating </View> ); } return ( <ScrollView> <Card> <Card.Content> <Title>{post[0].title.rendered} </Title> <List.Item title={`${post[0]._embedded.author[0].name}`} description={`${post[0]._embedded.author[0].description}`} left={props => { return ( <Avatar.Image size={55} source={{ uri: `${post[0]._embedded.author[0].avatar_urls[96]}`, }} /> ); }} /> <List.Item title={`Published on ${moment( post[0].date, 'YYYYMMDD' ).fromNow()}`} /> <Paragraph /> </Card.Content> <Card.Cover source={{ uri: post[0].jetpack_featured_media_url }} /> <Card.Content> <HTML html={post[0].content.rendered} imagesInitialDimensions={{ width: Dimensions.get('window').width, height: Dimensions.get('window').width * 2, } }/> </Card.Content> </Card> )} </ScrollView> ); } } Here, for the post template, we have used different components from the react-native package as well as the react-native-paper package. The parent component is the ScrollView component which enables the Scrolling of the screen vertically. Then, the ScrollView component wraps the template with the Card component and its sub-components. Hence, we will get the following result in the emulator screen: As we can see, we have successfully implemented the overall UI for the SinglePost screen as well. Summary In this chapter, we learned how to implement the overall UI of the SinglePost screen. First, we learned how to set up the navigation to the SinglePost screen by clicking on any article post from the home screen list. The SinglePost screen displays the overall content of the single article. We used different components from react-native-paper in order to implement the UI. We also made use of the moment and react-native-render-html package on this screen. Lastly, we learned how to fetch the data of the single article post from the WordPress API using the fetch function. The post Build WordPress Client App with React Native #8: Implementing SinglePost Screen appeared first on Kriss. Build Wordpress Client App with React Native (29 Part Series) Is GraphQL the future of APIs? Graphs are everywhere! What are the main benefits of the data graph structure? Is GraphQL the future of APIs?
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kris/build-wordpress-app-with-react-native-8-building-single-post-screen-4219
CC-MAIN-2020-05
refinedweb
1,067
54.93
Up to [cvs.NetBSD.org] / src / lib / libc / gen Request diff between arbitrary revisions Default branch: MAIN Revision 1.13 / (download) - annotate - [select for diffs], Sun Sep 26 02:26:59 2010 UTC (7 years, 4 months ago) by yam: +12 -9 lines Diff to previous 1.12 (colored) fix rewinddir on nfs. fix PR/42879 (and probably PR/40229.) Revision 1.12 / (download) - annotate - [select for diffs], Wed May 17 20:36:50 2006 UTC (11.11: +11 -9 lines Diff to previous 1.11 (colored) PR/24324: Arne H Juul: Re-implement seekdir/telldir using a pointer of locations per directory instead of a global hash table to avoid memory leak issues, and incorrect results. Revision 1.11 / (download) - annotate - [select for diffs], Tue Jan 24 19:33:10 2006 UTC (12 years ago) by christos Branch: MAIN Changes since 1.10: +15 -4 lines Diff to previous 1.10 (colored) rename __func to _func_unlocked, and add their prototypes in extern.h instead of exposing them in dirent.h. More locking consistency fixes. Revision 1.10 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:56 2003 UTC :12 2000 UTC (18 20 04:39:04:03 1999 UTC (18 years, 5 months ago) by lukem Branch: MAIN Changes since 1.6: +8 -2 lines Diff to previous 1.6 .6 / (download) - annotate - [select for diffs], Mon Jul 21 14:07:26 1997 UTC (20 years, 7 months ago) by jtc:12 1997 UTC (20 years, 7 months ago) by christos Branch: MAIN Changes since 1.4: +3 -2 lines Diff to previous 1.4 (colored) Fix RCSID's Revision 1.4.4.1 / (download) - annotate - [select for diffs], Mon Sep 16 18:40:33:04:15 1995 UTC (22 years, 9 months ago) by jtc Branch: ivory_soap Changes since 1.3: +3 -1 lines Diff to previous 1.3 (colored) #include "namespace.h" Added __weak_reference defns. Revision 1.4 / (download) - annotate - [select for diffs], Sat Feb 25 08:51:37 1995 UTC : +7 -2 lines Diff to previous 1.3 (colored) clean up Id's on files previously imported... Revision 1.3.2.2 / (download) - annotate - [select for diffs], Wed Jul 27 14:39:50 1994 UTC (23 years, 6 months ago) by jtc Branch: netbsd-1-0 Changes since 1.3.2.1: +49 -0 lines Diff to previous 1.3.2.1 (colored) to branchpoint 1.3 (colored) next main 1.4 (colored) Add RCS Id's Revision 1.3.2.1, Wed Jul 27 14:39:49 1994 UTC (23 years, 6 months ago) by jtc Branch: netbsd-1-0 Changes since 1.3: +0 -49 lines FILE REMOVED file rewinddir.c was added on branch netbsd-1-0 on 1994-07-27 14:39:50 +0000 Revision 1.3 / (download) - annotate - [select for diffs], Wed Jul 27 14:39:49 1994 UTC (23 years, 6 months ago) by jtc, ivory_soap Changes since 1.2: +2 -1 lines Diff to previous 1.2 (colored) Add RCS Id's Revision 1.2 / (download) - annotate - [select for diffs], Wed Jul 27 14:37:41 1994 UTC (23 years, 6 months ago) by jtc Branch: MAIN Changes since 1.1: +5 :29 1994 UTC (23 years, 6 months ago) by jtc Branch: WFJ-920714, CSRG CVS Tags: lite-2, lite-1 Changes since 1.1.1.1: +5 -6 lines Diff to previous 1.1.1.1 (colored) opendir() & friends from 4.4lite. Revision 1.1.1.1 / (download) - annotate - [select for diffs] (vendor branch), Sun Mar 21 09:45:37 1993 UTC (24.
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/rewinddir.c
CC-MAIN-2018-09
refinedweb
598
84.37
An XHTML to PDF renderer. More... #include <Wt/Render/WPdfRenderer> An XHTML to PDF renderer. This class implements an XHTML to PDF renderer. The rendering engine supports only a subset of XHTML. See the documentation of WTextRenderer for more information. The renderer renders to a libharu PDF document (using WPdfImage). By default it uses a pixel resolution of 72 DPI, which is the default for a WPdfImage, but differs from the default used by most browsers (which is 96 DPI and has nothing to do with the actual screen resolution). The pixel resolution can be configured using setDpi(). Increasing the resolution has the effect of scaling down the rendering. This can be used in conjunction with setFontScale() to scale the font size differently than other content. Usage example: Font information is embedded in the PDF. Fonts supported are native PostScript fonts (Base-14) (only ASCII-7), or true type fonts (Unicode). See addFontCollection() for more information on how fonts are located. Creates a new PDF renderer. The PDF renderer will render on the given page is not 0, then rendering will happen on this first page (and its page sizes will be taken into account). Default margins are 0, and the default DPI is 72. Adds a font collection. If Wt has been configured to use libpango, then font matching and character selection is done by libpango, and calls to this method are ignored. See WPdfImage::addFontCollection() for more details. If Wt was not configured to use libpango, you will have to add the directories where Wt should look for fonts. You will also have to specify the required font in the HTML source, e.g.: Creates a new page. The default implementation creates a new page with the same dimensions as the previous page. You may want to specialize this method to add e.g.~headers and footers. Returns the current page. This returns the page last created using createPage(), or the page set with setCurrentPage() page. Returns the margin. Returns the margin at given side (in pixel units). Implements Wt::Render::WTextRenderer. Returns the page height. Returns the total page height (in pixel units), including vertical margins. Implements Wt::Render::WTextRenderer. Returns the page width. Returns the total page width (in pixel units), including horizontal margins. Implements Wt::Render::WTextRenderer. Sets the resolution. The resolution used between CSS pixels and actual page dimensions. Note that his does not have an effect on the de facto standard CSS resolution of 96 DPI that is used to convert between physical WLength units (like cm, inch and point) and pixels. Instead it has the effect of scaling down or up the rendered XHTML on the page. The dpi setting also affects the pageWidth(), pageHeight(), and margin() pixel calculations. The default resolution is 72 DPI. Sets the page margins. This sets page margins, in cm, for one or more sides. Returns a paint device to render a given page. The render() method calls this function once for each page it wants to render. Implements Wt::Render::WTextRenderer.
https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1Render_1_1WPdfRenderer.html
CC-MAIN-2021-31
refinedweb
505
59.9
Sorry, I generated the diff from a different tree that wasn't synced to head (had the same change in both trees originally). If that is the only problem, you can ignore it and try the rest. I can generate another diff later too. Advertising - Justin On Nov 19, 2016 21:27, "Mark Millard" <mar...@dsl-only.net> wrote: > [Top post about patch issues.] > > Looking at the patch it seems to be designed for when #else was in use: > > > -#else > > +#elif defined(BOOKE) > > but -r308817 already has the 2nd line (BOOKE). Your patch shows: > > > Index: sys/powerpc/include/pmap.h > > =================================================================== > > --- sys/powerpc/include/pmap.h (revision 308718) > > +++ sys/powerpc/include/pmap.h (working copy) > > So it looks like you started from before -r308817 . > > Trying it (I'm at -r308860): > > > Patching file sys/powerpc/include/pmap.h using Plan A... > > Hunk #1 succeeded at 74. > > Hunk #2 succeeded at 84. > > Hunk #3 succeeded at 132. > > Hunk #4 succeeded at 145. > > Hunk #5 failed at 180. > > Hunk #6 succeeded at 194. > > Hunk #7 succeeded at 210. > > 1 out of 7 hunks failed--saving rejects to sys/powerpc/include/pmap.h.rej > > > # more sys/powerpc/include/pmap.h.rej > > @@ -179,13 +180,13 @@ > > struct slb **slb_alloc_user_cache(void); > > void slb_free_user_cache(struct slb **); > > > > -#else > > +#elif defined(BOOKE) > > > > struct pmap { > > + struct pmap_statistics pm_stats; /* pmap statistics */ > > struct mtx pm_mtx; /* pmap mutex */ > > tlbtid_t pm_tid[MAXCPU]; /* TID to identify this > pmap entries in TLB */ > > cpuset_t pm_active; /* active on cpus */ > > - struct pmap_statistics pm_stats; /* pmap statistics */ > > > > /* Page table directory, array of pointers to page tables. */ > > pte_t *pm_pdir[PDIR_NENTRIES]; > > > === > Mark Millard > markmi at dsl-only.net > > On 2016-Nov-19, at 7:00 PM, Mark Millard <mar...@dsl-only.net> wrote: > > It may take a little bit but I'll try the patch. > > It looks like sys/powerpc/include/pmap.h from -r176700 from 2088-Mar-3 > is when the BOOKE/E500 split started with the preprocessor use of AIM > and #else . This predates PowerMac G5 support. > > This is definitely not new for the general structure on the powerpc > side of things. Any place that did not have the AIM vs. not status > available was subject to problems of possibly mismatched definitions. > > === > Mark Millard > markmi at dsl-only.net > > On 2016-Nov-19, at 6:47 PM, Justin Hibbits <jhibbits at freebsd.org> > wrote: > > On Sat, 19 Nov 2016 18:36:39 -0800 > Mark Millard <markmi at dsl-only.net> wrote: > > > [Quick top post I'm afraid.] > > > > I think that I figured out why there is a problem even earlier > > --that just did not stop the compiles. > > > > lib/libutil/kinfo_getallproc.c is built here as part of buildworld > > (stage 4.2 "building libraries" instead of buildkernel. It does not > > have the KERNCONF's AIM vs. BOOKE vs. . . . definitions vs. lack of > > them). > > > > So if it includes machine/pmap.h that binds to > > sys/powerpc/include/pmap.h which has the structure. . . > > > > . . . > > #if defined(AIM) > > . . . (definitions here) > > #elif defined(BOOKE) > > . . . (definitions here) > > #endif > > . . . > > > > it gets no definition now. > > > > With the older: > > > > . . . > > #if defined(AIM) > > . . . (definitions here) > > #else > > . . . (definitions here) > > #endif > > . . . > > > > It got a definition, just not necessarily the right one. > > > > > > === > > Mark Millard > > markmi at dsl-only.net > > Can you try the attached patch? There was a subtle ABI issue that > r308817 exposed, which is that the pmap structs aren't identical such > that the pm_stats are at different locations, and libkvm ends up > reading with the Book-E pmap, getting different stats than expected for > AIM. This patch fixes that, bumping version to account for this ABI > change. > > - Justin<fix_pmap.diff> > > > _______________________________________________ freebsd-current@freebsd.org mailing list To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
https://www.mail-archive.com/freebsd-current@freebsd.org/msg168520.html
CC-MAIN-2018-30
refinedweb
606
70.5
How I Learned to Stop Worrying and Love JSON Schema [TOC] Intro This post operates on a few shared assumptions. So, we need to explicitly state them, or otherwise you will read things that are more or less rational but they will appear to be garbage. - APIs are good - Many APIs are web APIs - Many web APIs consume and produce JSON - JSON is good - JSON is better if you know what will be in it So, JSON Schema is a way to increase the number of times in your life that JSON is better in that way, therefore making you happier. So, let's do a quick intro on JSON Schema. You can always read a much longer and surely better one from which I stole most examples at Understanding JSON Schema. later (or right now, it's your time, lady, I am not the boss of you). Schemas So, a JSON Schema describes your data. Here is the simplest schema, that matches anything: { } Scary, uh? Here's a more restrictive one: { "type": "string" } That means "a thing, which is a string." So this is valid: "foo" and this isn't 42 Usually, on APIs you exchange JSON objects (dictionaries for you pythonistas), so this is more like you will see in real life: { "type": "object", "properties": { "street_address": { "type": "string" }, "city": { "type": "string" }, "state": { "type": "string" } }, "required": ["street_address", "city", "state"] } That means "it's an object", that has inside it "street_address", "city" and "state", and they are all required. Let's suppose that's all we need to know about schemas. Again, before you actually use them in anger you need to go and read Understanding JSON Schema. for now just assume there is a thing called a JSON Schema, and that it can be used to define what your data is supposed to look like, and that it's defined something like we saw here, in JSON. Cool? Using schemas Of course schemas are useless if you don't use them. You will use them as part of the "contract" your API promises to fulfill. So, now you need to validate things against it. For that, in python, we can use jsonschema It's pretty simple! Here is a "full" example. import jsonschema schema = { "type": "object", "properties": { "street_address": {"type": "string"}, "city": {"type": "string"}, "state": {"type": "string"}, }, "required": ["street_address", "city", "state"] } jsonschema.validate({ "street_address": "foo", "city": "bar", "state": "foobar" }, schema) If the data doesn't validate, jsonchema will raise an exception, like this: >>> jsonschema.validate({ ... "street_address": "foo", ... "city": "bar", ... }, schema) Traceback (most recent call last): File "<stdin>", line 4, in <module> File "jsonschema/validators.py", line 541, in validate cls(schema, *args, **kwargs).validate(instance) File "jsonschema/validators.py", line 130, in validate raise error jsonschema.exceptions.ValidationError: 'state' is a required property Failed validating 'required' in schema: {'properties': {'city': {'type': 'string'}, 'state': {'type': 'string'}, 'street_address': {'type': 'string'}}, 'required': ['street_address', 'city', 'state'], 'type': 'object'} On instance: {'city': 'bar', 'street_address': 'foo'} Hey, that is a pretty nice description of what is wrong with that data. That is how you use a JSON schema. Now, where would you use it? Getting value out of schemas Schemas are useless if not used. They are worthless if you don't get value out of using them. These are some ways they add value to your code: - You can use them in your web app endpoint, to validate things. - You can use them in your client code, to validate you are not sending garbage. - You can use a fuzzer to feed data that is technically valid to your endpoint, and make sure things don't explode in interesting ways. But here is the most value you can extract of JSON schemas: You can discuss the contract between components in unambiguous terms and enforce the contract once it's in place. We are devs. We discuss via branches, and comments in code review. JSON Schema turns a vague argument about documentation into a fact-based discussion of data. And we are much, much better at doing the latter than we are at doing the former. Discuss the contracts. Since the document describing (this part of) the contract is actually used as part of the API definitions in the code, that means the document can never be left behind. Every change in the code that changes the contract is obvious and requires an explicit renegotiation. You can't break API by accident, and you can't break API and hope nobody will notice. Enforce the contracts. Finally, you can version the contract. Use that along with API versioning and voilá, you know how to manage change! Version your contracts. - Discuss your contracts - Enforce your contracts - Version your contracts So now you can stop worrying and love JSON Schema as well. Hi Roberto, thanks for the post. I have one question, it's possible nest schemas? Yes, you can have an element in a schema that is a reference to another one, check this:... You got back to blogging often? Great! So, what do you think of MyPy? I am looking forward to playing with it, it seems very interesting.
https://ralsina.me/weblog/posts/how-i-learned-to-stop-worrying-and-love-json-schema.html
CC-MAIN-2021-17
refinedweb
852
80.82
About the Shared Types Schema [AX 2012] Updated: January 20, 2012 Applies To: Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012 The shared-types schema defines the enumerationss and extended data types (EDTs) that are used by a document service. The shared-types schema consolidates common data types into one schema that is imported by all document service schemas. The namespace for the shared-types schema is. If you publish a new service that contains a new enumeration or EDT, the shared-types schema is automatically updated. The shared-types schema is included with the Web Services Definition Language (WSDL) file. If you want to view the shared-types schema, view the schema for any service from either the Inbound ports or the Outbound ports form. To view the shared-types schema, in the Schema form, click View imported schemas. For more information, see the section “Viewing schemas” in Customize service contracts.
https://technet.microsoft.com/en-us/library/hh769362.aspx
CC-MAIN-2017-17
refinedweb
159
60.65
./tools/rt.py -t PerfStampEvent --repeat=5Note: If you're doing one of the large-data tests (that is, named "PerfLargeDataSomethingOrOther"), you'll need to generate the large repository (which rt.py will automatically tell the large-data tests to restore when it runs them). The PerfImportCalendar test creates this repository, so just run it once to make that happen: ./tools/rt.py -t PerfImportCalendarrt.py will pretty print the results and calculate standard deviation for you. These will look like this: PerfStampEvent.py 4.62 4.52 4.76 4.54 5.69 | 4.62 ± 0.49The numbers before | are the individual runs, the number immediately after | is the median of the values and the last number is the standard deviation. WARNING: The pretty printed results show only the last measured test from each file. See catsProfile command line argument for more information. Below the pretty printed values, and also when you run without rt.py, will be all of the results including lines such as the following: OSAF_QA: Perf_Stamp_as_Event.Note_creation | 13845 | 1.299634 OSAF_QA: Perf_Stamp_as_Event.Change_the_Event_stamp | 13845 | 0.221906 OSAF_QA: Perf_Stamp_as_Event | 13845 | 2.324717The text after OSAF_QA: specifies the test, next is revision number, and the last shows the time in seconds. So which line is the actual test result? perf.py has the official list, but it is generally easy to determine, for example in the above case the middle line shows the time it took to stamp. ./tools/rt.py -Pt PerfStampEventThis will create the hotshot profile in ./test_profile/PerfStampEvent.hotshot. # we want to profile foobar() import hotshot prof = hotshot.Profile("foobar.prof") prof.runcall(foobar)There is more documentation about hotshot on the Python website, including interacive samples on how to run and analyze a hotshot profile. from util.easyprof import QuickProfile @QuickProfile('foobar.prof') def foobar(): $ hotshot2calltree filename -o filename.prof $ kcachegrind filename.prof >>> import timeit >>> t = timeit.Timer('1 > 0 and 1 < 2') >>> t.timeit() 0.24477005004882812 >>> t = timeit.Timer('0 < 1 < 2') >>> t.timeit() 0.22036290168762207
http://chandlerproject.org/Projects/BusyDevelopersGuideToChandlerPerformanceOptimization
crawl-002
refinedweb
334
52.66
Type annotations for boto3.Outposts 1.11.17 service, generated by mypy-boto3-buider 1.0.0 Project description mypy-boto3-outposts Type annotations for boto3.Outposts 1.11.17 Outposts service. python -m pip install boto3-stubs[mypy-boto3-outposts] Use boto3 with mypy_boto3 in your project and enjoy type checking and auto-complete. import boto3 from mypy_boto3 import outposts # alternative import if you do not want to install mypy_boto3 package # import mypy_boto3_outposts as outposts # Use this client as usual, now mypy can check if your code is valid. # Check if your IDE supports function overloads, # you probably do not need explicit type annotations # client = boto3.client("outposts") client: outposts.OutpostsClient = boto3.client("outposts") # works for session as well session = boto3.session.Session(region="us-west-1") session_client: outposts.OutpostsClient = session.client("outposts").
https://pypi.org/project/mypy-boto3-outposts/1.11.17.0/
CC-MAIN-2020-16
refinedweb
134
52.56
: Sockets and Internet Protocols Java SSL client without installing certificate Djordje Cvetkovic Greenhorn Posts: 2 posted 2 months ago Hi all, I had to create a Java SSL client. I did it by following the following tutorial: When I tried to run the program there was an exception saying that there was no certificate found (I cannot remember exactly the exception). I solved this by installing the certificate into cacerts file and everything worked fine. Note: This certificate is a self-signed and is not validated by any CA. However, the clients to whom this software will be distributed should not have to do this. How can I create a SSL/TLS Java client w/o installing untrusted certificate. If some more clarification is needed do not hesitate do ask. Thanks in advance! Stephan van Hulst Saloon Keeper Posts: 9986 206 posted 2 months ago Welcome to CodeRanch! Your server needs to have a valid SSL certificate installed that is issued by a CA that the client trusts. There's no way around this. Tim Holloway Bartender Posts: 20562 120 I like... posted 2 months ago If you have a self-signed certificate, any properly secure client is going to inform you before you attempt to connect to that server via SSL. It would be a major security vulnerability not to. Many clients do have the ability to add a confirmation to accept the cert and proceed, which is done via a pop-up dialog and/or a command-line switch. The point being is that the end user should be aware of, and consciously agree to talk to a server whose authenticity and trustworthiness cannot be independently proven. If that is not suitable, get a signed cert. You can get them free from letsencrypt, although those have to be renewed every 90 days - an expired cert also causes the client to question the user. There are additionally some fairly inexpensive cert authorities for longer-term certs and if you want extra bells and whistles, the big name certifiers add additional trustworthiness features that clients will often display as medallions next to the navigation control to assure users of the server's bona fides. When it comes to destroying a civilization, gas chambers cannot hold a candle to echo chambers. Djordje Cvetkovic Greenhorn Posts: 2 posted 2 months ago Hi again... I Just wanted to share with you my findings. Maybe someone will find this useful. At the moment of writing this, I can say with 99% certainty that what I wanted in the first place IS possible. I implemented it and in localhost (both server and the client) everything works. The 1% left is because I still need to test my solution with Wireshark to see that the communication is actually SSL/TLS encrypted. I will update once these tests are done. First of all, big thanks to this article . It helped me achieve what I wanted, even though it is HTTPS related. My client is a TCP client implementing custom protocol. All imports are from javax.net.ssl package. Anyway... here is the client code: Somewhere in the client initialization class: public void establishSSLConnection() { String serverAddress = "127.0.0.1"; int serverPort = 12345; MyHandshakeCompletedListener hcl = new MyHandshakeCompletedListener(); SSLSocket sslSocket = null; try { TrustManager[] trustAllCerts = new TrustManager[]{new X509TrustManager() { @Override public java.security.cert("TLSv1"); sc.init(null, trustAllCerts, new java.security.SecureRandom()); SSLSocketFactory sslFactory = sc.getSocketFactory(); sslSocket = (SSLSocket) sslFactory.createSocket(serverAddress, serverPort); sslSocket.addHandshakeCompletedListener(hcl); sslSocket.startHandshake(); } catch (Exception ex) { ex.printStackTrace(); } } MyHandshakeCompletedListener: public class MyHandshakeCompletedListener implements HandshakeCompletedListener { @Override public void handshakeCompleted(HandshakeCompletedEvent hce) { // At this point the SSL connection is established } } If you have any questions feel free to ask Tim Holloway Bartender Posts: 20562 120 I like... posted 2 months ago Eek. If I read that properly, your solution to the cert not being trusted was to trust everything. So I could inject a DNS highjack into the path of clients, cause it to poll my trojan server with my trojan cert and proceed to pOwn them. It isn't just a case of not having a cert authorization chain for your self-signed cert that makes it untrusted. A self-signed cert is, by definition, untrusted. You can probably get just as good effect without the special code with an option on your SSL client class. All in all, you're better off just getting a properly-signed cert. Unless you really enjoy showing up on the Internet's Exploit of the Week roster. When it comes to destroying a civilization, gas chambers cannot hold a candle to echo chambers. Matt Wong Ranch Hand Posts: 138 5 I like... posted 1 month ago the chosen solution is the "most wrong" (I'm not a native english speaker but I'm aware that this isn't correct grammar) one could choose from the whole point of using TLS in the first place is to have authenticated ensurance that the remote is really the one you want to speak to - no matter if strong encryption is used by overwriting this very basic essential check you can get away with not using TLS at all but use RSA and AES in stream-mode and javax.crypto.CipherIn/OutputStream the right way to do TLS with your own private CA: create a root-key and -cert create an instance of SSLContext using it - and create end-point-certs signed by root or if you like intermediate cert there is no need to pre-install the root-cert ahead of use - just create a TrustManager at runtime - done A berm makes a great wind break. And Iwe all like to break wind once in a while. Like this tiny ad: global solutions you can do in your home or backyard Post Reply Bookmark Topic Watch Topic New Topic Boost this thread! Similar Threads This Weeks Giveaway javax.net.ssl.SSLException: No available certificate or key corresponds to the SSL cipher suites whi untrusted server cert chain Secured Socket Layer (SSL) w/ Java Websphere 5.1 unknown certificate problem More...
https://coderanch.com/t/706130/java/Java-SSL-client-installing-certificate
CC-MAIN-2019-18
refinedweb
1,008
62.68
Random random number random number Sir could u please send me the code registration form of roseindia.net with image varification random number random number Please How do I generate a program that gives me random integer from 115 to 250? Using java.util.random. Thank you very much!  ...[] args){ int min = 115; int max = 250; Random random = new Random random numbers random numbers hi.. i am creating a website and on feedback form to authenticate user i want to generate a random number on label.if user types that number correctly in textbox then he is allowed to submit feedback otherwise regarding the user usage and habits. Servlets sends cookies to the browser client...:// the servlets what is diff between generic servlets and httpservlets what is diff between generic servlets and httpservlets Difference between GenericServlet and HTTPServlet: 1)GenericServlet belongs to javax.servlet package Jsp redirect Jsp redirect In this section, you will learn how to redirect... can also redirect the jsp page in a jsp. Here is the code of redirect.jsp Pick at Random the entry from the array before selecting another name at random random numbers random numbers Hi I need some help with the code for the this program. This program should randomly choose a number between 1 and 1000, and then challenge the user to guess what it is, giving hints ('too high!' or 'too low SERVLETS servlets the servlets Servlets Random Access File. Random Access File. Explain Random Access File redirect to multiple links from servlet redirect to multiple links from servlet hello , In my servlet page... the output in browser,i need to redirect to another link like the above line. i want redirect to multiple links simultaneously. anyone help me out from servlet to servlet Redirect from servlet to servlet I want to insert data from Text box to oracle database ,so in my HTML where there are text box (and 3 Buttons-New ,Update and save)so when i click any of three buttons it takes me using random number using random number generate a 10 digit number using random number and display the length of longest increasing series Java Redirect output to file Java Redirect output to file This section illustrates you how to redirect... to redirect the run-time exceptions and SOPs (System.out.println) to a file... as argument. This will redirect any console output using System.out.println send redirect in JSP - JSP-Servlet send redirect in JSP How can I include a message i.e "redirected to this page because blah blah" in the send redirect page? Hi friend, Please specify your problem the message print on the redirect to the next Random Access File class Random Access File class Write a short note on Random Access File class. Please visit the following link: Java Random Access File getting random number in java getting random number in java getting random number in java Hi everyone, i am trying to get a random number in a particular range. for example random numbers between 1 to 100 in Java. Though i m new to Java, i have tried many Generate Random Numbers Generate Random Numbers hi,,, Please give me answer with example How do you generate random numbers within a given limit with actionscript... This function generate random numbers with in given limit Introduction to Java Servlets Introduction to Java Servlets Java Servlets are server side Java programs that require... associated information required for creating and executing Java Servlets RANDOM ACCESS FILE CONCEPT RANDOM ACCESS FILE CONCEPT In a random access file, first write the alphabets a,b, c, d till z. Read the file in reverse order and print it on screen How to redirect from a HTML page? How to redirect from a HTML page? Hi, Is it possible to redirect from HTML page to another page on the net? I have one page and I want to redirect... to redirect to another page using HTML code. You can use the following code: <meta using random number using random number generate 10 number and display the length of longest incresing series using random number using random number generate 10 digit number and display the length of longest incresing swries random numbers - Java Beginners random numbers write a program to accept 50 numbers and display 5... java.util.*; class RandomNumberExample { static Random generator = new Random...(); System.out.println("Random numbers are: "); int random[]=new int[5]; for(int i=0;i Random in jsp Random in jsp Random numbers are the numbers that are determined entirely by chance. User does not have any control over the working of random numbers Redirect the log used by DriverManager and JDBC driver Redirect the log used by DriverManager and JDBC driver How can I get or redirect the log used by DriverManager and JDBC drivers numbers - Development process Random numbers hi, How to generate unique random numbers between range like(10 to 50) Hi friend, class RandomNumber { public static void main(String[] args) { //set the variable aNumber random numbers - Java Beginners random numbers Hello i need this code to be modified for me to display the random numbers, but not twice or more. I mean i need a number... RandomNumberExample { static Random generator = new Random(); public static int get (int[] array servlet redirect problem help needed servlet redirect problem help needed package p; import java.io.IOException; import javax.servlet.RequestDispatcher; import... the counter executes the line re.forward(req, resp);..it it redirect to jsp RANDOM ACCESS FILE CONCEPT RANDOM ACCESS FILE CONCEPT Write a program to use a File object and print even numbers from two to ten. Then using RandomAccessFile write numbers from 1 to 5. Then using seek () print only the last 3 digits Generating random numbers in a range with Java Generating random numbers in a range with Java Generating random numbers in a range with Java random color fill random color fill Hi I am trying to create a randomly color filled oval but the outcome is a black filled oval this is the code: package...; Random rcolor = new Random(); int r = rcolor.nextInt(255); int g1 RANDOM ACCESS FILE CONCEPT RANDOM ACCESS FILE CONCEPT Write a program to write the details of an employee to a file. Details such as year of joining, department code, employee name and salary should be included. You must use RandomAccessFile J2ME Random Number J2ME Random Number In this application we are going to generate the random number using Random class. The Random class within the java.util package and we can use Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/27112
CC-MAIN-2015-32
refinedweb
1,114
50.87
Tk_GetUid, Tk_Uid - convert from string to unique identifier #include <tk.h> #typedef char *Tk_Uid Tk_Uid Tk_GetUid(string) String for which the corresponding unique identifier is desired. Tk_GetUid returns the unique identifier corresponding to string. Unique identifiers are similar to atoms in Lisp, and are used in Tk to speed up comparisons and searches. A unique identifier (type Tk_Uid) is a string pointer and may be used anywhere that a variable of type ``char *'' could be used. However, there is guaranteed to be exactly one unique identifier for any given string value. If Tk_GetUid is called twice, once with string a and once with string b, and if a and b have the same string value (strcmp(a, b) == 0), then Tk_GetUid will return exactly the same Tk_Uid value for each call (Tk_GetUid(a) == Tk_GetUid(b)). This means that variables of type Tk_Uid may be compared directly (x == y) without having to call strcmp. In addition, the return value from Tk_GetUid will have the same string value as its argument (strcmp(Tk_GetUid(a), a) == 0). atom, unique identifier
http://search.cpan.org/dist/Tk/pod/pTk/GetUid.pod
CC-MAIN-2016-30
refinedweb
177
62.68
When you work with ChaRM in Solution Manager 7.1, the first thing you do if you follow the Configuration Guide is copy the standard transaction types to Z transaction types. You copy the standard Request for Change (SMCR) and the standard Change Documents such as Urgent Change (SMHF), Normal Change (SMMJ), Admin Change (SMAD) and General Change (SMCG). After the copy, when you go to the Web Client and try to create a new Change Request Management document, you see a pop-up to select the standard or the new custom Transaction type that you just created (Figure 1 shows the example of Request for Change pop-up). It may be a problem to keep both options available for the user, as he can select the wrong one, even if you explain, train, write procedures showing the correct transaction types to be used. In order to avoid such errors and confusion, you need to configure the Web Client to show only the customer transaction types. I will show you how to do it. It is very easy. Fig 1: Pop-up to select the Request for Change transaction type. I assume you created your own Business Role as a copy of standard SOLMANPRO. I recommend you to always create your own copies to avoid losing the reference and having your configuration replaced by the standard during an upgrade to a new Support Package. So I guess you have at least one new Business Role. You can have as many as necessary depending of the different activities for different group of users, or user roles. For that Business Role I suppose you created also a new Navigation Bar Profile, so that you can configure the navigation bar the way you want. In this article I show the changes I’ve done in a copy of the navigation bar profile named ZSOLMANPRO, assigned to a copy of the standard Business Role named ZSOLMANPRO. I will configure the Create menu (Figure 2) of my navigation bar profile to directly call the Z Request for Change, without showing the pop-up for selection of transaction type. This menu is a Direct Link Group. As this Business Role will only be used for ChaRM, not for Incident Management, I have changed it to show only the Create Direct Links that I need. Figure 2: Example of a Create Menu (Direct Link Group assigned to Navigation Bar Profile) So, to configure the Direct Link Group you go to IMG Customer Relationship Management in transaction SPRO. Open the IMG menu UI Framework. Open Technical Role definition->Define Navigation Bar Profile. (See Figure 3) Figure 3: IMG menu for UI definitions To display the current configuration of the Create menu assigned to the navigation bar profile, click on Define Navigation Bar Profile and select the navigation bar profile you are working with. I will select ZSOLMANPRO. Then click on Assign Direct Link Groups to Navigation Bar Profile to see the Direct Links currently in use. The Direct Link Group related to the Create menu is SM-CREATE. (Figure 4) Figure 4: Direct Link Groups assigned to Navigation Bar Profile In the same screen, go to Define Link Groups to see the details of Direct Link Group SM-CREATE. (Figure 5) Select the Group ID SM-CREATE and click on Assign Links to Direct Link Group. You will see the list of logical links related to the Create menu. The one corresponding to Request for Change is SM-CR-DC. (Figure 6) Figure 5: Define Direct Link Groups Figure 6: Assign Logical Link to Direct Link Group Now go to Define Logical Links (the first option in the Dialog Structure) and search for SM-CR-DC. In the details, enter in Field “Parameter” the transaction type of your Request for Change, ZMCR in my example. In Field “Paramater Class” keep the value CL_CRM_UIU_BT_PARAM_CREATE (Figure 7). Save. Figure 7: Define Logical Links After that, the Direct Link for Request for Change of the Create menu will open a new document directly, without asking you to select the transaction type in a pop-up. I’ve met you some years ago here in São Paulo at Asug. It was a lecture presented by Renan Guedes regarding Implementation tools, remember that? I have always seen your contributions on SDN and I find it very interesting how accurate you are. Congrats! Hi Hailton, yes, I remember you and our chat that day 🙂 Thank you for your kind comment. By the way, we have ASUG Conference next Aug 14th, I will be there. Cheers, Raquel Hi Raquel, If my organization has 2 Z transaction types for Normal Corrections due to business reasons, can this configuration support that? Ryan Hi Raquel, Is it possible to enable only one transaction for selection via dno_cust04 and CHANGE_REQUEST_NEW? I entered ZMCR, however I still get SMCR and ZMCR when creating new RfC. Thx Regards, Julius Hi, where do you still get both? In the Work Center or in the Web Client? Regards, Raquel In Web UI, exactly as your first picture in the blog. I expected that SMCR gets removed from this pop up once I put ZMCR in dno_cust04 for CHANGE_REQUEST_NEW. Or did I get it wrong? Hi very nice blog. You could have same result also by declaring standard Transaction Type as ‘inactive’ in SPRO in ‘Definition of Transaction Types’ Thanks for contribution Khalil Is it possible to customize this pop-up? We have multiple transaction types and want to make this pop-up available to our functional team. Currently the description in the pop-up box is not the correct one. Very good stuff. keep them coming…. Thank you very much. Regards, Raquel hi Raquel , i face an issue in ChaRM configuration, maybe due to missing step, in crm_ui RFC is ZMCR but the following documents such as SMMJ , SMHF are created in standard not Zxxx , is it missing copy routine or defining the Z types ? thanks TM Great Raquel, bookmarked, liked, shared, rated an twitted-it 😉 Regards, Luis Thanks Luis 😀 Thanks Raquel Your method worked well with the SMCR with its own logical link of SM-CR-DC. I was able to put in ZMCR as a parameter and the pop-up in the web UI was gone and defaulted correctly to ZMCR. The question I have is with other Change Documents (SMMJ, SMAD, SMHF, SMCG) and if this technique would work since it appears they all fall under one logical link SM-CD-DC. If there is only one logical link then you can only have it default to one doc type in the parameter which would not work in this case. There was another approach documented in this thread by Khalil Serrhini that noted you could get the same effect if you made the doc type inactive in the Definition of Transaction Types. Here you can go into each S* doc type that you have Z* doc type counterpart and make the S* doc type inactive. That appears to work as well. Is there any other hidden dangers in making the S* doc types inactive here as long as you have a Z* namespace for the document already saved? Thanks, Ben Mullaly Hi Raquel, Thanks for the details step by step configuration. Do you have any idea on how to configure SAP GUI (SOLMAN_WORKCENTER) so that it can redirect us to ZMCR only? I encounter the same screen (fig 1) when I click on New Request for Change in the solman work center. Thanks Best Regards Remy Hi everyone! I having the same problem, however, it has happend from de solman_workcenter when I try to create a request from Job Schedule Escenario. May some one help me to solve the issue, please? Wins Dear Raquel, I tried this approach on Solution Manager 7.2 after the upgrade from 7.1 and its not working. Request you to share if there is any different procedure that needs to be followed in case of solution manager 7.2 Best Regards, Viswanathan Subramanian Dear Viswanathan, Same here. This approach is not working in Solution Manager 7.2 Did you find any alternate solution to resolve it in 7.2? Thanks Hi Raquel, Its not working on solman 7.2, anyone got it to work ? Regards, Olwethu
https://blogs.sap.com/2012/07/27/call-your-customized-rfc-in-the-web-client-direct-link-without-a-pop-up-in-solman-71/
CC-MAIN-2018-17
refinedweb
1,382
62.17
Ethernet Table of Contents Legacy Networking Libraries This documentation covers the networking libraries available for mbed 2. For mbed 5, the networking libraries have been revised to better support additional network stacks and thread safety here. Ethernet Network Interface The Ethernet peripheral is of little use without an IP networking stack on top of it. You could be interested in the Ethernet Interface library. The Ethernet Interface allows the mbed Microcontroller to connect and communicate with an Ethernet network. This can therefore be used to talk to other devices on a network, including communication with other computers such as web and email servers on the internet, or act as a webserver. Hello World!¶ Read destination and source from every Ethernet packet #include "mbed.h" Ethernet eth; int main() { char buf[0x600]; while(1) { int size = eth.receive(); if(size > 0) { eth.read(buf, size); printf("Destination: %02X:%02X:%02X:%02X:%02X:%02X\n", buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]); printf("Source: %02X:%02X:%02X:%02X:%02X:%02X\n", buf[6], buf[7], buf[8], buf[9], buf[10], buf[11]); } wait(1); } } API¶ API summary Interface¶ All of the required passive termination circuits are implemented on the mbed Microcontoller, allowing direct connection to the ethernet network. The Ethernet library sets the MAC address by calling a weak function extern "C" void mbed_mac_address(char * mac); to copy in a 6 character MAC address. This in turn performs a semihosting request to the mbed interface to get the serial number, which contains a MAC address unique to every mbed device. If you are using this library on your own board (i.e. not an mbed board), you should implement your own extern "C" void mbed_mac_address(char * mac); function, to overwrite the existing one and avoid a call to the interface.
https://os.mbed.com/handbook/Ethernet
CC-MAIN-2017-43
refinedweb
304
52.09
I made a little arduino sketch, that displays the temperature on a LCD display measured by a LM35. I used a 16X1 LCD Display. The Code is quite easy and can send the value via serial connection, too. What you need: Arduino Board LCD Display LM35 temperature sensor resistor (for backlight) Step 1: Here’s my source coude (based on the example LCD code on arduino.cc): #include <LiquidCrystal.h> LiquidCrystal lcd(12, 11, 5, 4, 3, 2); #define LCD_CLEAR 0b00000001 // 0×01 float temp; int tempPin = 0; void setup() { Serial.begin(9600); lcd.begin(16, 2); //lcd.print(“”); } void loop() { temp = analogRead(tempPin); temp = temp * 0.48828125; lcd.setCursor(0, 0); lcd.print(temp); lcd.print(” C”); delay(1000); lcd.clear(); Serial.println(temp); } 6 Discussions 3 years ago on Introduction I need your help on this project please 4 years ago on Introduction why give me these errors : This report would have more information with "Show verbose output during compilation" enabled in File > Preferences. Arduino: 1.0.6 (Windows 7), Board: "Arduino Uno" sketch_jan20a:21: error: stray '\' in program sketch_jan20a:21: error: stray '\' in program sketch_jan20a.ino: In function 'void loop()': sketch_jan20a:21: error: 'u201dC' was not declared in this scope 6 years ago on Introduction Hi. Very cool project. I will be making this soon. May I ask why the 5v is pulled to the outer rail if nothing connects to the rail? 6 years ago on Introduction very nice, are you going to use this for a larger project? Reply 6 years ago on Introduction i did something like this: Reply 6 years ago on Introduction In used it to display the temperature on the computer (via processing) and then, when the computer is online it sends the temperature to a webserver,and the temperature can be seen in the internet.
https://www.instructables.com/id/Measure-the-temperature-and-show-it-on-a-LCD-displ/
CC-MAIN-2019-09
refinedweb
305
64.3
# Redirects: 301, 302, 307 | How-To 301 guide What is redirection? -------------------- Redirection is a way of forwarding the user to the URL that is different from the one they initially clicked on. Down below some of the most common types of redirection are listed. ![how to do a 301 redirect](https://habrastorage.org/r/w1560/webt/ns/68/ac/ns68ac-yy0uvtntc-omnlovdhu0.png) ### 301 Moved Permanently A 301 redirect is a permanent redirect which passes between 90-99% of link equity (ranking power) to the redirected page. 301 signals that the page has been moved to another URL and the old URL is outdated. ### 302 Found (HTTP 1.1) / Moved Temporarily (HTTP 1.0) 302 is a temporary redirect which passes 0% of link equity, and shouldn't be used in most cases. As of now, the internet runs on an HTTP protocol which dictates how URLs work. In two versions of this protocol, the HTTP response status code is different: * HTTP 1.0: 302 status code is «Moved Temporarily» — current page has been moved temporarily to another URL. * HTTP 1.1: the description phrase has been changed to «Found» — current page has been found. ### 307 Moved Temporarily (HTTP 1.1 Only) A 307 redirect is the HTTP 1.1 successor of the 302 redirect. While the major crawlers will treat it like a 302 in some cases, it is best to use a 301 for almost all cases. The exception to this is when content is really moved only temporarily (such as during maintenance) and the server has already been identified by the search engines as 1.1 compatible. Since it's essentially impossible to determine whether the search engines have identified a page as compatible, it is generally best to use a 302 redirect for content that has been moved temporarily. ### Other redirection types There are also some other types of redirection: Meta Refresh or JavaScript redirection that are executed on the page level rather than the web server level. This is what a typical Meta Refresh redirect looks like: It's best not to use these types of redirect, as they are often used by spammers and doorway pages. Besides, they pass little to none of the [link juice](https://www.trustcorp.net/link-juice). ### Examples of using redirects **Redirecting your domain to a non-www URL:** ``` RewriteCond %{HTTP_HOST} ^www.site\.com$ [NC] RewriteRule ^(.*)$ http://site.com/$1 [R=301,L] ``` **Redirecting your domain to a www URL:** ``` RewriteCond %{HTTP_HOST} ^site\.com$ [NC] RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L] ``` To choose which URL to make canonical, consider: * which URL ranks higher in the SERPs; * which URL is more represented in the index. **Redirecting your domain to URL without a slash** When developing a website, it's important to choose whether you want to add a slash to the links, because the search engines consider the links — [www.site.com/cat1](http://www.site.com/cat1/) and — [www.site.com/cat1](http://www.site.com/cat1) to be different. Then, you'll have to add the following code: **To delete the slash from the URLs:** ``` RewriteCond %{HTTP_HOST} (.*) RewriteCond %{REQUEST_URI} /$ [NC] RewriteRule ^(.*)(/)$ $1 [L,R=301] ``` **To add the slash to the URLs:** ``` RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_URI} !(.*)/$ RewriteRule ^(.*[^/])$ $1/ [L,R=301] ``` **To redirect the user from one page to another:** ``` Redirect 301 /oldpage.html http://www.site.com/newpage.html ``` **Redirecting the main page duplicates** This code ensures that if there are multiple versions of the direct link to the main page (index, etc.), they will all redirect to the canonical main page: ``` RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^/]+/)*(default|index|main)\.(html|php|htm)\ HTTP/ [NC] RewriteRule ^(([^/]+/)*)(default|main|index)\.(html|php|htm)$ http://www.site.com/$1 [L,R=301] ``` **Redirecting catalogs** If the URLs reflect the structure of a catalog, changes in the catalog will lead to changes in the URLs. In this case, use the following redirect: ``` RewriteRule ^(.*)/old-catalog/(.*)$ $1/new-catalog/$2 [R=301,L] ``` But, if the URL of the previous catalog comes right after the name of a domain: [www.site.com/old-catalog](http://www.site.com/old-catalog/), use this code: ``` RewriteRule old-catalog /(.*) / old-catalog /$1 [R=301,L] ``` If you've switched platforms or a CMS and only the URLs' extension has changed, use this redirect: ``` RedirectMatch 301 (.*)\.php$ http://www.site.com$1.html ``` Examples of using redirection to avoid duplicate pages **Redirecting (sub-)domains** In case you've bought several domains with multiple TLDs, OR used a subdomain to develop a new website and forgot to block it from being indexed by the search engines. Either way, you have to set up redirection to the root domain: ``` RewriteCond %{HTTP_HOST} !^www\.site\.com RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L] ``` That way, all the (sub-)domains like [www.site.org](http://www.site.org), [www.site.net](http://www.site.net), test.site.com, will redirect to [www.site.com](http://www.site.com) **Deleting multiple slashes/hyphens from the URLs** Sometimes, user can accidentally type in multiple slashes, e.g. [www.site.com/catalog////page-1.html](http://www.site.com/catalog////page-1.html). In this case, you have to set up a 301 redirect to a page with a single slash [www.site.com/catalog/page-1.html](http://www.site.com/catalog/page-1.html): ``` RewriteCond %{REQUEST_URI} ^(.*)//(.*)$ RewriteRule . %1/%2 [R=301,L] ``` In the same way, you can set up a redirect from a URL with multiple hyphens (for example, [www.site.com/catalog/page---1.html](http://www.site.com/catalog/page---1.html)) to [www.site.com/catalog/page-1.html](http://www.site.com/catalog/page-1.html): ``` RewriteCond %{REQUEST_URI} ^(.*)--(.*)$ RewriteRule . %1-%2 [R=301,L] ``` **Redirecting from any URL to a lowercase URL** The search engines notice the letter case, so it's best to have your URLs in lowercase. If you haven't developed your website with this in mind, you can use this PHP-script: ``` $lowerURI=strtolower($_SERVER['REQUEST_URI']); if($_SERVER['REQUEST_URI']!=$lowerURI) { header("HTTP/1.1 301 Moved Permanently"); header("Location: http://" . $_SERVER['HTTP_HOST'] . $lowerURI); exit(); } ``` How to move your website to a new domain? The optimal strategy for a 301 redirect According to the most popular search engines, the best strategy for moving to a new domain is: * mounting a page-by-page 301 redirect from the old site to the new one; * don't set up a redirect from robots.txt — add the Host directive to the new domain instead. In this case, the code on the former website will have to look something like this: ``` RewriteCond %{REQUEST_FILENAME} robots.txt$ [NC] RewriteRule ^([^/]+) $1 [L] RewriteCond %{HTTP_HOST} !^www\.site\.com RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L] а файл robots.txt для старого сайта: User-agent: Yandex Disallow: Host: newsite.com ``` ### Generating 301 redirects If you're not particularly tech-savvy, you can use the online services for generating basic redirects: <http://www.webconfs.com/htaccess-redirect-generator.php> <http://www.rapidtables.com/web/tools/redirect-generator.htm> Just enter your data and get a code for redirection between domains, directories, and URLs. #### How to test the 301 redirect With every change in 301 redirect, you need to test the site's performance: * whether it's working (check the main page); * go through the main sections and webpages of the site. ### 301 redirect VS Canonical — which one to use & when? Minor details aside, in order to understand what exactly do we want to say, Google offers some clear-cut rules. In very simple terms, this is how the search engines understand our commands: **301**: okay, google (or any other search engine), my page isn't there anymore and it's been permanently moved to a new URL. Please, delete the old link from the index, and pass the link juice to the new page. **Canonical**: okay, google, I've got multiple versions of the same page (or its content), so, please, index only the canonical version. I will keep the other versions for people to see, but don't index them, please, and pass the link juice to the canonical page. ### When is it better to use a 301 redirect? * This is a preferred method by default; * For pages, that have beenmoved permanently, or their URLs have been changed; * For domains, if you've moved your website to the new domain; * For 404 pages. For example, if a certain product has been deleted from the catalog, you can set up a redirect to a page with a similar product, or to the URL of the product category. ### When is it better to use rel= «canonical»? * If you can't set up 301 redirects or it won't be time-efficient; * For duplicate content, if you want to keep both versions (for example, pages with different clothing sizes); * When you have multiple URLs leading to the same page (catalog categories, pages for tracking the traffic from affiliate links, etc.); * For cross-domain resource sharing, if you want to transfer data between pages that have different origins (domains, for example). **To sum it up** Both solutions pass the link juice and both are ranked by Google equally. 301 redirect is a bit more preferred, though. ### Redirection mistakes * Redirection chains (avoid them to maximize the speed and the link juice flow of the website); * Using the wrong type of redirect (to make the right decision, you have to consider all the details); * Setting up internal redirects without rewriting the URLs of the links. You have to make sure that links on your website don't lead to pages with the redirect; * Redirecting to irrelevant pages/content. Your links should always lead either to similar pages or to the section of the site that included the requested page; * The wrong choice of either rel=canonical or 301 direct (see above); * Redirecting robots.txt (it's better to add the Host directive); * Any redirect, which doesn't lead to a page with a 200 status code (every link should lead to a properly working page with a 200 status response. Otherwise, don't confuse the crawlers and just show the 404 Error page). Hopefully, this 301 how-to guide will be your cheat sheet and will help you to use a 301 redirect on your website. If you have any questions, ask them down below. I will try my best to help you! Read also: [Essential on-site SEO factors](https://habr.com/en/post/455768/).
https://habr.com/ru/post/458108/
null
null
1,757
54.73
Defines attributes that apply to an entire JSP page. <%@ page [ language="java " ] [ extends="package.class " ] [ import= "{package.class | package.* }, ..."] [ session="true |false" ] [ buffer="none|8kb|size kb" ] [ autoFlush="true |false" ] [ isThreadSafe="true |false" ] [ info="text " ] [ errorPage="relativeURL " ] [ contentType="mimeType [ ;charset=characterSet ]" | "text/html ; charset=ISO-8859-1""text/html ; charset=ISO-8859-1" ] [ isErrorPage="true|false " ] %> <%@ page import="java.util.*, java.lang.*" %> <%@ page buffer="5kb" autoFlush="false" %> <%@ page errorPage="error.jsp" %> The page directive applies to an entire JSP file and any static files it includes with the Include Directive or <jsp:include>, which together are called a translation unit. Note that the page directive does not apply to any dynamic included files; see <jsp:include> for more information. You can use the page directive more than once in a translation unit, but you can only use each attribute, except import, once. (Because the import attribute is similar to the import statement in the Java programming language, you can use it more than once, just as you would use multiple import commands in the Java programming language) No matter where you position the page directive in the JSP file or included files, it applies to the entire translation unit. However, it is usually good programming style to place it at the top of the file. language="java " The scripting language used in scriptlets, declarations, and expressions in the JSP file and any included files. In JSP 1.0, the only allowed value is java. extends="package.class " The fully qualified name of the superclass of the Java class file this JSP file will be compiled to. Use this attribute cautiously, as it can limit the JSP engine's ability to provide a specialized superclass that improves the quality of the compiled file. import= "{package.class |package.* }, ..." A comma-separated list of one or more packages that the JSP file should import. The packages (and their classes) are available to scriptlets, expressions, declarations, and tags within the JSP file. You must place the import attribute before the tag that calls the imported class. If you want to import more than one package, you can specify a comma-separated list after import or you can use import more than once in a JSP file. session="true |false" Whether the client must join an HTTP session in order to use the JSP page. If the value is true, the session object refers to the current or new session. If the value is false, you cannot use the session object in the JSP file. The default value is true. buffer="none| 8kb|size kb" The buffer size in kilobytes used by the out object to handle output sent from the compiled JSP page to the client Web browser. The default value is 8kb. If you specify a buffer size, the output is buffered with at least the size you specified. autoFlush="true |false" Whether the buffered output should be flushed automatically when the buffer is full. If true (the default value), means that the buffer will be flushed. If false, means that an exception will be raised when the buffer overflows. You cannot set autoFlush to false when the value of buffer is none. isThreadSafe="true |false" Whether thread safety is implemented in the JSP file. The default value is true, which means that the JSP engine can send multiple requests to the page concurrently. If you use true (the default value), multiple threads can access the JSP page, and you must synchronize them. If you use false, the JSP engine sends client requests one at a time to the JSP page. info="text " A text string that is incorporated verbatim in the compiled JSP page. You can later retrieve the string with the Servlet.getServletInfo() method. errorPage="relativeURL " A pathname to a JSP file that this JSP file sends exceptions to. If the pathname begins with a /, the path is relative to the JSP application's document root directory and is resolved by the Web server. If not, the pathname is relative to the current JSP file. isErrorPage="true|false " Whether the JSP file displays an error page. If true, you can use the exception object, which contains a reference to the thrown exception, in the JSP file. If false (the default value), means that you cannot use the exception object in the JSP file. contentType="mimeType [ ;charset=characterSet ]" | "text/html;charset=ISO-8859-1 " The MIME type and character encoding the JSP file uses for the response it sends to the client. You can use any MIME type or character set that are valid for the JSP engine. The default MIME type is text/html, and the default character set is ISO-8859-1.
http://java.sun.com/products/jsp/tags/10/syntaxref10.fm7.html
crawl-001
refinedweb
782
63.7
Editing NVDL SchemasHistory | Edit Some complex XML documents are composed by combining elements and attributes from namespaces. Furthermore, the schemas that define these namespaces are not even developed in the same schema language. In such cases, it is difficult to specify in the document all the schemas that Editor offers support for editing NVDL schema files in the following editing modes: - Text editing mode - Allows you to edit NVDL schema files in a source editing mode, along with a schema design pane with two tabs that offer a Full Model View and Logical Model View. - Grid editing mode - Displays NVDL schema files in a structured spreadsheet-like grid. - Author editing mode - The visual Author mode is also available for Relax NG schema files, presenting them in a compact and easy to understand representation. For information about applying and detecting schemas, see Associating a Schema to XML Documents.
https://www.oxygenxml.com/doc/versions/19.0/ug-editor/topics/editing-nvdl-schemas.html
CC-MAIN-2017-39
refinedweb
147
55.27
Home >>C++ Tutorial >C++ Data Abstraction The procedure that involves delivering of only the essential details to the external world and all the internal details are hidden, in other words the representation of the essential details in the program only is known as the Data Abstraction in C++. Another definition of the data abstraction in C++ can be that it is a programming technique which generally depends on the interference separation and details of the program implementation. Here is a real life example that will help you understand it from a layman’s view: ordering a product online is very easy, you just select the item, add the address and pay for it. Then the item gets delivered to you within a promised time. Here is the catch, we know how the product will get to us but we are not aware of the fact that how the gets shipped to us or how is the pricing gets decided or how the payment gets to the merchant. Hence, it can be said that E-commerce website separates the implementation details from the external interface. An eminent level of abstraction is provided by the C++ programming language. For instance, the pow() function is generally used to calculate the power of a number despite the algorithm being unknown, the function follows this. Generally, if the users implement the class with public and private members in the C++ program then it is understood that it is an example of the data abstraction in C++. There are generally two ways by which data abstraction can be achieved: As we already known that classes are generally used to gather all the data members and member functions in to a single unit just by using the access specifiers. And from this fact you can conclude that the classes are used to achieve abstraction. The responsibility to determine that which data member will be visible outside and which will not, generally belongs to the class. Header file is generally known as another type of the abstraction. For instance, we have already discussed that the pow()function is used to calculate the power of a number despite the algorithm being unknown that is used in calculating the power. Here are the examples of the data abstraction in C++ that will help you in understanding the topic from a better point of view: #include <iostream> using namespace std; class add { private: int a, b, c; // private members public: void Sum() { cout<<"Enter Your first and Second number "; cin>>a>>b; c= a+b; cout<<"Sum of two number = "<<c<<endl; } }; int main() { add obj; obj.Sum(); return 0; } Here are some of the advantages of the abstraction that are listed below:
http://www.phptpoint.com/data-abstraction-in-cpp/
CC-MAIN-2021-10
refinedweb
452
51.92
______________________________________________________________________________ expat − Creates an instance of an expat parser object package require tdom expat ?parsername? ?-namespace? ?arg arg .. xml::parser ?parsername? ?-namespace? ?arg arg .. _________________________________________________________________ The parser created with expat or xml::parser (which is just another name for the same command in an own namespace) are able to parse any kind of well-formed XML. The parsers are stream oriented XML parser. This means that you register handler scripts with the parser prior to starting the parse. These handler scripts are called when the parser discovers the associated structures in the document being parsed. A start tag is an example of the kind of structures for which you may register a handler script. The parsers do not validate the XML document. They do parse the internal DTD and, at request, external DTD and external entities, if you resolve the identifier of the external entities with the -externalentitycommand script (see there). Additionly, the Tcl extension code that implements this command provides an API for adding C level coded handlers. Up to now, there exists the parser extension command "tdom". The handler set installed by this extension build an in memory "tDOM" DOM tree, while the parser is parsing the input. It is possible to register an arbitrary amount of different handler scripts and C level handlers for most of the events. If the event occurs, they are called in turn. -namespace Enables namespace parsing. You must use this option while creating the parser with the expat or xml::parser command. You can’t enable (nor disable) namespace parsing with <parserobj> configure .... -final boolean This option indicates whether the document data next presented to the parse method is the final part of the document. A value of "0" indicates that more data is expected. A value of "1" indicates that no more is expected. The default value is "1". If this option is set to "0" then the parser will not report certain errors if the XML data is not well-formed upon end of input, such as unclosed or unbalanced start or end tags. Instead some data may be saved by the parser until the next call to the parse method, thus delaying the reporting of some of the data. If this option is set to "1" then documents which are not well-formed upon end of input will generate an error. -baseurl url Reports the base url of the document to the parser. -elementstartcommand script Specifies a Tcl command to associate with the start tag of an element. The actual command consists of this option followed by at least two arguments: the element type name and the attribute list. The attribute list is a Tcl list consisting of name/value pairs, suitable for passing to the array set Tcl command. Example: proc HandleStart {name attlist} { puts stderr "Element start ==> $name has attributes $attlist" } $parser configure -elementstartcommand HandleStart $parser parse {<test id="123"></test>} This would result in the following command being invoked: HandleStart text {id 123} -elementendcommand script Specifies a Tcl command to associate with the end tag of an element. The actual command consists of this option followed by at least one argument: the element type name. In addition, if the -reportempty option is set then the command may be invoked with the -empty configuration option to indicate whether it is an empty element. See the description of the -reportempty option for an example. proc HandleEnd {name} { puts stderr "Element end ==> $name" } $parser configure -elementendcommand HandleEnd HandleEnd test -characterdatacommand script Specifies a Tcl command to associate with character data in the document, ie. text. The actual command consists of this option followed by one argument: the text. It is not guaranteed that character data will be passed to the application in a single call to this command. That is, the application should be prepared to receive multiple invocations of this callback with no intervening callbacks from other features. proc HandleText {data} { puts stderr "Character data ==> $data" } $parser configure -characterdatacommand HandleText $parser parse {<test>this is a test document</test>} HandleText {this is a test document} -processinginstructioncommand script Specifies a Tcl command to associate with processing instructions in the document. The actual command consists of this option followed by two arguments: the PI target and the PI data. proc HandlePI {target data} { puts stderr "Processing instruction ==> $target $data" } $parser configure -processinginstructioncommand HandlePI $parser parse {<test><?special this is a processing instruction?></test>} HandlePI special {this is a processing instruction} -notationdeclcommand script Specifies a Tcl command to associate with notation declaration in the document. The actual command consists of this option followed by four arguments: the notation name, the base uri of the document (this means, whatever was set by the -baseurl option), the system identifier and the public identifier. The notation name is never empty, the other arguments may be. -externalentitycommand script Specifies a Tcl command to associate with references to external entities in the document. The actual command consists of this option followed by three arguments: the base uri, the system identifier of the entity and the public identifier of the entity. The base uri and the public identifier may be the empty list. This handler script has to return a tcl list consisting of three elements. The first element of this list signals, how the external entity is returned to the processor. At the moment, the three allowed types are "string", "channel" and "filename".", or the path to the external entity to be read in case of type "filename". Behind the scene, the external entity referenced by the returned Tcl channel, string or file name will be parsed with an expat external entity parser with the same handler sets as the main parser. If parsing of the external entity fails, the whole parsing is stopped with an error message. If a Tcl command registered as externalentitycommand isn’t able to resolve an external entity it is allowed to return TCL_CONTINUE. In this case, the wrapper give the next registered externalentitycommand a try. If no externalentitycommand is able to handle the external entity parsing stops with an error. proc externalEntityRefHandler {base systemId publicId} { if {![regexp {^[a-zA-Z]+:/} $systemId]} { regsub {^[a-zA-Z]+:} $base {} base set basedir [file dirname $base] set systemId "[set basedir]/[set systemId]" } else { regsub {^[a-zA-Z]+:} $systemId systemId } if {[catch {set fd [open $systemId]}]} { return -code error \ -errorinfo "Failed to open external entity $systemId" } return [list channel $systemId $fd] } set parser [expat -externalentitycommand externalEntityRefHandler \ -baseurl "" \ -paramentityparsing notstandalone] $parser parse {<?xml version=’1.0’?> <!DOCTYPE test SYSTEM "test.dtd"> <test/>} externalEntityRefHandler test.dtd {} External entities are only tried to resolve via this handler script, if necessary. This means, external parameter entities triggers this handler only, if -paramentityparsing is used with argument "always" or if -paramentityparsing is used with argument "notstandalone" and the document isn’t marked as standalone. -unknownencodingcommand script Not implemented at Tcl level. -startnamespacedeclcommand script Specifies a Tcl command to associate with start scope of namespace declarations in the document. The actual command consists of this option followed by two arguments: the namespace prefix and the namespace URI. For an xmlns attribute, prefix will be the empty list. For an xmlns="" attribute, uri will be the empty list. The call to the start and end element handlers occur between the calls to the start and end namespace declaration handlers. -endnamespacedeclcommand script Specifies a Tcl command to associate with end scope of namespace declarations in the document. The actual command consists of this option followed by the namespace prefix as argument. In case of an xmlns attribute, prefix will be the empty list. The call to the start and end element handlers occur between the calls to the start and end namespace declaration handlers. -commentcommand script Specifies a Tcl command to associate with comments in the document. The actual command consists of this option followed by one argument: the comment data. proc HandleComment {data} { puts stderr "Comment ==> $data" } $parser configure -commentcommand HandleComment $parser parse {<test><!-- this is <obviously> a comment --></test>} HandleComment { this is <obviously> a comment } -notstandalonecommand script This Tcl command is called, if the document is not standalone (it has an external subset or a reference to a parameter entity, but does not have standalone="yes"). It is called with no additional arguments. -startcdatasectioncommand script Specifies a Tcl command to associate with the start of a CDATA section. It is called with no additional arguments. -endcdatasectioncommand script Specifies a Tcl command to associate with the end of a CDATA section. It is called with no additional arguments. -elementdeclcommand script Specifies a Tcl command to associate with element declarations. The actual command consists of this option followed by two arguments: the name of the element and the content model. The content model arg is a tcl list of four elements. The first list element specifies the type of the XML element; the six different possible types are reported as "MIXED", "NAME", "EMPTY", "CHOICE", "SEQ" or "ANY". The second list element reports the quantifier to the content model in XML Syntax ("?", "*" or "+") or is the empty list. If the type is "MIXED", then the quantifier will be "{}", indicating an PCDATA only element, or "*", with the allowed elements to intermix with PCDATA as tcl list as the fourth argument. If the type is "NAME", the name is the third arg; otherwise the third argument is the empty list. If the type is "CHOICE" or "SEQ" the fourth argument will contain a list of content models build like this one. The "EMPTY", "ANY", and "MIXED" types will only occur at top level. Examples: proc elDeclHandler {name content} { puts "$name $content" } set parser [expat -elementdeclcommand elDeclHandler] $parser parse {<?xml version=’1.0’?> <!DOCTYPE test [ <!ELEMENT test (#PCDATA)> ]> <test>foo</test>} test {MIXED {} {} {}} $parser reset $parser parse {<?xml version=’1.0’?> <!DOCTYPE test [ <!ELEMENT test (a|b)> ]> <test><a/></test>} elDeclHandler test {CHOICE {} {} {{NAME {} a {}} {NAME {} b {}}}} -attlistdeclcommand script Specifies a Tcl command to associate with attlist declarations. The actual command consists of this option followed by five arguments. The Attlist declaration handler is called for *each* attribute. So a single Attlist declaration with multiple attributes declared will generate multiple calls to this handler. The arguments are the element name this attribute belongs to, the name of the attribute, the type of the attribute, the default value (may be the empty list) and a required flag. If this flag is true and the default value is not the empty list, then this is a "#FIXED" default. proc attlistHandler {elname name type default isRequired} { puts "$elname $name $type $default $isRequired" } set parser [expat -attlistdeclcommand attlistHandler] $parser parse {<?xml version=’1.0’?> <!DOCTYPE test [ <!ELEMENT test EMPTY> <!ATTLIST test id ID #REQUIRED name CDATA #IMPLIED> ]> <test/>} This would result in the following commands being invoked: attlistHandler test id ID {} 1 attlistHandler test name CDATA {} 0 -startdoctypedeclcommand script Specifies a Tcl command to associate with the start of the DOCTYPE declaration. This command is called before any DTD or internal subset is parsed. The actual command consists of this option followed by four arguments: the doctype name, the system identifier, the public identifier and a boolean, that shows if the DOCTYPE has an internal subset. -enddoctypedeclcommand script Specifies a Tcl command to associate with the end of the DOCTYPE declaration. This command is called after processing any external subset. It is called with no additional arguments. -paramentityparsing never|notstandalone|always "never" disables expansion of parameter entities, "always" expands always and "notstandalone" only, if the document isn’t "standalone=’no’". The default ist "never" -entitydeclcommand script Specifies a Tcl command to associate with any entity declaration. The actual command consists of this option followed by seven arguments: the entity name, a boolean identifying parameter entities, the value of the entity, the base uri, the system identifier, the public identifier and the notation name. According to the type of entity declaration some of this arguments may be the empty list. -ignorewhitecdata boolean If this flag is set, element content which contain only whitespaces isn’t reported with the -characterdatacommand. -ignorewhitespace boolean Another name for -ignorewhitecdata; see there. -handlerset name This option sets the Tcl handler set scope for the configure options. Any option value pair following this option in the same call to the parser are modifying the named Tcl handler set. If you don’t use this option, you are modifying the default Tcl handler set, named "default". -noexpand boolean Normally, the parser will try to expand references to entities defined in the internal subset. If this option is set to a true value this entities are not expanded, but reported literal via the default handler. Warning: If you set this option to true and doesn’t install a default handler (with the -defaultcommand option) for every handler set of the parser all internal entities are silent lost for the handler sets without a default handler. -useForeignDTD <boolen> If <boolen> is true and the document does not have an external subset, the parser will call the -externalentitycommand script with empty values for the systemId and publicID arguments. This option must be set, before the first piece of data is parsed. Setting this option, after the parsing has started has no effect. The default is not to use a foreign DTD. The default is restored, after reseting the parser. Pleace notice, that a -paramentityparsing value of "never" (which is the default) suppresses any call to the -externalentitycommand script. Pleace notice, that, if the document also doesn’t have an internal subset, the -startdoctypedeclcommand and enddoctypedeclcommand scripts, if set, are not called. parser configure option value ?option value? Sets configuration options for the parser. Every command option, except -namespace can be set or modified with this method. parser cget ?-handlerset name? option Return the current configuration value option for the parser. If the -handlerset option is used, the configuration for the named handler set is returned. parser free Deletes the parser and the parser command. A parser cannot be freed from within one of its handler callbacks (neither directly nor indirectly) and will raise a tcl error in this case. parser get -specifiedattributecount|-idattributeindex|-currentbytecount|-currentlinenumber|-currentcolumnnumber|-currentbyteindex -specifiedattributecount Returns the number of the attribute/value pairs passed in last call to the elementstartcommand that were specified in the start-tag rather than defaulted. Each attribute/value pair counts as 2; thus this corresponds to an index into the attribute list passed to the elementstartcommand. -idattributeindex Returns the index of the ID attribute passed in the last call to XML_StartElementHandler, or -1 if there is no ID attribute. Each attribute/value pair counts as 2; thus this corresponds to an index into the attributes list passed to the elementstartcommand. -currentbytecount Return the number of bytes in the current event. Returns 0 if the event is in an internal entity. -currentlinenumber Returns the line number of the current parse location. -currentcolumnnumber Returns the column number of the current parse location. -currentbyteindex Returns the byte index of the current parse location. Only one value may be requested at a time. parser parse data Parses the XML string data. The event callback scripts will be called, as there triggering events happens. This method cannot be used from within a callback (neither directly nor indirectly) of the parser to be used and will raise an error in this case. parser parsechannel channelID Reads the XML data out of the tcl channel channelID (starting at the current access position, without any seek) up to the end of file condition and parses that data. The channel encoding is respected. Use the helper proc tDOM::xmlOpenFile out of the tDOM script library to open a file, if you want to use this method. This method cannot be used from within a callback (neither directly nor indirectly) of the parser to be used and will raise an error in this case. parser parsefile filename Reads the XML data directly out of the file with the filename filename and parses that data. This is done with low level file operations. The XML data must be in US-ASCII, ISO-8859-1, UTF-8 or UTF-16 encoding. If applicable, this is the fastest way, to parse XML data. This method cannot be used from within a callback (neither directly nor indirectly) of the parser to be used and will raise an error in this case. parser reset Resets the parser in preparation for parsing another document. A parser cannot be reseted from within one of its handler callbacks (neither directly nor indirectly) and will raise a tcl error in this cases. A script invoked for any of the parser callback commands, such as -elementstartcommand, -elementendcommand, etc, may return an error code other than "ok" or "error". All callbacks may in addition return "break" or "continue". If a callback script returns an "error" error code then processing of the document is terminated and the error is propagated in the usual fashion. If a callback script returns a "break" error code then all further processing of every handler script out of this Tcl handler set is suppressed for the further parsing. This does not influence any other handler set. If a callback script returns a "continue" error code then processing of the current element, and its children, ceases for every handler script out of this Tcl handler set and processing continues with the next (sibling) element. This does not influence any other handler set. expatapi, tdom SAX
http://man.sourcentral.org/f14/n+expat
CC-MAIN-2018-43
refinedweb
2,893
53.81
[Blog Map]. Note: The Open XML API Version 2.0 will contain additional functionality that enables easier processing of the XML contents of parts. However, it's not available yet. It will be the best way to query documents in the future, but for now, the approach presented in this post is interesting. I have a set of goals for this code: To accomplish these goals, I used the following approach: Using Nominal Tuple Types as Part of the API In my functional programming tutorial, I talked about the idea of using nominal types for tuples instead of anonymous types. I use this approach in this code. Using types that are defined using the automatic property syntax helps in shortening the code. I use the approach that is in the spirit of a dynamic language - to shorten the code, make all properties public, and use automatic properties. This makes the properties read/write. However, all the code is written in a functional style - no objects are mutated after creation and initialization. So, for example, here is the definition of the class that encapsulates a comment for a paragraph: public class Comment { public int Id { get; set; } public string Text { get; set; } public string Author { get; set; } } Annotating Each Part with its XDocument This code uses the approach that each part in the package is annotated with its XDocument. There is a small extension method to get an XDocument for a part. If the annotation already exists, we avoid de-serializing, and just return the annotation.; Then, any time we have a part, we can call GetXDocument(), knowing that we aren’t causing any extra work for deserializing: return (string)mainDocument .StyleDefinitionsPart .GetXDocument() .Root .Elements(w + "style") .Where(style => (string)style.Attribute(w + "type") == "paragraph" && (string)style.Attribute(w + "default") == "1") .First() .Attribute(w + "styleId"); An Extension Method on MainDocumentPart To retrieve the paragraphs of a document, I implemented an extension method on the MainDocumentPart. The signature is: public static IEnumerable<Paragraph> Paragraphs( this MainDocumentPart mainDocument) Using the Open XML SDK, you navigate to the MainDocumentPart. Once you have it, you can call the Paragraphs method: using (WordprocessingDocument doc = WordprocessingDocument.Open(filename, false)) foreach (var p in doc.MainDocumentPart.Paragraphs()) { Console.WriteLine("Style: {0} Text: >{1}<", p.StyleName.PadRight(16), p.Text); foreach (var c in p.Comments()) Console.WriteLine( " Comment Author:{0} Text:>{1}<", c.Author, c.Text); } The Paragraphs method returns a collection of Paragraph objects. The Paragraph class contains some public properties, such as Text and Style. It also contains a method, Comments, which returns a collection of comments for each paragraph. (A paragraph, of course, can have more than one comment.) It also contains a public property ParagraphElement, which contains the <w:p> XElement for the paragraph. The code takes care of the housekeeping necessary to aggregate multiple <w:t> text nodes into a single string containing the text of the paragraph. If you run this code on the small document that is attached to this post, you see: Printing Paragraphs =================== Style: Normal Text: >Paragraph 1< Comment Author:Eric White (DPE) Text:>Comment 1< Comment Author:Eric White (DPE) Text:>Comment 2< Style: Normal Text: >Paragraph 2< Style: Normal Text: >Paragraph 3< Comment Author:Eric White (DPE) Text:>Comment 3< Style: Normal Text: >Paragraph 4< There is also an extension method on the MainDocumentPart to retrieve the default style for the document. Its signature: public static string DefaultStyle(this MainDocumentPart mainDocument) The Paragraphs method uses this extension method, but you are free to use it too. The code to retrieve the paragraphs and comments for a Wordprocessing document is 88 lines long. An Extension Method on WorksheetPart To retrieve the rows and cells of a spreadsheet, I implemented an extension method on the WorksheetPart. The signature is: public static IEnumerable<Row> Rows( this WorksheetPart worksheetPart) The Row class contains a method, Cells, which returns a collection of the cells for the row. Its signature: public IEnumerable<Cell> Cells() Using the Open XML SDK, you navigate to the WorksheetPart. Once you have it, you can iterate through the rows, and for each row, you can iterate through the cells in the row: Console.WriteLine("Contents of Spreadsheet"); Console.WriteLine("======================="); using (SpreadsheetDocument doc = SpreadsheetDocument.Open(filename, false)) WorksheetPart worksheet = (WorksheetPart)doc.WorkbookPart.GetPartById("rId1"); foreach (var row in worksheet.Rows()) Console.WriteLine(" RowId:{0}", row.RowId); Console.WriteLine(" Spans:{0}", row.Spans); foreach (var cell in row.Cells()) {); } Console.WriteLine(); When run on the small spreadsheet attached to this post, you see: Contents of Spreadsheet ======================= RowId:1 Spans:1:3 Column:A1 ColumnId:A Value:1 Column:B1 ColumnId:B Value:2 Column:C1 ColumnId:C Value:3 RowId:2 Column:A2 Type:s Value:0 SharedString:>a< Column:B2 SharedString:>b< Column:C2 SharedString:>c< If you want to retrieve just the cells for a specific column, you can just tack on a Where method call to the Cells method call: WorksheetPart worksheet = (WorksheetPart)doc.WorkbookPart.GetPartById("rId1"); foreach (var row in worksheet.Rows()) Console.WriteLine(" RowId:{0}", row.RowId); Console.WriteLine(" Spans:{0}", row.Spans); foreach (var cell in row.Cells().Where(c => c.ColumnId == "B"))); The code to get rows and cells for spreadsheets is 69 lines long. This meets my definition of “short”. Eric White shows some interesting use of LINQ to XML to query an Open XML document. Something that is Eric White on Linq and Open XML. Eric White has been posting some great code samples lately, including Ce post n’a pas voulu partir ni jeudi ni vendredi, le voici donc ! Des mise à jours à n’en plus finir (November 14, 2008 - I've updated my approach for querying Open XML documents using LINQ to XML. You [Blog Map] Excel has a very cool feature where you can declare that a range of cells is a table. It is What about the body of the extension method Paragraphs? Is there some optimal way of writing that? Hi Sten, did you take a look at the Paragraphs extension method (attached to this blog post?) I think it is pretty optimal: public static IEnumerable<Paragraph> Paragraphs(this MainDocumentPart mainDocument){ XNamespace w = ""; XName r = w + "r"; XName ins = w + "ins"; string defaultStyle = mainDocument.DefaultStyle(); return from p in mainDocument.GetXDocument() .Root.Element(w + "body").Descendants(w + "p") let styleNode = p.Elements(w + "pPr").Elements(w + "pStyle") .FirstOrDefault() select new Paragraph(mainDocument) { ParagraphElement = p, StyleName = styleNode != null ? (string)styleNode.Attribute(w + "val") : defaultStyle, // in the following query, need to select both the r and ins elements // to assemble the text properly for paragraphs that have tracked changes. Text = p.Elements() .Where(z => z.Name == r || z.Name == ins) .Descendants(w + "t") .StringConcatenate(element => (string)element) };} Thanks a bunch! I'm trying to use OOXML API 2 and its class structure instead of XDocument, but in either case the extraction of Text tip is very helpful. Can one easily write and alter the content as easily? Hi Pierre, The code that I've presented here doesn't write or alter content. It's not hard to modify documents, but it is more work than querying them. I'm working on some ideas around this - I'll blog them as soon as I have time. :) -Eric If you would like to receive an email when updates are made to this post, please register here RSS Trademarks | Privacy Statement
http://blogs.msdn.com/ericwhite/archive/2008/07/09/open-xml-sdk-and-linq-to-xml.aspx
crawl-002
refinedweb
1,231
55.74
From: David Abrahams (dave_at_[hidden]) Date: 2007-06-14 20:29:59 on Thu Jun 14 2007, Eric Niebler <eric-AT-boost-consulting.com> wrote: > David Abrahams wrote: >> on Thu Jun 14 2007, Eric Niebler <eric-AT-boost-consulting.com> wrote: >>> I also seem to recall that there was still some question as to whether >>> the use of the reference violated the ODR, if the reference is not >>> optimized away. I don't think we ever got an answer to that question. >> >> I don't think there ever was any question about that. At least, I >> never had a question about it. References don't have any identity >> distinct from the object they reference. > > > Funny you should say that, since you were the one who sent the "Are > references subject to ODR?" question to the std reflector: > >> David Abrahams <dave_at_[hidden]> >> 4/1/06 >> reply-to c++std-core_at_[hidden] >> to undisclosed-recipients <> >> date Apr 1, 2006 9:02 AM >> subject Are references subject to ODR? >> To: C++ core language mailing list >> Message c++std-core-11390 >> >> Is the following program legal? >> >> // a.cpp >> int x; >> int& y = x; >> >> // b.cpp >> extern int x; >> int& y = x; >> >> My impression is that it is not legal because references are not >> explicitly given an exemption in 3.2/5 >> >> If I am right, should it be so? It's hard to imagine how this could >> be a problem. > > > The result of ensuing discussion was that a reference is a pointer, and > so they *do* have identity. That's a different question than I thought you were referring to. I would put the reference itself in an unnamed namespace, and I'm pretty sure that handles it. As long as the reference refers to something not in an unnamed namespace: template<typename T> struct static_const { static T const value; }; template<typename T> T const static_const<T>::value = {}; struct placeholder {}; namespace // <<============== HERE { placeholder const &_1 = static_const<placeholder>::value; } template<class T> void foo(T const &) { // use _1 here. OK, _1 refers to the same object // in all translation units. It doesn't matter that the // reference is has a different identity; the object is // the same: static_const<placeholder>::value } -- Dave Abrahams Boost Consulting The Astoria Seminar ==> Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/06/123380.php
CC-MAIN-2020-29
refinedweb
394
56.35
Created attachment 34408 [details] password protected xlsx I will attach TestThisEncryption.xlsx. The password is: Test001!! This opens ok in Excel but Poi fails to decrypt it using the password. It seems to happen if the xlsx is created in one Excel version and then later password protected with a recent version of Excel on Windows. If anyone has any insight into why this workbook won't decrypt in Poi 3.15 that would be appreciated. This is the decryptor check that fails for me: def checkPassword(fileName: String, password: String) { val fs = new POIFSFileSystem(new FileInputStream(fileName)) val info = new EncryptionInfo(fs) val decryptor = Decryptor.getInstance(info) println(s"$fileName password works? ${decryptor.verifyPassword(password)}") } the file also can't be opened via Libre Office 5 ... is the password wrong? Thanks Andreas for checking. The xlsx opens for in Excel 2016 on Mac. The xlsx itself was created by a colleague using a Windows install of Excel. hm .. the cipher of the header (aes128) doesn't match the cipher of the verifier (aes256) ... I am able to open the workbook with "Test001!!" in Excel 2013 on Windows 7. Up till now the implementation used the cipher and hashes of the header and verifier interchangeably, as they were always the same in the test files. So I guess, now we need to use the verifier data (keydata element) for validating the key, and the header data for en-/decryption. I'll play around with it ... (and it looks like Libre Office made the same mistake ...) Created attachment 34410 [details] preliminary patch for decrypting This is a preliminary patch for decrypting. Currently encryption doesn't work. When both work, I'll add another customized encryption test, to produce a similar file as the original failing one ... Thank you for providing the test file - patch applied via r1767399 I guess this won't be the last issue around encryption, as the agile encryption leaves a few more possibilities open on what to go wrong next :|
https://bz.apache.org/bugzilla/show_bug.cgi?format=multiple&id=60320
CC-MAIN-2021-31
refinedweb
333
58.28
SymPy is a free open source library in Python, used for Symbolic and Mathematical Computation. In this tutorial, we will cover how to effectively use the Python SymPy Library to perform mathematical operations on expressions, Symbolic computations and other various algebraic properties. Symbolic Computation with SymPy SymPy allows us represent data and values in a different manner, that is 100% accurate. Let’s take a look at the following example. import math print(math.sqrt(7)) 2.6457513110645907 What this does, is returns an approximation of the sqaure root of 7. It’s not a 100% exact answer. The more decimal places you include, the more accurate the answer gets. So technically this will never be an exact representation. Now let’s try the same thing, but with SymPy instead. import sympy print(sympy.sqrt(7)) sqrt(7) What SymPy returned here, was not a mathematical value, but rather a symbolic representation of the sqrt of 7. Thus, it can never be inaccurate. We can use this symbolic representation in other calculations, with a higher degree of accuracy. Here’s an another interesting snippet. import sympy import math print(math.pow(math.sqrt(7), 2)) print(sympy.sqrt(7) ** 2) 7.000000000000001 7 We took the sqrt root of 7 with both libraries, then applied a power of 2 on both. SymPy returned the correct value, whereas the math library didn’t (because it works off approximations). Other interesting things SymPy can do, is return simplified representations. sympy.sqrt(8) 2*sqrt(2) The next thing to cover in this Tutorial, is how to create and manipulate expressions in Python SymPy. Mathematical Expressions with SymPy SymPy gives us the ability to create actual expressions using variables (called symbols in SymPy). For example, if you want to represent the equation 2x2 + 4x + 5 in Python, how would you do so? You could represent such an equation in a string, but it would be of little use without SymPy. SymPy gives us the ability to create and handle such expressions that can actually be used in computation. Python SymPy has various functions which could perform actual operations such as differentiation and integration, and return the resultant expression! (We’ll take a look at these later in the tutorial) So how do we create such an expression? Well it’s simple. First we define a symbol, which represents an unknown/variable like “x” or “y”. from sympy import symbols x, y = symbols("x y") We can now use x and y to construct expressions. No need to make a string or anything, just do it normally. Let’s try out a few examples. from sympy import symbols x, y = symbols("x y") expr1 = 2*x + 4*y # 2x + 2y expr2 = 2*(x**2) + 5 # 2(x^2) + 5 expr3 = x**2 + y**2 # x^2 + y^2 Modifying SymPy Expressions We can even do cool stuff like modify these expressions, by adding, subtracting or multiplying constants and other symbols from it. print(expr1) expr1 += 5 print(expr1) 2*x + 4*y 2*x + 4*y + 5 SymPy will even automatically adjust the expressions as you modify it. For example, if you have an expression "2x", and you add "x" to it, you might expect it to just concatenate and become 2x + x. But nope, SymPy will automatically simplify it to 3x. Here’s a short code snippet showing this. expr = 2*y + x + 5 print(expr) expr -= x print(expr) expr += 3 print(expr) x + 2*y + 5 2*y + 5 2*y + 8 Substituting values into Expressions Now that we know how to create and modify expressions in SymPy, let’s take a look at how to evaluate them. What we need to do here, is use the subs() function to substitute the symbols with numerical values. expr1 = 2*(x**2) + 5 # 2(x^2) + 5 print("Expr 1: (x=2) ", expr1.subs(x, 2)) Expr 1: (x=2) 13 Now let’s try this for an expression which has multiple unknowns. expr2 = 2*x + 4*y # 2x + 2y print("Expr 2: (x=2, y=4) ", expr2.subs( {x: 2, y: 4} )) Expr 2: (x=2, y=4) 20 We can also substitute them with other symbols, if that’s what we want. x, y, a, b = symbols("x y a b") expr3 = x**2 + y**2 # x^2 + y^2 print(expr3.subs({x: a, y: b})) a**2 + b**2 Solving Equations with SymPy (root finding) What’s even cooler, is that SymPy can literally solve entire equations for you, and return the root(s). No need to code the entire thing yourself, just use a single function along with the SymPy expression and a list of root(s) will be returned. Let’s try to use the SymPy solve() on the expression x2 - x - 6. import sympy from sympy import symbols x, y = symbols("x y") expr = x**2 - x - 6 print(sympy.solve(expr)) [-2, 3] Let’s try this out on another expression. expr = (x + 1)*(x - 1)*(x + 5) print(sympy.expand(expr)) print(sympy.solve(expr)) x**3 + 5*x**2 - x - 5 [-5, -1, 1] In the next section, we will cover several such operations, and explain how you can use them in your SymPy code. Trigonometry with SymPy Trigonometry is a pretty deal in most of Mathematica, so you might be wondering how you can include trigonometric functions and identities within your mathematical expressions. Let’s take a look! Here’s a simple expression, sin(x). Let’s plug in a few values, just to verify the output. (These input values of x are in radians) from sympy import symbols, expand, solve, trigsimp from sympy import sin, cos, tan, acos, asin, atan, sinh, cosh, tanh, sec, cot, csc x, y = symbols("x y") expr = sin(x) print(expr.subs(x, 0)) print(expr.subs(x, (90/57.3))) 0 0.999999993306926 As you have probably already noticed from the imports in the previous code example, SymPy gives us access to all the different variants of the trigonometric functions. “asin” represents inverse sin (arc sin), whereas “sinh” represents hyperbolic sin. The same pattern applies to cos and tan as well. Another cool thing we can do SymPy, is to simplify trigonometric identities. (No need to memorize any of them anymore!) SymPy will automatically attempt to simplify any expression you pass into the trigsimp() function. Let’s take a look at a few examples. x, y = symbols("x y") print(trigsimp(sin(x)**2 + cos(x)**2)) print(trigsimp(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4)) print(trigsimp(sin(x)*tan(x)/sec(x))) 1 cos(4*x)/2 + 1/2 sin(x)**2 Differentiation and Integration in SymPy The last main topic we will discuss in this Tutorial is how to differentiate and integrate expressions in Python SymPy. Differentiation In order to differentiate expressions using SymPy, we can use the diff() method on any expressions Depending on the type of parameters passed to diff(), it will return the differential of that expression. The first parameter for diff() is the expression that you want to differentiate. The second parameter is what you wish to derivative with respect to. e.g: “Derivate with respect to x”. Let’s take a look at an example. expr = x**2 - x - 6 print(diff(expr, x)) 2*x - 1 If you wish to differentiate an expression multiple times, there are two ways of doing so. The first method is by simply including the symbol you wish to derivate with respect to, multiple times. expr = x**4 print(diff(expr, x)) print(diff(expr, x, x)) print(diff(expr, x, x, x)) 4*x**3 12*x**2 24*x Alternatively, you can include an integer “n” as a parameter after the symbol, and it will differentiate it “n” times. expr = x**4 print(diff(expr, x, 1)) print(diff(expr, x, 2)) print(diff(expr, x, 3)) 4*x**3 12*x**2 24*x Furthermore, you can also differentiate with respect to multiple symbols within a single diff() function. expr = y*x**2 + x*y**2 print(diff(expr, x, y)) 2*(x + y) Integration with SymPy It’s time for Integration with SymPy. Let’s take a look at how we can integrate various mathematical expressions and obtain their integral forms. Similarly to how differentiation works, we have a function for integration in SymPy called integrate(). It also takes the same parameters, which is the symbol by which we wish to integrate the expression by. Let’s take a look at some examples. from sympy import symbols, diff, integrate x, y = symbols("x y") expr = 2*x print(integrate(expr, x)) x**2 A slightly more complex expression being integrated. expr = x**2 - x - 6 print(integrate(expr, x)) x**3/3 - x**2/2 - 6*x Here’s what happens when you integrate an expression with multiple symbols, with respect to just one of those symbols. expr = x + y + 2 print(integrate(expr, x)) x**2/2 + x*(y + 2) For more information, check out our dedicated tutorial on Differentiation and Integration with SymPy. Python SymPy Tutorial Series Is that all there is to SymPy though? Of course not! There are many more advanced features and functions yet to be covered. Here is a complete breakdown of all the individual concepts that we have covered in Python SymPy on this website, along with links to their dedicated tutorials. - Python SymPy Installation Guide - Mathematical Expressions in SymPy - Differentials and Integrals in SymPy - Converting Strings to SymPy Expressions - Trigonometric Functions in SymPy - Matrices in SymPy - Limits with SymPy This marks the end of the Python SymPy Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the tutorial content can be asked in the comments section below.
https://coderslegacy.com/python/sympy-tutorial/
CC-MAIN-2022-40
refinedweb
1,648
55.44