ClojureFX

A Clojure extension to make working with JavaFX simpler and more idiomatic. It allows you to naturally work with stock JavaFX components through use of extended protocols. Should a feature be missing, you can easily extend ClojureFX in your own codebase or just fall back to standard JavaFX methods.

Next stable release

The next stable release is planned for December 2018.

Features

- FXML loading and scripting
- Automatic FXML controller generation
- Declarative EDN GUI structure compilation
- Simplified event binding (bind a Clojure function to an event trigger)

Take a look at the ClojureFX Manual.

FXML loading and controller generation

    (require '[clojurefx.fxml :as fxml])

    (defn stage-init [instance]
      ;; Every function called from JavaFX gets handed the controller instance.
      nil)

    (def maincontent
      (fxml/load-fxml-with-controller
       (io/resource "fxml/mainwindow.fxml") ;; Load an FXML file
       'example.core/stage-init))           ;; and define the namespace and init function.

Declarative UI programming

    (def superbutton
      (compile [Button {:text "Close"
                        :action #'close-handler}]))

    (compile [VBox {:id "TopLevelVBox"
                    :children [Label {:text "Hi!"}
                               Label {:text "I'm ClojureFX!"}
                               HBox {:id "HorizontalBox"
                                     :children [Button {:text "OK"}
                                                superbutton]}]}])
https://chiselapp.com/user/zilti/repository/clojurefx/index
I need some help with a code I'm writing. The user inputs some letters, a loop is run to check whether each is a vowel or consonant, then when the user is done it returns the total number of vowels and consonants. The problem I'm having is that the loop is supposed to stop when the user presses Ctrl + X, but I can't get it to work. For now, as a test, I have it stop when the user enters 'z'. Any help would be VERY appreciated!

    #include <iostream>
    using namespace std;

    void is_vowel(int & vowel, int & cons, char ch);

    int main(){
        int vowel = 0, cons = 0;
        char ch = ' ';
        cout << "Please enter some letters (Stop with Ctrl + X): ";
        while (ch != 'z'){
            cin >> ch;
            is_vowel(vowel, cons, ch);
        }
        cout << vowel << endl;
        cout << cons << endl;
        return 0;
    }

    void is_vowel(int & vowel, int & cons, char ch){
        if (ch == 'a'||ch == 'e'||ch == 'i'||ch == 'o'||ch == 'u'||
            ch == 'A'||ch == 'E'||ch == 'I'||ch == 'O'||ch == 'U'){
            vowel++;
        }else if((ch >= 'a' && ch <= 'z') || (ch >= 'A' && ch <= 'Z')){
            cons++;
        }
    }
https://www.daniweb.com/programming/software-development/threads/399157/stopping-a-loop-with-ctrl-x
and this is how I call it:

Code:

    import threading
    import logging
    import Queue
    import smbus

    class Bus():
        def __init__(self, t):
            self.bus = smbus.SMBus(1)
            if t is 't':
                self.addr = 0b0110100
            if t is 'p':
                self.addr = 0b001011
            self.main = 0b10100000
            self.backup = 0b10100001
            self.current = 1
            self.cmd = self.main
            self.lock = threading.Lock()
            logging.info("Bus established")

        def switch(self):  # to switch between primary and backup diodes
            if self.current == 0:
                self.current = 1
                cmd = self.backup
            elif self.current == 1:
                self.current = 0
                cmd = self.main
            with self.lock:
                self.bus.write_byte(self.addr, cmd)

        def read(self):  # to read the difference between the two photodiodes
            with self.lock:
                d0 = self.bus.read_byte(self.addr)
                d1 = self.bus.read_byte(self.addr)
                d2 = self.bus.read_byte(self.addr)
            d = d0<<16 + d1<<8 + d2
            d = (d >> 6)
            d &= 0xFFFF
            return d

This is the ADC I'm using:

Code:

    myBus = Bus("t")
    theta_reading = myBus.read()
    print theta_reading

I put in some print statements to try to debug it and it looks like it stops after it reads d0.
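One pitfall worth flagging in the read() method: in Python, + binds more tightly than <<, so an unparenthesized d0<<16 + d1<<8 + d2 does not assemble three bytes into one 24-bit word. A minimal, dependency-free sketch of the difference (the byte values here are made up for illustration):

```python
d0, d1, d2 = 0x12, 0x34, 0x56

# As posted: + binds tighter than <<, so this parses as
# d0 << (16 + d1) << (8 + d2), not as three packed bytes.
wrong = d0 << 16 + d1 << 8 + d2

# Parenthesized (OR-ing the shifted bytes): the intended 24-bit value.
right = (d0 << 16) | (d1 << 8) | d2

print(hex(right))      # → 0x123456
print(wrong == right)  # → False
```

So even once the reads succeed, the three result bytes need explicit parentheses (or |) to be combined correctly.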
https://www.raspberrypi.org/forums/viewtopic.php?f=44&t=79394&p=563453
develamit

I just took a generic n x m matrix. It should work for square matrices as well... You can pass the matrix as an argument too.

    import numpy as npy

    def printMatrix():
        m = npy.matrix('9 7 6; 5 4 3; 6 7 2; 3 4 8; 6 7 2; 3 4 8')
        (num_rows, num_cols) = m.shape
        #print m
        for c in xrange(num_cols):
            row_index = 0
            for c1 in reversed(xrange(0, c+1)):
                if row_index >= num_rows:
                    continue
                print(m[row_index, c1]),
                row_index += 1
            print("\n"),
        for r in xrange(1, num_rows):
            col_index = num_cols - 1
            for r1 in xrange(r, num_rows):
                if col_index < 0:
                    continue
                print(m[r1, col_index]),
                col_index -= 1
            print("\n"),

    def main():
        # print matrix in a weird way :)
        printMatrix()

    if __name__ == '__main__':
        main()

Exploring all permutations was easier for this. I wrote this in Python using the numpy library.

- develamit, March 23, 2014
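For comparison, the same anti-diagonal traversal can be sketched in plain Python without numpy; `antidiagonals` is a hypothetical helper name chosen for illustration, not part of the original answer:

```python
def antidiagonals(m):
    """Return the anti-diagonals of a row-major matrix (list of lists),
    in the order the numpy version above prints them: first the
    diagonals starting along the top row, then those starting down
    the last column, each walked top-right to bottom-left."""
    rows, cols = len(m), len(m[0])
    out = []
    # Diagonals starting at (0, c), moving down-left.
    for c in range(cols):
        out.append([m[r][c - r] for r in range(min(c + 1, rows))])
    # Diagonals starting at (r, cols - 1), moving down-left.
    for r in range(1, rows):
        out.append([m[r + i][cols - 1 - i] for i in range(min(rows - r, cols))])
    return out

print(antidiagonals([[9, 7, 6],
                     [5, 4, 3],
                     [6, 7, 2]]))
# → [[9], [7, 5], [6, 4, 6], [3, 7], [2]]
```

This works for any rectangular matrix, square or not, and returns the diagonals as lists instead of printing them.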
https://careercup.com/user?id=6392367875620864
Unused import statement ... is wrong?

Dmd, created November 26, 2013 21:00

Sometimes PyCharm 3.0.1 marks an import statement (like 'import os') as unused, but if I comment it out, the program doesn't run, saying 'NameError: name 'os' is not defined'. What might be happening?

Also, you can suppress the inspection: press "Alt+Enter" and go to Optimize imports -> Suppress for statement (take a look at the web help).

It's very common to do "from pylab import *" – I know that's Bad, but the thing is, this has never caused issues until PyCharm 3.0.1 for me. (And claims that this is the CORRECT way to import pylab.) Here's a minimal program that shows the problem:

    import os
    from pylab import *
    os.makedirs('foo')

For that code, PyCharm claims that "import os" is unused, but if you comment it out, os is not defined.

Having a similar issue, saying it's unused... Not sure what's going on.

same here

Hi Thx2190,
You are creating an alias for numpy and calling it xyz, but then you are trying to address it via np. The problem is not related to PyCharm. You should either use import numpy as np instead of the first line of code, or xyz.random.seed(1) instead of the second line of code.

Same issue here:
Unused import statement 'import logging'
Unused import statement 'import re'

@Pablosolar R What PyCharm version do you use? Try File | Invalidate Caches / Restart... | Invalidate and Restart

Sergey Karpov I am using 2020.1, build #PC-201.6668.115. I tried the invalidate option, but same behaviour:
Unused import statement 'import logging'
Unused import statement 'import re'
Thank you!

Could you share a screenshot showing that in your code and the location of the file in your project?

Sergey Karpov sure, there you have an example. There is something curious... this behaviour only seems to happen in this class.

Any chance you could share the whole code snippet with the subclasses of that class?
I think it's something code-specific and probably related to the constructor overriding, but I'm not 100% sure.

The above issue (class uses of an import not being recognized as imports) is happening with me, too:

Initial import:
Initial class definition (truncated):
A method within the class using the problematic import:

Note, this is a PyQt5 application class, and is very long - like, 800 lines long. It's for a corporate client too, so I can't share the full code. But I'll gladly answer questions to the best of my ability if you need more context.

Hello Matthew,
If it is possible, please provide me with a simplified code example, so I will be able to reproduce it on my side and investigate it. Thank you in advance.

That's a known issue with Django 3.1. Adding import os to settings.py should do the trick.

> Can I write in Russian here?))))) Or only English?

Better use English so other users will understand.

Oh, excuse me, I've created a separate thread about this..... I thought it would be better... It's here. Thank you for the quick reply!

No problem. In fact, it's better to create a new ticket/post when the problem is different.
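The aliasing confusion discussed in this thread comes down to how `import X as Y` binds names: afterwards, only the alias you wrote exists in the namespace. A minimal sketch using the standard library's math module (chosen here so the example runs without numpy installed):

```python
import math as m   # binds the module object to the name 'm' ONLY

print(m.sqrt(16))  # → 4.0  (works: use the alias you defined)

try:
    math.sqrt(16)  # 'math' itself was never bound in this scope
except NameError as e:
    print(e)       # → name 'math' is not defined
```

The same applies to `import numpy as xyz`: after that line you must write `xyz.random.seed(1)`, not `np.random.seed(1)`, because `np` was never bound.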
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205815349-unused-import-statement-is-wrong-
Subject: Re: [boost] [contract] noexcept and throwing handlers
From: Lorenzo Caminiti (lorcaminiti_at_[hidden])
Date: 2016-08-07 09:06:08

On Fri, Aug 5, 2016 at 6:57 PM, Josh Juran <jjuran_at_[hidden]> wrote:
> On Aug 5, 2016, at 2:13 PM, Lorenzo Caminiti <lorcaminiti_at_[hidden]> wrote:
>
>> I would like to discuss contracts and exception specifications (e.g.,
>> noexcept) specifically with respect to what Boost.Contract does.
>
> I'm not following C++ standards development closely, but perhaps my comments may be of some use.
>
>> However, noexcept can be
>> considered part of the function contract (namely, the contract that
>> says the function shall not throw)
>>
>> For example consider a noexcept function fclose() that is called from
>
> I'm going to use POSIX's close() as an example in this discussion.
>
>> a destructor without a try-catch statement (correctly so because
>> fclose is declared noexcept). If fclose() is now allowed to throw when
>> its preconditions fail, that will cause the destructor ~x() to throw
>> as well?!
>
> If passed -1 as an argument, close() will set errno = EBADF and return -1. This is documented behavior. A function that throws an exception when passed -1 (or under any other circumstance) is not the close() function from POSIX and should be given a different name (if it's declared in the global namespace, at least).

POSIX close() has a "wide contract" (i.e., no preconditions). For my example, I made up my own fclose() that has a narrow contract instead (i.e., it has one or more preconditions). The fclose() I use in my example is not POSIX close(). Don't worry about POSIX close(), just assume there's some sort of fclose() defined as I stated it for the sake of the example I am illustrating.

>> void fclose(file& f) noexcept
>> [[requires: f.is_open()]]
>
> I don't see why this should be treated differently than
>
>     void fclose(file& f) noexcept {
>         if ( ! f.is_open() ) throw failed_precondition();
>     }

For example, this strategy will not work for class invariants failing at destructor entry, because it will make the destructors throw (which is not a good idea in C++, and in fact destructors are implicitly declared noexcept in C++11):

    class x {
        bool invariant() const { ... }
        ~x() { if(!invariant()) throw failed_invariants(); ... }
    };

This topic has been discussed at great length for C++... quoting N4160 (but also N1962, P0380, many other contract proposals for C++, and previous discussions):

``What can a broken contract handler do? The most reasonable default answer appears to be std::terminate, which means "release critical resources and abort". One may wish to override the default in order to do special logging. We believe that it is not a good idea to throw from a broken contract handler. First, throwing an exception is often an action taken inside a function, in a situation where it cannot satisfy the postcondition, to make sure that class invariants are preserved. In other words, if you catch an exception or are in the middle of stack unwinding, you can safely assume that all objects' invariants are satisfied (and you can safely call destructors that may rely on invariants). If it were possible to throw from the handlers, this expectation would be violated. ...''

The above fclose() contract actually expands to something that calls a "precondition failure handler" functor:

    void fclose(file& f) noexcept {
        if (!f.is_open()) precondition_failure_handler(from_function);
        ...
    }
Where by default the handler terminates:

    precondition_failure_handler = [] (from) { std::terminate(); };

But programmers can redefine it to throw (as you suggested above, but beware of what N4160 points out, plus of how to program a throwing entry_invariant_failure_handler that shall not throw when from == from_destructor):

    precondition_failure_handler = [] (from) { throw failed_precondition(); };

Or to log and exit:

    precondition_failure_handler = [] (from) { some-logging-code; exit(-1); };

Or to take any other action programmers wish to take on precondition failure.

Note: Boost.Contract does something a bit more complex than the above (using set/get functions to not expose the handler functor directly to the users, try-catch statements to call the handlers, etc.). These details are omitted here for simplicity.

> The purpose of noexcept, as I understand it, is to ensure that a called function will under no circumstances throw an exception, sparing the caller the need to wrap it in try/catch. Adding a loophole that allows compiler-provided glue to throw, even when the function itself strictly speaking doesn't, invalidates this guarantee and undermines the utility of noexcept, in my opinion.

Yes, I agree. That is essentially the point I was trying to make and what Boost.Contract does--no loopholes.

> If the compiler can statically prove that the precondition is always satisfied, fine. But if not, then I'd prefer that the above code not compile unless `noexcept` is removed.

While desirable, this is not possible in practice.

1. First, it'd be great if the preconditions could be checked statically, but in general sometimes the preconditions will be satisfied and sometimes they will not, and the compiler will simply not know. Even if contracts were added to the language (as per N1962, P0380, etc.), static analysis tools will be able to statically check preconditions only some of the time, not all of the time.

2.
Second, note that the compiler (and most likely static analysis tools) will not know whether the action to take on contract failure is to throw or not, because they will only see a call to precondition_failure_handler(from_function) and not `throw failed_precondition()`. What such a failure handler call does is configured at run-time by the programmers, so it is probably not always deducible at compile-time by the compiler or static analysis tools.

Thanks,
--Lorenzo

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2016/08/230523.php
I think this might be useful for people that use custom build flags when building NetBSD. It is loosely based on <>.

## Building sets

Sometimes builds fail because things don't fit in 2.88M when built with certain optimizations. If that's your case, then do:

    # ./build.sh build distribution sets

instead of

    # ./build.sh release

You'll then have your sets in your RELEASEDIR (/usr/obj/releasedir).

## Build kernels

Do this for each kernel you want to build an install set for:

    # ./build.sh kernel=GENERIC releasekernel=GENERIC

## Finishing up

Then you may install over the network, or build install floppies/cdrom without optimization.
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/tutorials/how_to_build_install_sets__44___when_you_can__39__t_build_install_floppies.mdwn?f=h;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.1
EFnet Hits Turbulence

dalnet splits (Score:1)

Wow (Score:1)

The future (Score:2)

Re:IRC = warez + child porn (Score:1)

Woops (Score:1)

yes, real problem (Score:4)
Note that this is just one aspect of recent EFnet suckage --

Efnet & @HOME (Score:1)

Blackened (Score:5)
oldcharred.blackened.com: AMD K6-2 @ 333mhz, 128M of RAM, 18G 10k-rpm SCSI primary, 9G secondary. This server houses the original irc2.blackened.com EFnet server, the largest EFnet server in the world before it de-linked. Still running with the original IRCD, I, O, C/N lines and TCM. It's a pity that, in blackened's case, volunteer workers such as mjr are forced to abandon what they love to do, because of immature kiddies flooding the network with useless garbage.

Actually, it depends on the IRC network (Score:1)

Re:The future (Score:1)
Don't get me wrong, I'm glad they do, but I can see fewer and fewer companies willing to give up anything for IRC.

Re:Actually, it depends on the IRC network (Score:1)

Re:Blackened (Score:1)
Blackened was really holding up most of EFNet. Yes, the rise of script kiddies has contributed to EFNet's current state, but really, if you had to place a date on it, I'd pick Blackened leaving.

Re:The future (Score:1)
> fact that no ircops participate in such matters.
This has always been the case, it always worked before, but now I suppose there is a better class of asshole wandering the internet.
and what do you think IRCops should do? should they do what they are meant to, and look after servers?
or should they pander to thousands of whining users that accidentally lost ops in a channel, and to hell with looking after a high-load server?
channel ownership, in my experience, only works in a small network where there is some feasibility of controlling the channels. so instead of having users fighting amongst themselves, the users would fight amongst themselves AND the IRCops, wasting their time and taking it away from their real purpose of looking after the servers.
well, there is my 2p

Re:Blackened (Score:1)

Semi-Stable IRC in GlaxyNet (Score:1)
Oh, and to an earlier post: it was a channel devoted to an on-line gaming guild where we could get together and co-ordinate attacks. Wow, something to do with IRC that doesn't involve warez, kiddie pr0n or programming. Now that is weird.

"Is EFnet Dying?" (Score:1)
but either way, i dont really use irc anyway, too many idiots spreading VBS viruses in every room (what idiot will download "girl-sucks-horse.jpg.VBS"???)

Isn't it all just part of irc? (Score:1)

Re:Efnet & @HOME (Score:1)

Re:Semi-Stable IRC in GlaxyNet (Score:1)

Best place for EFNet news (Score:1)

Well, (Score:1)
EFnet has been a great resource for me for computer help, etc....though I've been told once or twice to RTFM. But the people there have been generally more helpful than irritating, so I'm upset to see them getting DoS attacks, etc. I hope for everyone's sake that they can push through the "turbulence" and get things back in good order.
"It is well that war is so terrible, lest we grow too fond of it."

IRC Networks (Score:2)
Dalnet is where I go for silly chat that doesn't matter, and I think the services they offer (registered nicks, channels, etc.) are nice. Efnet is good for various "scenes." Efnet is where the mp3 groups hang out, and I also hang out in the semi-official Ars Technica channel, along with #litestep. IMHO, I find much more intelligent conversation on Efnet than I do on Dalnet.
The article that's linked to does point out the obvious, and Efnet is horrible about script kiddies. The DoS attacks are numerous, and I've been packetted for takeover purposes. On the other hand, Dalnet is rampant with various trojans/virii such as Life Stages, script.ini, Judgement Day, etc. Though Dalnet has done a pretty good job of implementing server-side protection against these. In the end, I'll still hang out on both networks because different IRC networks serve different purposes.

Re:IRC = warez + child porn (Score:1)
Sad but true... There are thousands of people on IRC who know the answers to any question you can think of, from HTML to freebsd help.. And of course lots of people to make friends with! Although as a friend of mine once said, "IRC is great as it gives you the opportunity to meet new people from different cultures from all over the world, and somehow find a way to piss them off."
madmax@efnet irc.ins.net.uk admin.

Re:IRC = warez + child porn (Score:1)
(...) a wake-up call to legislators who believe the Internet is controllable by legislation. De-centralization puts it beyond arm's reach, and even if they could target every server being used, it would be a futile exercise as copycat protocols spring up. The same could be said for napster. Napster is not much unlike an irc-server. I'm still waiting for the MPAA or RIAA to start the lawsuits on IRC networks for "distributing intellectual property". Just as with napster, the exchange of files on IRC is a peer-to-peer issue: the IRC server only transmits the transfer-requests. Cheers, Pi

IRC is kind of dying in general... (Score:1)

Re:The future (Score:1)
> asshole wandering the internet.
I agree with everything you said - i've got the same opinions about what the opers should deal with - and what they quietly should ignore. EFNet is EFnet because of the no-ownership of channels, because of the structure, because of the history and because of (after all) its users.
I'm just asking whether that no-ownership policy may be an indirect cause for the attacks.

Re:IRC = warez + child porn (Score:1)

Re:dalnet splits (Score:1)
My wonderful EFnet, that I've enjoyed for so many years (7 or 8), is finally crumbling. This is worse than the big split back in the stealth.net days. That was some mess, too. Don Head Linux Mentor

Re:It's a size problem (Score:1)

EFnet never dies. It just changes form. (Score:2)

Re:The future (Score:1)

Re:"Is EFnet Dying?" (Score:1)
Same idiots that automatically download all files and use an OS where the extension is hidden by default?

I find it amusing.. (Score:1)

Re:IRC = warez + child porn (Score:1)
I'm afraid you're wrong. The IRC does transmit the transfer-request. If you send someone a file over DCC, what happens is that your client sends a CTCP DCC message to that client containing your IP-address and a port-number. The client on the other side connects to this port-number and receives the file. Cheers, Pi

Are you sure you're not talking about Napster? (Score:2)
I can't think why any decent-minded person would support the use of a protocol which is used almost solely for illegal, and quite frankly disgusting, purposes.
IRC is an open protocol [newnet.net] for distributed "real time" text conferencing and file sharing. This potent idea continually gets reinvented. AOL's Instant Messenger and Jabber [jabber.com] are the latest incarnations of real time conferencing. As the original killer Internet application, email has flourished as a means of conferencing and file sharing. It propagated to all platforms that supported TCP/IP networking. The problem with email is that it is asynchronous. By default, it provides no notice that a message has arrived at its destination, much less was seen by the intended recipient. IRC is a way to extend the conferencing capabilities of email. You know instantly whether your message was received. For small groups, this method works well.
If AOL's IM improves (for some values of "improves") on real time conferencing, Napster, Gnutella and Freenet extend file sharing to be pervasive and searchable. And yes, unlicensed files are traded with wild abandon on those networks too. Hustler magazine is printed on paper, just like currency, the Bible and Neal Stephenson's Snow Crash. I wouldn't go back to using stone tablets because the medium can be abused. Of course, it is easy to pick on the senile old aunt of conferencing technologies. There is no doubt that script kiddies and p0rn abound in seedy, misspelled chat rooms. It is a shame to condemn this important technology simply because of the activities of a few reprobates. If one could judge the whole by its parts, we'd have banned Usenet years ago. You may not choose to use IRC because of the few bad apples, but you'd do well not to quickly condemn all IRCers. There is a lot of useful information tucked away in those intangible rooms. Cheers

IRC history facts straightened out (Score:2)
Send flames to someone else.

Re:IRC = warez + child porn (Score:1)
WTF? Actually i run my own business, and my job requires knowledge of unix, HTML, cisco, etc etc etc. And where to find pictures of small girls eh? Just because you use IRC to find pics like that doesn't mean everybody does. And to find people with like-minded illegal "hobbies" with which to engage in DCC sessions. And to put on /ignore sad bastards with nothing better to do but try and get up people's noses due to a lack of interpersonal skills and a real life?

Re:The future (Score:1)
> class of asshole wandering the internet.
I think it's time to quote [whoever said it first]: "The amount of intelligence on the Internet is a constant; unfortunately, the population keeps increasing." It's actually quite sad, but I don't think one can expect people to behave like they did when IRC was young.
In the beginning, people who used IRC were among the few to even know about the Internet, but today the users on IRC represent an almost randomly chosen group of people. And there _will_ be some who won't respect other people, and there _will_ be some who will cause these kinds of problems. I think the possibility that people will start to behave nicely has to be ruled out. Those days are gone.

Re:IRC = warez + child porn (Score:2)
Dude, I've found two decent jobs thanks to IRC (and I wouldn't rule out taking a third). It ain't all warez and kiddiez. Check out #php or #c sometime....

A fundamental problem with the IRC protocol (Score:2)
That's right, it wouldn't.

Re:A fundamental problem with the IRC protocol (Score:1)
I doubt of course that such a massive change would ever be able to be implemented on a network such as efnet - half the servers run hybrid ircd, and half run comstud ircd - and nearly all are different versions - co-ordinating changes in ircd is near impossible.
madmax@efnet irc.ins.net.uk admin

Re:IRC = warez + child porn (Score:1)

Re:The future (Score:1)
Of course, your attitude is fairly standard among IRC users. I'm not sure why, either. IRC is just a protocol, like ICQ. Or Oscar. It's not something special. It's not a way of life. If you think it is, get out and see the sun sometime.

They need to... (Score:1)

Re:Well, (Score:2)
When turbulence happens, a branch of the network sometimes gets shaken out. We had a network of servers in the United States and a slew of machines in the Czech Republic. There were a few problems with timezones (the only time I could consistently talk to the opers over there was around 8am EST). Over the summer, a few US servers dropped (some IRCops left because they no longer had as much free time, families, etc) and the Czech network became its own autonomous network. When IRC is fun, it's a lot of fun.
Unfortunately, there are always a few snotty users who think it's their divine right to pester the IRCops for weeks on end or packet a server. At some point, the IRCops have had a bad day and things like *!*@*.home.com get banned. If there were some way we could uniquely lock out a user by retinal scan or a Bad Breath breathalyzer test when they connect (anyone up for re-writing identds to do this?), we'd love to have it. However, we're stuck with broad bans in order to keep ourselves sane. It's not nice to the users, but there isn't a better solution yet. Particularly in the cases where an IRCer is good at social engineering, we have problems. Some of those users have managed (through various subtle methods) to get O: lines on our servers, and our network then goes to pieces until we figure out what happened. To be honest, I don't understand what causes people to feel the need to do that every three months, but it happens. I used to spend a lot of time on irc.cs.cmu.edu (EFNet) and irc.cis.pitt.edu before that (they allowed bots!), before they were packeted so many times that our upstream cut them off. To me, that was when EFNet suddenly lost its appeal, because it became a chore to find a server where I could keep a stable client connection. I believe that EFNet will continue to exist as increasingly smaller numbers of large servers, as IRCops get tired of the problems and the fun (or power trips) becomes less rewarding.

Re:IRC Networks (Score:2)
Funny, when I left EFNet (c. 1994), Undernet seemed to have better-adjusted people than EF.... Then again, Undernet died the day they chose to make some of the admin channels a "general help" channel. --

Re:Woops (Score:1)
Dumbshit.
EFnet has been shit for a while now (Score:1)

Re:It's a size problem (Score:2)
It's been my experience that IRC servers tend to work better when in a star topology (or something close to it) than in the "spanning tree" they chose to describe in RFC-1459 (now there's an outdated document--anyone know of a completely RFC-1459-compliant network? I didn't think so :-) ). When you have a massive routing-only server, with all the other servers connected to it, it helps a great deal. Mind you, it may work better with multiple central servers, but YMMV. --

Re:Woops (Score:1)

Re:dalnet splits (Score:2)
Maybe, but we of the Undernet, when we split away from EFNet, said the same thing years ago :-). When Undernet had similar issues, about three years ago, we said the same too, and started up Yet Another Network (tm). That YAN died, and Undernet is still around. Most EFNet admins are at least halfway clued; give it time.... --

Re: efnet (Score:1)
No it isn't. D == DEBUGMODE 2.8.21+RF+CSr30. irc.Prison.NET ACeEHiKMpRtX CS3abBDEfIjKlLmMnNoPrsStTu TS3ow It's the ACeEHiKMpRtX bit which contains what options were #defined. Do you see a D in that?
madmax@efnet irc.ins.net.uk admin

Re:It's a size problem (Score:1)
There must be some way of revising the IRC protocol so that there are at least two distinct paths from A to B in most cases. When one of the servers in a path goes down, the servers will just resync themselves to find an alternative connection path. This will greatly reduce the painfulness of netsplits, as netsplits won't happen very often with this strategy. But of course, all this does is to provide more buffer in case of DoS problems. It doesn't really address the source of the problem... but IMHO trying to prevent/solve DoS problems on IRC is like trying to cure cancer that has spread to every other organ in your body... ---

Re:It's a size problem (Score:1)
Yeah, there probably is a way. But getting 36 servers to upgrade at once is, well, impossible.
If you start trying to implement it so it's compatible with servers which use the old protocol, you make the implementation near impossible and a real head-f**k...
madmax@efnet irc.ins.net.uk admin

Re:IRC = warez + child porn (Score:1)

Re:dalnet splits (Score:1)

Re:IRC = warez + child porn (Score:1)
Sorry, but I don't "warez child porn" as you so "eloquently" put it. As a parent and a long-standing net user, I am merely concerned about the sheer volume of filth that pervades IRC, and any legitimate conversations (if there are any, which I can't seem to find) could just as easily be done using mailing lists. It seems to me that you're the one with the problem, after all I didn't lash out at you and call you a "sick bastard" did I? Feeling any residual guilt are we?

the "EFnet" is going down rumor.... (Score:2)
"irc.ef.net will be permanently delinked because of DoS attacks"
irc.ef.net is just one EFnet server. It does not mean that all of EFnet is going down or is in serious trouble... Servers come and servers go, but EFnet has survived and will continue to. Josh

Re:Semi-Stable IRC in GlaxyNet (Score:1)
Like another guy said, #shadowrealm is great for movies =) [galaxynet.org] for more information and a list of servers.

Re:EFnet has been shit for a while now (Score:1)

Re:Nothing to see here, move along. (Score:2)
A couple of servers changed their policy. As far as I understand, from my limited experience, there's nothing strange or extraordinary about that. From the semi-official EFNet site [efnet.org]: The official page doesn't even talk about how far the connection policies on most of the remaining U.S. servers have been tightened! From what I've seen and heard, most people these days only have a hope in hell of using east/west glbx.net, prison.net and emory.edu. This is not just "business as usual".

irc admins (Score:1)

Re:Nothing to see here, move along.
(Score:1)
ShadoWolf

IRC's design is incompatible with today's internet (Score:2)
IMHO IRC's biggest flaw is the fact that its servers are networked and all channels rely on that network to function. If one server goes down, you can lose half or more of the channel's users. And servers go down a lot. It would seem to me the only reason anyone would put up with the flakiness of IRC is because they are either part of the problem, or because they enjoy the thrill of brutal internet strife. IRC's problems are what prompted me to make something 'different'. A chat program which did not allow channel operators, banning, kicking or any of the things which typically spur DOS attacks on the servers themselves. Each room is an independent server and nicknames need only be unique per room, not per network (and no network at all to rely on). Servers are linked similar to the Web where it gives an address and port to connect to. Best of all, it's graphical and it's free.
They are a threat to free speech and must be silenced! - Andrea Chen

/irc[2]?.home.com/ gone... (Score:1)
It has been down for a little more than a month(?) and I wonder if a lot of the fallover from that server is overloading, and causing grief with others. That is, for the few that allow @home people to connect, and don't tell them to connect to their own irc server (which no longer exists). Damn the man. Maybe we could convince the nice Havenco people to host an EFNet ircd!!

EF (Score:2)
The more things change the more they stay the same.

Re:IRC = warez + child porn (Score:1)
Fine, there's filth on IRC. Well, there's plenty of filth available on the Net via other means - WWW, Usenet. Do we just shut down the whole Net? I agree totally with what some of the other posters to this thread have said: there really are rewarding places to be on IRC. The people in the programming channels on some of the networks are insanely knowledgeable.
Some of the chat channels have really great people to talk to (and meet, if you're in the same meatspace area). Many organisations use IRC to plan, meet, play, whatever. Besides, mailing lists just aren't the same as real-time chat, and chat is more suitable for some discussions. As for the undesirable stuff (warez, child porn), well, it's there, that's life. In my experience, though, it doesn't tend to just fall in your lap, so presumably (not needing warez, and not being the slightest bit interested in child porn), you need to go looking for it. Finally, since you are a parent, may I point you to the standard disclaimer many servers on many networks carry (including the server I'm an IRCop on): IRC is an unmoderated medium. Anyone who leaves their children (thinking sub-teens in particular) on IRC without keeping an eye on them is asking for trouble, IMHO.

Re:It's a size problem (Score:2)
Actually, it's impossible to keep it backwards-compatible with at least the Undernet implementation (well, as of 2.9.32 or something) of the ircd; there cannot be more than one route to a given server from another server.
--

Re:Well, (Score:3)
If you get shunned by Efnet come to Undernet. #linux and #linuxhelp and #techies are perfectly great places to find info. Undernet is alive and well and relatively trouble free. Kintanon

Re:The future (Score:2)
I know a great many people with T1 and such style links, who would GLADLY run a server and allow 50-100 people to connect. EFNet won't hear of it.

Re:IRC is kind of dying in general... (Score:3)

On limiting to local clients... (Score:2)
FWIH, most of the servers that restrict their usage as such do so for one of two reasons: 1) DoS attacks or other related abuse, or 2) bots. I don't mean to sound like a troll here, but when you link to an IRC network, those are risks you take. And you don't solve them by effectively banning *@*. If a server on any other network did that, just imagine how fast they'd be delinked.
Yet EFnet puts up with it.
=================================

Re:It's a size problem (Score:2)
I think this will work because there will be a single route for communication between the "old" and the "new" sides; so at first, this will look like an addendum to the current tree-structured EFnet. Then as more and more of the servers upgrade, this "addendum" will grow and eventually the "old" side will shrink to zero, then we can remove the gateway server. Think this is workable?
---

Re:IRC is kind of dying in general... (Score:2)
Also...there's one thing I've noticed about IRC (I've been using efnet now for about 6-7 years)...if you don't piss anybody off, and no one else in your channel ever pisses anybody off, you rarely have takeover attempts! It's really quite amazing! Imagine that.

Anyone got a good list of IRC servers? (Score:2)

Re:IRC is kind of dying in general... (Score:2)
>bots? It's amazing how many bots you need now
>just to hold a channel.
How about none? I'm a regular on one of the original IRC channels on EFNET and we haven't had a bot protecting the channel for years.

Re: efnet (Score:2)
That's what the almighty grep is for, my friend.

(Score:2)
I know that I could responsibly run a server (ie: not abuse it, just leave it alone) and allow the 30 or 40 people that I know to connect, hassle free. Why will EFNet not allow private servers? They only allow things that have HUGE pipes. Simply make it a condition that a single abuse by an IRCOp on a small server is grounds for immediate and permanent detachment.

Re:IRC's design is incompatible with today's inter (Score:2)
As someone who has worked extensively on IRC server to server protocols, I can confirm what you're saying: ircd scales rather badly. It only has gone up to the current levels thanks to the massive increases in bandwidth and server memory.
If IRC wants to keep the model of a single network with a unique channel namespace, and a completely decentralized network of servers, then alternate routes and cyclic links become a necessity. EFnet has been going mostly in the opposite direction: a few strong central hubs, and lots of leaves. That works better than the random-spanning-tree that IRC started with.

Re:It's a size problem (Score:2)
Only if the gateway-side abstracts the rest of the network and makes it seem like a single über-server.
--

This is news? (Score:2)
However... There are other IRC networks. Dalnet, Undernet, and Yiffnet [yiff.net] to name a few. I've found that Yiffnet [yiff.net] is the most stable in terms of server splits, and the opers are actually friendly for a change. /join #furry and hang on to your keyboard. Since Yiffnet [yiff.net] is a semi-roleplaying network, there are, IIRC, two bots used to store descriptions of your character. If you ever sign on just for the hell of it, for the love of God go read the website first![1] If, after reading the site, you decide to logon, try not to make your nick look like you're an EFnet refugee. The people there get on your case about it, and there's no real reason to have all the extra stuff[2] anyway. After logging on, read the MOTD for the server you're on, then if you feel up to it,
--
[1] What? You thought those links were to show off my leet HTML skillz?
[2] You know, stuff like _^*=+ tacked on the end and 31337 Sp311In6?
--

how ironic.. (Score:2)

Why EFnet is so GREAT (Score:2)
Another thing that many don't realize is the freedom on EFnet. If I want to create a channel with no one present, I can. I get no message from ChanServ telling me bob1234 registered my channel at the server's conception. IRCOPs should be hands off. If there's a dispute or a takeover in a channel, let them work it out. It's these basic freedoms which make EFnet such a great idea. I do concede that EFnet is not at its pinnacle right now.
It has experienced massive DoS attacks, loss of servers, corrupt IRCops, and devastating takeovers. The IRCops are not to blame. They just went with the flow from a lack of rules. Sure, here's an i:line for this nice shell. Sure, I'll k-line that client even though it's not a bot or clone. Sure, I'll abuse my power for anything that might better myself. If the administration of EFnet cannot keep itself clean, what hope is there for a new EFnet without DoS attacks. It is a sad time for EFnet. We have come so far. I will stay on it until the end. mycroft@EFnet gimme a

Why this means EFnet is dying (Score:2)
But isn't an IRC network effectively dead when it ceases to be a reliable means for people to get online and chat? When was the last time you could get onto your EFnet channel of choice and have a conversation with the regulars? I bet if you did the math, EFnet would spend over 50% of its time with at least one major server split or with one or more heavily hit links. When I sit down and am totally unable to have a conversation with my friends for any more than a few minutes at a time, I consider it time to move on. The only way people are going to be happy again is if they migrate to a more stable network, and nobody is going to do that until EFnet finally kicks the bucket and its existing (stable) servers either join up with a real IRC network or shut down for good. Let EFnet die. It's functionally dead already.

Time to move to better protocol.. (Score:2)
Take a look at SILC, Secure Internet Live Conferencing [silc.pspt.fi]. It's designed with better network structure, isn't a braindead protocol, and as the name says it's designed with security in mind. And to me the best thing about it is that it's new and not finished yet. I can suggest new features to it, I can fix broken things in it, I can try to make it the best chat protocol there is.
SILC is the most serious IRC replacement I've seen so far: there's working server and client code, there's documentation, even comments in the code and the specs are in RFC drafts. The biggest reason for DOS attacks against IRC servers is (I'm pretty sure :) creating net splits and taking over channels with them. If we just design the protocol so that it is impossible to take over others' channels the network will be DOS safe (and it will be one happy chatting network ;)

irc.blackened.com - mjr's delink letter (Score:2)
"Even the big ones fall. [the-project.org]"

Re:"Is EFnet Dying?" (Score:2)
Most animals act according to an anticipation of a reward or in fear of some punishment. As a child, I learned quickly what my boundaries were -- e.g. exactly what I could do without being beaten. Most parents raise their children in a violence free environment that completely negates half of our behavioral instincts -- hell, parents would be arrested for abuse now-a-days if they hit their kids. ("Spare the rod; spoil the child" is true.) Setting the kid in the corner for an hour isn't punishment; it just gives them time to think up more bad things to do. When there is no fear of punishment, rewards have no meaning and children never learn to participate in a civilized society. Now that I think about it, society has just gone to hell. Did I miss the Rapture or something? A memo would have been nice...

Re:Why EFnet is so GREAT (Score:2)
Anyone can buy just about any domain name they want. Does this mean *.to is actually in Tonga? Are all the *.cc names somewhere in the Cocos Islands? Hell, I can have the DNS for a machine in my bathroom here in Raleigh, NC, USA say it's somewhere in *.ca. I bet you wouldn't guess, sitting here in my livingroom, I'm four hops from the Microsoft Campus in Washington state. Or that I used to be three hops from London, England, UK. (Interpath@MAE-East/Xara@MAE-East/Xara@CWIX) Hop count and transit time are what matter.
Even BIND has known this for years.

Re:Well, (Score:2)
I have to agree, Undernet is generally younger and less hardcore for all of their subjects. But we're improving as best we can, and our network is much better than Efnet's recently. Kintanon

Linked Server Page (Score:2)

Re:On limiting to local clients... (Score:2)
When Monash University here in Melbourne AU had an EFnet server - yoyo.cc.monash.edu.au - its users were banned between 9 and 5, and I think dialin users were banned 24/7, because the University tolerated it with the provision it didn't stop legitimate work. But that's my POV, I'm sure the exopers will correct me if I'm wrong :)
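The redundant-links idea that several comments above argue for can be sketched with a toy reachability check (illustrative only, not from the thread; the four-server topology is hypothetical):

```python
def reachable(edges, start):
    """Return the set of servers reachable from start over undirected links."""
    seen, stack = {start}, [start]
    while stack:
        n = stack.pop()
        for a, b in edges:
            for nxt in ((b,) if a == n else (a,) if b == n else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen

def after_cut(edges, cut):
    """Drop one server-to-server link, as a DoS'd hub would."""
    return [e for e in edges if e != cut]

# RFC-1459-style spanning tree: exactly one path between any two servers.
tree = [("A", "B"), ("B", "C"), ("C", "D")]
# The same servers with one redundant link added: a cycle, which ircd forbids.
ring = tree + [("D", "A")]

# Cutting the B-C link splits the tree in half (a netsplit)...
print(sorted(reachable(after_cut(tree, ("B", "C")), "A")))  # ['A', 'B']
# ...but with the redundant link, traffic can route around the cut.
print(sorted(reachable(after_cut(ring, ("B", "C")), "A")))  # ['A', 'B', 'C', 'D']
```

With a pure spanning tree any single cut partitions the network; one redundant link turns the cut into a reroute, which is exactly the trade-off the thread is weighing against ircd's single-route rule.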
https://slashdot.org/story/00/09/26/1145220/efnet-hits-turbulence
How to inject a React app into a Chrome Extension as a content script

Chrome Extensions are great tools that let you interact with the browser and websites in a variety of ways.
- Browser and Page actions which appear as icons in the browser toolbar and can show an optional popup window.
- New Tab pages which override the default new tab
- Options pages for configuring Chrome Extensions
- Chrome context menus which appear with a right click
- Notifications, which slide in at the top right of the screen
- Content scripts, which inject JavaScript into a web page and can alter its user interface

This article is about loading a React app in a content script. Install create-react-app first if you don't have it. Here is the repository if you just want to see the code.

Create a new React app

create-react-app react-content-script

Update the manifest file

- Find it in /public/manifest.json

The manifest.json file is required to provide important information for the Chrome Extension. The manifest_version, name, and version are required for the Chrome Extension to be valid. The content_scripts array is where we configure how our React app will be injected. You can inject a content script into a specific page as we are doing here, or you can use the matches key to specify the URLs that you want your code to load in. For example, ["<all_urls>"] matches any valid URL scheme, so the content script loads on every page.

Update the App component

Remove import logo from './logo.svg'; and replace the src for the image tag with "". Update App.css to be more presentable.

Append the app to the DOM

This is where we actually inject the content script into the DOM. First examine the DOM of the website (or websites) that you want to inject a content script into. Then select an element based on where you want to append your code to the DOM. You might not even need to inject any HTML at all. I chose the <div class="ctr-p" id="viewport">, but you might have to experiment to find what works best for you.
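Put together, the manifest might look roughly like this — a sketch in the Manifest V2 format this article's era of Chrome used; the name, description, and matched URL are illustrative, not the repository's exact values:

```json
{
  "manifest_version": 2,
  "name": "react-content-script",
  "version": "1.0",
  "description": "Injects a React app into matching pages as a content script",
  "content_scripts": [
    {
      "matches": ["https://www.google.com/*"],
      "js": ["static/js/main.js"],
      "css": ["static/css/main.css"]
    }
  ]
}
```

The js and css entries here are why the Webpack output filenames matter later in the article: whatever names are listed in the manifest must match what the build actually emits.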
Deploy

- Next build the app for production by running the command yarn run build from /package.json.
- Open the Extensions tab under more tools in the Customize and control Google Chrome menu (with the three vertical dots at the far right of the browser toolbar) or navigate to chrome://extensions.
- Check the Developer mode checkbox.
- Click on Load unpacked extension… then find and select the build folder in react-content-script/src/ to load the extension.

The extension failed to load because the "css" and "js" filenames we specified in the manifest are different than the ones generated by Webpack.

Modify the build output filenames

You could just rename main.css and main.js every time there are changes in your build e.g. main.cacbacc7.css and main.f0a6875b.js in this case, but a better solution is to modify the Webpack production configuration output filenames so you don't have to update them manually when you want to load your extension. In order to access /config/webpack.config/prod, you have to eject from Create React App.

- Eject from Create React App with yarn run eject and type y to confirm.
- Remove the generated hashes from the CSS and JavaScript output filenames. Compare ORIGINAL to UPDATED in /config/webpack.config/prod.

// ... (Other configuration)
// Note: defined here because it will be used more than once.
// ORIGINAL
// const cssFilename = 'static/css/[name].[contenthash:8].css';
// UPDATED
const cssFilename = 'static/css/[name].css';

// ... (Other configuration)
output: {
  // The build folder.
  path: paths.appBuild,
  // Generated JS file names (with nested folders).
  // There will be one main bundle, and one file per asynchronous chunk.
  // We don't currently advertise code splitting but Webpack supports it.
  // ORIGINAL
  // filename: 'static/js/[name].[chunkhash:8].js',
  // UPDATED
  filename: 'static/js/[name].js',
  // ... (Other configuration)

Rebuild and reload and verify that the filenames are correct before loading the build into Chrome. That's it.
Thanks for reading and let me know if you have any questions.
https://medium.com/@yosevu/how-to-inject-a-react-app-into-a-chrome-extension-as-a-content-script-3a038f611067
I can’t figure out how to write the method that can do input validation within ranges. The teacher gave the starter code and I just have to create the method! But my brain is not working. please help :c

here is the code with instructions:

package week9;

import java.util.Scanner;

public class Lab9i {

    public static void main(String[] args) {
        //Initialize local variables
        Scanner sIn = new Scanner(System.in); //Input Scanner for String
        int intNum = 0;
        double doubleNum = 0;
        String choice = "";
        String playAgain = "Y";

        //Keep program running until user wants to quit
        do {
            //Get an integer from the user
            int[] intRange1 = {};
            intNum = getValidInt(sIn, "Please enter a whole number: ",
                "Invalid response. Only whole numbers are acceptable.", intRange1);
            System.out.println("The whole number your entered was: " + intNum);
            System.out.println("Now we will test your whole number in a math equation...");
            System.out.printf("Adding 10 to your whole number would be: 10 + %d = %d.\n\n", intNum, (intNum + 10));

            //Get an integer within a range from the user
            int[] intRange2 = {10, 50};
            intNum = getValidInt(sIn, "Please enter a whole number between 10 and 50: ",
                "Invalid response. Only whole numbers between 10 and 50 are acceptable.", intRange2);
            System.out.println("The whole number your entered was: " + intNum);
            System.out.println("Now we will test your whole number in a math equation...");
            System.out.printf("Adding 10 to your whole number would be: 10 + %d = %d.\n\n", intNum, (intNum + 10));

            // and now the method that I am creating with the instructions

    }//end of method main()

    /**getValidInt method validates user input is an Integer within range and returns it back to the calling method.
     * Uses sIn to get user input from the console.
     * Asks user question.
     * If range is empty, range is ignored and only validates input is an integer.
     * If range is not empty, validates user input is an integer within range.
     * If user input is not valid, prints warning and repeats question.
     * Returns validated input.
     */
    public static int getValidInt(Scanner sIn, String question, String warning, int[] range){
        int num1 = 0;
        int num2 = 0;
        boolean valid = false;

        while(!valid) {
            System.out.println(question);
            String input = sIn.nextLine();
            try {
                num1 = Integer.parseInt(input);
                valid = true;
            } catch (Exception e) {
                System.out.println(warning);
            } // end of try catch
        } // end of while

        return num1;
    }//end of method getValidInt(Scanner, String, String, int[])
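One way to fill in the missing range logic, sketched as a standalone class (the class name and the canned Scanner input are mine, not the assignment's; feeding a String-backed Scanner just exercises the retry loop without a console):

```java
import java.util.Scanner;

public class GetValidIntSketch {

    // Same signature as the starter code: an empty range accepts any int,
    // a two-element range requires range[0] <= n <= range[1].
    public static int getValidInt(Scanner sIn, String question,
                                  String warning, int[] range) {
        while (true) {
            System.out.println(question);
            String input = sIn.nextLine();
            try {
                int n = Integer.parseInt(input.trim());
                if (range.length < 2 || (n >= range[0] && n <= range[1])) {
                    return n; // an integer, and within range if one was given
                }
            } catch (NumberFormatException e) {
                // not an integer at all; fall through to the warning
            }
            System.out.println(warning); // out of range or unparseable: retry
        }
    }

    public static void main(String[] args) {
        // "abc" fails to parse, "99" is outside 10-50, "25" is accepted.
        Scanner canned = new Scanner("abc\n99\n25\n");
        int n = getValidInt(canned, "Enter 10-50:", "Invalid.", new int[]{10, 50});
        System.out.println("got " + n);
    }
}
```

The starter's boolean-flag loop works just as well; the sketch collapses it into while(true) only for brevity, and the key addition is the range.length check before returning.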
https://forum.freecodecamp.org/t/get-valid-x-within-ranges/486097
On my x86-64 Gentoo system, the xen-unstable compile breaks in tools/misc/mbootpack. It complains about not being able to find errno.h and page.h. After some digging (and help from Anthony, Jerone, and Scott), I found out that Gentoo's /usr/include/asm/errno.h erroneously has

#include "../asm-x86_64/errno.h"

instead of

#include <asm-generic/errno.h>

I am reporting the bug to Gentoo, but until this bug is fixed the patch below is needed. It removes the "-I-", which prevented gcc from using user include files. Scott and Jerone verified that the header files in Ubuntu and Fedora do not have this file, though this may be a problem in SuSE (unable to confirm). Gentoo on x86 does not have the same /usr/include/asm/errno.h, so this is not an issue.

Signed-off-by: Jon Mason <jdmason@xxxxxxxxxx>

--- tools/misc/mbootpack/Makefile.orig	2005-06-13 16:53:56.000000000 -0500
+++ tools/misc/mbootpack/Makefile	2005-06-13 16:54:08.000000000 -0500
@@ -17,7 +17,7 @@ install: build
 # Tools etc.
 RM      := rm -f
 GDB     := gdb
-INCS    := -I. -I-
+INCS    := -I.
 DEFS    :=
 LDFLAGS :=
 CC      := gcc

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://old-list-archives.xen.org/archives/html/xen-devel/2005-06/msg00492.html
6. Safe pattern matching¶

Suppose we have the following List EADT:

data ConsF a l = ConsF a l deriving (Functor)
data NilF l = NilF deriving (Functor)

eadtPattern 'ConsF "Cons"
eadtPattern 'NilF "Nil"

type List a = EADT '[ConsF a, NilF]

-- pattern for a specific EADT: List a
pattern ConsList :: a -> List a -> List a
pattern ConsList a l = Cons a l

We can use pattern matching on List constructors:

showEADTList :: Show a => List a -> String
showEADTList = \case
   ConsList a l -> show a ++ " : " ++ showEADTList l
   Nil          -> "Nil"
   _            -> undefined

> putStrLn (showEADTList intList)
10 : 20 : 30 : Nil

However the compiler cannot detect that the pattern matching is complete, hence we have the choice between a warning and adding a wildcard match as we have done above.

Another solution is to rely on multi-continuations. Then we can provide a function per constructor as in a pattern-matching. For instance, with multi-continuations we can transform an EADT '[A,B,C] into a function whose type is (A -> r, B -> r, C -> r) -> r. Hence the compiler will ensure that we provide the correct number of alternatives in the continuation tuple (the first parameter).

6.1. Explicitly recursive example¶

Transforming an EADT into a multi-continuation is done with eadtToCont. Mapping the continuation tuple is done with >::>.

import Haskus.Utils.ContFlow

showCont' l = eadtToCont l >::>
   ( \(ConsF a r) -> show a ++ " : " ++ showCont' r  -- explicit recursion
   , \NilF        -> "Nil"
   )

> showCont' intList
"10 : 20 : 30 : Nil"

6.2. Catamorphism example¶

Transforming a VariantF into a multi-continuation is done with toCont. It can be useful with recursion schemes.

import Haskus.Utils.ContFlow

showCont l = toCont l >::>
   ( \(ConsF a r) -> show a ++ " : " ++ r  -- no explicit recursion
   , \NilF        -> "Nil"
   )

> cata showCont intList
"10 : 20 : 30 : Nil"

6.3. Transformation example¶

We can use this approach to transform EADT.
For instance list mapping:

import Haskus.Utils.ContFlow

mapList f l = toCont l >::>
   ( \(ConsF a r) -> Cons (f a) r
   , \NilF        -> Nil
   )

> eadtShow (cata (mapList (+5)) intList :: List Int)
"15 : 25 : 35 : Nil"

We can also transform an EADT into another EADT:

-- Some new Even and Odd constructors
data EvenF a l = EvenF a l deriving (Functor)
data OddF a l = OddF a l deriving (Functor)

eadtPattern 'EvenF "Even"
eadtPattern 'OddF "Odd"

instance (Show a) => MyShow' (EvenF a) where
   myShow' (EvenF a l) = show a ++ " {even} : " ++ l

instance (Show a) => MyShow' (OddF a) where
   myShow' (OddF a l) = show a ++ " {odd} : " ++ l

-- convert Cons constructor into Odd or Even constructor, depending on the
-- cell value
evenOdd l = toCont l >::>
   ( \(ConsF a r) -> if even a then Even a r else Odd a r
   , \NilF        -> Nil
   )

intList' :: List Int
intList' = Cons (3 :: Int) $ Cons (4 :: Int) $ Cons (5 :: Int) Nil

> eadtShow (cata evenOdd intList' :: EADT '[EvenF Int, OddF Int, NilF])
"3 {odd} : 4 {even} : 5 {odd} : Nil"

6.4. Splitting constructors¶

We can choose to handle only a subset of the constructors of an EADT by using splitVariantF. For instance in the following example we only handle EvenF Int and OddF Int constructors. The other ones are considered as left-overs:

alg x = case splitVariantF @'[EvenF Int, OddF Int] x of
   Right v        -> toCont v >::>
                        ( \(EvenF a l) -> "Even : " ++ l
                        , \(OddF a l)  -> "Odd : " ++ l
                        )
   Left leftovers -> "something else"

We can test this code with:

eo :: EADT '[EvenF Int, OddF Int, NilF]
eo = cata evenOdd intList'

eo2 :: EADT '[ConsF Int, EvenF Int, OddF Int, NilF]
eo2 = Even (10 :: Int) $ Odd (5 :: Int) $ Cons (7 :: Int) $ Odd (7 :: Int) Nil

> cata alg eo
"Odd : Even : Odd : something else"

> cata alg eo2
"Even : Odd : something else"

Note that the traversal ends when it encounters an unhandled constructor.

6.5.
Unordered continuations (>:%:>)¶

By using the >:%:> operator instead of >::>, we can provide continuations in any order as long as an alternative for each constructor is provided. The types must be unambiguous as the EADT type can't be used to infer the continuation types (as is done with >::>). Hence the type ascriptions in the following example:

showCont'' l = eadtToCont l >:%:>
   ( \(NilF :: NilF (List Int)) -> "Nil"
   , \(ConsF a r) -> show (a :: Int) ++ " : " ++ showCont'' r
   )

> showCont'' intList
"10 : 20 : 30 : Nil"
https://docs.haskus.org/eadt/safe_pattern_matching.html
Playing with the Player Project

Figure 4. Player's playerv tool running on a CoroBot after subscribing to the IR and position2d devices. The large triangles are the cones shown to be obstacle-free by the infrared sensors.

So far, our robot is awake, alert and ready to be told to do something interesting. Let's give it something to do. The CoroBot robot comes with a number of sensors and actuators—probably the easiest of which to interface with are the front- and rear-facing infrared ranging sensors and the mobility base's drive motors. Thus, we can write a small C program to talk to the Player server, read the IR sensors and drive the robot until it is 10cm away from an obstacle in front of the robot. The first thing we have to do to interface with the Player server is open up a connection to it. For the sake of brevity, we will skip a lot of error checking, but you can download the full version of the code from the LJ FTP server (see Resources). This code defines the variables we will use to talk to the Player server and the device interfaces in which we are interested:

#include "libplayerc/playerc.h"

static playerc_client_t* clientHandle;
static playerc_position2d_t* positionProxy;
static playerc_ir_t* irProxy;

The clientHandle is used for talking to the Player server itself. The second, the position proxy, talks to the position2d interface, providing us with encoder information about how the wheels are moving and allowing us to send motor commands to the robot. We'll ignore the encoder information for this example. Lastly, the IR interface gives us information about the distances that the robot's IR sensors are reporting.
The next code snippet uses these proxies to interface with the server and these devices:

playerc_client_connect(clientHandle);

// convert our interface to a PULL interface,
// only updates when we read
playerc_client_datamode(clientHandle, PLAYER_DATAMODE_PULL);

// tell the robot to drop older messages
playerc_client_set_replace_rule(
    clientHandle, -1, -1, PLAYER_MSGTYPE_DATA, -1, 1);

// create the position proxy (controls the motors)
positionProxy = playerc_position2d_create(clientHandle, 0);
playerc_position2d_subscribe(positionProxy, PLAYER_OPEN_MODE);

// create the IR proxy (controls the IRs)
irProxy = playerc_ir_create(clientHandle, 0);
playerc_ir_subscribe(irProxy, PLAYER_OPEN_MODE);

We start off by connecting to the Player server and configuring our connection. We want to get new messages from the server only when we are ready for them, so we configure the connection for a pull-type arrangement. And, because we want only the most recent information (we don't care what the IR sensors were indicating a second ago, we care about what they are saying right now), we tell the server to report only the most recent data. If we really wanted, we could let Player ensure that every IR message was delivered, but that might result in getting less-than-fresh data and possibly driving into a wall. After our connection is configured, we open up the position2d interface on the Player server and subscribe to it. Then, we do the same with the IR interface. So far, so good. Now we need to get the state of the IRs from the robot and tell it how to move the motors:

while (!timeToQuit) {
    // attempt to read from the client
    if (playerc_client_read(clientHandle) == 0)
        continue;  // nothing to read, try again.
    // read the IR distances and verify we have good data
    if (irProxy->data.ranges_count == 2) {
        frontIr = irProxy->data.ranges[0];
        rearIr = irProxy->data.ranges[1];
    }

    // figure out how to drive
    runController(frontIr, rearIr,
                  &desiredTranslation, &desiredRotation);
    playerc_position2d_set_cmd_vel(
        positionProxy, desiredTranslation, 0, desiredRotation, 1);
}

Each time through the loop, we try to read the newest data from the robot. After a little sanity checking, we take the ranges reported by the IR sensors and feed them into a controller function. This controller does some magic processing (we'll talk about that later) and returns information on how we should drive the robot. Finally, we pass these driving commands back to the Player server and start it all over again. All that's left now is to provide a runController function that maps from IR sensor readings to drive commands. The CoroBot driver accepts numbers in the range of –1.0 to +1.0 to tell how to drive the robot forward and backward: +1.0 means 100% power forward, –1.0 means 100% power in reverse, and 0.0 means stop. It accepts the same range for telling the robot how to turn: –1.0 means turn full power left, +1.0 means turn full power right, and 0.0 means drive straight ahead. Noting that the IR readings are provided in meters, we can use the following P-controller to drive our robot forward until we are 10cm away from a front obstacle. We even get a bonus for free—if we are closer than 10cm away, the robot will back up a bit until it is at the proper distance:

void runController(double frontIr, double rearIr,
                   double *translation, double *rotation)
{
    // convert our IR readings into drive commands
    *translation = (frontIr - 0.1) * 3.0;
    *translation = *translation > 0.9 ?
        0.9 : *translation;
    *rotation = 0.0;
}

And finally, good programmers always shut down their server connections when they are done:

void shutDownProxies()
{
    // close down proxies we have opened
    playerc_ir_unsubscribe(irProxy);
    playerc_ir_destroy(irProxy);
    playerc_position2d_unsubscribe(positionProxy);
    playerc_position2d_destroy(positionProxy);
    playerc_client_disconnect(clientHandle);
    playerc_client_destroy(clientHandle);
}

Building on the design we showed earlier, we can see how our drive-by-IR program interacts with the Player infrastructure. The CoroBot configuration file loads the phidgetIFK driver, which exposes an aio:0 device. This device allows the CoroBot driver to read the robot's onboard infrared sensors. The CoroBot driver also exposes the position2d and IR interfaces, which the drive-by-IR program reads with the help of the libplayerc library (Figure 5).

Figure 5. The Relationship between Several Devices and Interfaces When Using the Drive-by-IR Program

The Player Project offers a lot of functionality that there just isn't room to get into in one article. This includes robot simulation, support for numerous commercial robots of many different prices and qualities, and support for a whole slew of readily available devices. Its plugin system even allows you to build your own drivers for new devices, either to support new hardware or to implement new experimental algorithms. Give it a try, and give your computer a chance to stretch its legs.
https://www.linuxjournal.com/magazine/playing-player-project?page=0,2&quicktabs_1=0
The desire for more speed and better multi-processor support has caused inevitable changes in Linux, resulting in the development of the current kernel — Linux 2.2. As a driver author, you may initially avoid taking advantage of the latest kernel changes. Ultimately, however, you'll probably end up re-writing your driver to stay current with kernel design, to improve your driver's performance, and to take advantage of the ever-increasing opportunities that appear on the horizon for Linux users in 1999. This is a comprehensive background on the changes that you, as a driver author, will need to make in order to port a driver from Linux 2.0 or 2.1 to the new 2.2 kernel. Even if you're writing 2.0 based add-on drivers and not necessarily familiar with the Linux kernel to begin with, you'll have a fighting chance of making your driver work with 2.2. And hopefully you'll learn a little something along the way. None of the changes in 2.2 are gratuitous. Where possible, compatibility modes exist for old methods. These modes provide warnings that the driver ought to be updated to the new methods. Several drivers in 2.2.0 still produce these warnings, so you shouldn't feel bad if your driver produces them, especially if all you want is to make it work.

Access To User Space

When comparing older kernels with 2.2, the most obvious change you will see is that the pair verify_area() and memcpy_to/from_user have mutated. This is because of the good old need for speed, and also creates a convenient place to clean up some complicated SMP race conditions. Contrary to rumor, it doesn't exist just to annoy device driver writers. With Linux 2.0 the processor walked the list of memory owned by a process to determine if an access to user space was legal. This was done to ensure that the read or write didn't succeed when it shouldn't, because when an illegal read or write does succeed, the resulting "Ooops" message isn't pretty. Linux 2.2 changes the rules of the game.
Since the memory management hardware can do most of the checking on a 486 or higher, it would be silly to do it with the software as well, especially since most accesses are legal. Instead the kernel builds tables that contain information about what addresses may fault, and where to jump if they do. When a user passes an invalid address, a basic sanity check is performed to ensure that it is not a kernel address. Once verified, the kernel can trust the values given, knowing it can still recover. The actual mechanics are not trivial, involving some interesting abuses of the ELF binary format and some clever inline assembler tricks. Fortunately they're all wrapped up nicely for you.

Figure 1A contains an example of a driver written for 2.0. Under 2.2, the same driver will be written as in Figure 1B.

Figure 1A

struct thing my_thing;
if(verify_area(VERIFY_READ, userptr, sizeof(my_thing)))
        return -EFAULT;
memcpy_fromuser(&my_thing, userptr, sizeof(my_thing));

Figure 1B

#include <asm/uaccess.h>

struct thing my_thing;
if(copy_from_user(&my_thing, userptr, sizeof(my_thing)))
        return -EFAULT;

The copy_from_user function returns zero on a successful copy, or if it faults, it returns the number of bytes it was unable to copy. It is both cleaner and faster, with all the magical fault catching concealed from the driver author. Copy_to_user works the same way.

Linux 2.0 also has a set of functions — get_user() and put_user() — that did the same things for native C types as the memcpy functions. These still exist, but their behavior has changed (and may be why you now have hundreds of warnings in your partially ported driver!). Previously, get_user() returned the value of the object. So it would read something like Figure 2A.
Figure 2A

if(verify_area(VERIFY_READ, pointer, sizeof(*pointer)))
        return -EFAULT;
c=get_user(pointer);
switch(c) {
..

Figure 2B

if(get_user(c, pointer))
        return -EFAULT;
switch(c) {
..

In 2.2 the get_user function handles the fault checking, so it needs to return two different pieces of information. The arguments have changed, and it now returns zero on a successful read and -EFAULT otherwise. Figure 2A is replaced by Figure 2B.

The 2.0 put_user function has been given the same treatment. The fact that it returns -EFAULT or zero can be very useful since many routines can now simply use:

return put_user(value, pointer)

to get the desired error/success return to userspace.

File Operations Changes

Almost every device driver, except for the network drivers, interacts with the file system. The file system layers have changed somewhat, although the impact on a device driver that doesn't wish to get involved is minimal. First, many drivers need to obtain the inode of a passed file handle. In 2.0 this was done with:

struct file *filp;
struct inode *inode;

inode = filp->f_inode;

In 2.2 these are handled via the directory cache (dcache), a namespace cache of active and recently accessed files. This makes things like the find command much faster. Fortunately, the change from a driver point of view is nice and simple:

inode = filp->f_dentry->d_inode;

For file systems the changes are major, and a review of the changes involved in porting file systems deserves an article unto itself.

The read and write operations have changed only a little. They now pass the file offset pointer as an argument instead of relying on the one in the file handle. It may well be that the pointer indicates the offset in the file handle, but you don't need to worry about that, because the POSIX standard defines pread/pwrite operations that allow you to automatically seek and fetch data at a given position.
In the conventional UNIX API, the seek (selecting offset) and the read of the data were separate events. Care had to be taken with a threaded program so that when two threads accessed a file they didn't end up seeking and then having the other thread move the file position before they could read it. Pread/pwrite negates this problem.

The drivers that care about file position (which is not all of them — a file position is not meaningful to a tty, for example) should be using the passed offset pointer instead of changing filp->f_pos.

The release (close) operation is called, as before, on the last close of a file. A small change here is that it is entitled to return a failure code, which can be returned via close(). The handle must still be closed, but it allows you to report that the close stumbled across a problem. There is also a flush operation which is invoked when any given process closes its copy of the file handle. At the moment this is only used for NFS writes where the close of the file may be the only point at which you discover that a write fails because the remote disk is full. In most cases this functionality shouldn't be needed.

Finally, the disappearance of the select method will be very visible to device drivers. This method, and indeed the whole of select in the kernel, has been replaced by the more scalable, but arguably less elegant, system 5 based poll. The change is not visible to end users because the kernel emulates the old select call with poll to extend compatibility.

The changes made for poll in most cases can be applied fairly mechanically to any device. The fundamental API change is mostly invisible to the device driver author. 2.0 based driver code was called with a select_table as the final argument. This has become a poll_table, although the functionality is basically the same. It is used to keep a list of events that may cause the status of the poll() return to change.
The wait queue which indicates something may have occurred is added to the poll table using:

struct file *filp;
poll_table *wait;
struct wait_queue *queue;

poll_wait(filp, queue, wait);

The poll handler should then check what events are presently true. The main events are listed in Figure 3.

Figure 3

POLLERR – an error is pending
POLLHUP – a hangup occurred
POLLIN – input data exists
POLLRDNORM – normal readable data exists
POLLPRI – a "priority" message is waiting (used for urgent data on sockets)
POLLOUT – output is possible (there is room)
POLLWRNORM – there is space to output normal data

Finally, the poll handler returns a mask of these events. The poll function will be called whenever a process is polling a file and the kernel code thinks the status may have changed. When the required events are true, the poll system call will clean up the tables without driver assistance and then return to the user.

Figure 4 contains a simple example for a read only device (the bus mouse driver) from both kernels 2.0 and 2.2.

Figure 4: the bus mouse driver's select handler under Linux 2.0 and the equivalent poll handler under Linux 2.2.

Init Functions

A lot of drivers contain code executed only at start-up time. In Linux 2.2 based drivers, you can mark these functions and code with __init and __initdata. The kernel build uses more ELF and compiler tricks to collect these functions at link time and throws them away after booting to make more memory available for applications. Some platforms, however, don't support __init and __initdata. For those platforms, they are ignored. Including <asm/init.h> and marking initialization data and code with these can often save you 5 to 10 percent of the total size of a device driver.
A typical 2.2 kernel build throws some 40K of initialization code away at boot time.

Interrupt Handlers

With older systems you could assume (although you probably shouldn't have) that a PC would have 16 interrupts. You cannot assume this with Linux 2.2. Because Linux 2.2 uses the APIC interrupt controller on multiprocessor machines, you might have 64 interrupt lines or more. In other words, do not assume anything about the number of interrupts.

In the new 2.2 kernel, the notion of fast interrupts is gone. If you set the SA_INTERRUPT flag to indicate your interrupt is fast, then interrupts will be disabled on that processor while your interrupt is handled, but the remaining semantics of a "fast" interrupt are not emulated. And normally, this shouldn't matter.

A lot of 2.0 based code looks something like Figure 5A. The dev_id field in the interrupt structure is specifically intended to pass this kind of information — thus avoiding the need for device<->interrupt tables. Such tables do not work for PCI where an interrupt is likely to be shared by two instances of the same device. Instead use the call described in Figure 5B.

Figure 5A

void my_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        struct my_device *dev=my_devices[irq];
        …

Figure 5B

request_irq(irq, my_interrupt, SA_SHIRQ, "mythingy", dev);

This will call the interrupt handler with dev_id holding the value of dev that is passed to the request function.

There is one last thing to worry about with interrupts in 2.2 based drivers, and if you are using multi-processor machines it may require some thought. Under Linux 2.2 an interrupt can be executing in parallel with other kernel code. This is different from 2.0, which used global locking to make the SMP transition simple.
You are still guaranteed that cli() and sti() will protect a section of code and prevent the kernel from running an interrupt handler during the protected block, but you are no longer guaranteed that an interrupt handler itself will prevent other kernel code from running. To handle this you will need to use spinlocks.

Spinlocks and SMP

For the sake of expediency, we'll only review the basic spinlocks as a recipe for handling interlocking between an interrupt handler and the kernel code. On a single processor machine these functions are turned into the conventional cli/sti functions and have no overhead. However, you should probably test them with an SMP build (even on a single CPU machine) to be sure they work correctly.

A spinlock is a type: spinlock_t. It is initialized with the function:

spinlock_t lock;

spin_lock_init(&lock);

This sets the lock up and indicates that it is not being held. When you want to use a spinlock you must grab it. The function sits in a tight loop until it grabs the lock. In the event that you're using the lock from both interrupt and non-interrupt contexts, you'll need to disable that interrupt or all local interrupts when grabbing the lock. This is common enough that a number of functions cover it.

To grab a lock:

spin_lock(&lock);

To release a lock:

spin_unlock(&lock);

To grab a lock, save the irq mask and disable local interrupts:

unsigned long flags;

spin_lock_irqsave(&lock, flags);

and to restore it:

spin_unlock_irqrestore(&lock, flags);

The normal use of such code can be seen in Figure 6.
Figure 6

void my_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        struct my_device *dev=dev_id;
        spin_lock(&dev->lock);
        /* Do the same things as we always did in 2.0 knowing
           user code grabbing the lock will be held up until … */
        spin_unlock(&dev->lock);
}

Figure 7 contains a non-interrupt context where you need to protect small sections of code from the interrupt handler running in parallel. Note the use of the irq disabling version of the lock. This is very important — without it we may take the lock in the user code, then start an interrupt. The interrupt routine will spin forever, trying to get a lock that is not going to be released (because we're stuck in the interrupt so we can't be running the user code). When this happens, you have to reboot, and you won't be happy. Use the right version of the spinlocks for the sake of general user happiness.

Figure 7

struct my_device *dev;
unsigned long flags;

spin_lock_irqsave(&dev->lock, flags);
/* The interrupt cannot interfere here */
/* Do the things we did in 2.0 */
spin_unlock_irqrestore(&dev->lock, flags);

The spinlocks guarantee one additional thing. They are marked with the required magic to tell gcc that they are memory barriers. Even if you are not using volatile types, gcc will write any values from registers to their final destination before unlocking. It will also read values directly from memory, not from saved copies in registers made before the lock is taken. This means that you don't need to worry about any misery-producing optimization surprises the compiler might otherwise invent. In Linux 2.0 the cli() sti() and restore_flags() functions have this property, and in Linux 2.2 this continues to be true.
The io_request lock

The io_request lock is a spin lock that is taken by the kernel when queuing a request to a block device (a hard disk, a floppy disk, or similar devices which can contain a file system). If the driver is unaware of the lock it will perform the way it does with 2.0. The I/O operation will remain single-threaded. If the driver is aware of and uses the lock, then it can get the advantages of parallel I/O operations across multiple processors.

The lock protects the request queue, so a driver can safely drop it once it has copied or processed the request queue entry. In some cases this is done by the device driver. In others, SCSI for example, by the supporting code. There are two reasons to drop the lock. First, it results in better performance from your device driver. Secondly, it keeps interrupts enabled during your device operation. You may need to do this because the device is very slow or because you have to use busy loops with timeouts.

The lock is dropped with:

spin_unlock_irq(&io_request_lock);

and taken with:

spin_lock_irq(&io_request_lock);

These are variants on the functions covered in the Spinlock section. The spin_lock_irq always disables the interrupts on that processor; the spin_unlock_irq always restores them. Several SCSI drivers make use of this because of things like timeout handling. The NCR5380, for example, drops the lock during the various delay loops required to control the relatively primitive controller it uses.

Figure 8

spin_unlock_irq(&io_request_lock);
while(!(NCR5380_read(INITIATOR_COMMAND_REG) & ICR_ARBITRATION_PROGRESS)
        && time_before(jiffies, timeout));
spin_lock_irq(&io_request_lock);

An example can be seen in Figure 8. This allows the timer to continue running, and other processes can continue while the ancient 5380 hardware whirs into action.
Because it drops the I/O request lock in its own handlers, it also claims it again in its interrupt function. The interrupt function in this case also manipulates the request queue and so must protect itself from another processor which may also be queuing blocks for the device.

And that's it: a general overview of some of the changes you'll need to know about in order to take advantage of the latest advances in the Linux 2.2 kernel.

Alan Cox is a well-known Linux kernel hacker currently working on writing drivers, security auditing, Linux/SGI porting, and modular sound. He can be reached at alan@lxorguk.ukuu.org.uk.
http://www.linux-mag.com/id/216/
Introduction

Often, a longer document contains many themes, and each theme is located in a particular segment of the document. For example, each paragraph in a political essay may present a different argument in support of the thesis. Some arguments may be sociological, some economic, some historical, and so on. Thus, themes vary across the document, but remain constant within each segment. We will call such segments "coherent".

This post presents two methods for breaking a document into coherent segments: a fast greedy heuristic, and a dynamic programming algorithm that finds an optimal segmentation with respect to a coherence objective. However, this optimal algorithm can be restricted in generality, such that processing time becomes linear. The approach presented here is quite similar to the one developed by Alemi and Ginsparg in Segmentation based on Semantic Word Embeddings. The implementation is available as a module on GitHub.

Preliminaries

In the discussion below, a document means a sequence of words, and the goal is to break documents into coherent segments. Our words are represented by word vectors. We regard segments themselves as sequences of words, and the vectorisation of a segment is formed by composing the vectorisations of the words. The techniques we describe are agnostic to the choice of composition, but we use summation here both for simplicity, and because it gives good results. Our techniques are also agnostic as to the choice of the units constituting a document — they do not need to be words as described here. Given a sentence splitter, one could also consider a sentence as a unit.

Motivation

Imagine your document as a random walk in the space of word embeddings. Each step in the walk represents the transition from one word to the next, and is modelled by the difference in the corresponding word embeddings. In a coherent chunk of text, the potential step directions are not equally likely, because word embeddings capture semantics, and the chunk covers only a small number of topics. Since only some step directions are likely, the length of the accumulated step vector grows more quickly than for a uniformly random walking direction.
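This intuition is easy to check numerically. The following sketch is illustrative only: it uses synthetic random vectors rather than trained word embeddings, and the dimension, number of steps, and noise scale are arbitrary choices. It compares the accumulated vector of a "coherent" walk, whose steps cluster around a single topic direction, with that of a uniformly random walk.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, steps = 50, 200

# "Coherent" walk: step directions cluster around one topic direction.
topic = rng.normal(size=dim)
topic /= np.linalg.norm(topic)
coherent = topic + 0.5 * rng.normal(size=(steps, dim))

# Incoherent walk: step directions are uniformly random.
incoherent = rng.normal(size=(steps, dim))

len_coherent = np.linalg.norm(coherent.sum(axis=0))
len_incoherent = np.linalg.norm(incoherent.sum(axis=0))

# The accumulated vector of the coherent walk is far longer.
print(len_coherent > len_incoherent)  # True
```

With trained embeddings the effect is the same in kind: a topically coherent segment accumulates a longer segment vector than an incoherent stretch of equal length.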
Remark: Word embeddings usually have a certain preferred direction. One survey about this and a recipe to investigate your word embeddings can be found here.

Formalisation

Suppose $S$ is a segment given by the word sequence $(w_1, \ldots, w_n)$, and let $v_i$ be the word vector of $w_i$ and $V = \sum_{i=1}^{n} v_i$ the segment vector of $S$. The remarks in the previous section suggest that the segment vector length $\|V\|$ corresponds to the amount of information in segment $S$. We can interpret $\|V\|$ as a weighted sum of cosine similarities:

$$\|V\| = \frac{\langle V, V \rangle}{\|V\|} = \sum_{i=1}^{n} \|v_i\| \cdot \frac{\langle v_i, V \rangle}{\|v_i\| \, \|V\|} = \sum_{i=1}^{n} \|v_i\| \cos(v_i, V).$$

As we usually compare word embeddings with cosine similarity, the last scalar product is just the similarity of a word to the segment vector. The weighting coefficients $\|v_i\|$ suppress frequent noise words, which are typically of smaller norm. So $\|V\|$ can be described as accumulated weighted cosine similarity of the word vectors of a segment to the segment vector. In other words: the more similar the word vectors are to the segment vector, the more coherent the segment is.

How can we use the above notion of coherence to break a document of length $L$ into coherent segments, say with word boundaries given by the segmentation

$$T = (t_0, t_1, \ldots, t_m), \qquad 0 = t_0 < t_1 < \cdots < t_m = L,$$

defining the segments $S_i = (w_{t_{i-1}+1}, \ldots, w_{t_i})$?

A natural first attempt is to ask for maximising the sum of segment vector lengths. That is, we ask for maximising

$$J(T) = \sum_{i=1}^{m} \|V(S_i)\|.$$

However, without further constraints, the optimal solution to $J$ is the partition splitting the document completely, so that each segment is a single word. Indeed, by the triangle inequality, for any document, we have

$$\Big\| \sum_{i=1}^{L} v_i \Big\| \le \sum_{i=1}^{L} \|v_i\|.$$

Therefore, we must impose some limit on the granularity of the segmentation to get useful results. To achieve this, we impose a penalty for every split made, by subtracting a fixed positive number $\pi$ for each segment. The error function is now

$$J_\pi(T) = \sum_{i=1}^{m} \big( \|V(S_i)\| - \pi \big).$$

Algorithms

We developed two algorithms to tackle the problem. Both depend on a hyperparameter, $\pi$, that defines the granularity of the segmentation. The first one is greedy and therefore only a heuristic, intended to be quick. The second one finds an optimal segmentation for the objective $J_\pi$, given split penalty $\pi$.
Greedy

The greedy approach tries to maximise $J_\pi$ by choosing split positions one at a time. To define the algorithm, we first define the notions of the gain of a split, and the score of a segmentation. Given a segment $S = (w_1, \ldots, w_n)$ of words and a split position $j$ with $1 \le j < n$, the gain of splitting $S$ at position $j$ into $S_l = (w_1, \ldots, w_j)$ and $S_r = (w_{j+1}, \ldots, w_n)$ is the sum of norms of the segment vectors to the left and right of $j$, minus the norm of the segment vector of $S$:

$$g(j) = \|V(S_l)\| + \|V(S_r)\| - \|V(S)\|.$$

The score of a segmentation is the sum of the gains of its split positions. The greedy algorithm works as follows: split the text iteratively at the position where the score of the resulting segmentation is highest, until the gain of the latest split drops below the given penalty threshold $\pi$.

Note that the gains of the splits resulting from this greedy approach may be less than the penalty $\pi$, implying the segmentation is sub-optimal. Nonetheless, our empirical results are remarkably close to the global maximum of $J_\pi$ that is guaranteed to be achieved by the dynamic programming approach discussed below.

Dynamic Programming

This approach exploits the fact that the optimal segmentations of all prefixes of a document up to a certain length can be extended to an optimal segmentation of the whole. The idea of dynamic programming is that one uses intermediate results to complete a partial solution. Let's have a look at our case:

Let $(t_0, t_1, \ldots, t_m)$ be the optimal segmentation of the whole document. We claim that its initial part $(t_0, \ldots, t_i)$ is optimal for the document prefix up to word $t_i$. If this were not so, then the optimal segmentation for that document prefix would extend, using $(t_{i+1}, \ldots, t_m)$, to a segmentation for the whole document with a better objective value, contradicting the optimality of $(t_0, \ldots, t_m)$.

This gives us a constructive induction: given optimal segmentations for all shorter prefixes, we can construct the optimal segmentation up to word $j$ by trying to extend any of these segmentations by the final segment ending at $j$, then choosing the extension that maximises the objective.
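Both procedures can be transcribed almost directly from the definitions above. The sketch below is illustrative and is not the published module, whose API may differ; `penalty` plays the role of the split penalty, and segment vectors are computed from prefix sums so each norm lookup is cheap.

```python
import numpy as np

def make_prefix(vecs):
    """Prefix sums so that prefix[j] - prefix[i] is the segment vector
    of words i..j-1."""
    n, d = vecs.shape
    prefix = np.zeros((n + 1, d))
    prefix[1:] = np.cumsum(vecs, axis=0)
    return prefix

def norm_of_sum(prefix, i, j):
    """Norm of the segment vector for words i..j-1."""
    return np.linalg.norm(prefix[j] - prefix[i])

def greedy_segment(vecs, penalty):
    """Insert the highest-gain split until the best gain < penalty."""
    prefix = make_prefix(vecs)
    bounds = [0, len(vecs)]
    while True:
        best_gain, best_pos = None, None
        for lo, hi in zip(bounds, bounds[1:]):
            base = norm_of_sum(prefix, lo, hi)
            for j in range(lo + 1, hi):
                gain = (norm_of_sum(prefix, lo, j)
                        + norm_of_sum(prefix, j, hi) - base)
                if best_gain is None or gain > best_gain:
                    best_gain, best_pos = gain, j
        if best_gain is None or best_gain < penalty:
            return bounds[1:-1]
        bounds = sorted(bounds + [best_pos])

def optimal_segment(vecs, penalty):
    """Dynamic programming: best[j] is the optimal score of words [0, j)."""
    prefix = make_prefix(vecs)
    n = len(vecs)
    best = [0.0] + [None] * n
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            score = best[i] + norm_of_sum(prefix, i, j) - penalty
            if best[j] is None or score > best[j]:
                best[j], back[j] = score, i
    splits, j = [], n
    while back[j] > 0:          # walk the backpointers to recover splits
        j = back[j]
        splits.append(j)
    return sorted(splits)

# Toy document: five words about one topic, then five about another.
e1, e2 = np.eye(2)
doc = np.array([e1] * 5 + [e2] * 5)
print(greedy_segment(doc, penalty=2.0))   # [5]
print(optimal_segment(doc, penalty=2.0))  # [5]
```

On the toy document both methods agree; on real text the greedy result can differ from the optimal one.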
The reason it is possible to divide the maximisation task into parts is the additive composition of the objective and the fact that the norm obeys the triangle inequality.

The runtime of this approach is quadratic in the input length $L$, which is a problem if you have long texts. However, by introducing a constant that specifies the maximal segment length, we can reduce the complexity to merely linear.

Hyperparameter Choice

Both algorithms depend on the penalty hyperparameter $\pi$, which controls segmentation granularity: the smaller it is, the more segments are created. A simple way of finding an appropriate penalty is as follows. Choose a desired average segment length. Given a sample of documents, record the lowest gains returned when splitting each document iteratively into as many segments as expected on average due to that length, according to the greedy method. Take the mean of these records as $\pi$.

Our implementation of the greedy algorithm can be used to require a specific number of splits and retrieve the gains. The repository comes with a get_penalty function that implements the procedure as described.

Experiments

As word embeddings we used word2vec cbow hierarchical softmax models of dimension 400 and sample parameter 0.00001 trained on our preprocessed English Wikipedia articles.

Metric

Following the paper Segmentation based on Semantic Word Embeddings, we evaluate the two approaches outlined above on documents composed of randomly concatenated document chunks, to see if the synthetic borders are detected. To measure the accuracy of a segmentation algorithm, we use the $P_k$ metric, as follows.
Given any positive integer $k$, define $p_k$ to be the probability that a text slice of length $k$, chosen uniformly at random from the test document, occurs both in the $i$-th segment of the reference segmentation and in the $i$-th segment of the segmentation created by the algorithm, for some $i$, and set

$$P_k = 1 - p_k.$$

For a successful segmentation algorithm, the randomly chosen slice will often occur in the same ordinal segment of the reference and computed segmentation. In this case, the value of $p_k$ will be high, hence the value of $P_k$ low. Thus, $P_k$ is an error metric. In our case, we choose $k$ to be one half the length of the reference segment. We refer to the paper for more details on the metric.

Test documents were composed of fixed length chunks of random Wikipedia articles that had between 500 and 2500 words. Chunks were taken from the beginning of articles with an offset of 10 words to avoid the influence of the title. We achieved $P_k$ values of about 0.05.

We varied the length of the synthetic segments over values 50, 100 or 200 (the vertical axis in the displayed grid). We also varied the number of segments over values 3, 5 and 9 (the horizontal axis).

Our base word vectorisation model was of dimension 400. To study how the performance of our segmentation algorithm varied with shrinking dimension, we compressed this model, using the SVD, into 16, 32, 64 and 128-dimensional models, and computed the metrics for each. To do this, we used the python function:

import numpy as np

def reduce(vecs, n_comp):
    u, s, v = np.linalg.svd(vecs, full_matrices=False)
    return np.multiply(u[:, :n_comp], s[:n_comp])

The graphic shows the $P_k$ metric (as mean) on the Y-axis. The penalty hyperparameter was chosen identically for both approaches and adjusted to approximately retrieve the actual number of segments.

Observations:

- The dimension reduction is efficient, as runtime scales about linearly with dimension, but perfect accuracy is still not reached for 128 dimensions.
- The metric improves with the length of the segments.
Perhaps this is because the signal to noise ratio improves similarly.

Since both our segmentation test corpus and word embedding training corpus are different from those of Alemi and Ginsparg, our values are not directly comparable. Nonetheless, we observe that our results roughly match theirs.

Runtime

Experiments were done on a 2.4 GHz CPU. We see the runtime properties claimed above, as well as the negligible effect of the number of segments for fixed document length.

doc_length  segments  greedy    optimal
500         5         0.004543  0.187337
1000        5         0.009159  0.735775
1000        10        0.011962  0.740502
2000        10        0.022439  3.549979

Greedy Objective

The graph shows the objective maximized in the first step of the greedy algorithm. Each graph is a synthetic document composed of 5 chunks of length 100 each. The peaks at multiples of 100 are easy to spot.

Application to Literature

In this section, we apply the above approach to two books. Since the experiment is available as a jupyter-notebook, we omit the details here. The word vectors are trained on a cleaned sample of the English Wikipedia with 100 MB compressed size (see here).

We run the two algorithms on the books Siddhartha, by Hermann Hesse, and A Tale of Two Cities, by Charles Dickens. We use sentences in place of words, and sentence vectors are formed as the sum of the word vectors of the sentence. The penalty parameter for the optimal method was determined for an average segment length of 10, 20, 40 and 80 sentences through the get_penalty method. (Notice that this results not in these exact average segment lengths, as can be seen in the notebook outputs.) The greedy method was parametrised to produce the same number of splits as the optimal one, for better comparison.

The graphics show sentence index on the x-axis and segment length on the y-axis. The text files with sentence and segment markers are also available.
siddartha_10.txt
siddartha_20.txt
siddartha_40.txt
siddartha_80.txt
tale2cities_10.txt
tale2cities_20.txt
tale2cities_40.txt
tale2cities_80.txt

The segmentations derived by both methods are visibly similar, given the overlapping vertical lines. The values of the associated objective functions differ only by ~0.1 percent. The segmentation did not separate chapters of the books reliably. Nevertheless, there is a correlation between chapter borders and split positions. Moreover, in the opinion of the author, neighbouring chapters in each of the above books often contain similar content.

We also did experiments with nonnegative word embeddings coming from a 25-dimensional NMF topic model, trained on TfIdf vectorisations of sentences from each text. Our results were of similar quality to those reported here. One advantage of this approach is that there are no out-of-vocabulary words, a particular problem in the case of specific character and place names.

Discussion

The two segmentation algorithms presented here — each using precomputed word embeddings trained on the English Wikipedia corpus — both performed very well with respect to the $P_k$ metric on our test corpus of synthetically assembled Wikipedia article chunks. Determining whether these algorithms perform well in more realistic scenarios in other knowledge domains would require annotated data. For practical purposes, it might be disturbing that the variance in segment length is quite large. This can probably be tackled by smoothing the word vectors with a window over neighbouring words.

4 thoughts on "Text Segmentation using Word Embeddings"

Excuse my ignorance. How does this algorithm help me in my daily life as an avid reader and content marketer? Looking forward to your answer!

Could you please share the link for "The implementation is available as a module on GitHub"?

Same for me. This was an interesting read and I would like to try the algorithm on my own data. Thanks in advance!
Hi, here you can find the implementation and experiment notebook:…
https://blog.lateral.io/2017/10/text-segmentation-using-word-embeddings/
Yyy Xxx wrote:
> I don't see a problem with this specific backward
> compatibility issue.
>
> 1. Non-namespace names like "test:echo" is,
> IMHO, not a common practice.

Common practice is, unfortunately, not the test for backward compatibility. Also backward compatibility breaks do occur in Ant1. It just seems that it is a moveable feast.

> 2. If there were build files like this in the field,
> it would be trivial to fix them.

Tell that to the guy struggling with the arsDigita build files on ant-user. Most incompatible changes to the build file syntax are trivial to change, but that does not mean they are acceptable (cf. jarfile->destfile).

> 3. The XML standards warn about using colons in names

How many ant users read the XML standards? :-) I've had people complaining about Ant's inability to nest <!-- --> style comments.

Conor

--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org>
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200207.mbox/%3C3D2B772D.5090002@cortexebusiness.com.au%3E
#include <MQTTClient.h>

MQTTClient_willOptions defines the MQTT "Last Will and Testament" (LWT) settings for the client. In the event that a client unexpectedly loses its connection to the server, the server publishes the LWT message to the LWT topic on behalf of the client. This allows other clients (subscribed to the LWT topic) to be made aware that the client has disconnected.

To enable the LWT function for a specific client, a valid pointer to an MQTTClient_willOptions structure is passed in the MQTTClient_connectOptions structure used in the MQTTClient_connect() call that connects the client to the server. The pointer to MQTTClient_willOptions can be set to NULL if the LWT function is not required.

The eyecatcher for this structure. Must be MQTW.
The version number of this structure. Must be 0.
The LWT topic to which the LWT message will be published.
The LWT payload.
The retained flag for the LWT message (see MQTTClient_message.retained).
The quality of service setting for the LWT message (see MQTTClient_message.qos and Quality of service).
http://www.eclipse.org/paho/files/mqttdoc/Cclient/struct_m_q_t_t_client__will_options.html
Getting Started with a Raspberry PI

Zerynth Device Manager can be used for managing both microcontroller and microprocessor based devices. Let's see how to use a microprocessor based device like the Raspberry PI with the ZDM.

ZDM Client Python Library

If your device is powerful enough to run the standard Python distribution, the ZDM Client Python Library is what you need. The latest stable version of the ZDM-Client Python Library is available on PyPI. This step requires that you have the latest Python version with pip. In order to install the ZDM-Client Python Library, type the following command in a shell:

pip install zdm-client-py

Note: If you have an old version of the ZDM-Client Python Library, type the command to update to the latest version.

Publish data

You can now use the ZDM-Client Python library for connecting your CPU based device to the ZDM and stream your data. Create a new Python project with your preferred editor and paste the zdevice.json file inside it. Create a Python file zdm_basic.py and paste this simple code into it:

import zdm
import random
import time

def pub_temp_hum():
    # this function publishes into the tag weather two random values:
    # the temperature and the humidity
    tag = 'weather'
    temp = random.randint(19, 38)
    hum = random.randint(50, 70)
    payload = {'temp': temp, 'hum': hum}
    device.publish(payload, tag)
    print('Published: ', payload)

# connect to the ZDM using credentials in zdevice.json file
device = zdm.ZDMClient()
device.connect()

# infinite loop
while True:
    pub_temp_hum()
    time.sleep(5)

To run the above code, just type the command:

python zdm_basic.py

It will connect to the ZDM and start publishing data!!

Click here if you want to see more examples like this one.
https://docs.zerynth.com/latest/deploy/getting_started_with_rpi/
HDP 2.5.0 provides Phoenix 4.7.0 and the following Apache patches:

- PHOENIX-1523: Make it easy to provide a tab literal as separator for CSV imports.
- PHOENIX-2276: Addendum 2 to fix test failures.
- PHOENIX-2743: HivePhoenixHandler for big-big join with predicate push down.
- PHOENIX-2748: Disable auto-commit during bulk load.
- PHOENIX-2758: Ordered GROUP BY not occurring with leading PK equality expression.
- PHOENIX-2894: Sort-merge join works incorrectly with DESC columns.
- PHOENIX-2898: HTable not closed in ConnectionQueryServicesImpl.
- PHOENIX-2905: hadoop-2.5.1 artifacts are in the dependency tree.
- PHOENIX-2908: phoenix-core depends on both antlr 3.5 and antlr 2.7.7.
- PHOENIX-2912: Broken IT tests after PHOENIX-2905.
- PHOENIX-2919: PreparedStatement returns incorrect number of deleted records.
- PHOENIX-2920: Incorrect queries on multi-tenant tables with WHERE clause containing Row Value Constructor.
- PHOENIX-2934: Checking a coerce expression at top level should not be necessary for Union All query.
- PHOENIX-2952: array_length returns negative value.
- PHOENIX-3008: Prevent upgrade of existing multi-tenant table to map to namespace until we support it correctly.
- PHOENIX-3011: Fix missing Apache licenses.
- PHOENIX-3013: TO_CHAR fails to handle indexed null value.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/patch_phoenix.html
Database applications always need to connect to a database such as MSSQL, Oracle, or others, and for beginners it can be hard to make that connection. I once faced this problem myself, so for beginners I am going to show you how to connect your application to SQL Server Express edition using ADO.NET.

First, create a WinForms project. Then go to the View menu and click Server Explorer. Right click on the node named Data Connections and click Add Connection. A new window will open; select Microsoft SQL Server and click Continue. In the next window, write your server name in the Server name field. The server name will be yourPCname\sqlexpress; here my PC name is bikashpc. You can also get your server name by expanding the Servers node in the Server Explorer window; if there is only one server, there is no confusion. Then give your database a name in the database name field. Here I named my database bksdb. Click OK, and your application is now connected to the database.

Let's test it. Create a form like this: the task is to put a friend's roll number in the text box, and by clicking the Show Name button you will get that friend's name in the name text box.

So let's create a table for storing friends' names and roll numbers. Go to Server Explorer and click the + at the left side of your newly created database. It will expand to show several other nodes, including Tables, Views and so forth. Right click on Tables and click Add New Table. A new tab will open in which you put each column name and data type. Save the table by right clicking on the tab. Here I added two columns, name and roll, and named my table student.

Let's put some data in the table manually. To do that, click the + node at the left side of Tables; you can see your table there. Right click on your table name and click Show Table Data.
Add some names and rolls. Sometimes you need to change your table definition: renaming a column, adding a column, changing a data type and so on. To do that, again right click on your table name and click Open Table Definition, then modify what you want.

Now let's write some code to show data in the text box. Double click on the Show Name button. In the code file, it will look like this:

```csharp
private void button1_Click(object sender, EventArgs e)
{

}
```

Add the namespace for the SQL client at the top of the code file:

```csharp
using System.Data.SqlClient;
```

Now connect your application by writing some code. In the button handler, write:

```csharp
private void button1_Click(object sender, EventArgs e)
{
    string connectionString = @"Data Source=bikashpc\sqlexpress;Initial Catalog=bksdb; Integrated Security=True";
    SqlConnection sqlCon = new SqlConnection(connectionString);
}
```

Your connection is complete. But how can you get your connection string? Right click on your database and click Properties; the Properties window will open. Copy the Data Source value and paste it into the Data Source part of connectionString.

Now you can fetch data from your database. To do that, open the connection by writing:

```csharp
sqlCon.Open();
```

Remember, each time you open your connection you have to close it:

```csharp
sqlCon.Close();
```

So let's add some code to the button handler.
```csharp
private void button1_Click(object sender, EventArgs e)
{
    string connectionString = @"Data Source=bikashpc\sqlexpress;Initial Catalog=bksdb; Integrated Security=True";
    SqlConnection sqlCon = new SqlConnection(connectionString);
    sqlCon.Open();
    string commandString = "select name from student where roll='" + textBox1.Text + "'";
    SqlCommand sqlCmd = new SqlCommand(commandString, sqlCon);
    SqlDataReader read = sqlCmd.ExecuteReader();
    while (read.Read())
    {
        textBox2.Text = read["name"].ToString(); // it will show your friend's name
    }
    sqlCon.Close();
}
```

The command and reader portion fetches data from your database and shows it in the text box. Run your program, write a roll number in the text box, and click the Show Name button. What you see is your friend's name shown in the name text box. I hope this helps you a lot. Have fun with it!
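One caution about the handler above: building the SQL string by concatenating textBox1.Text leaves the query open to SQL injection. A safer sketch of the same lookup uses a parameter instead (the table, column and control names are the ones from this tutorial):

```csharp
private void button1_Click(object sender, EventArgs e)
{
    string connectionString = @"Data Source=bikashpc\sqlexpress;Initial Catalog=bksdb; Integrated Security=True";
    using (SqlConnection sqlCon = new SqlConnection(connectionString))
    using (SqlCommand sqlCmd = new SqlCommand("select name from student where roll = @roll", sqlCon))
    {
        // The parameter keeps user input out of the SQL text itself
        sqlCmd.Parameters.AddWithValue("@roll", textBox1.Text);
        sqlCon.Open();
        using (SqlDataReader read = sqlCmd.ExecuteReader())
        {
            while (read.Read())
            {
                textBox2.Text = read["name"].ToString();
            }
        }
        // the using blocks close the connection even if an exception is thrown
    }
}
```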
http://www.codeproject.com/Tips/212006/Connecting-Windows-Form-Application-With-ADO-NET-i?fid=1821781&df=10000&mpp=10&sort=Position&spc=Relaxed&tid=4442090
I'm a bit new to the concept of OOP, but slowly I'm getting there. I do however have a question. Suppose we have a class called Helloworld where you can input a string that is supposed to be a message to the world. The string is displayed with a function displaystring inside the class Helloworld. Nothing too fancy. However, when I define an object msg1 containing the message I want to send to the world, I'm unable to display it with displaystring without explicitly putting the message in the function call. Why is this? I already assigned my message to msg1 through Helloworld(message). By calling msg1.displaystring one would assume it would display the message which object msg1 contains. What am I doing wrong here?

Code:

```python
#! /usr/bin/env python3.4

class Helloworld:
    def __init__(self, string):
        self.string = string

    def displaystring(self, string):
        return (string)

message = "Hi there folks!"
msg1 = Helloworld(message)

print(msg1.displaystring)
print(msg1.displaystring(message))
```

Output:

```
<bound method Helloworld.displaystring of <__main__.Helloworld object at 0x7fba059cb860>>
Hi there folks!
```

Added note: by using msg1.displaystring(message) you're actually calling print(Helloworld(message).displaystring(message)). I find it strange I'd have to insert the message I want to display twice.
http://python-forum.org/viewtopic.php?f=6&t=11722
Getting to grips with the entire JavaScript ecosystem is a tough job when you're getting started. Coming from the native mobile space, there's a lot to learn. I've spent a few months immersed in the environment now, and can try to summarize a lot of topics. This should make it easier to find more information when you need it. This post is semi-opinionated, with links for further reading so you can get a different perspective too. This post focuses specifically on the JavaScript tooling around React Native projects, but is applicable to all JavaScript projects. Let's start with the entire reason we are using JavaScript for mobile in the first place: React and React Native.

React

React is a Facebook project which offers a uni-directional Component model that can replace MVC in a front-end application. React was built out of a desire to abstract away a web page's true view hierarchy (called the DOM) so that they could make changes to all of their views and then React would handle finding the differences between view states. Its model is that you would create a set of Components to encapsulate each part of the state of the page. React makes it easy to make components that are functional in the Functional Reactive Programming sense: they act like a function which takes some specially declared state and is rendered into HTML. By providing a well encapsulated Component model, you can aggressively reduce the amount of redundant code you need to build an application. By not initially writing to the DOM, React can decide what has changed between user actions, and that means you have to juggle significantly less state. A component optionally uses a language called JSX to visualise how each component's child components are set up; here's an example of a React component using JSX from Emission, our React Native library:
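Here's an illustrative sketch of such a component (my own example, not actual Emission source):

```jsx
import React from "react"
import { View, Text } from "react-native"

// Renders an artist's name and a count of their works.
// `artist` is passed in as a prop by the parent component.
export default class ArtistHeader extends React.Component {
  render() {
    const { artist } = this.props
    return (
      <View>
        <Text>{artist.name}</Text>
        <Text>{artist.works.length} works</Text>
      </View>
    )
  }
}
```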
React Native

I came to the conclusion early this year that writing native apps using compiled code is a pain, and it's been amazing to be able to work in React Native in contrast. React Native is an implementation of React where instead of having it abstract a web page's DOM, it creates a native view hierarchy. In the case of iOS that is a UIView hierarchy. Note that it does not handle View Controllers; the MVC model from Apple's Cocoa framework does not directly map into React Native's. I've written about how we bridge that gap earlier. React Native is cross platform. You write JavaScript like above, which React Native transforms into a native view hierarchy. That view hierarchy could be on a Samsung TV, a Windows phone or Android instead. It's a smart move: most "make apps in JS" approaches try to have a native-like experience where they replicate the platform's UI in HTML. However, that technique tends to feel unnatural very easily. If I showed you our app, you could not distinguish between a view controller in React Native, Swift or Objective-C.

App State

Think of every variable inside your application; that is your application's state. You could not make an app worth using without state. In MVC, MVVM, VIPER and other native patterns, there is no consistent way to handle changes in those variables. React uses a common state pattern through the use of specific terminology: "props", "context" and "state". Yes, the "state" and "state" thing is a little confusing; we'll get to it.

Props

Props are chunks of app state that are passed into your component from a parent component. In JSX this is represented as an XML attribute. Let's check out an example: see the InvertedButton component, which has three props being passed in: text, selected and onPress. If any of those props were to change, the entire InvertedButton component would be re-rendered to the native view hierarchy. These props are the key to passing data downwards through your hierarchy.
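A sketch of what passing those props could look like (reconstructed from the prop names in the text; not the post's original snippet):

```jsx
<InvertedButton
  text={this.state.following ? "Following" : "Follow"}
  selected={this.state.following}
  onPress={this.handleFollowTap}
/>
```

If the parent's state changes, React re-renders just this button with the new text and selected props.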
Note: you cannot access the parent component (without passing it in as a prop). You should therefore consider props as immutable bits of app state relevant to the component they're being passed into.

State, again

A component also has a state attribute. The key to understanding the difference between props and state is: state is something controlled within that component that can change; props do not change. The above example is a pretty good illustration of this: when the component is first added to the hierarchy, we send a networking request to get whether you are following something or not. The parent component (Header) does not need to update when we know whether you are following or not, but the InvertedButton does. So it is state for the parent, but a prop for the InvertedButton. This means changing the state for following will only cause a re-render in the button. So state is something which changes within a component, which could be used as props for its children. Examples of this are around handling animation progress, whether you're following something, selection indices and any kind of networking which we do outside of Relay. If you'd like to read more, there is a much deeper explanation in uberVU/react-guide.

Context

The docs are pretty specific about context: "If you aren't an experienced React developer, don't use context. There is usually a better way to implement functionality just using props and state." It seems to be something that you should only be using in really, really specific places. If you need it, you don't need this glossary.

JSX

As we'll find out later, modern JavaScript is a collection of different ideas, and using Babel you can add them at will into your projects. JSX is one such feature: it is a way of describing nested data using XML-like syntax. These are used inside React's render function to express a component's children and their props.
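As a concrete illustration of what that XML-like syntax becomes, here's a toy stand-in of my own (real React elements carry more bookkeeping than this):

```javascript
// A toy stand-in for React.createElement, showing the shape of the idea.
const React = {
  createElement(type, props, ...children) {
    return { type, props: { ...(props || {}), children } };
  },
};

// The JSX  <div className="foo">Hello</div>  is transpiled into:
const element = React.createElement("div", { className: "foo" }, "Hello");

console.log(element.type);              // "div"
console.log(element.props.className);   // "foo"
console.log(element.props.children[0]); // "Hello"
```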
Under the hood, JSX is quite simple: the XML-like markup is turned into plain calls to createElement, where createElement comes from the React module. You can find out more in the React docs.

Libraries

GraphQL

TLDR: an API format for requesting only the data you want, and getting back just that. If you want the longer explanation, I wrote a blog post on it.

Relay

Relay is what makes working in our React Native app shine. It is a library that allows a component to describe the small chunks of a networking request it would need to render. Relay then looks through your component hierarchy, takes all the networking fragments, and makes a single GraphQL request for all the data. Once it has the data, Relay passes the response in as props to all of the components in the tree. This means you can throw away a significant amount of glue code.

Redux

Redux is a state management pattern. It builds on top of React's "state is only passed down" concept, and creates a single way to handle triggering changes to your state. I'm afraid I don't have any experience with it, so I can't provide much context. I feel like this post covers it well though.

Tooling

Node

Node is the JavaScript implementation from Google's Chrome (called v8) with an expanded API for doing useful systems tooling. It is a pretty recent creation, so it started off with an entirely asynchronous API for any potentially blocking code. For web developers this was a big boon: you could share code between the browser and the server. The non-blocking API meant it was much easier to write faster servers, and there are lots of big companies putting a lot of time and money into improving the speed of JavaScript every day. Node has an interesting history of ownership; I won't cover it here, but this link provides some context.

NPM

NPM is the Node Package Manager. It is shipped with Node, but it is a completely different project and team. NPM the project is run by a private company.
NPM is one of the first dependency managers to offer the ability to install multiple versions of the same library inside your app. This contributes considerably to the number of dependencies inside any app's ecosystem. JavaScript people will always complain about NPM, but people will always complain about their build tools, dependency managers especially. From an outsider's view, it nearly always does what you expect, has a great team behind it and has more available dependencies than any other. NPM works with a package.json file as the key file to represent all the different dependencies, versions, authors and misc project metadata.

Yarn

Yarn is an NPM replacement (ish) by Facebook. It's very new. It solves three problems, which were particularly annoying to me personally:

- It flattens dependencies: this means that you're less likely to have multiple versions of the same library in your app.
- It uses a lockfile by default: this means that everyone on your team gets the same build, instead of maybe getting it.
- It is significantly faster.

It uses the NPM infrastructure for downloading modules, and works with the exact same package.json. I moved most of our projects to it.

Babel

I mentioned JSX a few times above. JSX is not a part of JavaScript; it is transpiled from your source code (as XML-like code) into real JavaScript. The tool that does this is Babel. Babel is a generic JavaScript transpilation engine. It does not provide any translation by default, but instead offers a plugin system for others to hook in their own transpilation steps. This becomes important because a lot of JavaScript features have staggered releases between browsers, and you can't always guarantee each JavaScript runtime will have the features you want to use. Babel's plugins can be configured inside your package.json. To ship your code to the world, you then create a script of some sort to convert your source code into "olde world" JavaScript via Babel.
In the case of a React Native project, Babel is happening behind the scenes.

Webpack

A JavaScript source code and resource package manager. It can be easy to confuse Babel and Webpack, so in simple terms:

- Babel will directly transform your source code file by file
- Webpack will take source code and merge it all into one file

They work at different scopes. Webpack is mainly a web front-end tool, and isn't used in React Native. However, you'll come across it, and it's better to know the scope of its domain.

ESLint

How can you be sure your syntax is correct? JavaScript has a really powerful and extensible linter called ESLint. It parses your JavaScript and offers warnings and errors around your syntax. You can use this to provide a consistent codebase, or in my case, to be lazy with your formatting. Fixing a lot of issues is one command away. I have my editor auto-indent using ESLint every time I press save.

Development

Live Reload

This is a common feature in JavaScript tooling: if you press save in a source file, then some action is taken. Live reloading tends to be a blunter action, for example reloading the current view from scratch, or running all of the tests related to the file.

Hot Reloading

Hot reloading is rarer, because it's significantly harder. Hot reloading for React projects means injecting new functions into the running application while keeping it in the same state. For example, if you had a filled-in form on your screen, you could make styling changes inside your source file and the text inside the form would not change. Hot reloading is amazing.

Haste Map

Part of what makes React Native support hot reloading, and allows Jest to understand changes for testing, is the use of a Haste Map. A Haste Map is a dependency resolver for JavaScript, looking through every function to know how it connects to every other function within the JavaScript project.
With the dependencies mapped, it becomes possible to know what functions would need replacing or testing when you press save after writing some changes. This is why it takes a bit of time to start up a React Native project. The public API is deprecated, and you shouldn't use it in your projects, but the old README is still around.

Testing

Jest

Facebook have their own test runner called Jest. It builds on Jasmine, and offers a few features that kick ass for me:

- Re-runs failing tests first
- Assumes all tests unrelated to changes are green and doesn't run them
- Watch mode that works reliably

I miss these features when I'm not in a Jest project.

Jest Snapshots

Jest has a feature called Jest Snapshots, which allows you to take "snapshots" of JavaScript objects, and then verify they are the same as they were last time. In iOS we used visual snapshot testing a lot.

VSCode-Jest

I created a project, vscode-jest, to auto-run Jest inside projects that use it as a test runner when using Visual Studio Code. I've written about our usage of VS Code in this blog series also.

JavaScript the Language

I'm always told that JavaScript was created in 10 days, which is a cute anecdote, but JavaScript has evolved over the following 21 years. The JavaScript you wrote 10 years ago would still run, but modern JavaScript is an amazing and expressive programming language once you start using modern features. Sometimes these features aren't available in Node or your browser's JavaScript engine; you can work around this by using a transpiler, which takes your source code and backports the features you are using to an older version of JavaScript.

ES6

JavaScript is run by a committee. Around the time that people were starting to talk about HTML5 and CSS3, work was started on a new specification for JavaScript called ECMAScript 6. ES6 represents the first point at which JavaScript really started to take a lot of the best features from transpile-to-JavaScript languages like CoffeeScript.
This made it feasible for larger systems programming to be possible in vanilla JavaScript.

ES2016

It took forever for ES6 to come out, and every time they created or amended a specification there were multiple implementations of it available for transpiling via Babel. This, I can imagine, was frustrating for developers wanting to use new features, and for specification authors trying to put out documentation for discussion as a work in progress. This happened a lot with the Promises API. To fix this, they opted to discuss specifications on a yearly basis, so that specifications could be smaller and more focused, instead of major multi-year projects. Quite a SemVer jump from 6 to 2016.

Stages

Turns out that didn't work out too well, so the terminology changed again. The change is mainly to set expectations between the specification authors and the developers transpiling those specifications into their apps. Now an ECMAScript language improvement specification moves through a series of stages, depending on its maturity, starting at 0 and working up to 4: 0 Idea, 1 Proposal, 2 Draft, 3 Accepted and 4 Done. So an ECMAScript Stage 0 feature is going to be really new; if you're using it via a transpiler then you should expect a lot of potential API changes and code churn. The higher the number, the longer the spec has been discussed, and the more likely the code you're transpiling will become vanilla JavaScript in time. The committee that discusses these improvements is the TC39 committee; the cool bit is that you can see all the proposals as individual GitHub repos, so it's convenient to browse.

Modules / Imports

A module is the terminology for a group of JavaScript code. Terminology can get confusing, as the import structure for a library is very similar to importing a local file. You can import a module using syntax like import { thin, other } from "thingy".
Here are some examples from our project: an import can either have a default export, or a set of exportable functions/objects. You might see an import like const _ = require("underscore") around the internet; this is an older format for packaging JavaScript called CommonJS. It was replaced by the import statements above because you can make guarantees about the individual items exported between module boundaries. This is interesting because of tree-shaking, which we'll get to later.

Classes

Modern JavaScript has classes, introduced in ES6. This means that instead of writing prototype-based constructor functions, you can use class syntax. Classes provide the option of doing object-oriented programming, which is still a solid way to write code. Classes provide a simple tool for making interfaces, which is really useful when you're working to the Gang of Four principles: "Program to an interface, not an implementation," and "favor object composition over class inheritance."
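As an illustration of the contrast (my own example), here are the pre-ES6 prototype style and the ES6 class style expressing the same object:

```javascript
// Pre-ES6: a "class" was a constructor function plus a prototype
function OldArtist(name) {
  this.name = name;
}
OldArtist.prototype.describe = function () {
  return "Artist: " + this.name;
};

// ES6 class syntax expressing the same thing
class Artist {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return `Artist: ${this.name}`;
  }
}

console.log(new OldArtist("Banksy").describe()); // Artist: Banksy
console.log(new Artist("Banksy").describe());    // Artist: Banksy
```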
ES6 brought two replacements, both of which will give you a little bit of cognitive dissonance if you have a lot of Swift experience. let - the replacement for var, this is a mutable variable, you can replace the value of a let. The scope of a let is exactly what you think from every other programming language. const - this is a let that won't allow you to change the value. So it creates a mutable object (all JS objects are mutable) but you cannot replace the object from the initial assignment. This The keyword this is a tricky one. It is confusing because this gets assigned to the object that invokes the function where you use this. It's confusing because you may have a function inside a class, and would expect this to be the instance to which the function is attached to, but it very easily could not be. For example: In the above example this inside handleTap does not refer to the instance of Article. Tricky right? There are two "easy" fixes, using arrow functions instead if normal functions: Or you can use the bind function to ensure that this inside the function is what you want it to be. This is a great in-depth explanation of the way it works: Understanding the “this” keyword in JavaScript. Strict Mode Introduced in ECMAScript 5.1, it provides a way to opt-in to more errors inside the JavaScript runtime. As you're likely to be using both a linter and a transpiler to keep your source clean, I'm less worried about including it on every page. Destructuring Object destructuring is one of those things that saves a little bit of code all the time. It's especially useful given the amount of time you spend passing around plain old JavaScript objects. This is something that CoffeeScript took from Ruby: or for an Object This makes it really easy to pull out subsets of existing objects and set them as variables. 
Arrow Functions In JavaScript a function has always looked like: This gets frustrating when you're doing functional-style programming, where you use closures for mapping, filtering and others. So instead ES6 introduced terser ways of doing this. So I'm going to write the same function multiple times: Promises Node is renowned for having a non-blocking API from day one. The way they worked around this is by using callbacks everywhere. This can work out pretty well, but eventually maintaining and controlling your callbacks turns into it's own problem. This can be extra tricky around handing errors. One way to fix this is to have a Promise API, Promises offer consistent ways to handle errors wand callback chaining. JavaScript now has a built-in a Promise API, this means every library can work to one API when handling any kind of asynchronous task. I'm not sure what ECMA Specification brought it in. This makes it really easy to make consistent code between libraries. However, more importantly, it makes it possible to have async/await. Async/Await Once Promises were in, then a language construct could be created for using them elegantly. They work by declaring the entire function to be an async function. An async function is a function which pretends to be synchronous, but behind the scenes is waiting for specific promises to resolve asynchronously. There are a few rules for an async function: - You cannot use awaitinside a function that has not been declared async. - Anything you do return will be implicitly wrapped in a Promise - Non-async functions can just handle the promise an asyncfunction returns So, a typical async function You aren't always given a promise to work with as not all APIs support promises and callbacks, wrapping a callback function is pretty simple: The await part of an async function using await readFile will now wait on the synchronous execution until the promise has resolved. This makes complicated code look very simple. 
Tree Shaking All development ecosystems have trade-offs which shape their culture. For web developers reducing the amount of JavaScript they send to a client is an easy, and vital part of their day job. This started with minifying their source code, e.g. reducing the number of characters but having the same behavior. The current state of the art is tree-shaking, wherein you can know what functions are unused and remove those from the source code before shipping the code to a client. A haste-map is one way to handle these dependencies, but it's not the only one. Rollup is considered the de-facto ruler of the space, but it is in babel and webpack also. Does this affect you if you're using React Native? Not really, but it's an interesting part of the ecosystem you should be aware of. Types Types can provide an amazing developer experience, as an editor can understand the shape of all the object's inside your project. This can make it possible to build rich refactoring, static analysis or auto-complete experiences without relying on a runtime. For JavaScript there are two main ways to use types. Flow and TypeScript. Both are amazing choices for building non-trivial applications. IMO, these two projects are what makes JavaScript a real systems language. Both take the approach of providing an optional typing system. This means you can choose to add types to existing applications bit by bit. By doing that you can easily add either to an existing project and progressively add types to unstructured data. Interfaces As both Flow and TypeScript interact with JavaScript, the mindset for applying types is through Interfaces. This is very similar to programming with protocols, where you only care about the responsibilities of an object - not the specific type. Here is a Flow interface from DangerJS: This interface defines the shape of an object, e.g. what functions/properties it will it have. 
Using interfaces means that you can expose the least amount of about an object, but you can be certain that if someone refactors the object and changes any interface properties - it provide errors. Flow Flow is a fascinating tool that infers types through-out your codebase. Our React Native uses a lot of Flow, we have a lot of linter rules for it too, so instead of writing a function like: We would write it like this: Wherein we now have interfaces for our arguments and the return value of the function. This means better error message from Flow, and better auto-complete in your editor. TypeScript TypeScript is a typed language that compiles JavaScript by Microsoft. It's awesome, it has all of the advantages that I talked about with Flow and a lot more. With TypeScript you can get a much more consistent build environment (you are not picking and choosing different features of ES6) as Microsoft implement all of it into TypeScript. We opted to go for JS + Flow for Artsy's React Native mainly because we could incrementally add types, and you can find a lot more examples of JavaScript on the internet. It also is the way in which React Native is built, so you get the ecosystem advantage. That said, if we start a new React Native from scratch project, I would pitch that we should use TypeScript after my experiences with making PRs to VS Code. TypeScript feels more comprehensive, I got better error messages and VS Code is very well optimised for working in TypeScript projects. Typings/Flow-Typed Shockingly, not all JavaScript modules ship with a typed interface for others. This makes it a pain to work with any code outside your perfectly crafted/typed codebase. This isn't optimal, especially in JavaScript where you rely on so many external libraries. Meaning that you can either look up the function definitions in their individual docs, or you can read through the source. This breaks your programming flow. 
Both TypeScript and Flow offer a tool to provide external definitions for their libraries. For TypeScript that is typings, and for Flow, flow-typed. These tools pull definition files into your project that tell TypeScript/Flow what each module's inputs and outputs are shaped like, and provide inline documentation for them. Flow-Typed is new, so it's not really got many definitions at all. Typings on the other hand has quite a lot, so in our React Native we use typings to get auto-complete for our libraries.

JavaScript Fatigue

So that's my glossary; there's a lot of interesting projects out in the JS world. They have a term "JavaScript fatigue" which represents the concept of the churn in choosing and learning from so many projects. This is very real, which is something we're taking into account. Given the amount of flexibility in the ecosystem, it's really easy to create anything you want. If I wanted to implement a simplified version of Swift's guard function for our JavaScript, I could probably do it in about 2 days using a Babel plugin, then we can opt in on any project we want. This can make it easy to freeze and flip the table, but it also makes JavaScript a weird, kind of ideal, primordial soup where some extremely interesting ideas come out. It's your job to use your smarts to decide which are the ideas which will evolve further, then help them stabilize and mature.
http://artsy.github.io/blog/2016/11/14/JS-Glossary/
Well into C++ development for an ATmega328 / ATmega32U4 target, I'm using a DTO to capture compact results parsed by ArduinoJson so that I can dispose of the buffers before continuing. All execution is single threaded. When I added a new field to the DTO of name[16] the free space reported by MemoryFree dropped from a moderately comfortable 1 kB to below 700 bytes. Surprised and alarmed by the dramatic change (and with more uses of struct just ahead), I looked into various sizes of the name field. The free memory behaved as expected for shorter arrays: add two bytes to the array, lose two bytes of free memory. However, on increasing from a length of 8 to 10 the free memory dropped by 323 bytes and the .text section increased by 194 bytes. Changing from class to struct didn't change the behavior, nor did removing the initializer. I understand that alignment might cause a byte or three of difference, but this is unlike anything I've seen before.

Edit: Checking sizeof(ParseDto) behaves as expected: 31 bytes for name[8], 33 bytes for name[10], 39 bytes for name[16].

What is behind this, so that I can both resolve the specific issue, as well as try to make sure I haven't repeated the mistake elsewhere? Toolchain is platformio 3.6.1 on macOS, avr-g++ (GCC) 5.4.0.

    class ParsedDto {
     public:
      Commands command;       // enum class Commands
      unified_error_t error;  // uint16_t
      i2c_addr_t addr;        // uint8_t
      bool addr_all;
      float offset_t;
      float offset_h;
      char name[16];
      char type[SensorRegistry::kType_string_buf_len];  // const size_t kType_string_buf_len{9};
    };

A single instance of ParsedDto is created on the stack as local. The "Free heap:" value shown is from MemoryFree, reported at the time that the instance of ParsedDto is on the stack.
With name[8]:

    ===> setup() done
    {} Free heap: 1026
    DATA:    [====      ]  36.2% (used 742 bytes from 2048 bytes)
    PROGRAM: [=======   ]  69.1% (used 22280 bytes from 32256 bytes)

    .pioenvs/uno/firmware.elf :
    section                      size      addr
    .data                         510   8388864
    .text                       21770         0
    .bss                          232   8389374
    .comment                       17         0
    .note.gnu.avr.deviceinfo       64         0
    .debug_aranges                416         0
    .debug_info                  3786         0
    .debug_abbrev                1702         0
    .debug_line                  2374         0
    .debug_str                    520         0
    Total                       31391

With name[10]:

    ===> setup() done
    {} Free heap: 703
    DATA:    [====      ]  36.2% (used 742 bytes from 2048 bytes)
    PROGRAM: [=======   ]  69.7% (used 22474 bytes from 32256 bytes)

    .pioenvs/uno/firmware.elf :
    section                      size      addr
    .data                         510   8388864
    .text                       21964         0
    .bss                          232   8389374
    .comment                       17         0
    .note.gnu.avr.deviceinfo       64         0
    .debug_aranges                416         0
    .debug_info                  3786         0
    .debug_abbrev                1702         0
    .debug_line                  2374         0
    .debug_str                    520         0
    Total                       31585

If it has that much effect clearly more than one instance of the class is created.

Only one instance is created within one function, known to be non-reentrant, during single-threaded execution. Confirmed by replacing the default constructor with a "debug print" and only one instance is seen.

Edit: Behavior for up through [8] is linear -- add two bytes, lose two bytes of run-time memory -- it is step-wise discontinuous on increasing to [10], so "multiple instances" doesn't seem to explain this behavior either. Also, .text, at least as I understand it, also shouldn't increase non-linearly in that way (based on run-time usage) as it is created at compile time.

Edit: Behavior of size of .text and free memory with varying sizes of name[]

It is almost as if when the array length exceeds 8 that it is handled very differently. Only one example of this, but 8 curiously is a power of two. However, changing from type[9] to type[8] doesn't "magically" shrink .text and only increases the free memory indication by 1 byte.
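As a side note, the linear part of the sizeof behavior is easy to reproduce on a host compiler. This sketch uses stand-in typedefs for the project's types (the names mirror the thread but are illustrative, and host sizes differ from AVR's because of alignment):

```cpp
// Host-compilable sketch; Commands/unified_error_t/i2c_addr_t are stand-ins
// for the project's real types, not the poster's actual code.
#include <cstdint>

enum class Commands : uint8_t { kNone };
using unified_error_t = uint16_t;
using i2c_addr_t = uint8_t;

template <unsigned N>
struct ParsedDtoN {
    Commands command;       // enum class Commands
    unified_error_t error;  // uint16_t
    i2c_addr_t addr;        // uint8_t
    bool addr_all;
    float offset_t;
    float offset_h;
    char name[N];           // the field being varied in the experiment
    char type[9];
};

// Growing name[] by 8 bytes grows sizeof by exactly 8 (char needs no extra
// alignment), so the struct layout itself cannot explain a 323-byte jump
// in free memory.
static_assert(sizeof(ParsedDtoN<16>) == sizeof(ParsedDtoN<8>) + 8,
              "name[] grows sizeof linearly");
```

That the object grows linearly while the free-memory report does not is exactly what points away from the data layout and toward code generation.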
Unless you've explicitly defined "heap start" & "heap end" there must be some algorithm based on estimated stack usage. Perhaps you've crossed some magic number boundary.

Another theory: There is a limit to the indexing offset in "Base register+Offset" addressing mode (maybe 64); when you cross that boundary extra code is required.

I see you did sizeof(ParseDto)=39 but what else is on the stack?

I'll look into what explicitly defined "heap start" & "heap end" means and how to try it. The loss of free memory appears at the start of execution, after the Arduino-framework main() calls setup() -- so not anything other than "Arduino essentials" and the frames for main() and setup() should be on the stack.

For differing sizes of name[] the indicated free memory:

This suggests to me that the impact is due to the contents of the .initN sections, which are in the .text section. The "swelling" of the .text section is coincident with the reduction of free memory. If anyone has some suggestions as to how to examine those sections, I'd appreciate it.

As a "sanity check", I downloaded the official AVR 8-bit Toolchain 3.6.2 - Mac OS X 64-bit as listed on and it behaves the same way.

Well, regular visitors here like to solve puzzles. But you need to give us more to go on, such as a small complete test program that exhibits the symptoms. Then we can all play along with the home game. [edit] but I don't know how many of us might have your same setup. Surely it couldn't be an idiosyncrasy in platformio! Barring full source, full maps and/or full .LSS.

You can put lipstick on a pig, but it is still a pig. I've never met a pig I didn't like, as long as you have some salt and pepper.

I know that feeling well, from other platforms and projects!
Right now with close to 4000 lines of code and comments, I need to find a "minimal-failing" example.

This looks like a bug in Json and nothing to do with your code (excepting that it exposes the bug). If that's the case then the best you can hope for in this forum is suggestions for a work-around. Though, as IBM is famously said to claim: "That's not a bug, it's a feature!"

Nicholas O. Lindan, Cleveland Engineering Design, LLC

That very well may be and a path that I'll try to pursue (Edit: if a compiler-based problem isn't the root of this).

Edit: To be clear, this DTO isn't passed to ArduinoJson code; it is used to store the results of calls to methods in that library. Typically the "raw" result is examined and an intermediate, derived value is set in the DTO. The closest ArduinoJson gets to "touching" the DTO is

In the mean time, enjoy:...

WTF is a JSON DTO? TYVM!! --Mike

Something I wouldn't use on a machine with more memory! Basically, when parsing JSON with the ArduinoJson library, the structure of the JSON is stored in a JsonBuffer object with the content often as references to other contemporaneous objects, but as copies of char*, String, and __FlashStringHelper* (recast program-memory references). As the buffer is big by AVR-memory standards (200-400 bytes in my case), it can be imperative to extract the data and dispose of the buffer before taking action on the parsed data. The "trick" is to parse, then copy the "interesting" data to a DTO that is much more compact than the original JSON. In my case I can pass in a reference to a 39-byte DTO to a function, create a transient 200-byte StaticJsonBuffer on the stack, parse, return from the function and reclaim 200 bytes for a net savings of ~160 bytes.
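The pattern described above can be sketched as a host-compilable outline; the names, buffer sizes and parsed values here are illustrative stand-ins, not the poster's real code or the ArduinoJson API:

```cpp
// Host-compilable sketch of the "parse into a transient buffer, copy only
// what you need into a small DTO" pattern. Everything here is hypothetical.
#include <cassert>
#include <cstdio>
#include <cstring>

struct ParsedDto {   // compact result object that outlives the parse
    int addr;
    float offset;
    char name[16];
};

bool parseToDto(const char* json, ParsedDto& out) {
    char buffer[200];  // stand-in for a transient StaticJsonBuffer<200>;
                       // reclaimed from the stack when this function returns
    std::strncpy(buffer, json, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';
    // A real implementation would run the JSON parser over `buffer` here
    // and copy only the interesting fields; we fake two of them:
    out.addr = 0x48;
    out.offset = 1.5f;
    std::snprintf(out.name, sizeof(out.name), "sensor");
    return buffer[0] == '{';
}
```

The caller keeps only the small `ParsedDto`; the 200-byte buffer's lifetime is confined to the call — at least, that is the intent, which the rest of this thread shows the optimizer can subvert.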
As I need to create a transient 400-byte StaticJsonBuffer to generate the output, those 160 bytes come in quite handy. See further...

OK I Googled it - "Data Transfer Object". Some people seem to really dislike it! --Mike

Lacking that, at least before&after output from "avr-nm -SC --size-sort foo.elf" (which should at least tell us where the extra code came from.)

That exercise (disabling chunks of stuff) will often help to pinpoint the problem area and lead to discovery of the cause. Still, comparing the full maps of the test cases should help. Then examine the LSS for the suspect areas.

You can put lipstick on a pig, but it is still a pig. I've never met a pig I didn't like, as long as you have some salt and pepper.

Thanks -- that was the kind of hint I was hoping for! So surprising were the results on the binaries I had saved away before, I captured name[8] and name[9] versions as git commits and rechecked things. So the change from name[8] to name[9]: (The code is slightly different than when the thread was started, through removal of the explicit ParsedDto() initializing constructor and the diagnostic ParsedDto::Dump() that would print the contents.)

Thanks to @westfw, for each of the two generated firmware.elf:

    avr-nm -SC --size-sort firmware.elf > nm-SC.out
    cut -d ' ' -f 2- nm-SC.out > nm-SC.cut

and compare the two (name[8] on the left, name[9] on the right). The first two lines are unremarkable to me as they are a uint16_t and a const char* const ("MAX30205" for 9 or 10 bytes of storage) in namespaces that may have had their limited usages inlined or otherwise optimized.

Now where things are very surprising are with main, setup, and SerialApiV1::parseJsonToDto(). Calling sequence here is:

    main()
      setup()
        while (true) {
          // get a line
          SerialApiV1::processLine()
            ParsedDto parsed_dto;
            parseJsonToDto()
            ...
          return;
        }

With name[8], there is no setup symbol at all. I am guessing that its single usage was subsumed into main. With name[9], there is no SerialApiV1::parseJsonToDto() symbol. Its single usage is in SerialApiV1::processLine(), which has a single usage in setup(). processLine() does not appear in either symbol table, so appears to have been optimized out (in?).

(I grudgingly deal with the Arduino framework's hidden main() by using setup() as my "main()" and ignoring loop() and all the rest. Getting onto another framework is a topic for another day.)

    name[8]: 0x2da + 0x5f6 = 2256 bytes
    name[9]: 0x078 + 0x8d4 = 2380 bytes

with a difference of 124 bytes, which is the difference seen in the size of the .text sections. While I may never know why the compiler made very different decisions based on a single-byte change in a single field, it clearly did. Now why this leads to a difference of 317 bytes very early in program execution is still a puzzle to me.

const char* const SensorRegistry::TypeStrings::kMAX30205 -- 8 or 9 bytes of string constant -- maybe there are a dozen usages of this somewhere... Used as a return value for an overridden, virtual class method defined in the .h file, and once in a strcmp() call, so it seems that there should only be one reference to this in a virtual-function table somewhere and one for the strcmp() call. Even still, changes in compiled instructions aren't even in the same address space as SRAM on an AVR device. I keep coming back to a member of the .text section and think that it has to be doing something very different as well between the two builds.

Why do you suspect that? Your 328p doesn't have >64k of ROM. While interpreting a disassembly output (avr-objdump -SC) might be a pain on a program with 4kLOC, it should be pretty easy to rule out the .initN sections, all of which should show up at the startup vector location, before main() is called.
You don't by any chance have a case where your structure is passed by value rather than by reference/pointer, do you? I don't know exactly what avr-gcc will do once the parameters no longer fit in the registers, but I'm pretty sure it's not very pretty...

Times like this have me wishing I had a debugger for the ATmega328P. If the behavior is the same on an ATmega32U4 (which I believe supports JTAG debugging), I'll try to confirm it there. The coincidence of a 32-byte instance and 32 registers (at least as I read the datasheet) pointed out by @westfw is certainly an interesting one. I'll get back to that in a bit.

I did check and confirmed that the instance was being passed by reference. For sanity, I also tried passing by pointer, as well as declaring it a global and accessing it directly. No noticeable changes in behavior.

For those "playing along at home", a few things tickled my thinking: In setup(): Which looked like: which, from probably doesn't "look" like a function call in the compiled code. Taking that another step, the compiled code that resulted from the declaration of that function may not be step-by-step the same as Hmmm, 300 bytes from jsonBuffer, pretty close to

Checking at the "top level" of the calling tree seemed reasonable to determine "idle" memory availability. In retrospect, even though the code looked like it was "outside" of all the functions where allocation was being done, it seems like "optimization" had other ideas of where that print statement was relative to allocation for functions that weren't actually "called". Looking "inside" of the "function calls" (however GCC decided to implement them) reveals that the "peak" memory usage is comparable and suggests that the 300-byte buffer is "pre-allocated" somewhere in the code outside of the "function call" when the code is "rearranged".
with name[8]: with name[9]: sizeof(StaticJsonBuffer<300>) returns 308, so within a couple of bytes of the "difference" seen. Perhaps something of interest in another application where that "pre-allocation" would push SRAM usage over the limit.

Visible optimization flag is -Os. For reference, the compiler line was (though I suspect the build system adds more, as -fno-rtti seems to be in effect)

Why can't you just diff the LSS? Why do you need to run the code with a debugger to see what's going on?

Thanks for the suggestion! I'll look into how to do that. Edit: Thanks for the pointer. seems to provide great insight.

If you have a function with 300 bytes of local variables, and the compiler decides to inline that function into main(), then the 300 bytes will be allocated at the beginning of main(), not "just before the function code." That could explain the difference in heap size at the "beginning" of your program. This should be offset by no longer allocating extra stack space when the function code IS executed. If you have two functions allocating lots of local variables, and they both get inlined, you might not get the "interleaving" of memory allocation that you were hoping to get, and it's time to investigate __attribute__((noinline)) to prevent such behavior (if necessary.)
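A minimal sketch of that __attribute__((noinline)) workaround — the function and buffer names here are hypothetical, not the poster's code:

```cpp
// With -Os, GCC may inline a function into main() and reserve its large
// locals in main()'s frame for the whole run. Marking it noinline keeps
// the 300-byte buffer's lifetime confined to the call itself.
#include <cassert>
#include <cstring>

__attribute__((noinline))
int parseLine(const char* line) {
    char jsonBuffer[300];  // large transient buffer; out-of-line, it only
                           // occupies the stack while this function runs
    std::strncpy(jsonBuffer, line, sizeof(jsonBuffer) - 1);
    jsonBuffer[sizeof(jsonBuffer) - 1] = '\0';
    return static_cast<int>(std::strlen(jsonBuffer));
}
```

The trade-off is a real call/return at each use, which on AVR is usually a few bytes of code and cycles — cheap insurance against permanently parking a large buffer in main()'s frame.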
-- John Woods Top - Log in or register to post comments Tell your diff program to ignore the columns with addresses and opcodes and then the changes in the source will be more obvious Top - Log in or register to post comments
https://www.avrfreaks.net/comment/2582696
CC-MAIN-2019-30
refinedweb
2,770
70.94
Welcome, Rich Client Enthusiasts We've been anticipating this moment since JavaOne'03, when we introduced JDNC in a quiet little BOF labeled as a J2SE Blueprints BOF. We've spent the last year more clearly defining the vision and architecting the base code to maximize the benefits across our developer base. And now we look forward to engaging with you directly to move it forward. So talk to us. We are listening! The JDNC Team Hi Amy! I am so excited about JDNC and the great potential it has. I downloaded the package and tried it. It is very simple to use and well-documented. For a non-programmer like me, it is quite exciting to be able to use such powerful technology so easily. In your presentation of JDNC, you mentioned in the Markup section: "There are existing toolkit-level XML dialects that can be used for this ? eNode and SwingML to name a couple". I believe that you would have also mentioned Ultrid ([u] [/u]) if you had been aware of it. It fills an important gap in the creation of GUI for the Java programmers. Its structure is such that it makes it easy to add components. As a proof of concept, our developer already created an Ultrid JDNC component which means that, I, as a non-programmer was already able to create a demo application in Ultrid that uses the power of the JDNC high-level components. You are welcome to have a look at this component and the clone of JDNC editorDemo program at our site. I only had to give the namespace of that component and use these newly created tags: <[b]jdnc:jneditor[/b] <[b]jdnc:statusbar[/b] and voil�! We are quite enthusiasts at Ultrid at the potential of both technologies as we definitely have a lot in common. I am assuming that 'Rich Client Enthusiasts' includes those of us that want to develop rich internet applications. With that in mind, are we likely to see any JDNC demos based around components embedded in web pages in addition to the WebStart ones? 
Obviously it may be a case of RTFM, but have you any comments on whether JDNC components have been designed so that those embedded in a web page can be scripted? For example could Javascript be used to scroll a table to a specified row, or could some Javascript be triggered by the user selecting a row in a table? Message was edited by: adriancuthbert jdnc-interest@javadesktop.org wrote: > I am assuming that 'Rich Client Enthusiasts' includes those of us that want to develop rich internet applications. > With that in mind, are we likely to see any JNDC demos based around components embedded in web pages in addition > to the WebStart ones? Another delivery vehicle that is very promising for large sets of tools is Jini. The Jini lookup services make it easy to put new tools into view quickly. The JERI framework in Jini 2.0 makes it possible to develop authentication actions easily for end to end authentication. The Java security model is utilize for providing method level security out of the box. And, you can clearly augment the security framework with the DynamicPolicyProvider. I definately see some of the benefits of JNLP delivery, and applets too. However, JNLP has the problem that applications can't customize the JVM configuration for security management without writing custom installers that will invariably mess up something in the users environment. Applets have issues for longer lived applications, where the users browser can be redirected to a new page by some other application if they have 'reuse browsers' turned on etc. We need to consider all of these vehicles as ways that people will receive client software for the next generation desktop environment! Gregg Wonderly --------------------------------------------------------------------- To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net For additional commands, e-mail: jdnc-help@jdnc.dev.java.net Hi Folks, I'm glad that people are excited about JDNC and the desktop client experience. 
I have high hopes for this technology to deliver a data aware rich client experiences more easily. The greatest aspect of this project is that Sun is committed to it - which means that there are Sun employees working on it full time. We are engineers in the Java Client Group (Swing/AWT/Java2D/i18n/Deployment) of J2SE who wish to use JDNC to explore new ideas, and deliver rich client solutions without the constraints and time lines of a formal JCP/J2SE release process. JDNC will be a vehicle for delivering the components and mechanisms that we have unofficially provided for years (JTreeTable, SwingWorker) and put them through the Darwinian process of open source to produce the most robust client technology. Perhaps some of this will find its way into J2SE. We have checked in the workspace and documents to set to the tone of development. We have hashed out the basic build structure, tests and architecture. It may seem a little overwhelming at first and we will do our best to get people up to speed. I believe there are a lot of people who feel a need for this technology and we wish to tap into that desire. That's where you can help. This is an open source project with the LGPL license. We welcome your involvement. I'm sure it will evolve over time to fulfill specific needs and reflect the ideas of the contributers. If we seem unresponsive right now it's because we are absolutely buried getting ready for JavaOne next week. So please be patient with us. If you are going to JavaOne this year then there are many opportunities to talk with us. See Amy's Blog entry: for details. Hope to see you there! Thank you for your interest and I hope to be working with you on JDNC. --Mark Hi! I just found out..about your project. It was about time to have such an initiative in Java. I cant wait to see more features and of course JDNC to be some kind of standard. Anyway.. I would like to ask ..something! For a couple of years I was using a Borland related Package called dbswing. 
It was not so bad. It provides some very specific database related stuff Swing and binding structures. I have not gone very deeply in you API so I might be wrong. Are there any structures of functionality similar to what we call a Dataset (the datamodel lets say of a JTable..with some XML capabilities (in borland dbswing it is manipulated in XML). Is the org.jdesktop.swing.binding package related to that with the AbstractBinding class and interfaces? Second question. Are there or will be..components that will cooperate with the binding structures in order to provide a database resolver/provider functionality? I had a look at the API but I could not see something (I am sorry if I missed it). For me these two are the most important..and I am looking forward to seeing them in the package if not already there. Thank you for your time. and well done again! Hi Amy and JDNC team, I looked into the website and tried out the demos. Which look great, and all the demos work without any problems on my WinXP using jdk 1.5.0 beta2! Two minor updates: 1. In the mainpage, it's using org.javadesktop.(swing, ..) as package prefix, but at below page:, it's using org.jdesktop.(swing, ..). Seems that one page needs an update. 2. Add "daily_logger" and "weekly_logger" as Observers would get the owners notified with daily and weekly access statistics: Cheers ! -George Zhang (from JDIC team) Woohoo! I get the honor of being the first poster :-). Well, folks, you beat me to the punch. I have a fairly extensive library I've been working on for the past 6 months or so that does a lot of stuff you guys do. I think some of my stuff (herein named jgui) is a little more refined, but I really like some of the things you guys have, such as the MetaData. The code I was working on is entirely open source (Apache license), and I'd love to see some of it be put to use. I was floored by how much overlap exists between the two projects. * JGUI has an Application object, so does JDNC. 
* JGUI has a BeanSource and JavaBeanSource, which are essentially the same idea as the DataModel. I had a newer design I'd started called a DataSource which was geared more towards RowSet's. * JGUI was about to get async loading (there is some Async stuff in the DataSource code set). * JGUI has a date picker, table sorter, and such. * JGUI has an interface called JavaBeanEnabled that is implemented by objects interested in being hooked up to a JavaBeanSource. The bean source notifies the components of any changes in the data, and such. * BeanSource's can be hooked up in Master/Detail relationships so that when a new Customer bean, for instance, is selected in the master, a bunch of orders are loaded up into the Detail BeanSource. * BeanSources support the concept of multiple filters * A JavaBeanTableModel that generates a schema for the table based on either fieldNames/titles/Class types passed into the model, or based on the first row of data in the table data. * A SplashScreen with progress indicator (threaded, of course) * A LoginDialog. Ok, I think this thing fairly rocks, but that's just a proud papa speaking :-) * A Wizard framework (Which works fairly well. A little awkward though. I know its not quite right, but it does work) * TypeAssist in the ComboBox * JGui has a component called JMasterDetail that allows one to register multiple detail panels with the master. The Master provides a way for the detail panels to be listed in a JList, JTree, or some other component such that the master component dictates which detail panel to show. * JGui has a JScrollUp component which is a nice little widget that has a button that will hide/show the content portion of the component (when hidden, only the title portio is visible). * JGui has a JTitledPanel which is what it sounds like * At one point I started in on a calendar component based on the ical specification, but not much has been done there. 
Well, anyway, you can see that there are quite a few simularities, and some stuff you guys haven't gotten to yet. I know there are dozens of little projects around like mine, but I'd love to be able to help out the JDNC project by donating anything that sounds interesting and helping you guys out. Rich PS> The JGUI stuff is an integral part of a project that is about to enter the beta phase. Depending on how rapid responses are in the JDNC project and how well things are working, I may be able to put some real world use to the library in short order. > Well, folks, you beat me to the punch. I have a fairly extensive library I've been working on for the past 6 > months or so that does a lot of stuff you guys do. I Yeah; thats why the open source guys say "release eary, release often". Waiting for 18 months certainly does nobody any good.. Where is this JGui (the sources) to be found? > Yeah; thats why the open source guys say "release > eary, release often". Waiting for 18 months > certainly does nobody any good.. Ya, I know. But I hate having crappy code with my name on it, ya know? I always want to make sure its in good form first. > Where is this JGui (the sources) to be found? I have a website on my home box (). However, I've been away on site for 2 months and I can't hit the server, so I don't really have anywhere to post the code. Its 407K (without supporting libs). I'd be happy to email it out to anybody interested I for one, would be glad to see your code... I have some components that i would be glad to share too. Gilles Philippart Senior IT Consultant Hey Folks, I've added the jgui source as a download on a sourceforge project I started ages ago but didn't make any progress on (strangly enough, it was for an XML description of a java gui!). The project is called xom. Download the files for jgui. Oh, by the way, there's a PDF parser I was working on (that works for some forms of PDF files) in the jgui code too. Geez, I start enough projects.... 
Richard (PS link:) Message was edited by: rbair > I am assuming that 'Rich Client Enthusiasts' includes > those of us that want to develop rich internet > applications. With that in mind, are we likely to see > any JDNC demos based around components embedded in > web pages in addition to the WebStart ones? Yes, JDNC components can also be used in applets embedded in web pages. If you are writing your own applet, you may use JDNC components as ordinary Java components. Alternatively, you may use our prepackaged Applet class found in the org.jdesktop.jdnc.runner package, and just supply your own JDNC configuration file (written in JDNC markup language) without having to write any Java code. The same JDNC configuration file may also be presented in a web-started application using org.jdesktop.jdnc.runner.Application. > > Obviously it may be a case of RTFM, but have you any > designed so that those embedded in a web page can be > scripted? For example could Javascript be used to > scroll a table to a specified row, or could some > Javascript be triggered by the user selecting a row > in a table? At this point, we do not have any plans for adding JavaScript support to JDNC. However, JDNC-based Java applications may still use Rhino and LiveConnect for scripting support.
https://www.java.net/node/650613
Increment all numbers between brackets in a text

Hello, I have a txt file and I want to increment all the numbers between brackets [] by a certain amount (there are other numbers in the txt, but they are not between brackets).

If I have this txt:

    blabla 3 blabla 6 bla[5] blabla3 4bwebfhwefjwe[2] blabla1blabla[8]

I would like to obtain this (adding 2500):

    blabla 3 blabla 6 bla[2505] blabla3 4bwebfhwefjwe[2502] blabla1blabla[2508]

Thank you in advance

- Alan Kilborn last edited by Alan Kilborn

Thank you for the well-formulated problem statement. Unfortunately, this isn't a task that Notepad++ can do without some outside help. The PythonScript plugin could be used to do such a thing with the following code:

    def change(m):
        return str(int(m.group(1)) + 2500)

    editor.rereplace(r'(?<=\[)(\d+)(?=\])', change)

- Robin Cruise last edited by

@Alan-Kilborn can you tell me what does ?<= do ?

- Alan Kilborn last edited by

@Robin-Cruise said in Increment all numbers between brackets in a text:

    can you tell me what does ?<= do ?

If you look HERE you will see it is a "lookbehind" assertion: Thus, it is something that has to come before the match, but isn't part of the match itself.
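The same replacement can be sketched as plain Python, runnable outside Notepad++ (so no PythonScript `editor` object is needed):

```python
import re

def add_offset(text, offset=2500):
    # (?<=\[) and (?=\]) assert the surrounding brackets without consuming
    # them, so only the digits between brackets are replaced; numbers
    # outside brackets are left alone.
    return re.sub(r'(?<=\[)\d+(?=\])',
                  lambda m: str(int(m.group(0)) + offset),
                  text)

print(add_offset("blabla 3 blabla 6 bla[5] blabla3 4bwebfhwefjwe[2] blabla1blabla[8]"))
# blabla 3 blabla 6 bla[2505] blabla3 4bwebfhwefjwe[2502] blabla1blabla[2508]
```

Passing a function as the replacement argument to re.sub is what lets each match be recomputed, rather than substituted with fixed text.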
https://community.notepad-plus-plus.org/topic/21121/increment-all-numbers-between-brackets-in-a-text/1?lang=en-US
Agenda

See also: IRC log

fjh: Norm Walsh will give intro to xproc
<fjh> 30 September 2008 Teleconference cancelled
<fjh> Next meeting 7 October. Gerald Edgar is scheduled to scribe.
<fjh> 14 October 2008 Teleconference cancelled,
<fjh> 20-21 October 2008 F2F at TPAC.

RESOLUTION: 16 September minutes approved

<fjh> XForms - 10:30 - noon (tentative) Monday 20 October
fjh: Working on face-to-face schedule; will meet xforms on Monday
... trying to find a way to have a break that day
<fjh> EXI - 2-3:30 Monday 20 October (note correction, 1 1/2 hours)
fjh: joint session with EXI
<fjh> WebApps - 11-12 Tuesday 21 October
tlr: anything in particular we should review for these meetings?
fjh: xforms is mostly going to be a listening thing
... EXI, we had a face-to-face last year's TPAC
... will include that in the agenda; linked from administrative page
fjh: webapps, haven't gotten anything yet
... but note that widget requirements are in last call
... will see that I can put together something for preparation
... re WS-Policy, trying to get them to update references to Signature 2nd ed
... webapps has widget requirements in last call
... explicit request for review

<fjh> WebApps Widgets 1.0 Requirements Last Call
<fjh>
<fjh> Resolution to accept Status/Abstract and incorporate into draft
<fjh> PROPOSED: To accept revised status and abstract for best practices

RESOLUTION: To accept revised status and abstract for best practices

<fjh> Proposed revision for section 2.1, Best Practice 2
fjh: Second item from Scott, revision for 2.1
<fjh>
fjh: Sean had some input on this
scott: some tweaking might be needed
fjh: sean, any concerns?
sean: sent updated e-mail this morning

PROPOSED: adopt changes proposed by Sean

<fjh> draft
sean: move BP #2 and following paragraph to a later point, combine with BP #5 (which should be #6...)
... basically, move to discussion of RetrievalMethod
and instead of moving stuff, just drop in a sentence PROPOSED: To accept Scott's wording with Sean's additions. RESOLUTION: accept Scott's wording with Sean's additions. <scribe> ACTION: sean to edit best practices to implement Scott's and his own changes; see [recorded in] <trackbot> Created ACTION-67 - Edit best practices to implement Scott's and his own changes; see [on Sean Mullan - due 2008-09-30]. <fjh> Draft review Section 1, section 2.1.4 <fjh> <fjh> bhill: think there's a legitimate concern here ... ... if you consider using signature in office documents and the like ... ... external references can serve to track who reads documents ... ... there is a privacy concern here ... ... propose to keep this, but flesh out text ... fjh: I have some ideas about changes PROPOSED: to accept changes to 2.1.4 proposed by Sean as added to by Frederick RESOLUTION: to accept changes to 2.1.4 proposed by Sean as added to by Frederick fjh: There's also changes to section 1 RESOLUTION: to accept changes to 1 proposed by Sean as added to by Frederick <scribe> ACTION: sean to implement, [recorded in] <trackbot> Created ACTION-68 - Implement, [on Sean Mullan - due 2008-09-30]. <fjh> Draft review - section 2.1.2 (Best Practice 5) <fjh> sean: xpath can have performance issues, need to pay attention, but not every implementation affected <fjh> tlr: how about taking this to e-mail? (Also, don't really like "advanced") <hal> I recommend a general disclaimer something like this hal: I'd like a general sentence saying "These concerns apply to approaches toward implementing the specification; following the best practices is a good idea whether or not implementation is affected by the concern" hal, please correct if the above note is wrong <hal> not really what I said <hal> but I don't object hal, is that better? bal: don't talk about specific implementations; some things might be sane in certain closed environments ... ran into that with fetch-over-web type things ...
<fjh> ACTION: Sean to propose text to address concern of non-naive implementations not being vulnerable to attack [recorded in] <trackbot> Created ACTION-69 - Propose text to address concern of non-naive implementations not being vulnerable to attack [on Sean Mullan - due 2008-09-30]. klanz: there are standard tools to do bad things, you don't have a lot of influence on them ... think code execution in XSLT .. ... XSLT and XPath concern ... hal: to respond to the prior comment ... ... these still stand as best practices ... ... there might be reasons not to follow them ... ... but whole idea that somebody doesn't know how something works ... ... and something moves into a new environment, and boom ... ... that is one of the worst risks ... ... would like to highlight these as best practices ... ... it's fair to say "if you know what you're doing, you might not want to do this" bal: Hal, I hear you, I don't think that concern about poorly documented implementations is specific ... have to separate out what best practices are that we've learned ... ... "things you really ought not to do, and we don't see use for them" ... ... and things that ought to be better documented ... ... other concern, to keep scope of BP document to standard itself ... not attacking specific implementations ... ... of course just a BP document ... ... but people might very well run with some of what's in here and not understand trade-off ... ... 1. do not talk about bugs in particular implementations ... 2. be careful on wording ... there is tendency for documents like this to be used as prescriptive bans ... ... say "there might be good reasons for doing the things we discourage, but you really ought to know what you're doing" <hal> +1 <fjh> bal notes concern that someone might take best practices as profile, disallowing things but should be treated as practice bal: we might want to make a hard recommendation on xslt ... that's really a change to the specification ... ... 
not all recommendations have equal weight ... fjh: bal, can you add something? bal: I think there ought to be a disclaimer in here, not sure where that went ... let's not do something general right now, can do that later tlr: add something general to SOTD? bal: maybe, with caveat that we might want to make stronger statement ... any good boilerplate? tlr: not sure I know of any; note that we can change later fjh: If we want to do normative stuff, not in BP +1 <scribe> ACTION: thomas to propose disclaimer for SOTD [recorded in] <trackbot> Created ACTION-70 - Propose disclaimer for SOTD [on Thomas Roessler - due 2008-09-30]. <fjh> hal notes that best practices should reflect multiple applications, not just one hal: no objection against "make sure you know what you're doing", I don't understand problem with saying that more than one implementation has been observed to have the problem... ... don't say "if you think your implementation is fine, don't follow them" ... ... haven't seen the "show me your implementation" effect when mentioning that some implementations have problem smullan: we've made some progress on this ... early versions of BP document had stronger language on things like RetrievalMethod ... <fjh> sean notes now say less strong statements, consider this . smullan: also, most DOS attacks might be less serious when following BP 1 and BP 3 <fjh> sean notes that use of xslt may be appropriate in trusted environment pdatta: namespace node expansions ... ... @@ expands all namespace nodes ... ... there is some dependence on that feature ... ... interop testcase without expanding namespace nodes? fjh: sean noted that document shouldn't use relative namespace URIs <fjh> Change all examples in document to use absolute namespace URIs, not relative klanz: they're prohibited <brich> how about some text that says something like "There is an uneasy tension between security on one hand and utility and performance on the other hand. 
Circumstances may dictate an implementation must do something that is not the most secure. This needs to be a reasoned tradeoff that can be revisited in later versions of the implementation as necessary to address risk." <fjh> <klanz2> The use of relative URI references, including same-document references, in namespace declarations is deprecated. <klanz2> Note: <klanz2> This deprecation of relative URI references was decided on by a W3C XML Plenary Ballot [Relative URI deprecation]. It also declares that "later specifications such as DOM, XPath, etc. will define no interpretation for them". fjh: sean, you said these should reject relative namespace URIs <fjh> <klanz2> sean: our implementation rejects all the examples because of relative namespace URIs fjh: @@ <fjh> <klanz2> ns0 ... relative fjh: In the DOS example files, we have relative namespace URIs <klanz2> let's change ns0 to ... to n <klanz2> let's change ns0 to ... n tlr: I guess my question is whether there is any good reason for having relative namespaces fjh: we could use absolute ones as well <scribe> ACTION: pratik to update examples with absolute namespace URIs and regenerate signatures [recorded in] <trackbot> Created ACTION-71 - Update examples with absolute namespace URIs and regenerate signatures [on Pratik Datta - due 2008-09-30]. fjh: now, links to example files... ... we're keeping these member-visible for the moment ... tlr: The examples become vastly easier to understand in full ... could see us waiting a little longer to accommodate implementers. sean: even if there is an implementation that has a big problem ... doubt anybody will be able to fix this any time soon ... ... think the examples have the relevant text in there ... ... think that's good enough ... <esimon2> back in 10 min. fjh: no decision quite yet, decide next meeting <fjh> RetrievalMethod attack, section 2.1.3 fjh: long thread whether it can be recursive ... 
<fjh> <fjh> pdatta: fine with Sean's clarification fjh: what does that mean for document? ... does the attack depend on the KeyInfo concern? pdatta: no longer a denial of service attack fjh: sean, can you take care of this example? <fjh> klanz2: I thought RetrievalMethod *is* recursive? <fjh> fjh: deferred <fjh> Add synopsis for each Best Practice <klanz2> ame="Type" type="anyURI" use="optional" <scribe> ACTION: fjh to contribute synopsis for each best practice [recorded in] <trackbot> Created ACTION-72 - Contribute synopsis for each best practice [on Frederick Hirsch - due 2008-09-30]. <fjh> Misc editorial <fjh> Completion of implementer review actions? <fjh> fjh: Has anybody has a chance to look through the document -- how's review doing? ... Also, is publication at face-to-face a realistic option? <gedgar> I have to leave unfortunately. smullan: found that relative namespace URIs are a problem ... should make sure that examples are kind of accurate ... fjh: the bar I'm trying to get across is whether publication would cause any harm bal: as long as we talk about the spec itself, it's fine fjh: the document doesn't disclose things about a specific implementation brich: review of document was what I committed ... timing isn't easy ... fjh: trying to understand issue ... want to make sure that BPs are followed? statements about specific implementations? ... oh well, so maybe we can't publish as quickly ... let's at least get pending edits out of the way <fjh> fjh: hal, you had useful material about web services ... how to add that? hal: also, what happens to long-lived documents considerations? ... probably the same thing should happen to that and to my material magnus: also talked about extending scope to not just be about signature and c14n, but also a few other things ... keyInfo e.g. 
is generic <fjh> proposal - change title to XML Security v.next use cases and requirements tlr: Serious editing hasn't begun yet; some things in Shivaram's court fjh: need to pull things from the list <scribe> ACTION: magnus to provide proposal to adapt Requirements scope [recorded in] <trackbot> Created ACTION-73 - Provide proposal to adapt Requirements scope [on Magnus Nyström - due 2008-09-30]. hal: maybe do "domain specific requirements"? fjh: issues list, try to go through and extract requirements <fjh> issues list requirements extraction <fjh> fjh: gerald categorized requirements here ... volunteers, please! <esimon2> I'm here! <esimon2> unmute me nope ed, you shouldn't be muted yes! esimon2: IRC exceptionally slow again ... EXI review is relevant here. <fjh> EdSimon: They aren't addressing EXI needs for signature and encryption ... but based on use cases, there might be native need for signature and encryption ... ... please poke them about answering ... ... there is a long conversation to be had about native XML Security for EXI ... <fjh> fjh: would like to get use cases and requirements out in early November ... doesn't give us a whole lot of time to produce something to get us started ... ... focus on requirements soon <esimon2> Specifically what I said was that I have yet to see any discussion on EXI requirements for EXI-native signature and encryption. EXI has published a document discussing how current XML Signature and XML Encryption can be used with EXI but that, to me, does not seem sufficient. Need to find out if the EXI group sees a potential requirement for EXI-native signatures and encryption. klanz: recursive retrieval method or not? <klanz2> <klanz2> <klanz2> <attribute name="Type" type="anyURI" use="optional"/> tlr: eek, I'm confused. 
Please send to mailing list <scribe> ACTION: klanz2 to summarize recursive retrievalmethod point [recorded in] <trackbot> Created ACTION-74 - Summarize recursive retrievalmethod point [on Konrad Lanz - due 2008-09-30]. fjh: propose bulk closure ACTION-27 closed <trackbot> ACTION-27 contact crypto hardware and suiteB experts in NSA regarding XML Security WG and possible involvement closed ACTION-31? <trackbot> ACTION-31 -- Thomas Roessler to investigate ebXML liaison (see ACTION-6) -- due 2008-08-19 -- PENDINGREVIEW <trackbot> ACTION-31 closed <trackbot> ACTION-31 Investigate ebXML liaison (see ACTION-6) closed ACTION-39? <trackbot> ACTION-39 -- Hal Lockhart to contribute web service related scenario -- due 2008-08-25 -- PENDINGREVIEW <trackbot> ACTION-39 closed <trackbot> ACTION-39 Contribute web service related scenario closed ACTION-42 closed <trackbot> ACTION-42 Elaborate on "any document" requirement vs canonicalizing xml:base closed ACTION-47 closed <trackbot> ACTION-47 Add error noted in to c14n 1.1 errata page closed ACTION-66 pending ACTION-66? <trackbot> ACTION-66 -- Frederick Hirsch to follow up with xsl to get documents related to serialization -- due 2008-09-23 -- PENDINGREVIEW <trackbot> fjh: we have a long list of open actions ... please get the ones done that relate to requirements ... <fjh> <bal> yes ACTION-56? <trackbot> ACTION-56 -- Scott Cantor to propose text for KeyInfo processing in best practices. -- due 2008-09-16 -- OPEN <trackbot> fjh: any other updates? ... concrete proposals and material on the list, please ... ... priority for requirements document ... tlr: might also be worth talking about the way the process works <esimon2> IRC died on me; was the EXI action (was it Action-25?) closed? ACTION-25 closed <trackbot> ACTION-25 Give feedback on xml schema best practice in xml-cg closed action-25? <trackbot> ACTION-25 -- Frederick Hirsch to give feedback on xml schema best practice in xml-cg -- due 2008-08-19 -- CLOSED <trackbot> action-19? 
<trackbot> ACTION-19 -- Gerald Edgar to evaluate Issues and Actions for appropriate placement -- due 2008-08-19 -- PENDINGREVIEW <trackbot> action-22 closed <trackbot> ACTION-22 Review EXI docs that were published closed action-56? <trackbot> ACTION-56 -- Scott Cantor to propose text for KeyInfo processing in best practices. -- due 2008-09-16 -- PENDINGREVIEW <trackbot> action-56 closed <trackbot> ACTION-56 Propose text for KeyInfo processing in best practices. closed
http://www.w3.org/2008/09/23-xmlsec-minutes
CC-MAIN-2014-41
refinedweb
2,570
64.61
10 October 2008 15:57 [Source: ICIS news] By Nigel Davis LONDON (ICIS news)--Petrochemical and polymer players may worry about the ‘mountain’ of new capacity due to come on stream in the Middle East over the next two to three years, but what comes next may prove more influential. The received wisdom is that the region will become the supplier of low-cost olefin-based products and polymers to the world. Some also believe that a move to a more broadly-based feedstock slate will see regional producers gaining ground in other important petrochemical product markets. This, however, was yesterday’s view. The global financial crisis has and will change a great deal. It will rip through project plans and shift company strategies. A worldwide recession will also essential commodities has led to the delay of some projects. But the world’s olefins and polyolefins players know that the next two years will be hard from a supply and demand standpoint simply because so much new capacity is coming on stream. Yet other drivers make the outlook for some projects planned to start up in the early years of the next decade decidedly uncertain. A new petrochemicals plant, let alone an integrated petrochemicals complex, costs a great deal: the sort of outlay that few, if any, corporations individually can afford. The big petrochemical projects in the Significant shifts in the global financial system alter project profiles and will push some schemes to the edge. A shifting oil price also challenges the viability of projects that are liquids-based. Taking a long-term view on oil now is a difficult game to say the least. But if private concerns are challenged to question their investment plans then attention falls on government-backed schemes. These tend to be the most grandiose mega-projects that in most instances take years to come to fruition.
Governments could lose a taste for chemicals very quickly and be moved to ponder anew the question of just how important a more broadly-based chemicals sector is to a developing, sustainable 21st century economy. Power and other infrastructure projects rapidly become more important than chemicals. Until now, chemicals production in the Gaining access to low-cost feedstock for the next phase of development has not been easy. It is likely to get more difficult. This will drive companies to take a more rounded global view. The petrochemicals map will change but possibly not in the way that many now think. The faster growing markets for petrochemicals are in Asia, Latin America and Central and If feedstock is available, and the technology works, then in the current environment some of the project plays in closer-to-market locations look increasingly interesting: methanol-to-olefins in The oil majors know a thing or two about the downstream business and it is not surprising that ExxonMobil's and Shell’s next big olefins projects could be in Over the longer term, the energy majors will seek to take chemicals and refined products to market. Greater downstream integration is the key. Clearly, the worldwide search for feedstocks has widened, and as fields are developed new opportunities arise. Gazprom this week offered an intriguing picture of developing gas-to-chemicals operations between 2014 and 2024 in the vastness of The energy giant could add value to gas produced in these regions and push polyolefins into a widening In the face of global financial turmoil it is clear that those wanting to advance major petrochemical projects over the next few years will need stable balance sheets or deep pockets - or both. As always, the industry will not be for the faint-hearted but for those with a clear purpose. The fundamentally disadvantaged will be exposed.
http://www.icis.com/Articles/2008/10/10/9163158/insight-project-plans-exposed-as-priorities-shift.html
CC-MAIN-2013-20
refinedweb
624
58.11
Download the Dart Programming Language Specification from the Ecma website: For a gentler introduction to the Dart language, see the Dart language tour or the first Dart code lab. The 2nd edition of the specification added information about the following new language features: Enumerations ( enum) Implemented in 1.8. For details, see the language tour: Enumerated types. Asynchrony support ( async, await, and more) Partially implemented in 1.8. For details, see the language tour: Asynchrony support. Deferred loading ( import ... deferred as) Implemented in 1.6. For details, see the language tour: Lazily loading a library. You can find both editions of the specification at Standard ECMA-408.
https://www.dartlang.org/docs/spec/
CC-MAIN-2015-22
refinedweb
107
51.44
Simple linear regression is a technique that we can use to understand the relationship between a single explanatory variable and a single response variable. This tutorial explains how to perform simple linear regression in Python.

Step 1: Load the Data

import pandas as pd

#create dataset
df = pd.DataFrame({'hours': [1, 2, 4, 5, 5, 6, 6, 7, 8, 10, 11, 11, 12, 12, 14],
                   'score': [64, 66, 76, 73, 74, 81, 83, 82, 80, 88, 84, 82, 91, 93, 89]})

#view first six rows of dataset
df[0:6]

   hours  score
0      1     64
1      2     66
2      4     76
3      5     73
4      5     74
5      6     81

Step 2: Visualize the Data

Before we fit a simple linear regression model, we should first visualize the data to gain an understanding of it. First, we want to make sure that the relationship between hours and score is roughly linear, since that is an underlying assumption of simple linear regression. We can create a simple scatterplot to view the relationship between the two variables:

import matplotlib.pyplot as plt

plt.scatter(df.hours, df.score)
plt.title('Hours studied vs. Exam Score')
plt.xlabel('Hours')
plt.ylabel('Score')
plt.show()

From the plot we can see that the relationship does appear to be linear. As hours increases, score tends to increase as well in a linear fashion. Next, we can create a boxplot to visualize the distribution of exam scores and check for outliers. By default, Python displays outliers in a boxplot as circles that extend beyond the whiskers:

df.boxplot(column=['score'])

Step 3: Perform Simple Linear Regression

Note: We'll use the OLS() function from the statsmodels library to fit the regression model.

import statsmodels.api as sm

#define response variable
y = df['score']

#define explanatory variable
x = df[['hours']]

#add constant to predictor variables
x = sm.add_constant(x)

#fit linear regression model
model = sm.OLS(y, x).fit()

#view model summary
print(model.summary())

OLS Regression Results
==============================================================================
Dep. Variable: score   R-squared: 0.831
Model: OLS   Adj. R-squared: 0.818
Method: Least Squares   F-statistic: 63.91
Date: Mon, 26 Oct 2020   Prob (F-statistic): 2.25e-06
Time: 15:51:45   Log-Likelihood: -39.594
No.
Observations: 15   AIC: 83.19
Df Residuals: 13   BIC: 84.60
Df Model: 1
Covariance Type: nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const         65.3340      2.106     31.023      0.000      60.784      69.884
hours          1.9824      0.248      7.995      0.000       1.447       2.518
==============================================================================
Omnibus: 4.351   Durbin-Watson: 1.677
Prob(Omnibus): 0.114   Jarque-Bera (JB): 1.329
Skew: 0.092   Prob(JB): 0.515
Kurtosis: 1.554   Cond. No. 19.2
==============================================================================

From the model summary we can see that the fitted regression equation is:

Score = 65.334 + 1.9824*(hours)

This means that each additional hour studied is associated with an average increase in exam score of 1.9824. For example, a student who studies for 10 hours is expected to receive an exam score of:

Score = 65.334 + 1.9824*(10) = 85.158

Here is how to interpret the rest of the model summary:

- P>|t|: This is the p-value associated with the model coefficients. Since the p-value for hours (0.000) is significantly less than .05, we can say that there is a statistically significant association between hours and score.
- F-statistic & p-value: The F-statistic (63.91) and the corresponding p-value (2.25e-06) tell us the overall significance of the regression model, i.e. whether the explanatory variable is useful for explaining variation in the response variable.

Step 4: Create Residual Plots

#define figure size
fig = plt.figure(figsize=(12,8))

#produce residual plots
fig = sm.graphics.plot_regress_exog(model, 'hours', fig=fig)

These plots help us verify the assumption of homoscedasticity, i.e. that the residuals of the regression model have roughly equal variance at each level of the explanatory variable.

Q-Q plot: This plot is useful for determining if the residuals follow a normal distribution. If the data values in the plot fall along a roughly straight line at a 45-degree angle, then the data is normally distributed:

#define residuals
res = model.resid

#create Q-Q plot
fig = sm.qqplot(res, fit=True, line="45")
plt.show()

Python code used in this tutorial can be found here.
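As a sanity check on the summary table, the slope, intercept, and R-squared can be reproduced by hand with the closed-form ordinary least squares formulas, using nothing but the raw data. This is a dependency-free sketch (no pandas or statsmodels; the variable names are mine):

```python
# Reproduce the fitted coefficients from the summary table by hand,
# using the closed-form ordinary least squares formulas.
hours = [1, 2, 4, 5, 5, 6, 6, 7, 8, 10, 11, 11, 12, 12, 14]
score = [64, 66, 76, 73, 74, 81, 83, 82, 80, 88, 84, 82, 91, 93, 89]

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(score) / n

# Sums of squares and cross-products around the means
sxx = sum((x - mean_x) ** 2 for x in hours)
syy = sum((y - mean_y) ** 2 for y in score)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, score))

slope = sxy / sxx                     # coefficient on hours
intercept = mean_y - slope * mean_x   # constant term
r_squared = sxy ** 2 / (sxx * syy)

# Predicted exam score for 10 hours of studying
pred_10 = intercept + slope * 10

print(round(slope, 4), round(intercept, 3), round(r_squared, 3), round(pred_10, 3))
# 1.9824 65.334 0.831 85.158
```

The printed values match the coef column, the R-squared, and the worked prediction above.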
https://www.statology.org/simple-linear-regression-in-python/
CC-MAIN-2021-43
refinedweb
625
61.12
It may work, but it doesn’t mean it is correct

I wrote some time ago about a simple program of mine that saves entries from RSS feeds as messages in an IMAP store. Recently I took the time to update the code to Python 3, hoping among other things that this could help me spot a few bugs. Python 3 has better unicode support; in particular you either have an encoded array of bytes or a unicode string, so you can’t mistake one for the other like you can do with Python 2. In fact I found a couple of those bugs (only two? I’m either good at programming, or bad at bug hunting, you guess :D), but one in particular made me lose some time. The code was the following (redacted for brevity):

from email.mime.text import MIMEText
# ...
msg = MIMEText(body, subtype, 'utf-8')
msg["Subject"] = entry.title
# ...
msgText = msg.as_string()

This worked just fine in Python 2, but in the new language I sometimes had errors like:

UnicodeEncodeError: 'ascii' codec can't encode character '\xbb' in position 0: ordinal not in range(128)

This would happen with non-ASCII characters in the Subject… why the hell wouldn’t MIMEText encode it in utf-8 as I asked it? “I told you to use utf-8 over there, why the hell do you use the ascii codec”. The fact that the same code was working in Python 2 added more frustration. I then realized that the encoding of the headers of an email is quite a different matter from the encoding of its body; in fact it is handled in an RFC of its own (and then some others). This means that a nice text with only some accented characters, like “Thìs strànge sentènce”, becomes the ugly “=?utf-8?b?VGjDrHMgc3Ryw6BuZ2Ugc2VudMOobmNl?=” (ouch!). Anyway, there are plenty of libraries that handle this for us. In Python this means that one shouldn’t use strings directly when setting the headers, but a specifically designed Header class. So the correct code becomes:

from email.mime.text import MIMEText
from email.header import Header
# ...
msg = MIMEText(body, subtype, 'utf-8')
msg["Subject"] = Header(entry.title, 'utf-8')
# ...
msgText = msg.as_string()

But then, why did the original code work with Python 2? Well, probably the library didn’t really care about the encoding of the message, the IMAP server didn’t care either, and the mail client, being a nice guy, realized that the header was a wrongly encoded string and guessed how to interpret it. The fact that the message didn’t travel through an SMTP server probably helped. A nice example of “It may work, but it doesn’t mean it is correct”
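To see the Header class doing the RFC 2047 work, here is a self-contained snippet (standard library only) that encodes the accented subject from this post; the placeholder body text is mine, and the encoded word it produces is exactly the one quoted above:

```python
from email.header import Header
from email.mime.text import MIMEText

subject = "Thìs strànge sentènce"

# Header applies RFC 2047 encoding when the message is serialized,
# instead of trying (and failing) to encode the header as ASCII.
msg = MIMEText("body text", "plain", "utf-8")
msg["Subject"] = Header(subject, "utf-8")

encoded = Header(subject, "utf-8").encode()
print(encoded)  # =?utf-8?b?VGjDrHMgc3Ryw6BuZ2Ugc2VudMOobmNl?=
```

Serializing the message with msg.as_string() then carries the encoded form in the Subject header, so it survives any transport that expects ASCII-only headers.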
https://righele.it/2013/04/19/it-may-work-but-it-doesnt-mean-it-is-correct/
CC-MAIN-2021-43
refinedweb
454
69.82
Django's logging configuration facilities, which arrived in version 1.3, have greatly eased (and standardized) the process of configuring logging for Django projects. When building complex and interactive web applications at Caktus, we've found that detailed (and properly configured!) logs are key to successful and efficient debugging. Another step in that process, which can be particularly useful in environments where you have multiple web servers, is setting up a centralized logging server to receive all your logs and make them available through an easily accessible web interface. There are a number of useful tools to do this, but one we've found that works quite well is Graylog2. Installing and configuring Graylog2 is outside the scope of this post, but there are plenty of tutorials on how to do so accessible through your search engine of choice. Once you have it set up, getting logs flowing to Graylog2 from Django is relatively straightforward.

First, grab a copy of the graypy package from PyPI and add it to your requirements file:

pip install -U graypy

Next, add the following configuration inside the LOGGING['handlers'] dictionary in your settings.py, where graylog2.example.com is the hostname of your Graylog2 server:

LOGGING = {
    # ...
    'handlers': {
        # ...
        'gelf': {
            'class': 'graypy.GELFHandler',
            'host': 'graylog2.example.com',
            'port': 12201,
        },
    },
}

You'll most likely want to tell your project's top-level logger to send logs to the new gelf handler as well, like so:

LOGGING = {
    # ...
    'loggers': {
        # ...
        'projectname': {
            # mail_admins will only accept ERROR and higher
            'handlers': ['mail_admins', 'gelf'],
            'level': 'DEBUG',
        },
    },
}

With this configuration in place, log messages with a severity of DEBUG or greater that are sent to the projectname logger should begin flowing to Graylog2.
You can easily test this by opening Django's python manage.py shell, grabbing the logger manually, and sending a log message:

import logging
logger = logging.getLogger('projectname')
logger.debug('testing message to graylog2')

You should see the message show up in Graylog2 almost immediately.

Now, this is all well and good, but if you want to use your Graylog2 server for multiple projects, you'll quickly find that all the log messages are interspersed and it can be difficult to tell what messages are coming from what projects. To address this issue, Graylog2 supports the concept of "streams," that is, filters that you can set up (which work only on incoming messages, not existing messages) to show messages that match only certain criteria. A simple solution here could be to filter on the hostname of the originating web servers, but this may not scale well in environments like Amazon Web Services' EC2 where you're often adding or removing web servers. As a better alternative, you can add metadata to log messages at the Python level prior to sending them to Graylog2 that will help you more easily identify the messages for different projects.

To do this, you need to use a feature of Python logging filters. While filters are most commonly used to filter out certain types of messages from being emitted altogether (as discussed in the Django documentation), they can also be used to modify the log records in transit and impart contextual metadata to be transmitted with the original message. To add this to our logging configuration, first create the following filter class in a Python module accessible from your project:

class StaticFieldFilter(logging.Filter):
    """
    Python logging filter that adds the given static contextual
    information in the ``fields`` dictionary to all logging records.
""" def __init__(self, fields): self.static_fields = fields def filter(self, record): for k, v in self.static_fields.items(): setattr(record, k, v) return True Next, we need to load this filter in our logging configuration and tell the gelf logger to pass records through it: LOGGING = { # ... 'filters': { # ... 'static_fields': { '()': 'projectname.core.logfilters.StaticFieldFilter', 'fields': { 'project': 'projectname', # CHANGEME 'environment': 'staging', # can be overridden in local_settings.py }, }, }, 'handlers': { # ... 'gelf': { 'class': 'graypy.GELFHandler', 'host': 'graylog2.example.com', 'port': 12201, 'filters': ['static_fields'], }, # ... }, } The configuration under filters instantiates the StaticFieldFilter class and passes in the static fields that we want to attach to all of our log records. In this case, two fields are attached, a 'project' field with value 'projectname' and an 'environment' field with value 'staging'. The configuration for the gelf logger is the same, with the addition of the static_fields filter on the last line. With these two items in place, you should be able to create streams via the Graylog2 web interface to trap and display records that match the combination of project and environment names that you're looking for. Lastly, as an optional addition to this logging configuration, it may be desirable to filter out Django request objects from being sent to Graylog2. The request is added to log messages created by Django's exception handler and may contain sensitive information or in some cases may not be capable of being pickled (which is necessary to encode and send it with the log message). You can remove them from log messages with the following filter: class RequestFilter(logging.Filter): """ Python logging filter that removes the (non-pickable) Django ``request`` object from the logging record. 
""" def filter(self, record): if hasattr(record, 'request'): del record.request return True and this corresponding filter configuration: LOGGING = { # ... 'filters': { # ... 'django_exc': { '()': 'projectname.core.logfilters.RequestFilter', }, }, 'handlers': { # ... 'gelf': { 'class': 'graypy.GELFHandler', 'host': 'graylog2.example.com', 'port': 12201, 'filters': ['static_fields', 'django_exc'], }, # ... }, } With this configuration in place, you can have log messages flowing to Graylog2 from any number of project and server environment combinations, limited only by the resources of the log server itself.
https://www.caktusgroup.com/blog/2013/09/18/central-logging-django-graylog2-and-graypy/
CC-MAIN-2018-39
refinedweb
918
52.8
Proposed exercise

Write a C# program to ask the user for two numbers and show their division and the remainder of the division. It will warn if 0 is entered as the second number, and end if 0 is entered as the first number:

First number? 10
Second number? 2
Division is 5
Remainder is 0

First number? 10
Second number? 0
Cannot divide by 0

First number? 10
Second number? 3
Division is 3
Remainder is 1

First number? 0
Bye!

Output

Solution

using System;

public class ManyDivisions
{
    public static void Main()
    {
        int num1, num2;
        do
        {
            Console.Write("First number? ");
            num1 = Convert.ToInt32(Console.ReadLine());
            if (num1 != 0)
            {
                Console.Write("Second number? ");
                num2 = Convert.ToInt32(Console.ReadLine());
                if (num2 == 0)
                {
                    Console.WriteLine("Cannot divide by 0");
                    Console.WriteLine();
                }
                else
                {
                    Console.WriteLine("Division is {0}", num1 / num2);
                    Console.WriteLine("Remainder is {0}", num1 % num2);
                    Console.WriteLine();
                }
            }
        } while (num1 != 0);
        Console.WriteLine("Bye!");
    }
}
https://www.exercisescsharp.com/2013/04/218-many-divisions.html
Tuesday Tooling - Google Image Search With Python

We must spend days of our lives searching for images on Google. That picture of a duck on a pond, the cat pic where it is stalking like a ninja. But sometimes we want to automate looking for pics, and then we turn to Python, typically the requests library and a little Python-Fu! But what if we could do it a little better?

So what is it?

Google-Images-Download, a Python library by Hardik Vasa to search and download images from Google.

How can I install it?

Using the pip package manager:

pip3 install google_images_download

So how can I use it?

From the Terminal / CLI

googleimagesdownload -k "Google Image Search" -l 1

The arguments:

-k Keyword to search for.
-l Limit, how many images to download.

Your download will be saved to a folder called downloads inside the directory where the command is run. But this can be changed.

Via Python

from google_images_download import google_images_download

response = google_images_download.googleimagesdownload()
absolute_image_paths = response.download({"keywords": "Google Image Search", "limit": 1})

For Python we create an object called response which is used to query Google with our search term. Then we create an object called absolute_image_paths which will provide the arguments, just as we did in the terminal.

So how can I do something useful with it?

I sat down this morning and wrote a little script in Mu v1.0.0 (you should really download that and have a play) that asks the user for their search term and then runs a Google image search using that keyword(s).

Here is the code

First the imports and setting up our response object.

from google_images_download import google_images_download

response = google_images_download.googleimagesdownload()

Then we create a variable called run_again, later used to capture whether the user wants to use the script again.

run_again = "Y"

Next up we use a while loop to check if the user wants to search; the first time is always yes. We check to see if the run_again variable has a "Y" or "y" using "or" logic.

while run_again == "y" or run_again == "Y":

So now we create a new variable that will store the search term / keyword that the user wants to search for. This is saved as a string.

    search = input("Please enter the keyword(s) to search for. Please note, multiple keywords should be separated with a , : \n")

We then search for the keyword, again limiting the response to just one image. Note that there is a new argument, no_directory, which will save the image to the downloads folder created by this code, but the image is not saved to its own directory. For example, if we search for penguin without this new argument, the file is saved to ./downloads/penguin, but with this new argument it is saved to ./downloads.

    absolute_image_paths = response.download({"keywords": search, "limit": 1, "no_directory": "1"})

Still inside the while loop we ask the user if they want to search for a new keyword; if so the loop repeats, if not, we print "Bye!" to the REPL and the script ends.

    run_again = input("Perform another search? Y or N:")

print("Bye!")

When run the code should look like this.

Have fun!
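The download-path behaviour described above is easy to mirror in plain Python. This small helper is my own illustration (not part of the library) of where an image would land with and without the no_directory option:

```python
import os

def target_dir(base="downloads", keyword=None, no_directory=False):
    """Return the directory an image would be saved into.

    Mirrors the behaviour described in the article: with no_directory the
    file goes straight into ./downloads, otherwise into ./downloads/<keyword>.
    """
    if no_directory or keyword is None:
        return base
    return os.path.join(base, keyword)

print(target_dir(keyword="penguin"))                     # downloads/penguin
print(target_dir(keyword="penguin", no_directory=True))  # downloads
```

If your script later needs to open the saved file, computing the path the same way the downloader does keeps the two in sync.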
https://bigl.es/tuesday-tooling-google-image-search-with-python/
Issue #5554 has been reported by Tsuyoshi Sawada.

Feature #5554: A method that applies self to a Proc if self is a Symbol

Author: Tsuyoshi Sawada
Status: Open
Priority: Normal
Assignee:
Category:
Target version:

Often, you want to apply a Proc to self if self is a Symbol, but not do anything otherwise. In this case, something I call Object#desymbolize may be convenient:

proc = ->sym{
  case sym
  when :small_icon then "16pt"
  when :medium_icon then "32pt"
  when :large_icon then "64pt"
  end
}

:small_icon.desymbolize(&proc) => "16pt"
"18pt".desymbolize(&proc) => "18pt"

An implementation may be as follows:

class Object
  def desymbolize; self end
end

class Symbol
  def desymbolize(&pr)
    pr.call(self)
  end
end
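Pulling the proposal together into one runnable sketch (the monkey-patching of core classes is purely for illustration of the feature request, not a recommendation):

```ruby
# Illustrative implementation of the proposed Object#desymbolize.
class Object
  def desymbolize
    self # non-Symbols are returned unchanged
  end
end

class Symbol
  def desymbolize(&pr)
    pr.call(self) # Symbols are passed to the block
  end
end

sizes = lambda do |sym|
  case sym
  when :small_icon  then "16pt"
  when :medium_icon then "32pt"
  when :large_icon  then "64pt"
  end
end

puts :small_icon.desymbolize(&sizes) # 16pt
puts "18pt".desymbolize(&sizes)      # 18pt
```

Note that the same "look it up if it's a key, otherwise keep it" effect can often be had without patching core classes, e.g. with a Hash lookup that falls back to the original value.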
https://www.ruby-forum.com/t/ruby-trunk-feature-5554-open-a-method-that-applies-self-to-a-proc-if-self-is-a-symbol/213461
New firmware release 1.7.6.b1

Hello,

A new firmware release is out. The version is 1.7.6.b1. Here's the change log:

- esp32: Store the LoRaWAN state in NVS, to avoid having to re-join after waking up from deep sleep (or a power cycle). Calling lora.has_joined() after deep sleep returns True if the network was previously joined.
- esp32: Update to the latest IDF to fix issues with RTC clock accuracy during deep sleep.
- esp32: Fix bug related to setting the UART parity in the constructor.
- esp32: Fix bug related to incorrect I2C HW objects initialisation.

In order to get the new firmware, please use the updater tool that can be downloaded from here:

Cheers,
Daniel

@jmarcelino yes, of course. It's the part where I'm using the socket that is commented out right now:

import machine
from machine import UART
from network import WLAN
#from network import LoRa
#lora = LoRa(mode=LoRa.LORAWAN)
#from machine import SD
import socket
#import ssl
#import time
import os
import pycom

print('starting up..')

#sd = SD()
#os.mount(sd, '/sd')
#time.sleep(5)
#os.listdir('/sd')
#time.sleep(5)

uart = UART(0, 115200)
os.dupterm(uart)

pycom.heartbeat(False)
pycom.rgbled(0x7f0000) # red

wlan = WLAN()
#print('SD mounted!')

if machine.reset_cause() != machine.SOFT_RESET:
    print('setting wlan config...')
    wlan.init(mode=WLAN.STA)
    wlan.ifconfig(config=('10.0.0.88', '255.255.255.0', '10.0.0.1', '8.8.8.8'))

if not wlan.isconnected():
    print('looking for network...')
    wlan.connect('NETGEAR64', auth=(WLAN.WPA2, 'oddchair195'), timeout=5000)
    while not wlan.isconnected():
        machine.idle()

print('connected to wifi!')

#s = socket.socket()
#ss = ssl.wrap_socket(s)
#ss.connect(socket.getaddrinfo('', 443)[0][-1])
#print('connected to internet!')

space = os.getfree('/flash')
print('free flash mem: ', space)
pycom.rgbled(0x007f00) # green

- jmarcelino last edited by

@soren said in New firmware release 1.7.6.b1:

network card not available

Can you post the code you're trying to use, or at least the part where you set up the network / socket?

@jcaron hi, has this been fixed i wonder? i get "network card not available" right now, and i'm not using lora. don't need to.

@jellium @daniel OK, so I dug a bit in the code and added a few traces, and the conclusion is that:

- there are actually two instances of the counters being reset: right from the init of the board (in TASK_LoRa, which calls LoRaMacStoreUpAndDownLinkCountersInNvs), and a second one in the LoRa constructor, which calls LoRaMacInitialization which in turn calls ResetMacParameters. Both will save the counters with their default values before they have been read from NVS.
- the TASK_LoRa is actually an infinite loop running all the time, whether LoRa is actually running or configured or not!

I'll try to find how this could be resolved, but I won't mind if someone who is more familiar with the code does it first or has any pointers!

BTW, the code on GitHub has the wrong version number, is there anything else that would have missed the commit?

@jcaron Hello, I cannot provide an answer to the problem you are reporting, but in case you haven't read it, I draw your attention to the similar issue that I am reporting in this present thread in this and that posts.

Edit: my bad, I see that you have read the post indeed :)

The LoRaWAN frame counters storage in NVS doesn't seem to work as intended when combined with deep sleep (tried with both 1.7.6.b1 and the new 1.7.7.b1): the uplink frame counter restarts at 1 after each sleep... Not sure if that's because the counter is not properly stored, not properly read, or reset in the process? The only thing I do after wake up is:

lora = LoRa(mode=LoRa.LORAWAN)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setsockopt(socket.SOL_LORA, socket.SO_DR, 0)
s.send(bytes([1]))

Even though lora isn't used, if I don't include the LoRa() initialiser, socket throws an error (OSError: Network card not available), not sure if that call resets the counter (I would expect only the join to do so)?

I even added a 2-second sleep after sending just to make sure the packet is actually sent, just in case the NVS storage is done only once the packet has finished sending; no change.

Is there a way to check what is stored in NVRAM, using pycom.nvs_get() for instance?

Thanks,
Jacques.

- jmarcelino last edited by jmarcelino

@jellium Has anyone found a solution for this issue? I cannot find in the sources where the keys are handled and written to the flash during the first join() method call. I believe it's somewhere in the LoRa constructor, but the Python language merged in C is not easily readable for me. Thanks! It's the only "bug" I see left in the optimization of LoRaWAN and deep sleep together.

@peekay123 I run a similar test with every build since @this-wiederkehr raised that issue with version 1.7.3.b1. No luck. It may run very long, up to 600000 cycles on one run, a few thousand on the next. But it's still not stable.

Hi, Now I have the following messages:

NVS error=4354

Does anyone know what it means?

- this.wiederkehr last edited by

@peekay123: Just raised a hand for help in the espressif forums about the bug with the sd-card. See here: Looks like a memory corruption bug, where some pointer gets overwritten due to an out-of-bounds access. Unfortunately I'm working with 1.7.5.b2 sources as 1.7.6.b1 sources are still not available...

Running an SD test on LoPy / Expansion board with firmware 1.7.6.b1 causes the LoPy to hang after an inconsistent number of cycles. This test ran flawlessly on 1.7.5.b2.

import machine
import os
import utime
import gc
from machine import SD

sd = SD()
os.mount(sd, '/sd')
# check the content
os.listdir('/sd')

f = open('/sd/test', 'a')
t1 = utime.ticks_ms()
ttot = 0
n = 0
print("Starting")
while n < 300000:
    # initializing buffer
    k = bytearray(1024)
    # write buffer and flush to file
    f.write(k)
    f.flush()
    t2 = utime.ticks_ms()
    # calculate running average
    ttot = (ttot + (t2 - t1)) / 2
    gc.collect()
    n += 1
    # print("Round", n, ", ", ttot, "ms \n\r", end="")
    t1 = utime.ticks_ms()
    if n % 1000 == 0:
        print("Rounds ", n, "\r", end="")
print("\n\rRounds", n, ", avg ", ttot, "ms \n\r", end="")

@jmarcelino Thanks for the pieces of information. Note that a call of LoRa(mode=LoRa.LORAWAN, adr=True, device_class=LoRa.CLASS_A) for instance reinitializes the frame counter. So, besides the encryption keys to communicate, part of the stored LoRaWAN context is lost or overwritten. So care must be taken not to redefine the LoRa object after a deep sleep or power cycle, in order not to reset some of the LoRaWAN context. Am I right?

Edit: it seems inconsistent to me not to redefine the LoRa object used for the "first" join after a deep sleep or a power cycle. If one does not redefine this object, one cannot check whether the LoPy "has joined" the network. However, as I said above, reinitializing this object, at least, resets the uplink frame counter. Furthermore, if I sequentially send a frame, recall the LoRa constructor, send a frame, and so on, I only see downlinks in the (Objenious) network backend, with an increasing counter, but no uplinks.

@Eric24 Hi, good point I guess. I ended up doing so yesterday, using machine.reset_cause(), which returns 0 or 1 after a sync or reset button press, or 3 after a deep sleep. This function is quite handy to recover the REPL when something goes wrong in the main program (for instance, use the reset button to toggle a "debug" boolean which is set to inhibit the main program if set to true).

@iotmaker ohh i saw that a change and now you need to provide pin 9 and pin 10 on the initialization

How come when i upgrade the SiPy it says upgrading to 1.7.6.b1 and whenever i do os.uname().release it gives me 1.7.5.b2? Also on 1.7.5.b2 I2C does not work... I put a SiPy with 1.6.13.b1 on the same expansion board where the i2c sensor is and IT WORKS.. so how come upgrading the firmware broke my application?

@jellium I think all you'd need to do is provide a mechanism (maybe a button press or a jumper during power-up, etc.) by which the end-user could trigger a new LoRaWAN join. The new join will overwrite whatever has been stored in NVS (i.e. there's no need to specifically delete that information).

- jmarcelino last edited by jmarcelino

@jellium In LoRaWAN the concept of having "joined" a network only means you now hold the encryption keys to communicate with it. So if you've done OTAA and the keys are stored, it's "joined" forever. There is no way in LoRaWAN for a node to know if it's actually communicating with anything on the other side other than re-joining or somehow requesting a downlink. If the downlink fails to appear, then your node will know it's no longer within range of the network (or maybe it's time to call lora.join() again).

@daniel said in New firmware release 1.7.6.b1:

- esp32: Store the LoRaWAN state in NVS, to avoid having to re-join after waking up from deep sleep (or a power cycle). Calling lora.has_joined() after deep sleep returns True if the network was previously joined.

Super good! However, what do you mean by "if the network was previously joined"? Because now I can't manage to have lora.has_joined() return False after a deep sleep (or a power cycle)...

Suppose I have joined a LoRaWAN network. Then I switch off the board, unplug it, press its buttons to wipe any remaining power, unplug the antenna and plug the board back in. From here, lora.has_joined() will return True, even if no network is actually joined. For instance, if from here I manually try to rejoin, it will fail (as expected since no antenna is plugged in), and lora.has_joined() will return False, as it should (since no network is actually joined).

In other words, it seems that after a deep sleep or power cycle, any call of LoRa(mode=LoRa.LORAWAN) will set the has_joined() state to True even if no network is joined. How can I manually "erase" the has_joined() state in NVS?

Thanks!
CC-MAIN-2022-21
refinedweb
1,774
65.93
Is this the recommended way to get the bytes from the ByteBuffer ByteBuffer bb =.. byte[] b = new byte[bb.remaining()] bb.get(b, 0, b.length); I'm reading a 16 byte array (byte[16]) from a JDBC ResultSet with rs.getBytes("id") and now I need to convert it to two long. How can I do that? This is the code ... byte[16] ResultSet rs.getBytes("id") Problem I need to convert two ints and a string of variable length to bytes. What I did I converted each data type into a byte array and then added them into a byte ... I have a Java class public class MsgLayout{ int field1; String field2; long field3; } I have a byte array in Java of size 4, which I am trying to put the first two bytes of into ByteBuffer. Here is how I am doing it: byte ByteBuffer byte[] array ...
http://www.java2s.com/Questions_And_Answers/Java-Collection/Array-Byte/bytebuffer.htm
CC-MAIN-2014-15
refinedweb
150
85.18
Opened 11 years ago Closed 11 years ago #5912 closed (invalid) Models not enforcing required fields Description Creating an instance of a model with CharFields will fill DB fields with single quotes ( '') if the CharField is not populated. class Person(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) person = Person() person.first_name '' person.save() Resulting DB record: first_name = '', last_name = '' Using PostgreSQL 8.1 as the DB server... psycopg2 as the driver. I'm running the latest SVN just updated this morning. Change History (16) comment:1 follow-up: 2 Changed 11 years ago by comment:2 Changed 11 years ago by It is just returning an empty string, to this best of my knowledge this is because by default blank and null are both set to false, therefore it can't actually be empty. Well this is a rather large change in behavior from .96, I would think it should be in the docs. Previously blank=False, and null=False meant that the field was required and if you tried to save it it would attempt to save null to the DB which would result in an error because of the not null constraint on the DB. What you are saying is I should put null=True so that the field can hold a null value by default, but that will get rid of the not null constraint on the DB, and I will get null's in the db instead of proper "required field" enforcement. I edited django.db.models.fields.init and added a empty_strings_allowed = False at the top of the CharField class... this restores the previous behavior, however I'm sure it breaks a million other things I haven't found yet. comment:3 Changed 11 years ago by Hrm, you are correct, that should be db constrained, somehow I missed that. comment:4 Changed 11 years ago by so, do you think my change will break a bunch of other stuff? I haven't run into any other problems yet with the change applied.. Maybe you can think of a minefield waiting for me though? 
comment:5 Changed 11 years ago by The comments so far are debugging the wrong problem altogether. As I understand it, the issue is that the implicit blank=False here isn't being enforced. If that is indeed true, then it's a bug and we should fix it. It's a matter of why the validation for the 'blank' attribute isn't being enforced, nothing else. However, this has nothing to with empty_strings_allowed and empty strings certainly are allowed at the database level for CharFields. comment:6 Changed 11 years ago by comment:7 Changed 11 years ago by Ok, I was confused about how the processing went. Basically the null=True/False is just a flag on whether or not to store a null in the DB, blank=True/False is the validation on the django side of whether or not the field is required? that being the case, I question whether this was ever happening, because even in .96 the error message I get is "insert failed because it violates not null constraint at the db level". When I save() a model with blank fields where blank=False and null=False I get the db error, not a django error saying "this is a required field" or "this field cannot be left blank". Seems like that is the wrong order at least, it should enforce blank before it incurs the hit on the db by trying to insert. I will attempt another test in .96, I will set blank=False, but null=True and attempt to save to db. I would expect that to come back with a django error saying "field x is required". Is that the expected behavior? comment:8 Changed 11 years ago by ok, a little more info. With my previous change removed, if I do a model.validate() I get an error that says "this field required" for each "required" field. However, model.save() apparently doesn't care, and just saves anyway. Seems to me that save() should validate() first and make sure its ok to save, otherwise there is quite a bit of code I need to add to my side to validate before every save.. This seems to violate the DRY principle. 
The model.save() function doesn't call validate at all is this a design decision? comment:9 Changed 11 years ago by I added this to the beginning of the models save() function... def save(self, raw=False): error_dict = self.validate() if len(error_dict) > 0: raise validators.ValidationError([key + ": " + ";".join(error_dict[key]) for key in error_dict.keys()]) I don't know how you would want to output the error messages, since this can return a list of fields with errors, and a list of errors for each field... for me, as long as it outputs some error and doesn't save the model, that is all I need. comment:10 Changed 11 years ago by validate() does not work yet and is not intended to be used. That's why it isn't documented. Also, save() shouldn't unconditionally call validate() because they happen at different points in the processing pipeline and different error handling is needed. Implementing the missing pieces of validate() is work in progress. I am now beginning to understand what you are doing and it's possibly something that isn't meant to work yet. If you try to enter a blank string, e.g. in the admin interface, when blank=False do you get an error or not. If you don't see an error, it's a bug. If not, it's working as well as it should at the moment. comment:11 Changed 11 years ago by In admin it does enforce the required fields, I am assuming that is being done further up the chain though (by manipulators or newforms, I'm not sure which admin uses) That being the case, how can I help get this implemented faster? I am writing a rather large project using django, it has a large web component where the input can be validated by forms, however, there is an equally large backend piece which will be communicating directly at the model level and we need the models to validate (at least enforce required fields, although it would be nice if we could do other data integrity checking at that level as well). 
If I spend the next 2 weeks just working on model validation that will probably be ok with my masters. Is there someone in charge of this piece of the code? Something I should read to get up to speed on the design of how the django project wants this to work? It seems like a rather fundamental piece, and frankly I'm pretty surprised it isn't done. I would propose changing the documentation to make it absolutely clear that models are not validated in any way. This really seems pretty huge to me from a data integrity standpoint. Someone can write code to interact directly with a model and as long as they don't violate any DB constraints, they can royally screw up the data. What is the "django-way" for doing data validation currently? Or are all django projects just living on the edge hoping someone doesn't write code to directly interact with their models? comment:12 Changed 11 years ago by That being said, for the time being I'm leaving my change to models.save() in my django code, I ran my test suite on my project and everything checks out for *my project* with the change in there. I need to keep my data safe, if it breaks other things I'll deal with that bridge when I get there, its more important to me that I don't end up with a DB full of records with *required* fields missing. comment:13 Changed 11 years ago by I am unsure where the problem actually is. I tested your example in both the latest revision of trunk and 0.96 and person.first_name is defaulted to an empty string and able to be saved to the database. Your model definitions may have differed from the actual database fields. Since I am not sure how data is being passed to the model in your system (in most cases is through newforms) that your model definition would be raising a ValidationError before being committed to the database. blank is not used by the model, but only by form handling. By default blank is False . Can you please provide some more specific examples of how this is failing? 
It sounds like, if anything, an admin issue, but if it were would have been much more of an issue in the past. comment:14 Changed 11 years ago by The problem is there is no way to enforce a "required" field at the model level. I have certain model fields that I don't want null to be allowed and I don't want the empty string to be allowed either. They are required fields, they have to have data in them. I have 2 interfaces a web interface (which uses newforms and is validated through the form validation), and an API interface which is designed for mass automated updates from scripts and other backend systems communicating in code. I wanted to make model fields "required" IE, not blank, not null, and have that be enforced at the point of attempting to .save() the model (or preferably a model.is_valid() call). I have since been made aware that this is not supported at all in django, so I re-wrote my API model to use forms behind the scenes and bind the data being submitted through the API to a newforms form, validate it there, and then save to the models from the form data. I don't know if that is the "right" way to do this, but it works and that is how I dealt with this shortcoming in the models themselves. I spent about a week though trying to figure out how to enforce a "required" field on the model itself before posting this as a bug, this can probably be closed. I understand someone is working on a new model validation system which should support the functionality I wanted, for now, the forms method seems to be working fine as well. comment:15 Changed 11 years ago by To my understanding and how I feel is that models will never do this type of validation. The form layer of abstraction is where validation of data should happen whether it come from a web form or an API you are providing. Model aware validation, to me, would be things that have to be enforced at the database layer. Such as unique and unique_together . 
Your use of newforms is a great example of how it should be used in other places than just forms. I'll leave it to a core dev to correct me or close this ticket. comment:16 Changed 11 years ago by If it's working at the level of forms -- e.g., in the admin -- then this is something that's not really a bug, because Django models do not, at the moment, have the ability to validate themselves without help from a form and are not, at the moment, meant to have that ability (though as Malcolm has pointed out it's an eventual goal). So I'm closing this invalid for now. It is just returning an empty string, to this best of my knowledge this is because by default blank and null are both set to false, therefore it can't actually be empty.
https://code.djangoproject.com/ticket/5912
I have to persist 2 strings for my application even after the application is uninstalled. Given that the end users don't have SD cards for their devices and they don't have an internet connection, how could I persist those 2 strings even after the app is uninstalled? I would highly appreciate any response. Thanks

Unless you're targeting VERY old phones, you don't need to worry about not having external storage. As long as you use Environment.getExternalStorageDirectory() as your reference, you shouldn't have a problem, though if you're absolutely concerned about this you can check if the external storage doesn't exist and then opt to go to internal storage. Check out this link from the developer docs for a little more insight. If external truly isn't available, you could then save to internal memory, but you will have to declare a new permission for that, which may ward off some people.

Answer: You have to write it to an SD card / internal storage, and hope the user does not remove that. However, this is a very fragile approach. There is no other solution, as far as I know.

Answer: The phone's internal storage is also treated as an "SD card". If you create a folder and save it in a text file, it should be safe given that the user does not manually delete folders after uninstall. Please check out the section "Saving files that should be shared" in the following web page. Making a file that persists after uninstall entails making it available to other apps and the user to read and modify. If those file options aren't intended, you should consider an alternative app design. After re-install, your app can access the created public directory by using the following function:

public static File getExternalStorageDirectory()

Regarding the function above, per Google:

Note: don't be confused by the word "external" here. This directory can better be thought of as media/shared storage. It is a filesystem on a computer.

Also, Google recommends placing shared files into an existing public directory so as to not pollute the user's root namespace.

Answer: Are the strings unique to each user or are they app specific? In either case, the right thing to do would be to save them on some kind of remote server. Firebase is what I use for something like this. Check for its existence in your Application class and download and save it to SQLite if it doesn't exist. For user-specific data however, you are going to need some kind of authentication so you know which user is getting what. Firebase does this perfectly well too. Going by the requirements (no internet, no SD card) of the OP, however, I don't see any other way besides one that isn't unethical.

Tags: android
https://exceptionshub.com/save-android-persist-data-after-uninstall.html
May 31st 2020 A lot of us have struggled with algorithms and data structures. When I began with programming at my University, the name of a subject that got my attention and got me motivated to find my life call was the Principles of programming. I started devoting a lot of time to figuring out how to solve professors’ assignments. Now those algorithms are easy for me but then they were not, and I couldn’t pass the exam, I had the wrong approach in learning, you can not memorize them, you must practice and figure out what you need to do. We draw flowcharts to show results using just if and while loop, without any predefined code. I think now that that was the best subject where we could learn how to create an algorithm, how to think like programmers. My reason for writing this article is to give you basic problems that occur the most in everyday programming. 1. Create function pozpar that for argument have an array of whole numbers y. Function needs to return every element of the array that is positive and even, reduced by 2 and the rest of the elements to enlarge by 1 def pozpar(y) n = y.length i = 0 while i < n e = y[i].round q = e / 2 if e > 0 && q * 2 == e y[i] = e - 2 else y[i] = e + 1 end i+=1 end return y end p pozpar([1,2,3,4.4,5,6]) Output: [2, 0, 4, 2, 6, 4] Pretty straight forward, in variable n I placed a length of an array, then set variable i to be 0 while i is smaller than the length of array loop through elements, floats are not allowed, convert them to the nearest integer, then make a condition to ensure that numbers must be positive and even, change elements of an array, even numbers smaller by 2 and odd numbers bigger by 1. This assignment is done. 2. Write a function with two arguments, both of them strings s1 and s2, if in string s2 big letter exists substitute it with the first element of string s1. The output should be string s2 only if substitution happened, if not output friendly message. 
def strings(s1,s2) i=0 n2 = s2.length count = 0 while i < n2 if s2[i] >= 'A' && s2[i] <= 'Z' s2[i] = s1[0] count +=1 end i+=1 end if count > 0 p s2 else print "There is no change in s2" end end strings('[email protected]', 'AlldapPWDA<zx4567?') Output: "zlldapzzzz<zx4567?" So, I set i to be 0, n2 is the length of string s2, I started iterating through a string, Uppercase A has ASCII value 65 in decimal, for Z, the value is 90 in decimal, in if statement I am looking for big letters if there is any I will substitute it with the first element of s1, count that needs to check will there be any iterations if not output will be a friendly message. Back then, as a student, my first year, the first exam, when I saw the assignment, as this one is, I would feel like the most stupid person in the world. If I practiced more everything would be ok. 3. Create a function that accepts string t. If the string doesn’t contain a number, output number of big letters in the string, if it does contain number reduce those numbers bigger of 3 by 1. Output changed string. def stringy(t) count = 0 i = 0 n = t.length res = false while i < n if !(t[i] >= '0' && t[i] <= '9') if t[i] >= 'A' && t[i] <= 'Z' count+=1 res = true end elsif t[i] >= '0' && t[i] <= '9' if t[i] > '3' inte = t[i].to_i t[i] = (inte -1).to_s end res = false end i+=1 end if res == true p count else p t end end stringy('asdjJKLj1256657') Output: ‘asdjJKLj1245546’ Read and do, if it doesn’t contain number count letters if it does reduce those bigger than 3 by 1. I added variable res to be false so I And the last assignment on the exam for future Ruby developer. 4. Create a function with string r, which needs to check if a string is a name, for a string to be a name it must contain the first big letter, and the rest of the letters should be small. Output friendly message to tell the user if a string can be a name or not. 
```ruby
def name(s)
  i = 0
  j = 1
  n = s.length
  res = s[i] >= 'A' && s[i] <= 'Z'
  while j < n
    if !(s[j] >= 'a' && s[j] <= 'z')
      res = false
    end
    j += 1
  end
  if res == true
    print "This string can be a name"
  else
    print "This string can't be a name"
  end
end

name('Ivana')
```

Output: `This string can be a name`

Very easy: I set variable i to 0 and variable j to 1; n contains the length of the string. res starts out true only if the first character is a big letter, and while j < n we check that the rest of the letters are small — a number or any other character is not allowed, so anything different from the desired result turns res false for good.

Trying to do algorithms the hard way — without a lot of predefined functions, with just two kinds of loops — gives us a better understanding of how something works. Without using predefined functions we can more easily understand what they are doing once we start using them. To check for big letters I could just do str.upcase and get the job done, but it is no problem to look at the ASCII table and try to do it without that function. Without using each, each_with_index, unless, for, regular expressions, map, count, etc., we can do it without those helpers that make our life easier. A good practice is to code all of these loops and functions as I did in these algorithms, using just while and if; with practice like this we can understand more easily how those predefined functions work. And with practice like this, we will not have problems in interviews with small problems — like the part where we need to do something with a string or an array. We won't fail because of some stupid mistake where we didn't pay attention; and if we fail, it will be easier to take, and for the next interview we will be prepared, not disappointed because of a beginner mistake.

read original article here:
https://coinerblog.com/programming-principles-for-beginners-ug733v8e/
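A hedged aside (not part of the original article): once the while/if versions above are understood, the built-ins the author mentions (map, tr, count, and a regular expression) collapse the same exercises into one-liners. The `_v2` names and the `'zebra'` sample string are illustrative only:

```ruby
# Idiomatic equivalents of exercises 1, 2 and 4 above.

# 1. pozpar: positive even numbers -2, everything else +1
def pozpar_v2(y)
  y.map { |e| e = e.round; e > 0 && e.even? ? e - 2 : e + 1 }
end

# 2. strings: replace every uppercase letter in s2 with s1[0]
def strings_v2(s1, s2)
  s2.count('A-Z') > 0 ? s2.tr('A-Z', s1[0]) : "There is no change in s2"
end

# 4. name: one uppercase letter followed only by lowercase letters
def name_v2(s)
  s.match?(/\A[A-Z][a-z]*\z/) ? "This string can be a name" : "This string can't be a name"
end

p pozpar_v2([1, 2, 3, 4.4, 5, 6])           # => [2, 0, 4, 2, 6, 4]
p strings_v2('zebra', 'AlldapPWDA<zx4567?') # => "zlldapzzzz<zx4567?"
p name_v2('Ivana')                          # => "This string can be a name"
```

Comparing the one-liners with the while/if versions is a good way to see exactly what map, tr, count and match? are doing under the hood.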
Discussion is limited to the x86 architecture, and all source code listings are based on Linux kernel 2.6.15.6.

1. What are system calls?

System calls provide userland processes a way to request services from the kernel. What kind of services? Services managed by the operating system, like storage, memory, network, and process management. For example, if a user process wants to read a file, it will have to make 'open' and 'read' system calls. Generally system calls are not called by processes directly; the C library provides an interface to all system calls.

2. What happens in a system call?

A kernel code snippet is run on request of a user process. This code runs in ring 0 (with current privilege level - CPL - 0), which is the highest level of privilege in the x86 architecture. All user processes run in ring 3 (CPL 3). So, to implement a system call mechanism, what we need is 1) a way to call ring 0 code from ring 3, and 2) some kernel code to service the request.

3. Good old way of doing it

Until some time back, Linux used to implement system calls on all x86 platforms using software interrupts. To execute a system call, the user process copies the desired system call number to %eax and executes 'int 0x80'. This generates interrupt 0x80, and an interrupt service routine is called. For interrupt 0x80, this routine is an "all system calls handling" routine, and it executes in ring 0. This routine, as defined in the file /usr/src/linux/arch/i386/kernel/entry.S, saves the current state and calls the appropriate system call handler based on the value in %eax.

4. New shiny way of doing it

It was found that this software interrupt method was much slower on Pentium IV processors. To solve this issue, Linus implemented an alternative system call mechanism to take advantage of the SYSENTER/SYSEXIT instructions provided by all Pentium II+ processors. Before going further with this new way of doing it, let's make ourselves more familiar with these instructions.
4.1. SYSENTER/SYSEXIT instructions

Let's look at the authorized source, the Intel manual itself. The Intel manual says:

The SYSENTER instruction is part of the "Fast System Call" facility introduced on the Pentium® II processor. The SYSENTER instruction is optimized to provide the maximum performance for transitions to protection ring 0 (CPL = 0). The SYSENTER instruction sets the following registers according to values specified by the operating system in certain model-specific registers:

- CS register: set to the value of (SYSENTER_CS_MSR)
- EIP register: set to the value of (SYSENTER_EIP_MSR)
- SS register: set to the sum of (8 plus the value in SYSENTER_CS_MSR)
- ESP register: set to the value of (SYSENTER_ESP_MSR)

Looks like the processor is trying to help us. Let's look at SYSEXIT also very quickly:

The SYSEXIT instruction is part of the "Fast System Call" facility introduced on the Pentium® II processor. The SYSEXIT instruction is optimized to provide the maximum performance for transitions to protection ring 3 (CPL = 3) from protection ring 0 (CPL = 0). The SYSEXIT instruction sets the following registers according to values specified by the operating system in certain model-specific or general purpose registers:

- CS register: set to the sum of (16 plus the value in SYSENTER_CS_MSR)
- EIP register: set to the value contained in the EDX register
- SS register: set to the sum of (24 plus the value in SYSENTER_CS_MSR)
- ESP register: set to the value contained in the ECX register

SYSENTER_CS_MSR, SYSENTER_ESP_MSR, and SYSENTER_EIP_MSR are not really names of the registers. Intel just defines the addresses of these registers as:

```
SYSENTER_CS_MSR   174h
SYSENTER_ESP_MSR  175h
SYSENTER_EIP_MSR  176h
```

In Linux these registers are named as follows:

```
/usr/src/linux/include/asm/msr.h:
101 #define MSR_IA32_SYSENTER_CS    0x174
102 #define MSR_IA32_SYSENTER_ESP   0x175
103 #define MSR_IA32_SYSENTER_EIP   0x176
```
4.2. How does Linux 2.6 use these instructions?

Linux sets up these registers during initialization itself:

```
/usr/src/linux/arch/i386/kernel/sysenter.c:
36 wrmsr(MSR_IA32_SYSENTER_CS, __KERNEL_CS, 0);
37 wrmsr(MSR_IA32_SYSENTER_ESP, tss->esp1, 0);
38 wrmsr(MSR_IA32_SYSENTER_EIP, (unsigned long) sysenter_entry, 0);
```

Please note that 'tss' refers to the Task State Segment (TSS), and tss->esp1 thus points to the kernel mode stack. [4] explains the use of the TSS in Linux as:

- When the x86 CPU switches from User Mode to Kernel Mode, it fetches the address of the Kernel Mode stack from the TSS.
- When a User Mode process attempts to access an I/O port by means of an in or out instruction, the CPU may need to access an I/O Permission Bitmap stored in the TSS to verify whether the process is allowed to address the port.

So during initialization the kernel sets up these registers such that after the SYSENTER instruction, ESP is set to the kernel mode stack and EIP is set to sysenter_entry.

The kernel also sets up system call entry/exit points for user processes. The kernel creates a single page in memory and attaches it to every process's address space when it is loaded into memory. This page contains the actual implementation of the system call entry/exit mechanism. The definition of this page can be found in the file /usr/src/linux/arch/i386/kernel/vsyscall-sysenter.S. The kernel calls this page the virtual dynamic shared object (vdso).
The existence of this page can be confirmed by looking at /proc/<pid>/maps:

```
slax ~ # cat /proc/self/maps
08048000-0804c000 r-xp 00000000 07:00 13       /bin/cat
0804c000-0804d000 rwxp 00003000 07:00 13       /bin/cat
0804d000-0806e000 rwxp 0804d000 00:00 0        [heap]
b7ea0000-b7ea1000 rwxp b7ea0000 00:00 0
b7ea1000-b7fca000 r-xp 00000000 07:03 1840     /lib/tls/libc-2.3.6.so
b7fca000-b7fcb000 r-xp 00128000 07:03 1840     /lib/tls/libc-2.3.6.so
b7fcb000-b7fce000 rwxp 00129000 07:03 1840     /lib/tls/libc-2.3.6.so
b7fce000-b7fd1000 rwxp b7fce000 00:00 0
b7fe7000-b7ffd000 r-xp 00000000 07:03 1730     /lib/ld-2.3.6.so
b7ffd000-b7fff000 rwxp 00015000 07:03 1730     /lib/ld-2.3.6.so
bffe7000-bfffd000 rwxp bffe7000 00:00 0        [stack]
ffffe000-fffff000 ---p 00000000 00:00 0        [vdso]
```

For binaries using shared libraries, this page can be seen using ldd as well:

```
slax ~ # ldd /bin/ls
        linux-gate.so.1 => (0xffffe000)
        librt.so.1 => /lib/tls/librt.so.1 (0xb7f5f000)
        ...
```

Observe linux-gate.so.1. This is no physical file. The content of this vdso can be seen as follows:

```
==> dd if=/proc/self/mem of=linux-gate.dso bs=4096 skip=1048574 count=1
1+0 records in
1+0 records out

==> objdump -d --start-address=0xffffe400 --stop-address=0xffffe414 linux-gate.dso

ffffe400 <__kernel_vsyscall>:
ffffe400:  51       push %ecx
ffffe401:  52       push %edx
ffffe402:  55       push %ebp
ffffe403:  89 e5    mov  %esp,%ebp
ffffe405:  0f 34    sysenter
  ...
ffffe40d:  90       nop
ffffe40e:  eb f3    jmp  ffffe403 <__kernel_vsyscall+0x3>
ffffe410:  5d       pop  %ebp
ffffe411:  5a       pop  %edx
ffffe412:  59       pop  %ecx
ffffe413:  c3       ret
```

In all listings, ... stands for omitted irrelevant code.

Initiation: Userland processes (or the C library on their behalf) call __kernel_vsyscall to execute system calls. The address of __kernel_vsyscall is not fixed. The kernel passes this address to userland processes using the AT_SYSINFO elf parameter. AT_* elf parameters, a.k.a. elf auxiliary vectors, are loaded on the process stack at startup, along with the process arguments and the environment variables.
Look at [1] for more information on Elf auxiliary vectors.

After moving to this address, the registers %ecx, %edx and %ebp are saved on the user stack, and %esp is copied to %ebp before executing sysenter. This %ebp later helps the kernel restore the userland stack. After executing the sysenter instruction, the processor starts execution at sysenter_entry.

sysenter_entry is defined in /usr/src/linux/arch/i386/kernel/entry.S as (see my comments in [ ]):

```
179 ENTRY(sysenter_entry)
180     movl TSS_sysenter_esp0(%esp),%esp
181 sysenter_past_esp:
182     sti
183     pushl $(__USER_DS)
184     pushl %ebp                [%ebp contains userland %esp]
185     pushfl
186     pushl $(__USER_CS)
187     pushl $SYSENTER_RETURN    [userland return addr]
188     ....
201     pushl %eax
202     SAVE_ALL                  [pushes registers on to stack]
203     GET_THREAD_INFO(%ebp)
204
205 /* Note, _TIF_SECCOMP is bit number 8, and so it needs testw and not testb */
206     testw $(_TIF_SYSCALL_EMU|_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), TI_flags(%ebp)
207     jnz syscall_trace_entry
208     cmpl $(nr_syscalls), %eax
209     jae syscall_badsys
210     call *sys_call_table(,%eax,4)
211     movl %eax,EAX(%esp)
    ......
```

Inside sysenter_entry, between lines 183 and 202, the kernel saves the current state by pushing register values onto the stack. Observe that $SYSENTER_RETURN is the userland return address as defined inside /usr/src/linux/arch/i386/kernel/vsyscall-sysenter.S, and %ebp contains the userland ESP, as %esp was copied to %ebp before calling sysenter. After saving the state, the kernel validates the system call number stored in %eax. Finally, the appropriate system call is called using the instruction:

```
210     call *sys_call_table(,%eax,4)
```

This is very much similar to the old way. After the system call is complete, the processor resumes execution at line 211.
Looking further in the sysenter_entry definition:

```
210     call *sys_call_table(,%eax,4)
211     movl %eax,EAX(%esp)
212     cli
213     movl TI_flags(%ebp), %ecx
214     testw $_TIF_ALLWORK_MASK, %cx
215     jne syscall_exit_work
216 /* if something modifies registers it must also disable sysexit */
217     movl EIP(%esp), %edx       (EIP is 0x28)
218     movl OLDESP(%esp), %ecx    (OLDESP is 0x34)
219     xorl %ebp,%ebp
220     sti
221     sysexit
```

Line 211 copies the value in %eax to the stack. The userland ESP and return address (to-be EIP) are copied from the kernel stack to %edx and %ecx respectively. Observe that the userland return address, $SYSENTER_RETURN, was pushed onto the stack at line 187; after that, 0x28 bytes were pushed onto the stack. That's why 0x28(%esp) points to $SYSENTER_RETURN. Then the SYSEXIT instruction is executed. As we know from the previous section, sysexit copies the value in %edx to EIP and the value in %ecx to ESP. sysexit transfers the processor back to ring 3, and the processor resumes execution in userland.

5. Some Code

```c
#include <stdio.h>

int pid;

int main()
{
    __asm__(
        "movl $20, %eax \n"
        "call *%gs:0x10 \n"   /* offset 0x10 is not fixed across the systems */
        "movl %eax, pid \n"
    );
    printf("pid is %d\n", pid);
    return 0;
}
```

This does the getpid() system call (__NR_getpid is 20) using __kernel_vsyscall instead of int 0x80.

Why %gs:0x10? Parsing the process stack to find out AT_SYSINFO's value can be a cumbersome task. So, when libc.so (the C library) is loaded, it copies the value of AT_SYSINFO from the process stack to the TCB (Thread Control Block). The segment register %gs refers to the TCB. Please note that the offset 0x10 is not fixed across systems; I found it out for my system using GDB. A system-independent way to find out AT_SYSINFO is given in [1].

Note: This example is taken from after little modification to make it work on my system.

6. References

Here are some references that helped me understand this.

About Elf auxiliary vectors, by Manu Garg
What is linux-gate.so.1?
by Johan Petersson
The Linux kernel: System Calls, by Andries Brouwer
Understanding the Linux Kernel, by Daniel P. Bovet and Marco Cesati
The Linux kernel source code
Source: http://manugarg.googlepages.com/systemcallinlinux2_6.html
Details

- Type: Improvement
- Status: Closed
- Priority: Trivial
- Resolution: Fixed
- Affects Version/s: None
- Component/s: C++ - Library
- Labels: None
- Environment: Windows XP 32bit, vc++ 9.0, 10.0
- Patch Info: Patch Available

Description

At our company we need clients running on Windows to be able to connect to our Linux servers running Hypertable. The attached patch enables the parts needed by Hypertable to be compiled on Windows using either the VC++ 9.0 or 10.0 compilers.

Having read previous posts about ports using boost::asio, we found these to be too intrusive for our needs. This version uses pthreads_win32 and winsock2 and is designed to be as un-intrusive as possible to the original Unix code base. It is mostly #defines switching between Unix sockets and winsock2 sockets. We also tried to follow the folder structure of the C# runtime, which has Visual Studio solutions, to be consistent.

More details are in the README, as not all the functionality of the original Unix code base is available to Windows users. We will add the missing functionality; we just wanted to share what we had, as a Windows-based client is sufficient for us.

The patch is based on the latest revision in SVN. We would love feedback and any code reviews. If there is any possibility of this being added to the main trunk then that would be much appreciated; however, we don't expect that.

Issue Links

- is part of THRIFT-1123: Patch to compile Thrift server and client for vc++ 9.0 and 10.0 (Closed)
- is related to THRIFT-591: Make the C++ runtime library be compatible with Windows and Visual Studio (Closed)
- relates to THRIFT-1736: Visual Studio top level project files within msvc (Open)

Activity

Any CPP people out there who could pass judgment on this patch?
A few issues:

1) Config.h is a bad name – ./configure already creates a config.h, and whatever the Windows configuration mechanism ends up being, it should be completely and obviously different from what ./configure outputs.

2) I really don't think we wanna sprinkle the code with any more ifdefs than there already are – I'd be more comfortable with slowly creating a set of utility functions that are reimplemented for each major platform group as needed (we've already run into this problem with Darwin's lack of clock_gettime(), and in a more distant fashion with supporting unix sockets). Or just using the Apache Portable Runtime, which solves this by making it Someone Else's Problem.

3) I'm not particularly enthused by having all files include a superset of the needed headers through including them all into Config.h – if anything we should reduce the (transitive closure of the) number of headers included per cpp file.

Thank you for the feedback, it is greatly appreciated. I will look into making those changes and try to get a new patch submitted as soon as possible.

Working on my project I needed to create Thrift clients and servers both in C++ and Java, and to have C++ interoperability on Windows and *NIX platforms. I found a great deal of help in this patch by Mr. Dickson, and I extended it with server functionality. The result is published as THRIFT-1123. I express my gratitude to Mr. Dickson and I am looking forward to any comments.

All, thank you once again for all the comments, both in this issue and in THRIFT-1123. It has really helped me. I have attached a new patch which is what I currently have as regards combining all the comments; however, it is by no means complete - it just compiles the client-side code at the moment. I have posted it mainly for a (hopefully) quick code review, to make sure I am not going down the wrong path.
To outline the changes:

- Platform specifics have been removed and replaced by using the Apache Portable Runtime (APR) v1.4.2.
- Config.h has been renamed to win32_config.h and moved into "build_windows/msvc##/", where ## is the compiler version.
- By using APR, very few headers are now included in this new config header - just enough to ensure POSIX types are available.
- Tests for client, concurrency, realloc and server have been added, but do not yet compile. (Work in progress.)

To reply to some comments:

@Christian Lavoie: why did you put (std::min) in parentheses in TBufferTransports.cpp? This is because somewhere in a Windows header (can't remember which one) min and max have been defined as macros and, as far as I am aware, cannot be turned off. The parentheses make sure the STL min/max versions get chosen instead. A pain, I know!

@Dragan Okiljevic: Thank you for working on the server side of things, I really appreciate it. However, in switching to using APR I have probably broken your changes. After I have finished the client side of things, I will endeavor to fix the server side for you.

Again, thank you for any comments and feedback.

James

Christian and Roger, thank you for your feedback. The comments, advice and propositions you gave are really of big help. James, it's great to see the improved version of your patch. As for the server side, thank you for your intention to fix it in the fashion applied in your new patch; if I can be of any help, I encourage you to contact me at any time.

Good job! I have one concern: the dependency on APR. I'm not very happy to have another runtime dependency for Thrift. We use Thrift on very small systems and do not like to pull in additional dependencies. We need just a little subset of APR. Are there any other alternatives to achieve this?

I agree that that dependency is not acceptable, especially for such an invasive change to TSocket. My proposed solution would be:

- For ctime_r, only use APR on Windows.
- For TSocket, instead of commenting out huge swaths of the code, just fork the class into TAprSocket. TSocket would continue to not build on Windows, but TAprSocket would be the alternative. Unix users could switch if they wanted to include the extra dependency.

Thank you for the quick feedback.

@Roger + David: I agree that the extra dependency is not ideal, especially as such a small part of it is used. The alternative I see would be to simply use Winsock2 on Windows as the previous patch did. If that would be the way to go then, incorporating previous comments, the changes to the previous patch would need to be:

1. Create a "platform.h" header, which on *nix machines includes "config.h" (generated by ./configure) and header files which are not found on Windows. This new header would be used in code where "config.h" had been included directly previously. (This is based on Roger's comment on 29/Mar/11 19:10, THRIFT-1123 part a.)

2. (1.) would not fix situations in which *nix-only headers have been included in a file. Replacing these headers with "platform.h" would pull in headers that file would not require, which has been a point of contention previously. However, if this is the only workaround then maybe it is acceptable?

@Roger: As regards part b. of your comment in THRIFT-1123 on 29/Mar/11 19:10, I like that solution and will make the change.

@Roger: As regards part c. of your comment in THRIFT-1123 on 29/Mar/11 19:10, it is a yes and no answer. Yes, in that the Winsock2 equivalents of those functions are named completely differently - for instance fcntl(...) becomes ioctlsocket(...). However, as far as requiring them to be macros, then no. Maybe an alternative is to make TSocket a template class where one of the template parameters is an OS socket layer policy. This would mean the implementation of low level socket routines could exist in platform specific folders (along with gettimeofday(...) on Windows) and leave the TSocket class almost the same.
Just to avoid any confusion in the above statement, TSocket would become:

```cpp
template< class SOCKET_POLICY >
class TSocket : public TVirtualTransport<TSocket>, private SOCKET_POLICY {
  ...
}
```

where SOCKET_POLICY implements the following:

- close(...)
- usleep(...)
- poll(...)
- reset(...)
- blocking(...)
- non_blocking(...)

If people are happy with the proposal I will start those changes, but would appreciate any comments beforehand.

Thanks a lot James for starting this patch. I also reviewed the makefile-project (nmake) based patch in THRIFT-591, but I preferred your way of doing it without makefiles. I downloaded your patch and it works great for creating libthrift.lib. Looking forward to having this patch make it into the next release.
I have a simple enum enum AccountType {CHECKING, SAVINGS} In the generated AccountType_types.cpp, the following line fails to compile: const std::map<int, const char*> _AccountType_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(2, _kAccountTypeValues, _kAccountTypeNames), ::apache::thrift::TEnumIterator(-1, NULL, NULL)); with the following error: 1>c:\program files\microsoft visual studio 9.0\vc\include\xutility(764) : error C2039: 'iterator_category' : is not a member of 'apache::thrift::TEnumIterator' 1> e:\tools\thrift\thrift-0.6.0-patched\lib\cpp\src\thrift.h(50) : see declaration of 'apache::thrift::TEnumIterator' 1> c:\program files\microsoft visual studio 9.0\vc\include\xutility(1598) : see reference to class template instantiation 'std::iterator_traits<_Iter>' being compiled 1> with 1> [ 1> _Iter=apache::thrift::TEnumIterator 1> ] 1> c:\program files\microsoft visual studio 9.0\vc\include\map(120) : see reference to function template instantiation 'void std::_Debug_range<_Iter>(_InIt,_InIt,const wchar_t *,unsigned int)' being compiled 1> with 1> [ 1> _Iter=apache::thrift::TEnumIterator, 1> _InIt=apache::thrift::TEnumIterator 1> ] 1> e:\temp\serial_cpp\gen-cpp\accounttype_types.cpp(18) : see reference to function template instantiation 'std::map<_Kty,_Ty>::map<apache::thrift::TEnumIterator>(_Iter,_Iter)' being compiled 1> with 1> [ 1> _Kty=int, 1> _Ty=const char *, 1> _Iter=apache::thrift::TEnumIterator 1> ] Any idea? Everything else is working fine, though. Thanks for the patch! @Xavier I got the same problem while trying to compile Thrift generated C++ code for enums using Visual Studio 2010. The map is supposed to be initialized using two iterator arguments (first, and last), so it will containt key/value pairs for each enumeration value. Quick & dirty solution: Last time I inspected the code, it seemed that this map is not used, so my code worked when I simply removed it from generated code by commenting it from Thrift generated files. 
(Worked not only for compilation, but for runtime usage of enums aswell). I'm not sure why MSVC doesn't recognize TEnumIterator as an iterator type. This class overloads '*' operator and returns std::pair<int, const char *> which seems to work on *nix, but not MSVC. Am I right? Anyway, there is many ways to populate std::map either during initialization or later, so I guess there is one both compatible with MSVC and *nix and Mac OSX compilers. I would suggest creation of a new issue for this problem, as it is lossely connected to the main topic of this one. Update: discussion on this issue continued at THRIFT-1139 I just observed that am facing lot of linker errors related to following: main.obj : error LNK2001: unresolved external symbol "public: virtual void __cdecl apache::thrift::concurrency::ReadWriteMutex::release(void)const " (?release@ReadWriteMutex@concurrency@thrift@apache@@UEBAXXZ) main.obj : error LNK2001: unresolved external symbol "public: virtual bool __cdecl apache::thrift::concurrency::ReadWriteMutex::attemptWrite(void)const " (?attemptWrite@ReadWriteMutex@concurrency@thrift@apache@@UEBA_NXZ) main.obj : error LNK2001: unresolved external symbol "public: virtual bool __cdecl apache::thrift::concurrency::ReadWriteMutex::attemptRead(void)const " (?attemptRead@ReadWriteMutex@concurrency@thrift@apache@@UEBA_NXZ) main.obj : error LNK2001: unresolved external symbol "public: virtual void __cdecl apache::thrift::concurrency::ReadWriteMutex::acquireWrite(void)const " (?acquireWrite@ReadWriteMutex@concurrency@thrift@apache@@UEBAXXZ) main.obj : error LNK2001: unresolved external symbol "public: virtual void __cdecl apache::thrift::concurrency::ReadWriteMutex::acquireRead(void)const " (?acquireRead@ReadWriteMutex@concurrency@thrift@apache@@UEBAXXZ) After going thru the libthrift.vcproj, observed that Mutex.cpp and many other .cpp files are excluded from build. Is it intended or its just present on my machine? 
Found the reason and solution for the above linker error in THRIFT-1123.

@Deepak Muley: You may find the following comment useful for both THRIFT-1123 and (possibly) THRIFT-1031. It describes two additional minor interventions needed to make your MSVC 2008/2010 Thrift project work like a charm.

@Dragan Okiljevic: Thank you.

This patch depends on the pthreads_win32 library, which carries an LGPL license. The ASF License FAQ lists the LGPL in the prohibited category, though it's unclear to me if this prohibition is limited to creating a derivative work from LGPL'd code, or if it also applies to runtime dependencies against standalone LGPL libraries. Does anyone know the answer?

I have created a fork of the latest Thrift 0.6.1 on GitHub, which comprises part of this patch, and also uses THRIFT-923 (non-blocking server with libevent). If interested, please refer to it.

Carl, did you get any answer (privately) to your LGPL-license-related question? I am curious to know the answer as well.
As to why I ported the compiler, this is so I can generate the testing code so that I can port all the test cases and make sure the library is equal to it's linux counterpart. From the readme: Using Thrift with C++ ===================== You need to define an enviroment variable called THIRD_PARTY. The project assumes that you have extracted the dependancies into their default structure into the path defined by THIRD_PARTY. e.g. $(THIRD_PARTY)/boost/boost_1_47_0/ Thrift is divided into two libraries. libthrift The core Thrift library contains all the core Thrift code. It requires boost shared pointers and pthreads_win32. libthriftnb This library contains the Thrift nonblocking server, which uses libevent. To link this library you will also need to link libevent. You MUST apply this patch to make generated code compile. Linking Against Thrift ====================== You need to link your project that uses thrift against all the thrift dependancies; in the case of libthrift, pthreads_win32, boost and for libthriftnb, libevent. In the project properties you must also set HAVE_CONFIG_H as force include the config header: "windows/confg.h" Dependencies ============ boost shared pointers libevent (for libthriftnb only) pthreads win32 Known issues ============ - Endianess has not been fully tested, may not work with doubles. - Currently does not support the non-blocking connect path. - Only supports the creation of clients, server sockets are not yet ported. - Does not support named pipes. (Supported in unix through unix domain sockets). TODO ==== - Port remaining classes in libthrift: - PosixThreadFactory - TFDTransport - TFileTransport - THttpClient - THttpServer - TSimpleFileTransport - TSSLSocket - TServerSocket - Port remaing classes in libthriftnb: - TNonblockingServer - Port test cases. (Not even started this. Run test cases in release mode?) - Autolink libraries depending on debug\release build. - Auto versioning. 
The introduction of the duplicate set of t_*_generators in the v0_2 patch just adds the potential for patches to be missed when new bugs arise within the generators. This functionality should be added into the current generators as an option flag to modify the resulting generator output where necessary. If I misunderstood your comments and this is just for debugging and will not be in the final patch, can you please provide it as a separate patch noted as just for debug.

With version 0.2 of the patch, the py and go generators make a call to chmod.

I have attached a new version of the patch with the changes applied directly to the generators, removing the need to have duplicates. It simply does not call chmod when compiling under MSVC.

Version 0_4 adds some missing headers and enables code to be compiled with iterator debugging enabled.

Attached a new version (0_5) with some bug fixes that I have encountered during testing today.

@James: I reviewed the patch, and have these remarks to share:

Overall:
========

Pros:
- This is a nice improvement to have a 0.8 patch for Windows
- Almost everything in thrift is ported (however see also Cons)
- IMHO this could be the base patch for the upcoming Windows support inside thrift

Cons:
- the server part is missing (as you mentioned), and especially async server support is incomplete (libevent and pthread based, very important for high-performance thrift serving)
- the new socket-windows-only code is troubling to me (see below for more details, and possible alternatives)
- it lacks a 64-bit target (should be trivial to add)

Details:
========

Those observations come mostly from my own experiment of maintaining a Windows port: see

My focus has been on supporting async server/client, so in a way I feel it is very complementary to this patch. I was hoping you could review my observations, and see if somehow you could blend the two together!
TSocket:
- It is not immediately obvious to me why creating a different socket class for Windows is necessary. On the github link above, TSocket has only a couple of minor changes, and runs/compiles fine on Windows.
- HOWEVER: Much more importantly, I don't think it will work for multi-threaded servers to use errno (see and)
- My own approach was to re-define errno to WSAGetLastError (only internally to thrift), and all the errno.h codes: It works as long as errno is not getting used inside any thrift .h (or it would force clients linking against thrift to do the same).
- So taking those approaches together (slight changes to the existing TSocket, and an override of errno) seems better to me, even if it requires special attention to never use errno inside .h files. Also, errors thrown by thrift now carry the WinSock error codes, and it works in multiple threads (like the pool thread).

lib\cpp\src\concurrency\PosixThreadFactory.cpp:
- In the port mentioned above, you'll find the minor tweaks to have it compile and run on Windows (it was taken from another JIRA patch, perhaps even this one!)

lib\cpp\src\server\TNonblockingServer.cpp:
- Lately Roger integrated a set of patches of mine which make the compilation on Windows happen without any changes. However, see in the code above the file win32-config.h, which defines SOCKOPT_CAST_T and AF_LOCAL to make this work.

TWinsockSingleton:
- I'm not sure it is necessary to have thrift initialize WinSock (especially since libevent might already do this, dunno). I chose to make it a documentation point.

Thanks, I feel overall the windows port is getting closer and closer.
TSocket: I have taken a look at what you have, and whilst it is indeed minimal, the main reason for re-implementing it for Windows was in response to David's proposal of creating TAprSocket, except I wasn't really a fan of using APR as it is yet another dependency to include. By using #'defs as you have done I kind of feel it is going down the route of trying to fit a square peg into a round hole (maybe I am being pedantic!). I think so long as the windows implementation of the TSocket interface fulfills the obligations and behavior of that interface then my preference would be to have an all windows implementation. Whilst what I have currently is merely a duplication with the non windows code ported over, eventually I would like to implement an entire windows version. Errno: Note that a windows specific implementation would also fix the errno issue as it would be possible to use WSAGetLastError directly as well as the windows error codes without the special care currently required in your version. I have also taken a look at your implementation of "fcntl" and it doesn't seem to follow the specification here, mainly throwing an integer value of -99 isn't specified: I'd really like this patch to be integrated into the main trunk, so if there is anything more I can do to make this happen please let me know. Just comitted the first portion of your patch related to the compiler! I would prefer the TSocket aproach from alexandre, he made good progress on making it more portable. There are just a few #ifdef's and it will be much easier to manage the code base with one implementation. I'm looking forward to bring this into the next release! 
Integrated in Thrift #265 (See) THRIFT-1031 Patch to compile Thrift for vc++ 9.0 and 10.0 (partial) no chmod on windows for go and py compiler Patch: James Dickson roger : Files : - /thrift/trunk/compiler/cpp/src/generate/t_go_generator.cc - /thrift/trunk/compiler/cpp/src/generate/t_py_generator.cc I have attached a new version (0_6) which includes suggestions from Alexandre. The following have now been ported: TNonblockingServer PosixThreadFactory TServerSocket I have also added the error code fixes for multi threading as Alexandre pointed out. In fact it is from Alexandre's patch, except instead of putting it into the global config include I have put it in a separate header which is only force included by TSocket and TServerSocket. Let me know what you think. I will continue pushing on to complete the porting of the remaining classes in the mean time. 0_6 is committed! Thank you James, Alexandre and all other contributors to make this happen!! Integrated in Thrift #266 (See) THRIFT-1031 Patch to compile Thrift for vc++ 9.0 and 10.0 Patch: James Dickson and Alexandre Parenteau roger : Files : - /thrift/trunk/compiler/cpp/src/windows - /thrift/trunk/compiler/cpp/src/windows/config.h - /thrift/trunk/compiler/cpp/src/windows/version.h - /thrift/trunk/lib/cpp/README_WINDOWS - /thrift/trunk/lib/cpp/libthrift.vcxproj - /thrift/trunk/lib/cpp/libthrift.vcxproj.filters - /thrift/trunk/lib/cpp/libthriftnb.vcxproj - /thrift/trunk/lib/cpp/libthriftnb.vcxproj.filters - /thrift/trunk/lib/cpp/src/concurrency/PosixThreadFactory.cpp - /thrift/trunk/lib/cpp/src/server/TNonblockingServer.h - /thrift/trunk/lib/cpp/src/transport/TServerSocket.cpp - /thrift/trunk/lib/cpp/src/transport/TSocket.cpp - /thrift/trunk/lib/cpp/src/windows - /thrift/trunk/lib/cpp/src/windows/Fcntl.cpp - /thrift/trunk/lib/cpp/src/windows/Fcntl.h - /thrift/trunk/lib/cpp/src/windows/GetTimeOfDay.cpp - /thrift/trunk/lib/cpp/src/windows/GetTimeOfDay.h - /thrift/trunk/lib/cpp/src/windows/Operators.h - 
/thrift/trunk/lib/cpp/src/windows/SocketPair.cpp - /thrift/trunk/lib/cpp/src/windows/SocketPair.h - /thrift/trunk/lib/cpp/src/windows/StdAfx.cpp - /thrift/trunk/lib/cpp/src/windows/StdAfx.h - /thrift/trunk/lib/cpp/src/windows/TWinsockSingleton.cpp - /thrift/trunk/lib/cpp/src/windows/TWinsockSingleton.h - /thrift/trunk/lib/cpp/src/windows/TargetVersion.h - /thrift/trunk/lib/cpp/src/windows/config.h - /thrift/trunk/lib/cpp/src/windows/force_inc.h - /thrift/trunk/lib/cpp/src/windows/tr1 - /thrift/trunk/lib/cpp/src/windows/tr1/functional - /thrift/trunk/lib/cpp/thrift.sln Hi, thanks all for making this happen! I had a chance finally to compile/run it (using the Calculator), and here are my comments/questions for @James: 1- overall definitively this looks good: I guess there is no WIN64 target, because there is no pthread for WIN64? 2- Cosmetic: something needs to define "#define HAVE_GETTIMEOFDAY" (windows/config.h?) 3- Cosmetic: #define __BYTE_ORDER: this is not necessary anymore, another patch of mine previously fixed the auto detection on win32 using boost. The README may also reflect that (double should be fine). 4- Cosmetic: typedef __kernel_size_t size_t, typedef __kernel_ssize_t ssize_t: I found this easier version: typedef ptrdiff_t ssize_t; (don't think size_t needs to be defined AFAIK) 5- Cosmetic: #define NOMINMAX (before Windows.h, because std::min is used in thrift) 6- Cosmetic warning: #pragma warning(disable: 4996) and #pragma warning(disable: 4250) could be nice ("POSIX name for this item is deprecated", "inherits via dominance") 7- config.h: here is my biggest concern: you decided (honorably!) to re-use config.h (and define HAVE_CONFIG_H): I find it in practice annoying because it assumes "config.h" is at the same level of "Thrift.h". This forces in turn to add <thrift folder>/windows to the include path. What happens is when mixing with other libraries who also use config.h, it may take precedence over thrift's config.h. 
Ideally, I would prefer if not using HAVE_CONFIG_H, and do something like this instead: #ifdef _WIN32 #include "windows/config.h" $endif When included from Thrift.h, this guarantees to pick the right config.h (and not Python's one for example). Now what happens for the source code .cpp which does not include Thrift.h? Well since you are using already force-compilation, you could add windows/config.h to the list for forced headers. I found this to work in practice. Again, thanks a lot for the patch! New version added - 0_7. @alexandre: 1. Yep, not sure what else to do short so if you have any suggestions please let me know. 2-7 are done, Please see the changes in 0_7. Also a couple of bug fixes with the non-blocking path. On windows the connect method returns EWOULDBLOCK instead of EINPROGRESS when non blocking is set on the socket. Hi all, Thanks for the Windows port, it's great work! My last obstacle to using it, though, is that I have to build it in VC++ 9.0; unfortunately, starting from thrift_msvc_v0_2.patch, the project files are VC++ 10.0 only. Is this a deliberate and necessary choice, or is there a way for me to undo this? Another question: a port to MinGW/MSYS would be great, since the rest of the project is based on the autotools, and we have native Win32 code here. Do you think these sources are compatible with MinGW? How much effort would be needed make it work with the autotools scripts? Thanks! 
just committed patch 0_7 Integrated in Thrift #275 (See) THRIFT-1031 Patch to compile Thrift for vc++ 9.0 and 10.0 => some more improvements Patch: James Dickson roger : Files : - /thrift/trunk/compiler/cpp/compiler.sln - /thrift/trunk/compiler/cpp/compiler.vcxproj - /thrift/trunk/compiler/cpp/compiler.vcxproj.filters - /thrift/trunk/lib/cpp/libthrift.vcxproj - /thrift/trunk/lib/cpp/src/Thrift.h - /thrift/trunk/lib/cpp/src/transport/TSocket.cpp - /thrift/trunk/lib/cpp/src/windows/TargetVersion.h - /thrift/trunk/lib/cpp/src/windows/config.h - /thrift/trunk/lib/cpp/src/windows/force_inc.h @james: another cosmetic change: wouldn't be better to use WSAPoll on >= Vista? #if WINVER <= 0x0502 #define poll(fds, nfds, timeout) \ poll_win32(fds, nfds, timeout) ... #else # define poll(fds, nfds, timeout) \ WSAPoll(fds, nfds, timeout) #endif // WINVER This is based on one the original windows port attempts, I updated it to match your latest version. For the problem of a WIN64 port, as well as the pthread_win32 license, I submitted a patch to replace pthread_win32 by boost: THRIFT-1361 @alexandre: I will add that change back, I forgot to after testing it on XP. As regards the POSIX and 64bit issue I came across this: The implementation seems quite good and wondered what your thoughts were? It has a licence that I believe would be compatible. @james: thanks for the link, it looks very nice! Unfortunately it does not compile with Thrift at the moment, I wrote a comment on the page. Terrific work gents! I'm very glad to see James' and Alexandre's patches merged. Adding MSVC support will really speed up Thrift adoption. I'm able to build and run a Windows sample project that I've created based on Alexandre's sample in his fork over on github. There is one issue when running against the trunk (SVN 1177082). Using a ThreadManager causes an exception in PosixThreadFactory.cpp (pthread_attr_setschedpolicy @ line 121). 
Here's a code snippit: boost::shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory()); boost::shared_ptr<MyHandler> handler(new MyHandler()); boost::shared_ptr<TProcessor> processor(new MyProcessor(handler)); boost::shared_ptr threadManager = ThreadManager::newSimpleThreadManager(NumThreads); boost::shared_ptr threadFactory = boost::shared_ptr (new PosixThreadFactory()); threadManager->threadFactory(threadFactory); threadManager->start(); // <-- exception here TNonblockingServer server(processor, protocolFactory, Port, threadManager); server.serve(); The exception occurs within the threadManager->start() call. This worked with Alexandre's fork but I haven't been able to determine why it's failing with the newly merged code in the trunk (SVN 1177082). Am I doing something wrong or does the current thrift code not support using ThreadManager this way? @Peace: I think this is missing inside the trunk: > #ifdef _WIN32 > //WIN32 Pthread implementation doesn't seem to support sheduling policies other then PosixThreadFactory::OTHER - runtime error > policy_ = PosixThreadFactory::OTHER; #endif if (pthread_attr_setschedpolicy(&thread_attr, policy_) != 0) { This is the only difference I notice between the trunk and my fork, and it seems to be consistent with your problem description. @alexandre: Yes that fixes the threading exception. Thank you! Another difference is around line 252, a float cast assigned to stepsperquanta: #ifdef HAVE_SCHED_GET_PRIORITY_MAX max_priority = sched_get_priority_max(pthread_policy); #endif int quanta = (HIGHEST - LOWEST) + 1; float stepsperquanta = (float)(max_priority - min_priority) / quanta; Not sure how important that is but I put it in and the sample runs well. Uploaded "PosixThreadFactory.patch" with the changes in the above comments. I just noticed that the patch doesn't contain the full path name for PosixThreadFactory.cpp (sorry about that). It should be lib/cpp/src/concurrency/PosixThreadFactory.cpp Thanks Peace! 
committed. Integrated in Thrift #287 (See) THRIFT-1031 Patch to compile Thrift for vc++ 9.0 and 10.0 Patch: Peace roger : Files : - /thrift/trunk/lib/cpp/src/concurrency/PosixThreadFactory.cpp The Thrift compiler project is still VC2010 only? I have only supplied 2010 projects as this is the only system we use at work. I can't really spend the required time to fully support older versions. However, having said that, I have recently come across this project: which is similar to cmake in some ways, but I feel is better and would enable multiple project support if we switched to it. Let me know what you all think. James I like the premake approach. Is it really possible to generate all these project files correctly? Latest release is from November 2010(). Is it widely used? How active is the development and community of premake? -roger @James: Thanks for the suggestion! If there is no practical way to somehow reuse the existing Makefiles in MS Visual Studio (I suppose nmake is too badly broken for that?), the premake approach does look reasonable. Still I'm curious too about what makes you feel that premake is better than CMake? @Roger: here are some stats about both premake and CMake if that can be of any help: Those make me agree with your concern that premake looks riskier to engage with. The issue I see with CMake, contrarily to premake, is that its documentation does not clearly state that MSVC 2010 is actually supported. So I guess the only way to find out would be to try. Aurelien What is the introduction of a new make tool for? can whatever is trying to be accomplished not be achieved with autotools currently? That's my point. Using premake or CMake would still duplicate the effort that is put in the autotools for the Linux version. Instead I was wondering if using Microsoft's version of make, a.k.a. nmake (not the nmake from AT&T of course) wouldn't allow compiling with Visual C++ thanks to Makefiles generated by the autotools. 
Or could gnu make from MSys or Cygwin be used for the same purpose of driving Visual C++'s command-line compiler? In both cases I don't know if generating Visual Studio project files from Makefiles would be possible. cmake was already discussed here THRIFT-797 I close this issue now. Please create additional tickets for other improvements on this. Patch to add switch for vc++ compilers.
https://issues.apache.org/jira/browse/THRIFT-1031?focusedCommentId=13118843&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Originally posted by Burkhard Hassel:

Howdy ranchers, you may find it interesting that these "forbidden chars" don't even compile in a comment:

    public class Trap {
        public static void main(String args[]) {
            // a comment that doesn't compile '\u000a'
            System.out.println("ready");
        }
    }

this class does not compile. Yours, Bu.

Originally posted by Raghavan Muthu:

... Since it appears within the comment, the presence of such a character really does NOT make sense, right?

Originally posted by marc weber:

It might make sense. For example...

    //Use \u000A for new line

But it won't compile.

Himai Minh wrote:

A character is actually a 16-bit integer.

    char c = '\u0061';
    int i1 = c;

You can assign a character to an int. You can also assign a character to a float. A float has 32 bits; a character has 16 bits. A float has enough space to hold a character.
http://www.coderanch.com/t/264507/java-programmer-SCJP/certification/char-compiling
The nsIStorageStream interface maintains an internal data buffer that can be filled using a single output stream. One or more independent input streams can be created to read the data from the buffer non-destructively.

    import "nsIStorageStream.idl";

Definition at line 53 of file nsIStorageStream.idl.

- Initialize the stream, setting up the amount of space that will be allocated for the stream's backing store.
- Create a new input stream to read data (written by the singleton output stream) from the internal buffer. Multiple, independent input streams can be created. (Definition at line 95 of file nsIStorageStream.idl.)
- True, when the output stream has not yet been Close'd. (Definition at line 100 of file nsIStorageStream.idl.)
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_storage_stream.html
Before we start developing our Windows library any further, we should make an important naming decision. As you might have noticed, I employed a convention to use the prefix Win for all classes related to Windows. Such naming conventions used to make sense in the old days. Now we have better mechanisms, not only to introduce, but also to enforce naming conventions. I'm talking about namespaces. It's easy to enclose all definitions of Windows-related classes, templates and functions in one namespace, which we will conveniently call Win. The net result, for the user of our library, will be to have to type Win::Maker instead of WinMaker, Win::Dow instead of Window, etc. It will also free the prefix Win for use in user-defined names (e.g., the client of our library will still be able to define his or her own class called WinMaker, without the fear of colliding with library names).

Also, in preparation for further development, let's split our code into multiple separate files. The two classes Win::ClassMaker and Win::Maker will each get a pair of header/implementation files. The Win::Procedure function will also get its own pair, since the plan is to make it part of our library.

The biggest challenge in Windows programming is to hide the ugliness of the big switch statement that forms the core of a window procedure. We'll approach this problem gradually. The first step is to abstract the user interface of the program from the engine that does the actual work. Traditionally, the UI-independent part is called the model. The UI part is split into two main components, the view and the controller. The view's responsibility is to display the data to the user. This is the part of the program that draws pictures or displays text in the program's window(s). It obtains the data to be displayed by querying the model. The controller's responsibility is to accept and interpret user input.
When the user types in text, clicks the mouse button or selects a menu item, the controller is the first to be informed about it. It converts the raw input into something intelligible to the model. After notifying the model, it also calls the view to update the display (in a more sophisticated implementation, the model might selectively notify the view about changes). The model-view-controller paradigm is very powerful, and we'll use it in our encapsulation of Windows.

The problem is, how do we notify the appropriate controller when a window procedure is notified of user input? The first temptation is to make the Controller object global, so that a window procedure, which is a global function, can access it. Remember, however, that there may be several windows and several window procedures in one program. Some window procedures may be shared by multiple windows. What we need is a mapping between window handles and controllers. Each window message comes with a window handle, HWND, which uniquely identifies the window for which it was destined.

The window-to-controller mapping can be done in many ways. For instance, one can have a global map object that does the translation. There is, however, a much simpler way--we can let Windows store the pointer to a controller in its internal data structures. Windows keeps a separate data structure for each window. Whenever we create a new window, we can create a new Controller object and let Windows store the pointer to it. Every time our window procedure gets a message, we can retrieve this pointer using the window handle passed along with the message.

The APIs to set and retrieve an item stored in the internal Windows data structure are SetWindowLong and GetWindowLong. You have to specify the window whose internals you want to access, by passing a window handle. You also have to specify which long you want to access--there are several pre-defined longs, as well as some that you can add to a window when you create it.
To store the pointer to a controller, we'll use the long called GWL_USERDATA. Every window has this long, even a button or a scroll bar (which, by the way, are also windows). Moreover, as the name suggests, it can be used by the user for whatever purposes. We'll be taking advantage of the fact that a pointer has the same size as a long--whether this will still be true in 64-bit Windows, I don't know, but I have my suspicions.

There is a minor problem with the Get/SetWindowLong API: it is typeless. It accepts or returns a long, which is not exactly what we want. We'd like to make it type-safe. To this end, let's encapsulate both functions in templates, parametrized by the type of the stored data.

    namespace Win
    {
        template <class T>
        inline T GetLong (HWND hwnd, int which = GWL_USERDATA)
        {
            return reinterpret_cast<T> (::GetWindowLong (hwnd, which));
        }

        template <class T>
        inline void SetLong (HWND hwnd, T value, int which = GWL_USERDATA)
        {
            ::SetWindowLong (hwnd, which, reinterpret_cast<long> (value));
        }
    }

In fact, if your compiler supports member templates, you can make GetLong and SetLong methods of Win::Dow.

    namespace Win
    {
        class Dow
        {
        public:
            Dow (HWND h = 0) : _h (h) {}

            template <class T>
            inline T GetLong (int which = GWL_USERDATA)
            {
                return reinterpret_cast<T> (::GetWindowLong (_h, which));
            }

            template <class T>
            inline void SetLong (T value, int which = GWL_USERDATA)
            {
                ::SetWindowLong (_h, which, reinterpret_cast<long> (value));
            }

            void Display (int cmdShow)
            {
                assert (_h != 0);
                ::ShowWindow (_h, cmdShow);
                ::UpdateWindow (_h);
            }
        private:
            HWND _h;
        };
    }

Notice the use of a default value for the which argument. If the caller calls any of these functions without the last argument, it will be defaulted to GWL_USERDATA. We are now ready to create a stub implementation of the window procedure.
    LRESULT CALLBACK Win::Procedure (HWND hwnd, UINT message,
                                     WPARAM wParam, LPARAM lParam)
    {
        Controller * pCtrl = Win::GetLong<Controller *> (hwnd);

        switch (message)
        {
        case WM_NCCREATE:
            {
                CreateData const * create =
                    reinterpret_cast<CreateData const *> (lParam);
                pCtrl = static_cast<Controller *> (create->GetCreationData ());
                pCtrl->SetWindowHandle (hwnd);
                Win::SetLong<Controller *> (hwnd, pCtrl);
            }
            break;
        case WM_DESTROY:
            // We're no longer on screen
            pCtrl->OnDestroy ();
            return 0;
        case WM_MOUSEMOVE:
            {
                POINTS p = MAKEPOINTS (lParam);
                KeyState kState (wParam);
                if (pCtrl->OnMouseMove (p.x, p.y, kState))
                    return 0;
            }
        }
        return ::DefWindowProc (hwnd, message, wParam, lParam);
    }

We initialize the GWL_USERDATA slot corresponding to hwnd in one of the first messages sent to our window. The message is WM_NCCREATE (Non-Client Create), sent before the creation of the non-client part of the window (the border, the title bar, the system menu, etc.). (There is another message before that one, WM_GETMINMAXINFO, which might require special handling.) We pass the pointer to the controller as window creation data. We use the class Win::CreateData, a thin encapsulation of the Windows structure CREATESTRUCT. Since we want to be able to cast a pointer to CREATESTRUCT passed to us by Windows to a pointer to Win::CreateData, we use inheritance rather than embedding (you can inherit from a struct, not only from a class).

    namespace Win
    {
        class CreateData: public CREATESTRUCT
        {
        public:
            void * GetCreationData () const { return lpCreateParams; }
            int GetHeight () const { return cy; }
            int GetWidth () const { return cx; }
            int GetX () const { return x; }
            int GetY () const { return y; }
            char const * GetWndName () const { return lpszName; }
        };
    }

The message WM_DESTROY is important for the top-level window. That's where the "quit" message is usually posted. There are other messages that might be sent to a window after WM_DESTROY, most notably WM_NCDESTROY, but we'll ignore them for now.
I also added the processing of WM_MOUSEMOVE, just to illustrate the idea of message handlers. This message is sent to a window whenever a mouse moves over it. In the generic window procedure we will always unpack message parameters and pass them to the appropriate handler. There are three parameters associated with WM_MOUSEMOVE: the x coordinate, the y coordinate and the state of control keys and buttons. Two of these parameters, x and y, are packed into one LPARAM, and Windows conveniently provides a macro to unpack them, MAKEPOINTS, which turns lParam into a structure called POINTS. We retrieve the values of x and y from POINTS and pass them to the handler.

The state of control keys and buttons is passed inside WPARAM as a set of bits. Access to these bits is given through special bitmasks, like MK_CONTROL, MK_SHIFT, etc., provided by Windows. We will encapsulate these bitwise operations inside a class, Win::KeyState.

    class KeyState
    {
    public:
        KeyState (WPARAM wParam): _data (wParam) {}

        bool IsCtrl () const    { return (_data & MK_CONTROL) != 0; }
        bool IsShift () const   { return (_data & MK_SHIFT) != 0; }
        bool IsLButton () const { return (_data & MK_LBUTTON) != 0; }
        bool IsMButton () const { return (_data & MK_MBUTTON) != 0; }
        bool IsRButton () const { return (_data & MK_RBUTTON) != 0; }
    private:
        WPARAM _data;
    };

The methods of Win::KeyState return the state of the control and shift keys and the state of the left, middle and right mouse buttons. For instance, if you move the mouse while you press the left button and the shift key, both IsShift and IsLButton will return true.

In WinMain, where the window is created, we initialize our controller and pass it to Win::Maker::Create along with the window's title.

    TopController ctrl;
    win.Create (ctrl, "Simpleton");

This is the modified Create. It passes the pointer to Controller as the user-defined part of window creation data--the last argument to CreateWindowEx.
    HWND Maker::Create (Controller & controller, char const * title)
    {
        HWND hwnd = ::CreateWindowEx (
            _exStyle,
            _className,
            title,
            _style,
            _x, _y, _width, _height,
            _hWndParent,
            _hMenu,
            _hInst,
            &controller);

        if (hwnd == 0)
            throw "Internal error: Window Creation Failed.";

        return hwnd;
    }

To summarize, the controller is created by the client and passed to the Create method of Win::Maker. There, it is added to the creation data, and Windows passes it as a parameter to the WM_NCCREATE message. The window procedure unpacks it and stores it under GWL_USERDATA in the window's internal data structure. During the processing of each subsequent message, the window procedure retrieves the controller from this data structure and calls its appropriate method to handle the message. Finally, in response to WM_DESTROY, the window procedure calls the controller one last time and unplugs it from the window.

Now that the mechanics of passing the controller around are figured out, let's talk about the implementation of Controller. Our goal is to concentrate the logic of a window in this one class. We want to have a generic window procedure that takes care of the ugly stuff--the big switch statement, the unpacking and re-packing of message parameters and the forwarding of the messages to the default window procedure. Once the message is routed through the switch statement, the appropriate Controller method is called with the correct (strongly-typed) arguments.

For now, we'll just create a stub of a controller. Eventually we'll be adding a lot of methods to it--as many as there are different Windows messages. The controller stores the handle to the window it services. This handle is initialized inside the window procedure during the processing of WM_NCCREATE. That's why we made Win::Procedure a friend of Win::Controller. The handle itself is protected, not private--derived classes will need access to it. There are only two message-handler methods at this point, OnDestroy and OnMouseMove.
    namespace Win
    {
        class Controller
        {
            friend LRESULT CALLBACK Procedure (HWND hwnd, UINT message,
                                               WPARAM wParam, LPARAM lParam);

            void SetWindowHandle (HWND hwnd) { _h = hwnd; }
        public:
            virtual ~Controller () {}
            virtual bool OnDestroy () { return false; }
            virtual bool OnMouseMove (int x, int y, KeyState kState)
                { return false; }
        protected:
            HWND _h;
        };
    }

You should keep in mind that Win::Controller will be a part of the library, to be used as a base class for all user-defined controllers. That's why all message handlers are declared virtual and, by default, they return false. The meaning of this Boolean is, "I handled the message, so there is no need to call DefWindowProc." Since our default implementation doesn't handle any messages, it always returns false.

The user is supposed to define his or her own controller that inherits from Win::Controller and overrides some of the message handlers. In this case, the only message handler that has to be overridden is OnDestroy--it must close the application by posting the "quit" message. It returns true, so that the default window procedure is not called afterwards.

    class TopController: public Win::Controller
    {
    public:
        bool OnDestroy ()
        {
            ::PostQuitMessage (0);
            return true;
        }
    };

To summarize, our library is designed in such a way that its client has to do minimal work and is protected from making trivial mistakes. For each class of windows, the client has to create a customized controller class that inherits from our library class, Win::Controller. He implements (overrides) only those methods that require non-default implementation. Since he has the prototypes of all these methods, there is no danger of misinterpreting message parameters. This part--the interpretation and unpacking--is done in our Win::Procedure. It is written once and for all, and is thoroughly tested. This is the part of the program that is written by the client of our library. In fact, we will simplify it even more later.
    #include "Class.h"
    #include "Maker.h"
    #include "Procedure.h"
    #include "Controller.h"

    class TopController: public Win::Controller
    {
    public:
        bool OnDestroy ()
        {
            ::PostQuitMessage (0);
            return true;
        }
    };

    int WINAPI WinMain (HINSTANCE hInst, HINSTANCE hPrevInst,
                        LPSTR cmdParam, int cmdShow)
    {
        char className [] = "Simpleton";

        Win::ClassMaker winClass (className, hInst);
        winClass.Register ();

        Win::Maker maker (className, hInst);
        TopController ctrl;
        Win::Dow win = maker.Create (ctrl, "Simpleton");
        win.Display (cmdShow);

        MSG msg;
        int status;
        while ((status = ::GetMessage (& msg, 0, 0, 0)) != 0)
        {
            if (status == -1)
                return -1;
            ::DispatchMessage (& msg);
        }
        return msg.wParam;
    }

Notice that we no longer have to pass a window procedure to the class maker. The class maker can use our generic Win::Procedure, implemented in terms of the interface provided by our generic Win::Controller. What will really distinguish the behavior of one window from that of another is the implementation of the controller passed to Win::Maker::Create.

The cost of this simplicity is mostly in code size and in some minimal speed deterioration. Let's start with speed. Each message now has to go through parameter unpacking and a virtual method call--even if it's not processed by the application. Is this a big deal? I don't think so. An average window doesn't get many messages per second. In fact, some messages are queued in such a way that if the window doesn't process them, they are overwritten by new messages. This is, for instance, the case with mouse-move messages. No matter how fast you move the mouse over the window, your window procedure will not choke on these messages. And if a few of them are dropped, it shouldn't matter, as long as the last one ends up in the queue. Anyway, the frequency with which a mouse sends messages as it slides across the pad is quite arbitrary.
With the current processor speeds, the processing of window messages takes a marginally small amount of time. Program size could be a consideration, except that modern computers have so much memory that a megabyte here and there doesn't really matter. A full-blown Win::Controller will have as many virtual methods as there are window messages. How many is that? About 200. The full vtable will be 800 bytes. That's less than a kilobyte! For comparison, a single icon is 2kB. You can have a dozen controllers in your program and the total size of their vtables won't even reach 10kB. There is also the code for the default implementation of each method of Win::Controller. Its size depends on how aggressively your compiler optimizes it, but it adds up to at most a few kB. Now, the worst case, a program with a dozen types of windows, is usually already pretty complex--read, large!--plus it will probably include many icons and bitmaps. Seen from this perspective, the price we have to pay for simplicity and convenience is minimal.

What would happen if a Controller method threw an exception? It would pass right through our Win::Procedure, then through several layers of Windows code, to finally emerge through the message loop. We could, in principle, catch it in WinMain. At that point, however, the best we could do is display a polite error message and quit. Not only that, it's not entirely clear how Windows would react to an exception rushing through its code. It might, for instance, fail to deallocate some resources or even get into some unstable state. The bottom line is that Windows doesn't expect an exception to be thrown from a window procedure. We have two choices: either we put a try/catch block around the switch statement in Win::Procedure, or we promise not to throw any exceptions from Controller's methods. A try/catch block would add time to the processing of every single message, whether it's overridden by the client or not.
Besides, we would again face the problem of what to do with such an exception. Terminate the program? That seems pretty harsh! On the other hand, the contract not to throw exceptions is impossible to enforce. Or is it?! Enter exception specifications.

It is possible to declare what kind of exceptions can be thrown by a function or method. In particular, we can specify that no exceptions can be thrown by a certain method. The declaration:

virtual bool OnDestroy () throw ();

promises that OnDestroy (and all its overrides in derived classes) will not throw any exceptions. The general syntax is to list the types of exceptions that can be thrown by a procedure, like this:

void Foo () throw (bad_alloc, char *);

How strong is this contract? Unfortunately, the standard doesn't promise much. The compiler is only obliged to detect exception specification mismatches between base class methods and derived class overrides. In particular, the specification can only be made stronger (fewer exceptions allowed). There is no stipulation that the compiler should detect even the most blatant violations of this promise, for instance an explicit throw inside a method declared as throw () (throw nothing). The hope, however, is that compiler writers will give in to the demands of programmers and at least make the compiler issue a warning when an exception specification is violated. Just as it is possible for the compiler to report violations of const-ness, so it should be possible to track down violations of exception specifications.

For the time being, all that an exception specification accomplishes in a standard-compliant compiler is to guarantee that all unspecified exceptions will get converted to a call to the library function unexpected (), which by default terminates the program. That's good enough, for now. Declaring all methods of Win::Controller as "throw nothing" will at least force the client who overrides them to think twice before allowing any exception to be thrown.
It's time to separate library files from application files. For the time being, we'll create a subdirectory "lib" and copy all the library files into it. However, when the compiler compiles files in the main directory, it doesn't know where to find library includes, unless we tell it. All compilers accept additional include paths. We'll just have to add "lib" to the list of additional include paths. As part of the cleanup, we'll also move the definition of TopController to a separate file, control.h.
http://www.relisoft.com/book/win/2control.html
Here is example program birthday3/birthday3.cs where we add a function HappyBirthdayAndre, and call them both. Guess what happens, and then load and try it:

using System;

class Birthday3
{
   static void Main()
   {
      HappyBirthdayEmily();
      HappyBirthdayAndre();
   }

   static void HappyBirthdayEmily()
   {
      Console.WriteLine ("Happy Birthday to you!");
      Console.WriteLine ("Happy Birthday to you!");
      Console.WriteLine ("Happy Birthday, dear Emily.");
      Console.WriteLine ("Happy Birthday to you!");
   }

   static void HappyBirthdayAndre()
   {
      Console.WriteLine ("Happy Birthday to you!");
      Console.WriteLine ("Happy Birthday to you!");
      Console.WriteLine ("Happy Birthday, dear Andre.");
      Console.WriteLine ("Happy Birthday to you!");
   }
}

Again, definitions are remembered and execution starts in Main. The order in which the function definitions are given does not matter to C#. It is a human choice. For variety I show Main first. This means a human reading in order gets an overview of what is happening by looking at Main, but does not know the details until reading the definitions of the birthday functions.

Detailed order of execution:

- Execution starts in Main.
- The HappyBirthdayEmily function call transfers control to that function, and this location is remembered.
- When the HappyBirthdayEmily function call finishes, HappyBirthdayAndre is called, as this location is remembered.
- After the HappyBirthdayAndre function call, we are done with Main; we are at the end of the program.

The calls to the birthday functions happen to be in the same order as their definitions, but that is arbitrary. If the two lines of the body of Main were swapped, the order of operations would change, but if the order of the whole function definitions were changed, it would make no difference in execution.

Functions that you write can also call other functions you write. In this case Main calls each of the birthday functions.

Warning

A common compiler error is caused by failing to match the braces that wrap a function body. A new function heading can only exist outside all other function declarations and inside a class.
If you have too few or extra '}' you are likely to find a perfectly fine looking function heading with an error, for instance, about not allowing static here.... Check your earlier lack or excess of braces! Xamarin Studio, like other modern code editors, can show you matching delimiters. If you place your cursor immediately after a delimiter { } ( ) [ ], the matching one should become highlighted.
http://books.cs.luc.edu/introcs-csharp/functions/multfunc.html
one question: what does strict do? i tried it with only --disable-strict and it doesn't work; with --disable-warnings + --disable-strict it works

Offline

Don't you people read a PKGBUILD when you compile this stuff? I put a link in the kdemultimedia PKGBUILD about fixing a kernel include. … /0553.html Apply that fix and everything should build. This also solves the problem in ./configure: "Checking for linux/cdrom.h usability: no". I don't get why kdemultimedia includes kernel headers even when after checking it finds out they're not usable. It checks for cdrom.h and finds it unusable, and then later in the build, it fails on a file included by cdrom.h.

Offline

JGC where to find your PKGBUILDs? I'm too blind to find them ;-)
EDIT: OOPS they are on your server in src :-)

Offline

this is fixed in rc2; that happens in rc1. that was a reason for not announcing it. to get sound, rc2 packages are needed: arts, kdelibs, kdebase, kdemultimedia. needs some time to build ;-)

Offline

OK, KDE won't be long..... i'll start building tomorrow after work (i'm working today 3-midnight and tomorrow 11-4) so expect KDE 3.3 pkg's probably tuesday. if people want to test i'll see JGC about putting it there 1st to test.

Offline

you got the final builds or rc2? i compiled rc2 from source, that takes really some time. the only build problem is kdemultimedia; with the patch it works. i made a bug report to change this file: one other wish from my side: is it possible to enable mozilla bindings in kdebindings? then you can switch between the khtml and gecko engine in konqueror. looking forward to test :-)

Offline

final... ok, these moz bindings? what's needed in the config line? will enable it... also any other requests before i start building tomorrow after work... should be all good....
Offline

hi, the mozilla bindings don't compile well, so don't waste too much time on that. i changed the configure script so that it finds the mozilla headers (look for mozilla_incldirs); it wasn't built at all. then i changed to the xparts directory and ran make; it stops with an error, and in the mozilla directory it doesn't work either. it was just a thing i know from suse but i don't know how they get it to work

Offline

ok, some progress, i have arts, kdelibs and kdebase built... will be building the rest over night. JGC, where can i find your PKGBUILDs so i can update mine (and any possible pkg's?) Lou...

Offline

can be found:

Offline

can be found:

Are these packages the final ones or will cmf upload them to the archlinux server when he is ready?

Offline

the packages are rc1 packages; for rc2 you have to build it on your own by modifying the current PKGBUILDs, or wait till cmf releases the final ones, which i expect will happen soon

Offline

arts, libs and base are stable and are built against readline 5.0 and have no issues; the KDM multi-VT stuff should be in (just adding that now). i'm just scripting the rest to start building in about 1 hr. i'm just looking into ksvg though, as it never seems to work; i'm seeing what its deps are (if any).... so not long to go..

Offline

OK, thank you for building. Making packages on my Athlon 600 would not be much fun *g*.

Offline

KDE 3.3 all built and done, only no KDM session support, my echos aren't working... aside from that, built and good... i'm a tad drunk tonight, been out as i get my A level results tomorrow, so it should be up by tomorrow night. night night all... Love, your friendly KDE maintainer x x x

Offline

KDE is released to stable...... announcement is out: packages were built and will show up soon :-)

Offline

Ok, it seems not all KDE pkg's built; i'm missing kdemultimedia, thanks to the kernel sources issue...
i'm trying the patch in JGC's pkgbuild, but to no avail.... just wondering what the next plan of action is to be.

Offline

KDE depends on kernel-sources?

Offline

if it's the mpeglib error you have to patch the /usr/include/asm/byteorder.h file, which is described in the PKGBUILD. here is my PKGBUILD and byteorder.h:

# $Id: PKGBUILD,v 1.11 2004/06/09 13:56:38 lou Exp $
# Maintainer: Lou Greenwood <lou@archlinux.org>
# In order to build kdemultimedia, you need to patch /usr/include/asm/byteorder.h according to this patch:
# … /0553.html
pkgname=kdemultimedia
pkgver=3.3.0
kdever=3.3.0rc2
pkgrel=1j1
pkgdesc="KDE Multimedia Programs."
url=""
groups=('kde')
depends=('kdelibs>=3.3.0' 'libidn' 'cdparanoia' 'lame' 'tunepimp' 'taglib' 'xine-lib' 'perl' 'libtool')
# for easier build, just uncomment the mirror you want to use
mirror=""  # updated every 2 hours, very fast for Europe
# mirror=""  # main server
# mirror="ibiblio.org/pub/mirrors/kde/"  # ibiblio mirror
source=( )
build() {
  cd $startdir/src/$pkgname-$pkgver
  ./configure --prefix=/opt/kde --enable-audio=alsa,oss,esd --with-alsa --with-lame --with-vorbis --with-esd --disable-dependency-tracking --enable-final
  make || return 1
  make DESTDIR=$startdir/pkg install || return 1
}

byteorder.h:

#ifndef _I386_BYTEORDER_H
#define _I386_BYTEORDER_H

#include <asm/types.h>

#ifdef __GNUC__

/* For avoiding bswap on i386 */
#ifdef __KERNEL__
#include <linux/config.h>
#endif

static __inline__ __const__ __u32 ___arch__swab32(__u32 x)
{
#ifdef CONFIG_X86_BSWAP
	__asm__("bswap %0" : "=r" (x) : "0" (x));
#else
	__asm__("xchgb %b0,%h0\n\t"	/* swap lower bytes */
		"rorl $16,%0\n\t"	/* swap words */
		"xchgb %b0,%h0"		/* swap higher bytes */
		:"=q" (x)
		: "0" (x));
#endif
	return x;
}

/* gcc should generate this for open coded C now too. May be worth
   switching to it because inline assembly cannot be scheduled. -AK */
static __inline__ __const__ __u16 ___arch__swab16(__u16 x)
{
	__asm__("xchgb %b0,%h0"		/* swap bytes */
		: "=q" (x)
		: "0" (x));
	return x;
}

#ifndef __STRICT_ANSI__
static inline __u64 ___arch__swab64(__u64 val)
{
	union {
		struct { __u32 a,b; } s;
		__u64 u;
	} v;
	v.u = val;
#ifdef CONFIG_X86_BSWAP
	asm("bswapl %0 ; bswapl %1 ; xchgl %0,%1"
	    : "=r" (v.s.a), "=r" (v.s.b)
	    : "0" (v.s.a), "1" (v.s.b));
#else
	v.s.a = ___arch__swab32(v.s.a);
	v.s.b = ___arch__swab32(v.s.b);
	asm("xchgl %0,%1" : "=r" (v.s.a), "=r" (v.s.b)
	    : "0" (v.s.a), "1" (v.s.b));
#endif
	return v.u;
}
#endif /* !__STRICT_ANSI__ */

#ifndef __STRICT_ANSI__
#define __arch__swab64(x) ___arch__swab64(x)
#endif
#define __arch__swab32(x) ___arch__swab32(x)
#define __arch__swab16(x) ___arch__swab16(x)

#ifndef __STRICT_ANSI__
#define __BYTEORDER_HAS_U64__
#endif

#endif /* __GNUC__ */

#include <linux/byteorder/little_endian.h>

#endif /* _I386_BYTEORDER_H */

Offline

now compiled 3.3.0 from source to get kdemultimedia to compile. don't enable --enable-final in arts and kdemultimedia, then it works

Offline

i think i've gotten around the issue, although it's using the disable-warnings and --disable-strict from gentoo ebuilds.... not too keen, but it works. Also multiple KDM sessions are now in! written a script to namcap every pkg to check deps, that's running now, along with 3 pkg's left... all seems to be going good!

#Lou

Offline

cmf: Are you building kde with debug info? I really hope not, it slows down kde a lot...

If I have the gift of prophecy and can fathom all mysteries and all knowledge, and if I have a faith that can move mountains, but have not love, I am nothing. 1 Corinthians 13:2

Offline
I'm proud to be a a freedomloving infidel piece of treehugging eurotrash. Offline
https://bbs.archlinux.org/viewtopic.php?pid=39953
The python programming language can be used to create a self signed certificate. Self signed certificates can be used for internal systems that do not need automatic public trust from a well known CA (Certification Authority). The downside to using self signed certificates is that they must be explicitly trusted, but sometimes this is preferred for increased security. Below we will demonstrate an example of using the python requests module to trust the full certificate chain, or in this case, the one certificate in the chain being self signed.

As a prerequisite to this article, read our instructions on generating a CSR in python to create the public/private key pair to be used in this example.

How to create a self signed certificate in Python

If you do not need a publicly trusted SSL certificate you can use self signed certificates for internal SSL encryption. Mind you, you will have to explicitly trust the self signed certificate, but it can greatly simplify your SSL implementation on internal servers if you can just generate certificates on the fly without having to interact with an external CA.

First, you will generate a private key. For this example we will be using RSA with a key size of 2048, the lowest recommended bit size.

from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=2048,
)

Next, generate the self signed certificate. The certificate will contain data about who you are and who your organization is. Note that the subject and issuer will always be the same for a self signed certificate. This is also the basis of trust for any public or private Root CA certificate, because a root certificate is also self signed. After writing the python code to create the certificate, it must then be signed by the private key generated in the previous step.

import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes

# The attribute values below are placeholders; fill in your own details.
subject = issuer = x509.Name([
    x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
    x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u"My State"),
    x509.NameAttribute(NameOID.LOCALITY_NAME, u"My City"),
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"My Organization"),
    x509.NameAttribute(NameOID.COMMON_NAME, u"mysite.com"),
])
cert = x509.CertificateBuilder().subject_name(
    subject
).issuer_name(
    issuer
).public_key(
    key.public_key()
).serial_number(
    x509.random_serial_number()
).not_valid_before(
    datetime.datetime.utcnow()
).not_valid_after(
    datetime.datetime.utcnow() + datetime.timedelta(days=365)
).add_extension(
    x509.SubjectAlternativeName([x509.DNSName(u"localhost")]),
    critical=False,
).sign(key, hashes.SHA256())

You now have a self signed certificate that was generated by the python programming language.

Trust a self signed certificate in Python requests

If the OS (Operating System) or application you are running your python requests code from does not trust the certificates protecting the server you are connecting to, you will receive the following error or one similar:

requests.exceptions.SSLError: [Errno 1] _ssl.c:507: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

In the event you see this error, you will need to explicitly trust the certificates being returned by the external system if indeed they are to be trusted. To do this, use the verify parameter in your requests code to trust the certificate.

requests.post(url, data=data, verify='/path/to/certificate.pem')

Note that if the certificate you are trusting is a self signed certificate, the above command will work as is. If the certificate is a Root CA certificate with intermediates also in the chain of trust, make sure to include the intermediate certificates in the .pem file you are pointing to.

Determine public or private key type in Python

The examples above used the RSA algorithm when generating the key pair. To verify the key type, or to check an unknown key type in python, you can use the isinstance method, passing in the key and the algorithm. See below for an example of how to determine the key type of a public key or a private key in python.

from cryptography.hazmat.primitives.asymmetric import ec, rsa

public_key = cert.public_key()
if isinstance(public_key, rsa.RSAPublicKey):
    pass  # This is an RSA key
elif isinstance(public_key, ec.EllipticCurvePublicKey):
    pass  # This is an EC key
else:
    pass  # It's neither an RSA nor an EC key.

Note that in the above example the public key algorithm is being checked. If you want to know the private key algorithm, you must first get the public key to be passed into the isinstance checks.

Conclusion

This article demonstrated how to programmatically create a self signed certificate using python. In addition, it demonstrated how to trust a self signed certificate in the python requests library. Leave us a comment if you have any questions or would like to see additional examples of using or trusting self signed certificates in python.
https://www.misterpki.com/python-self-signed-certificate/
Yeah, the title is cryptic. The problem (we think) is in here

my @disk_space = qx{df -k .};
map { $_ =~ s/ +/ /g } @disk_space;

It works fine to check the available disk space on *nix servers. But occasionally - just occasionally - it brings up command windows when executed on servers (Apache or IIS) running XP or Win 2003 and stops the script the code is in from working properly. I read that there are better ways of writing qx{df -k .}; and having a Win32 compatible method of checking disk space would be nice. The current work around is simply to not run this section of code when the OS is MSWin32. But there has to be a better way. (I didn't write this code, I'm just in the process of debugging it.)

Dandelio: I'd suggest using Win32::DriveInfo for windows machines.

Update: Fixed cpan link.

...roboticus

When your only tool is a hammer, all problems look like your thumb.

I'd use fsutil on windows:

C:\test>fsutil volume diskfree .
Total # of free bytes       : 89573777408
Total # of bytes            : 627247673344
Total # of avail free bytes : 89573777408

RIP Neil Armstrong

So, what I think is being said is that the script this is in needs an OS checker so it can load a Win32 specific utility to do the job? The script needs to be cross-platform. 95%+ of the installations are on *nix. Maybe 5% of the Win installs come up with this problem. So it's a pretty rare issue.

So, what I think is being said is that the script this is in needs an OS checker so it can load a Win32 specific utility to do the job?

I have a df utility on my windows system -- part of the UnxUtils package though it segfaults on my 64-bit OS -- but generally most windows systems will not have it, and expecting users to find and install one is naive. Far simpler I think to use something like:

sub freespace {
    if( $^O eq 'MSWin32' ) {
        `fsutil volume diskfree .` =~ m[avail free bytes : (\d+)];
        return ( $1 // die $! ) / 1024;
    }
    elsif( $^O eq ... ) {
        ...
    }
    else {
        `df -k .` =~ m[...];
        return $1 // die $!
    }
}

... my $free = freesp
http://www.perlmonks.org/?node_id=995108
Camera to DropBox

Is there a way to take a picture from my picture script:

import photos
x=photos.capture_image()
photos.save_image(x)

And save that photo directly to Dropbox?

Pythonista has the Dropbox API (dropbox module) built-in, but Dropbox interaction is much more complicated than writing to a regular file. In order to be able to write to your own Dropbox your script needs to authenticate with your account data and an access token that you can get from your account settings. This thread explains the process in more detail.

Once you have a usable DropboxClient object you can use the put_file method to upload a file. Note that put_file expects a "file-like" object, so you need to either first save the image to a temporary location in the script library, or store the raw image data in a StringIO object to simulate a file object.

I would like to know how to save a camera roll image to the Pythonista script directory. Right now I copy the image to a buffer and then, using put_file, I transfer it to dropbox. I would like to be able to save it to the script directory first, then use put_file to get it to the dropbox at a later time. I can't get the buffer technique to save it to the script directory. Any help on this would be very much appreciated.

What kind of "buffer" are you using? If it supports normal file methods, you can easily change your code to write to a file instead:

with open("my_file_name.jpg", "wb") as myfile:
    myfile.write(a_string_of_bytes)

Reading from a file is similar:

with open("my_file_name.jpg", "rb") as myfile:
    a_string_of_bytes = myfile.read()

In both cases it is important that you access the file in binary mode ("rb" or "wb") - the default is text mode, which will not work with non-text files like images. Any code that uses the myfile object (the variable can of course have any name you want) needs to be inside the with block as well. Once the with block ends, the file is closed and can no longer be read from or written to.
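A quick stand-alone illustration of the binary-mode round trip described above (the file name and the bytes are arbitrary demo values, not anything from the thread):

```python
import os
import tempfile

data = b"\xff\xd8\xff fake JPEG bytes"
path = os.path.join(tempfile.gettempdir(), "demo_image.jpg")

# Write the raw bytes out in binary mode...
with open(path, "wb") as myfile:
    myfile.write(data)

# ...and read them back, also in binary mode.
with open(path, "rb") as myfile:
    restored = myfile.read()

print(restored == data)  # True
os.remove(path)
```

Opening the same file without the "b" flag would attempt text decoding and fail (or corrupt the data) for image bytes like these.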
@coomlata1, this allows you to select a photo from the camera roll and save it into a local file.

import photos

assert photos.get_count(), 'Sorry no access or no pictures.'
img, metadata = photos.pick_image(include_metadata=True, raw_data=True)
filename = metadata.get('filename', 'my_photo.png')
with open('my_photo.png', 'wb') as out_file:
    out_file.write(img)
print('Your photo was written to the file {}.'.format(filename))

Thanks for your comments and code...much appreciated. Here is the code I was working with:

import photos
from io import BytesIO
import Image  # needed for Image.ANTIALIAS below; missing from the original post
import PIL
from DropboxLogin import get_client
from pexif import JpegFile

drop_client = get_client()  # also missing from the original post; used below

choose=photos.pick_image(show_albums=True, multi=True, include_metadata=True)
for photo in choose:
    resized=photo[0].resize((1600, 1200), Image.ANTIALIAS)
    buffer=BytesIO()
    resized.save(buffer,'JPEG')
    buffer.seek(0)
    # Upload to dropbox folder..."new_filename" contains the path and filename for photo
    response=drop_client.put_file(new_filename, buffer)

This code works to add the photo to dropbox but with no metadata. When I resize the photo I lose all the metadata. I want to be able to write the metadata back to the photo before uploading to dropbox. The pexif module I imported to Pythonista will do that but it wants a relative reference to the photo.

Trying your code, I created the file my_photo.jpg in the Pythonista scripts directory. I didn't try to resize it, but your code did keep all the media metadata intact. When I tried to access it with the pexif module to look at the metafile output I get a "TypeError: must be string without null bytes" as it dumps the metadata. I think it is not matching the metadata included in the camera roll. Metadata seems to be a slippery slope.

ef=JpegFile.fromFile('my_photo.jpg')
ef.dump

Pexif allows you to read and write metadata so I was hoping I could save the metadata with pexif and then write it back to the photo after resizing it.

The dump function should be dumping a complete list of all the JPEG segments. The EXIF data is an APP1 JPEG segment. Assuming the JPEG file is intact, the error you are seeing may be due to a bug in Pexif. Maybe it is written in Python 3 or something and does not work properly in 2.7. The traceback should show you where the problem is happening and allow you to determine the source of the problem.

The code above never calls buffer.close() so a buffer is left behind in RAM unused for every image processed. Use sys.getsizeof(buffer) to see how quickly this might add up to a lot of RAM. I would recommend changing buffer=BytesIO() to with BytesIO() as buffer: and then indenting the lines that follow so that buffer.close() is called automatically for you. See If you don't use "with", when does Python close files? The answer is: It depends.

Thank you much for the explanation...very helpful and informative!!! That would explain why Pythonista was crashing after processing 10 to 12 photos in the loop. Looping through a significant amount of photos, and processing them in this way, puts a heavy load on the memory resources in an iPhone. Is it possible, using Pythonista, to resize a camera roll photo and either save it to the Pythonista script dir and/or upload it to dropbox without losing the metadata from the original photo on the camera roll? At this point I can save the photo, untouched, to the script dir and copy or upload it to dropbox and the meta is untouched. Any attempt to resize anywhere in the chain results in wiping the meta.

It appears that PIL doesn't respect exif data. I found a pure python exif writing tool, you might be able to modify it for your purposes... Namely you'd want to read the exif before PILling it, then write it after.

@coomlat1 - the PIL code in Pythonista is vintage 2009. At that time exif support was "experimental". You will find that JPEG images have a _getexif function which parses the exif if it is there. I checked the Pillow project to see if they have done any big changes to the JPEG code but don't see much: In fact they seem to be attempting to deal with crashes in _getexif only recently (Oct this year):

There was also this Pillow sample:

from PIL import Image

img_path = "/tmp/img.jpg"
img = Image.open(img_path)
exif = img.info['exif']
img.save("output_"+img_path, exif=exif)

Tested in Pillow 2.5.3 - not clear if it works in PIL as well but is worth a try.

For what it's worth, here is a PIL example function that should autorotate an image based on the exif info it finds. You notice that it handles "exceptions" thrown by _getexif, which will probably happen - but WTF it's worth a try.

def exif_orientation(im):
    """
    Rotate and/or flip an image to respect the image's EXIF orientation data.
    """
    try:
        exif = im._getexif()
    except Exception:
        # There are many ways that _getexif fails, we're just going to blanket
        # cover them all.
        exif = None
    if exif:
        orientation = exif.get(0x0112)
        if orientation == 2:
            im = im.transpose(Image.FLIP_LEFT_RIGHT)
        elif orientation == 3:
            im = im.rotate(180)
        elif orientation == 4:
            im = im.transpose(Image.FLIP_TOP_BOTTOM)
        elif orientation == 5:
            im = im.rotate(-90).transpose(Image.FLIP_LEFT_RIGHT)
        elif orientation == 6:
            im = im.rotate(-90)
        elif orientation == 7:
            im = im.rotate(90).transpose(Image.FLIP_LEFT_RIGHT)
        elif orientation == 8:
            im = im.rotate(90)
    return im

Update to my previous post: The Pillow sample:

from PIL import Image

img_path = "input_img.jpg"
img = Image.open(img_path)
exif = img.info['exif']
img.save("output_img.jpg", exif=exif)

Does not work in Pythonista PIL. It does not throw exceptions, but the exif does not appear in the output JPEG. The Pillow guys seem to have added a raw exif buffer into the libraries. So one possible solution would be for Pythonista to adopt Pillow and replace the current PIL with it. PIL seems to be very stable and reliable but that is largely because it is abandoned by the developers.
Pillow is still pretty active and supported. This is another Ole call since PIL/Pillow has a lot of C code in it. I wonder why the Pillow guys have not adopted pexiv2/gexiv2?

I finally got things working properly. I can now copy photos from the camera roll with its metadata to the Pythonista script directory, resize that photo and copy the metadata from the original photo back to the resized one using pexif, and then upload the resized photo with the metadata to Dropbox. Here is the script...Thanks everyone for input, info, and advice.

#coding: utf-8
import photos
import time
import Image
import sys
import console
import PIL
import string
from DropboxLogin import get_client
import pexif

# Global arrays for photos that will require manual processing
no_exif=[]
no_resize=[]

def GetDateTimeInfo(meta):
  old_filename=str(meta.get('filename'))
  exif=meta.get('{Exif}')
  try:
    if not exif=='None':
      theDatetime=str(exif.get('DateTimeOriginal'))
      theDatetime=theDatetime.split(" ")
      theDate=theDatetime[0]
      theDate=theDate.split(':')
      theTime=theDatetime[1]
      theTime=theTime.replace(':','.')+'.'
      folder_name=theDate[0]+'/'+theDate[1]+'.'+theDate[2]+'.'+theDate[0]
      new_filename=theTime+old_filename
  except:
    new_filename=old_filename
    folder_name='None'
    no_exif.append(old_filename)
  return old_filename,new_filename,folder_name

def GetDimensions(meta,resize,img_name):
  # Original size
  exif=meta.get('{Exif}')
  img_width=exif.get('PixelXDimension')
  img_height=exif.get('PixelYDimension')
  if resize=='No Change':
    x=img_width
    y=img_height
    no_resize.append(img_name)
    return (x,y,img_width,img_height)
  else:
    resize=resize.split('x')
    x=int(resize[0])
    y=int(resize[1])
  # Don't resize photos smaller than your resize choice.
  if x*y>img_width*img_height:
    x=img_width
    y=img_height
    no_resize.append(img_name)
    return (x,y,img_width,img_height)
  # Don't resize if width or height isn't proportional to resize choice...scaling would be a better approach here.
  if x>img_width or y>img_height:
    x=img_width
    y=img_height
    no_resize.append(img_name)
    return (x,y,img_width,img_height)
  # Landscape
  if img_width>img_height:
    new_width=x
    new_height=y
  # Square
  elif img_width==img_height:
    # Don't resize if smaller than resize size
    if img_width<y:
      new_width=img_width
      new_height=img_height
      no_resize.append(img_name)
    else:
      new_width=y
      new_height=y
  # Portrait
  else:
    new_width=y
    new_height=x
  # Return resize dimensions...new & old
  return (new_width, new_height,img_width,img_height)

def CopyMeta(meta_src,meta_dst):
  # Copy metadata from original photo to a resized photo that has no media metadata and
  # write the results to a new photo that is resized with the media metadata.
  # Source photo
  img_src=pexif.JpegFile.fromFile(meta_src)
  # Destination photo
  img_dst=pexif.JpegFile.fromFile(meta_dst)
  img_dst.import_metadata(img_src)
  # Results photo
  img_dst.writeFile('meta_resized.jpg')
  #img=pexif.JpegFile.fromFile('meta_resized.jpg')
  #img.exif.primary.ExtendedEXIF.PixelXDimension= 1600
  #img.exif.primary.ExtendedEXIF.PixelYDimension= 1200
  #img.writeFile('meta_resized.jpg')
  img_src=''
  img_dst=''

def main():
  console.clear()
  try:
    # Here we are picking photos from the camera roll which, in Pythonista, allows us access
    # to extra media data in the photo's metafile. Because raw data is set to true, the image
    # is a string representing the image object, not the object itself.
    choose=photos.pick_image(show_albums=True, multi=True,original=True,raw_data=True,include_metadata=True)
  except:
    print 'No photos chosen...exiting.'
    sys.exit()
  # Create an instance of Dropbox client
  drop_client=get_client()
  count=0
  dest_dir='/Photos'
  # When metadata is returned with the photo, the photo is a tuple, with one element the image
  # and the other the media metadata.
  for photo in choose:
    print ''
    print 'Processing photo...'
    # Raw data string
    img=photo[0]
    # Metadata
    meta=photo[1]
    # Get date and time info of photo
    old_filename,new_filename,folder_name=GetDateTimeInfo(meta)
    # Use info to rename photo
    new_filename=dest_dir+'/'+folder_name+'/'+new_filename
    # Get dimensions for resize based on orientation and size of original photo
    new_width,new_height,old_width,old_height=GetDimensions(meta,'1600x1200',old_filename)
    print ''
    print 'Original Name: '+old_filename
    print 'New Name: '+new_filename
    print ''
    print 'Original Size: '+str(old_width)+'x'+str(old_height)
    print 'New Size: '+str(new_width)+'x'+str(new_height)
    # Write string image of original photo to Pythonista script dir
    with open('meta_with.jpg', 'wb') as out_file:
      out_file.write(img)
    # Open image, resize it, and write new image to scripts dir
    img=Image.open('meta_with.jpg')
    resized=img.resize((new_width,new_height),Image.ANTIALIAS)
    resized.save('meta_without.jpg')
    resized=''
    # Copy metadata from original photo to resized one
    CopyMeta('meta_with.jpg','meta_without.jpg')
    print ''
    print 'Uploading photo to Dropbox...'
    # Upload resized photo with original metadata to Dropbox...use with statement to open
    # file so the file closes automatically at the end of the with.
    with open('meta_resized.jpg','r') as img:
      response=drop_client.put_file(new_filename,img)
    # Give Dropbox server time to process
    time.sleep(5)
    response=''
    print ''
    print 'Upload successful.'
    count=count+1
  print ''
  print str(count) + ' photos processed.'
  if len(no_exif)>0:
    print ''
    print 'Photos with no DateTimeOriginal tag in their metadata that will need categorizing manually:'
    print '\n'.join(no_exif)
  if len(no_resize)>0:
    print ''
    print 'Photos that did not get resized because they were smaller than the resized version:'
    print '\n'.join(no_resize)
  sys.exit()

if __name__ == '__main__':
  main()
As part of the SAP S/4HANA Regional Implementation Group I have the privilege of talking to many customers about their SAP S/4HANA project plans. This blog post comes out of some recent experiences and discussions with functional team members around the current best options to assist the fit-gap process of SAP Fiori apps and classic UIs. This approach and the related tools and resources can be used when moving to SAP S/4HANA and/or for Continuous Improvement projects after go-live.

This is the outline of the general approach:

- Start by scoping your SAP business roles
- Get to know your real world business roles
- Envision the future per real world business role
- Use this to drive your fit-gap criteria for apps and UIs for your business role
- Explore the SAP business role to refine your app/UI selection for your real world business role according to your fit-gap criteria
- Validate your extension options for meeting any gaps identified
- Build your roadmap for now and later

Why this approach?

It's no longer enough to understand your business process; you need to think through how this translates to your users' actual working environment, their needs, and how those business users contribute to business outcomes. Ultimately user adoption is key to achieving business outcomes – so you need to go beyond the abstract business process to the everyday reality of your business users. Fit to process is only part of the picture – you need to fit to your user too!

Why does this approach end with a roadmap? As at SAP S/4HANA 2020, the solution now delivers:

- More than 2K SAP Fiori apps
- Alongside more than 8K classic UIs (Classic User Interfaces include GUI transactions, ABAP Web Dynpro applications, and Web Client UI)
- Plus the first native mobile app for SAP S/4HANA (with more to come)
- More than 40 optional Line of Business processes, many with process innovations – e.g.
Group Reporting, Central Procurement, predictive MRP, Advanced Accounting and Financial Close, Intercompany Management and Reconciliation – as listed in the Feature Scope Description.
- A range of intelligent automation use cases, including:
  - More than 50 Situation Handling use cases
  - More than 70 Robotic Process Automation use cases
  - More than 15 Machine Learning use cases

In other words, the reality is there is now so much new business value available in SAP S/4HANA that you are unlikely to implement everything you want in one go. Even if you do have the resources and skills within your project, there are likely to be limits on the capacity of your business users and your business itself to take advantage of that much new value in one deployment.

The most likely scenario during fit-gap is that you will discover much more that you want to use than you can pragmatically use right now. However, you don't want to lose that discovery entirely, as it can help shorten future Continuous Innovation phases.

Capturing your collective initial thoughts and discoveries into a brief roadmap can simplify future discussions. It can also give you some top-of-mind options if things change. You will have some initial analysis to draw on if, for example, a change becomes more urgent due to a sudden industry or market change. That initial analysis can also be useful if a change in strategy, tactics, or timeline frees up more capacity for change, and when at some point you upgrade to a future SAP S/4HANA release.

Step 1 – Start by scoping your business roles

The SAP S/4HANA User Experience revolves around business roles, matching the User Experience to people (i.e. business users) who share similar needs based on their work area and tasks (i.e. their business role).
The majority of decisions around User Experience are derived from that, including:

- Which processes/tasks they participate in
- Which device types are likely to suit their working patterns
- Which apps or UIs are applicable to those processes/tasks and device types
- Which intelligent automation options or real-time analytics would make them more productive
- How these apps should be presented to them to optimize their productivity
- What additional information or features they may need, which may be provided by configuration, adaptation, extension, or even custom build

SAP S/4HANA delivers more than 500 Business Roles based on real world jobs that can be used as templates, as explained in Understanding SAP Business Roles. You can explore what's possible in your Sandbox system using these templates. In your Development environment you copy these templates as a starting point for creating your custom business roles. You can then refine your custom business roles to match what you want to deploy to your business users.

SAP S/4HANA delivers more than 500 Business Roles

Tip: You can activate the apps in your custom business roles in the Development environment using task lists, so that only the apps you actually use are activated.

Gather the list of SAP Business Roles you need to explore as part of your scope. There are two main approaches to scoping your business roles, depending on where you are in your SAP S/4HANA journey:

- SAP Readiness Check – use this when converting from SAP Business Suite, or when upgrading from a lower to a higher release of SAP S/4HANA
- SAP Best Practices Explorer – use this when you are doing a new implementation of SAP S/4HANA and have no transaction usage history in SAP

For both options you will also need to include the standard business roles to configure, adapt, and extend SAP S/4HANA.
Option 1: Scoping using SAP Readiness Check

As part of the readiness check process, you will capture your system usage of current GUI transactions in your production system. This usage data is then used by SAP Readiness Check to identify relevant business roles.

SAP Readiness Check recommends the most relevant business roles

Within each Business Role the usage data is also used to assign a relevance rating to the associated SAP Fiori apps.

Relevance ratings for apps within a business role based on your usage data

Find out more in Readiness Check – Details about the topic of Fiori.

If you want to start your scoping before you set up SAP Readiness Check, you can also provide your GUI transaction usage data as an input to the SAP Fiori apps recommendation report. This report provides much the same analysis and will help you in your early planning.

Option 2: New implementation of SAP S/4HANA, with no history of SAP

Using the SAP Best Practices Explorer, you can review the best practices scope items for your target SAP S/4HANA version and localization country. These scope items represent different business process areas. Each best practice scope item includes a business process diagram and test scripts.

The business process diagrams and the test scripts will confirm which business roles are involved in the process, including the configuration roles needed. The test scripts will even confirm the recommended SAP Fiori app or classic UI for each process.

Example of a Best Practices process diagram highlighting business roles and apps/UIs assigned to tasks

Include business roles that configure, adapt, and extend SAP S/4HANA

Don't forget to add the configuration roles as explained in Yes you need SAP Fiori to configure, adapt, and extend SAP S/4HANA.
These roles are currently:

- Administrator
- Analytics Expert
- Business Process Specialist
- Configuration Expert – Business Process Configuration

These should be activated in your Sandbox and Development environments as a minimum, i.e. in addition to your scoped business roles. To make it easier, these are now included in the task lists for both rapid content activation (for SAP Business Roles) and content activation (for custom business roles).

Select recommended SAP Business Roles option in task lists

Refer to SAP Notes:

- 2686456 – Fiori Setup: Content Activation for SAP Business Roles
- 2813396 – Fiori Setup: Content Activation for Business Roles

Step 2 – Get to know your real world business roles

Whatever SAP Fiori apps and classic UIs you choose will need to fit with the real world needs and working environment of your business users as much as they need to fit with your to-be business processes. For example:

- Apps and UIs are often optimized for use on selected device types. You will want to make sure that you have the correct app or UI that suits both the task *and* the device type.
- Different features may be more or less relevant depending on where and when in their day the user performs this task. For example, if they are travelling around, users need a simple flow and few buttons; when they are back at their desk, users can take advantage of more advanced features and do more detailed analysis.
- Apps and UIs may need to be personalized to make sure they can be used in the business user's actual working environment, for example whether they work in bright sunshine or a darkened room.

Important: You will need to gather data and other evidence of the real needs of the business users in your organization.

Why? Your project team may never have experienced the working environment of your organization's business users. Many project consultants have spent their entire career in an office environment.
This is why it is so important to get some data and evidence of the real working environment of your users. This can be done through site visits, interviews, photos, or videos. The better the quality of your information, the easier it is to determine the real world fit of your UX.

Some useful questions to ask about your business users include:

- What is their working environment? Do they work in an office? Are they a field worker? Do they spend much of their work time travelling?
- Do they have any mobility restrictions? For example, if they are climbing ladders, or need to move around equipment on a factory floor, that might impact what type of device they can use for certain tasks. Safety regulations may even prevent them from using certain devices in certain situations.
- What is their typical workwear? Uniforms, hazmat suits, helmets, safety goggles and gloves, tool belts and other accoutrements can impact when and where they can use their devices.
- What devices do they currently use (or plan to use), and for what types of tasks? This gives a heads-up on both the degree of change and whether there are other factors you may need to pay attention to in your project, such as safety regulations, or even procurement of devices.
- What types of apps or UIs do they currently use? Are they experienced in certain types of user interfaces such as GUI transactions? This helps you communicate changes and benefits of your final app selection.
- How frequently do they use them? Knowing whether they are a casual or expert user can help you decide between a simple app and a more complex app.
- What are their most frequent on-system tasks? This helps identify which activities are most critical to their day-to-day working life. It's worth asking if there are any highly manual tasks, either on-system or involving both on- and off-system activities, e.g. copying data from email attachments or spreadsheets into the system. These could indicate the need for an Upload/Import app or even a potential Robotic Process Automation scenario.
- What are their most important exception tasks? These tasks are occasional critical responses to business exceptions – potential bottlenecks or stumbling points where they need to respond promptly to ensure business continues smoothly. For example, a supplier not being able to deliver all of the quantity requested, a contract approaching an expiry date, or an equipment breakdown. These could indicate a possible fit to Notifications or even a Situation Handling use case.
- What are the entry points to their tasks? An entry point is not typically a "doing" activity (e.g. create, change, delete, release, etc.); it's the trigger for action. In other words, how do they decide what they should do? How do they decide, of all the things they can do, what takes priority? Are they monitoring something in the system? Are they responding to an external request via phone or email, or an incoming advice from a customer or supplier?
- What are their biggest pain points when working on-system? This helps quickly identify apps or features that may provide obvious business value to these users, such as desirable embedded analytics/KPIs, intelligent automation, or a search across multiple business objects.

Step 3 – Envision the future per real world business role

Workshop with the business process experts and some business users themselves what sorts of changes would bring immediate business benefits. At this stage you are looking to build a high-level roadmap that helps you determine your fit-gap criteria for apps that fit both your business outcomes and your business users. This can be a good activity to run as a design thinking workshop.

Set your fit-gap criteria for apps for each role

Identify the most important criteria for each business role. Your definition of "fit for purpose" must include UX criteria and not just functional capability.
Failure to include UX criteria can result in a UX that is overly complex and confusing.

Important: UX criteria must be focussed on the real needs of the business users in your organization, and not on the preferences of your project consultants.

Why? Many project consultants are themselves highly expert users who typically come from outside your organization and may never have experienced the working environment of your organization's business users. Good user adoption depends on fitting the UX to the real needs of your users in their working environment.

For example:

- Are mobile devices important for certain tasks? You will need to pay attention to device type fit when selecting apps or UIs.
- You have GUI-invested users? You may want to focus on complementary apps first, i.e. apps that bring new business value and that can navigate to, rather than replace, well-loved transactions.
- You have casual users? Focus on ease of use for the most frequent tasks.
- You have deep analysis/investigative users? Focus on embedded analytics that give highly flexible analysis options. Consider whether and how other analytical tools such as SAP Analytics Cloud will be integrated with their user experience.
- Users who are overburdened with rote work or manual tasks? Focus on intelligent automation use cases and apps that can ease their workload.
- Users who need to respond quickly to changing information? Focus on analytical apps, situation handling, and machine learning use cases that help them quickly prioritize what to do next.

Step 4 – Explore the SAP business role in your Sandbox or Trial system to refine your app selection

In a typical SAP S/4HANA project, the recommended approach is to start by activating whole SAP Business Roles in your sandbox system for exploration. This brings a predefined set of SAP Fiori apps, classic UIs, the navigations between them, and the associated authorizations for you to explore.
The aim is to use the SAP Business Roles as a starting point for creating your own custom business roles. You can activate SAP Business Roles using rapid activation, which provides options to:

- Create a test user per role so you can view the apps and features as they will see them
- Copy the SAP Business Role to the customer namespace so you can start refining your list of apps and UIs as you go, simplifying your next steps into your development environment

As you explore the business role, make sure you use the User Actions menu > About to capture the exact technical IDs of the apps and UIs you want to include in your final role.

The User Actions menu feature About shows the technical id of a SAP Fiori app or classic UI

Refer to: Finding the technical name of an app

Some business roles have more than 150 apps or UIs you can potentially use. You will need to set some priorities around what to explore first, particularly if you know you only have the time, resources, or skills to introduce a small set of apps in your current project phase.

Prioritizing apps for exploration

These are a few suggestions that customers have found useful, depending on your focus and priorities for this role.

You need to bring clear new business value of SAP S/4HANA. Review the SAP Fiori Lighthouse scenarios for a shortlist of apps bringing the best new value.

You want to bring intelligent automation benefits.

- Situation Handling can be a good fit for business exception tasks – review the Use cases for Situation Handling.
- Robotic Process Automation can be a good fit to reduce rote or manual tasks – review the use cases in the RPA bot store.
- Machine Learning use cases can be a good fit for identifying business exceptions or making better informed choices – review the scope items in the SAP Best Practices Explorer.

You want to bring some quick wins for users who currently use and like SAP GUI.
Prioritize apps that bring real-time insights that complement current GUI transaction usage, such as: Overview Pages, Monitor and Manage apps, or Smart Business KPIs. These apps are often a good fit for monitoring and prioritizing day-to-day main activities, especially where the user is responsible for certain business objects. For example: a Purchaser who monitors Purchase Requisitions and Purchase Orders; an Accounts Payable Accountant who monitors outgoing payments; or a Maintenance Technician who monitors Equipment and Maintenance Notifications.

Refer to: Overview Pages – a good place to start

You need to find simple apps, especially apps that will work on phones.

In the SAP Fiori apps reference library, check the app detail to see if it works on device type "Smartphone". You can also select all the apps and UIs for the business role, and filter by Phone.

Device Type information in the SAP Fiori apps library

Remember to consider native mobile app offerings. While there are only a few of these now provided with SAP S/4HANA, the list will grow over time.

Remember that classic UIs are not supported on phone. However, SAP Screen Personas can be used to provide a phone-ready overlay.

Tip: Classic UIs are supported on desktop/laptop. Most classic UIs are also supported on tablet as of SAP S/4HANA 1909 or higher, as explained in touch-enablement of the classic UIs.

Consider launchpad features that bring immediate benefits

These can bring new benefits even when a business role will mostly use classic UIs. For example:

- Search on business objects
- Notifications
- Default values – to save on data entry

Refer to The SAP Fiori User Experience for SAP S/4HANA

Avoid predecessor apps

In the SAP Fiori apps reference library, when you aggregate apps against your SAP S/4HANA release, look out for any apps marked as "Successor also chosen", and remove those from your list of apps to be considered.
As a general guideline, you want to avoid working with any app that has already been superseded by a new and improved equivalent.

When there is no suitable SAP Fiori app for a task, you may need to use a classic User Interface

Classic User Interfaces include GUI transactions, ABAP Web Dynpro, and Web Client UI. All of these classic UIs can be launched from the SAP Fiori launchpad. You can also launch to URLs, e.g. SAP Cloud Platform apps, 3rd party apps, and related websites.

Important: Using classic UIs to supplement SAP Fiori apps is normal and expected.

The SAP S/4HANA roadmap towards the new SAP Fiori UX is a long-term vision. With each SAP S/4HANA release, around 300-350 SAP Fiori apps have been added. It's worth noting that when developing new apps, bringing new business value via SAP Fiori has been prioritized over replacing the old. There are now more than 2K SAP Fiori apps delivered with SAP S/4HANA. However, even with that large number, there are more than 8K classic UIs that can also be used and launched from the SAP Fiori launchpad.

Some business roles have greater SAP Fiori coverage than others. You can use the SAP Fiori apps reference library to quickly see a count of the number of available apps and get an idea of coverage, as explained in Understanding SAP Business Roles.

Step 5 – Validate your app selection and extension options

Once you have worked out which are your most likely SAP Fiori apps and classic UIs, you will need to validate that they can meet your business needs at the fine detail level. This is typically done with a series of demonstrations and walkthroughs with the business. Any gaps – such as missing fields or features – will need to be covered. Most common gaps can be covered by in-app extensibility options.
You can make sure you are prepared for any questions by following a few simple steps to check what is available:

- Find and review the Extensibility Documentation for the app, where provided
- Confirm available optional fields for the app
- Confirm if custom fields can be added
- Review options to adjust visibility, features, and layout

Where gaps cannot be covered using in-app extensibility options, note their criticality and business impact. You will need to pass these on for deeper feasibility assessment by your technical team for side-by-side, classic, or build-your-own extensions.

Find and review the Extensibility Documentation in the SAP Fiori apps library

Some SAP Fiori apps have an Extensibility Documentation link in the SAP Fiori apps reference library that explains the extension options provided for the app. If there is an Extensibility Documentation link, it will be shown in the Implementation Information tab, section Extensibility. For example:

Extensibility Documentation link example in the SAP Fiori apps reference library

Tip: Make sure you have already selected your SAP S/4HANA release at the top of the Implementation Information tab. This will adjust the information on the tab for your release, including the Extensibility Documentation link.

Release selection in the Implementation Information tab of the SAP Fiori apps reference library

Important: If a SAP Fiori app does NOT have Extensibility Documentation, this does NOT mean the app cannot be extended. It simply means that there is no specific extension advice for this app. Most SAP Fiori apps follow floorplans or frameworks that have built-in extension options. For example:

- Apps based on SAP Fiori elements floorplans can use Adapt UI, Adapt Filters, and Settings
- Smart Business apps can be adjusted using the SAP Fiori app Manage KPIs and Reports

Find more examples in Yes you need SAP Fiori to configure, adapt, and extend SAP S/4HANA.
There are also more advanced extension options provided generally in SAPUI5 itself. These require developer skills. For example, Using Component Configuration – a technique that keeps most of the standard app intact and just enables the replacement of a specific view or controller. Refer to: SAPUI5 SDK > Documentation > section Extending Apps.

Tip: For classic UIs you will not find listed Extensibility Documentation. However, these also have some extension options, such as using SAP Screen Personas to improve the layout. You can also use some classic extension techniques – refer to:

Confirm available optional fields for an app

One of the most common requirements is for a few extra fields. Many apps provide in-app extensions that let you include additional optional fields. These are some quick checks to make sure you know what is possible.

Examine the app using User Actions > Adapt UI.

The Adapt UI in-app extensibility option applies to all SAP Fiori elements apps and some freestyle apps. That's approximately 50% of the more than 2K SAP Fiori apps delivered with SAP S/4HANA. Some apps provide no extra fields, some provide a handful, and some are known to provide over 100 optional fields via Adapt UI. So it's worth a quick check to see what you can add.

Adapt UI feature in the User Actions menu

Example of option to add field using UI Adaptation

Reviewing the list of available fields in the Add Field dialog

Important: You need to be authorized to see this feature by being granted security role SAP_UI_FLEX_KEY_USER.

For apps with filter areas, examine the Adapt Filters link on filter bars. Some apps can have dozens of additional filter fields available.

For apps with tables and charts, examine the Settings icon button for tables and charts. Many tables and charts provide additional fields that can be included simply by checking the matching checkbox in the Settings dialog.
For Smart Business KPIs and reports, examine the configuration options in the SAP Fiori app Manage KPIs and Reports.

For multidimensional reports, review the available dimensions. You can also use the SAP Fiori app View Browser to review the available data in the associated CDS View. Consider using the SAP Fiori app Custom Analytical Queries for any additional reports you need to build.

Confirm whether custom fields can be added using SAP Fiori app F1481 Custom Fields and Logic

There are many SAP Fiori apps that provide the option to add custom fields using the SAP Fiori app F1481 Custom Fields and Logic. These apps typically also provide Extensibility Documentation where they list the matching Business Context Scenario(s) to which custom fields can be added.

Extensibility Documentation for the SAP Fiori app F1602 Manage Product Master Data showing available business context scenarios for adding custom fields

These business context scenarios centrally update a range of related objects for that scenario, including:

- SAP Fiori apps
- Classic UIs
- CDS Views
- OData Services
- APIs
- Output templates (e.g. email, forms, reports)

You can even control where the fields are visible and/or searchable, whether they can be aggregated, and much more. Find out more in the Custom Fields and Logic documentation.

Review options to adjust visibility, features, and layout

Examine the app using User Actions > Adapt UI.

Typical changes that can be made using Adapt UI include:

- Add, move or hide fields
- Add, move or hide sections
- Add, edit or hide cards
- Move or hide buttons and links
- Default preferred links in Smart Link dialogs

Adapt UI field level options in an Object Page

Adapt UI card options in an Overview Page

Examine the app for setting default values and variants

Many apps support minimizing data entry through User Defaults.
Both single defaults (primary value) and multiple defaults (primary value + additional values) can be provided, depending on whether the target field can accept single or multiple values. Confirm the available user defaults by reviewing the list in Setting User Defaults in SAP S/4HANA.

Apps with filter areas, tables, charts, or cards often provide options to default preferred fields and features using Variant Management. Variants can be created for an individual user, or made public/shared for use by multiple users.

List Report app showing Adapt Filters link and table Settings icon button

Typical adjustments that can be made with variants include:

- Set preferred filter fields and static values in a filter area
- Set preferred table columns and column sequence of a table
- Set preferred table sorting and grouping (e.g. for hierarchy display of totals and subtotals)
- Set preferred chart columns for dimensions and categories
- Set preferred chart type

For classic UIs, consider SAP Screen Personas to adjust visibility, features and layout

When there is no suitable SAP Fiori app for a task, you may need to use a classic User Interface such as a GUI transaction or ABAP Web Dynpro. SAP Screen Personas can be an effective option to quickly improve the content, usability, and look and feel of classic UIs. This can avoid the need to create a complex custom-built app.

You can use SAP Screen Personas to:

- Hide or move fields to help users focus on only what's important
- Merge tab content into a single place
- Adjust the look and feel to make the UI look more "Fiori-like"
- Use scripting to automate repetitive or common steps, fetch data from other related transactions, or perform simple calculations
- Build simple mobile-ready applications using the SAP Screen Personas Slipstream engine

SAP Screen Personas is delivered as a free add-on to your SAP S/4HANA system.
You can find more, including tutorials, in the SAP Community topic for SAP Screen Personas.

Step 6 – Build your Roadmap for now and for later

Fit-gap analysis is one of those activities that can be easily derailed by scope creep. As you design your custom business role you will need to keep track of which SAP Fiori apps or classic UIs you want to use now in this phase, versus future continuous improvement phases.

When collecting up your list of apps, make your next steps in the development environment easier by collecting the following details:

- The technical app id – this identifies the exact app and the technology type.
  - Why? Names can be confusing or similar across apps, particularly where there are multiple versions of an app or successor apps.
  - This also helps with confirming complex extension options should they be needed.
- The original SAP Business Role, SAP Business Catalog and SAP Technical Catalog of the app
  - Why? This helps when refining your custom role in your Development environment using the Launchpad content manager and Launchpad app manager tools.

All of this information also provides a handy reference for assessing successor apps when you upgrade to a higher SAP S/4HANA release.

our SAP Fiori for SAP S/4HANA wiki

Brought to you by the S/4HANA RIG

---

Thanks Jocelyn. I was asking this same basic question during a TechEd 2020 session today!

Yes, I was one of those answering you... 🙂 This blog I have been working on for a little while now.

Great and very practical, helpful blog Jocelyn!

Thanks Ariane!

Thanks, good blog. The big issue that, at least in my opinion, remains is as follows: SAP does not state that SAP GUI (and WDA etc.) are no longer needed for on-prem. So in the end, the majority of customers end up with two additional frontends: Eclipse, which a lot of ABAP developers still hate, and Fiori/UI5. A lot of developers do need transactions like SE11, which are not available as a Fiori app. Quite a lot of developers still use SE11 on productive systems.
What is needed is formulated easily: 100% of work needs to be doable with Fiori (and frankly, I do not understand the added value of Eclipse). On prod and on dev systems, including client 000. And of course, a clear roadmap that S/4 20xx will not include any SAP GUI functionality any longer.

Hi Clemens. Thanks for taking the time for detailed feedback. To help you I would like to clarify a couple of points.

Re: "SAP does not state that SAP-GUI (and WDA etc) are no longer needed for on-prem."

SAP does not make that statement because that is not, and never has been, the position for SAP S/4HANA. The approach is to provide business users *new* value through SAP Fiori apps, and access to both SAP Fiori apps and classic UIs via the launchpad. This is a long journey and we are already 5 years and 2K Fiori apps into it. There are very few classic UIs (around 30 identified by projects so far, and I would personally estimate probably fewer than 150 still-useful classic UIs in total out of 8.5K) that cannot be used with GUI for HTML. Even these can be run via the launchpad, using SAP Business Client to launch them in GUI for Windows. That is the position for business users.

Re Eclipse: Developers, administrators, support teams and project consultants are considered to be experts rather than business users. I should mention that ADT for Eclipse has been around longer than SAP S/4HANA. Transactions such as SE11 and SE16 are actually not anywhere near as useful (and can even be quite misleading in some use cases) once you are on SAP S/4HANA.

I do agree with you that traditional ABAP developers have in many ways the hardest journey of all the experts in transitioning to S/4HANA, since so much tooling and so many best practices have changed. That of course is part of the excitement and a lot of the fun of being a developer – new possibilities and new innovations are constantly emerging. We have taken many developers on this journey so far.
Most find that once they start to learn and apply new things they don't want to go back... there's so much more you can do. And of course part of the reason for in-app extensibility is to take some of that burden off professional developers - at least for the most common changes. Where to go from here: I would encourage you to persist with Eclipse for CDS Views, Fiori Elements, & RAP as the entry points to S/4HANA development. Those are the ABAP fundamentals that will get you through most S/4HANA projects. Get access to a sandbox or an S/4HANA trial system if you can. And just take it a step at a time. If you already have developer skills then you already have all the smarts to pick this up. The rest is time and practice.
Results 1 to 8 of 8

POSIX Recurring Timer Help

Right now I have this code to support the timer... At program start:

Code:
tt_spec.it_value.tv_sec = 1;
tt_spec.it_interval.tv_nsec = REFRESH_NSEC;

struct sigevent sig;
sig.sigev_notify = SIGEV_SIGNAL;
sig.sigev_signo = REFRESH_SIG;
timer_create(CLOCK_MONOTONIC, &sig, &refresh_timer_id);

struct sigaction action;
action.sa_handler = &sig_handler;
sigaction(REFRESH_SIG, &action, NULL);

NOTE: tt_spec.it_value.tv_sec = 1 is just to make the itimerspec struct non-zero, as that's what is apparently required according to the POSIX API for this. sig_handler is a function which doesn't do anything at the moment except print that a timer signal was received. Its prototype is as follows (for completeness' sake):

Code:
void sig_handler(int sig_received);

Then when I want to start the timer:

Code:
timer_settime(refresh_timer_id, 0, &drtt_spec, NULL);

When I run it, I get:

Code:
./run.sh: line 10: 20973 Profiling timer expired <path to file>/a.out

So... anybody have any idea why the hell it keeps quitting almost instantly? If I disable the timer functionality it's fine. There's nothing "wrong" with the signal handler, as there's nothing there (empty function now; before, it simply had a print statement). Thanks, hopefully somebody knows what I'm doing wrong.

lol, I like how my post shows up in the linux forums daily news, but I have no answers. C'mon, somebody out there has to know POSIX timers, lol.

There are a lot of things that can go wrong with signals, etc. - please post a complete, compilable example that demonstrates the issue you see.
Programming and other random guff: cat /dev/thoughts > blogspot.com (previously prognix.blogspot.com)

So I have some good news and some bad news... Good news is that I was able to pull out the timer stuff and re-org it for distribution. Bad news is that the problem doesn't persist.
Here's the code:

Code:
#include <signal.h>
#include <stdio.h>
#include <time.h>

#define REFRESH_SIG 27
#define REFRESH_NSEC (1000000000 / 30)

struct itimerspec t_spec;
timer_t refresh_timer_id;
int ij = 0;

void sig_handler(int sig_received)
{
    //printf("%d signal received\n", sig_received);
    ij++;
    if (ij % 30 == 0)
        printf("Second passed\n");
}

int main()
{
    t_spec.it_value.tv_sec = 1;
    t_spec.it_interval.tv_nsec = REFRESH_NSEC;

    struct sigevent sig;
    sig.sigev_notify = SIGEV_SIGNAL;
    sig.sigev_signo = REFRESH_SIG;

    if (timer_create(CLOCK_MONOTONIC, &sig, &refresh_timer_id))
        printf("timer_create() failed\n");

    struct sigaction action;
    action.sa_handler = &sig_handler;
    sigaction(REFRESH_SIG, &action, NULL);

    timer_settime(refresh_timer_id, 0, &t_spec, NULL);

    while (1);
}

I noticed, though, that when I made this code I was using the wrong units for what I thought was nanoseconds. I was using microseconds instead. This code has that updated. Perhaps it was going too fast and I was getting a weird timing force. I'll have to let you guys know later what happened with it; until then, I'm not sure if the problem persists... I had to create a work-around timer in the meanwhile using a different system and I just tried switching it back, but something else was giving me issues and I need to go to work now -.-. Thanks, I'll post back when I know more, but if you see any issues with the way I set that timer up, do tell!

EDIT: Making the refresh rate come out as microseconds / 30 instead of nanoseconds / 30 means the timer was running 1000x faster than I intended (a ~33 ms firing interval intended vs the ~33 µs interval it was actually getting). It's definitely possible the issue was there and they were coming in too quickly.
struct sigaction action = {{0}};Programming and other random guff: cat /dev/thoughts > blogspot.com (previously prognix.blogspot.com) Never-the-less, I'll have to make those changes and see the results accordingly. Especially now that it's 1000x slower than it used to be. What OS are you running that it seg faulted, and if it's not too much trouble, can you just throw a gdb debug hook in there and tell me which statement it's crashing on? .. It'll be a bit before I get back here with any real findings. Essentially won't have a dev machine until 4/14 and I'm not entirely sure how useful my mobile dev machine will be, never used it for this type of stuff (CUDA). Also don't know how quick the internet will be if I need to connect to my dev machine :-\.
Here are my notes from the Steven Sinofsky keynote at BUILD. I'm down in LA (OK, Anaheim actually...) for the BUILD Windows 8 conference - what was previously known as the "Professional Developers' Conference" (PDC) - not sure if the fact that it's changed names means that they want a broader appeal beyond just professional developers, but we'll see. At $1600-$2400 to attend (plus travel) I doubt too many hobbyists are coming. I'll be posting here my impressions from the keynotes and from the sessions I attend. There is a rumor that a new tablet with Win8 is going to be distributed to attendees - we'll see.

Microsoft has put out a video showing an absolutely amazing bootup time for Win8 on an Acer tablet. We'll see...

I downloaded the BidNow sample. There are a set of automated scripts the developers of BidNow have provided to configure things, which is good and bad. It's good in that the configuration is somewhat complex and the scripted PowerShell scripts take you through setting it up by asking you a set of questions, TurboTax-style. It's bad. The code apparently uses AppFabric Labs. I figured I'd just try the URL the page issues to retrieve the AppFabric Labs identity providers and see what I got. The URL is something like (spacing added for readability):

?protocol=wsfederation&realm=http%3a%2f%2flocalhost%3a8080%2f&
reply_to=http%3a%2f%2flocalhost%3a8080%2fLogOn.aspx&version=1.0

It seemed wrong to me that it was referring to the bidnow-sample namespace within AppFabric Labs, not the namespace I had created, so I figured that was probably one of the things that didn't get updated by the scripts. However, plugging this URL into a web browser, I get an error. Hmm, so it's not complaining about an invalid namespace, as I'd expect; it's complaining about the realm. Searching around, I found this page, which explains how to set up ACS with Windows Azure.
It explains that when registering the "relying party application" with ACS, you have to specify the URI – I didn't do that when I set up my AppFabric Labs info. (This page also has more in-depth information about "relying party applications" and the realm – one of the challenging things about learning any new technology like this is that there are a bunch of new terms which you have to learn first; a good resource here is the December 2010 MSDN article "reintroducing" :) ACS; I guess you have to write an article re-introducing something when the first documentation on this just led to ho-hums and scratching heads. But this article actually guides you through the necessary steps of configuring the ACS namespace and realm and all up on the AppFabric Labs web site reasonably well.)

This got me past the first step – when I build and run the app and go to the BidNow home page and click "Login", I now get a list of providers. But picking one gives me an error. Hmm… Searching for this error on the web, I find a helpful explanation on acs.codeplex.com:

"The rule group(s) associated with the chosen relying party has no rules that are applicable to the claims generated by your identity provider. Configure some rules in a rule group associated with your relying party, or generate passthrough rules using the rule group editor."

Now you would think that default rules would be, oh, I don't know, defaulted, but apparently not unless you click the button to generate them. Yup, I'm understanding more and more why they had to write an article re-introducing this service. After generating these, going back to my dev-fabric-hosted BidNow, I am able to login!

For a client engagement, I was provided VMWare images. I don't have VMWare, but have a server running Windows Server 2008 R2 with Hyper-V. So I needed a conversion from the VMWare image to a Hyper-V image. Sure enough, I found a good blog post. But that gives me a virtual hard drive with the image – it doesn't give me a virtual machine.
Here are the steps to convert that from the VMWare virtual machine information provided:

- Download the VMDK to VHD Converter from VMToolkit.
- Use it to convert the VMWare VMDK (virtual disk image) to a Hyper-V VHD (virtual disk image). This creates a new file that is a sector-by-sector copy of the original virtual hard disk.
- Start Hyper-V Manager and click on your server name in the tree control on the left.
- Click New / Virtual Machine… and name it and configure memory/networking.
- When you get to step 4 (Connect Virtual Hard Disk), click the second option "Use an existing virtual hard disk" and point it at the VHD you created from the VMDK.
- Start the Virtual Machine. Depending on whether the virtual configuration is significantly different than the VMWare image you received, Windows may need to configure hardware and restart the VM – this will happen automatically.

That's about it – pretty easy migration from VMWare to Hyper-V.

I then went to pivot the data by clicking "Summarize with Pivot Table" in the "Table Tools" ribbon section, but the pivot table field list doesn't contain my group column. At first I thought that maybe for some reason calculated fields wouldn't be included in the pivot table – but this made no sense, and there are other calculated fields in my source data. After poking around a bit on the web, I found this post.

I've just been listening to Ed Norton interviewing Bruce Springsteen, and it reminded me of something from the Construx Software Executive Summit last week (which was a great event). Springsteen put it this way:

And I said man, there's other guys that play guitar well. There's other guys that really front well. There's other rocking bands out there. But the writing and the imagining of a world, that's a particular thing, you know, that's a single fingerprint. All the filmmakers we love, all the writers we love, all the songwriters we love, they have, they put their fingerprint on your imagination and then on - in your heart and on your soul.
That was something that I'd felt, you know, felt touched by. And I said well, I want to do that.

Fred Brooks (author of "The Design of Design" and, of course, famously, "The Mythical Man-Month") put it a bit differently: "Great design does not come from great processes; it comes from great designers. Choose a chief designer separate from the manager and give him authority over the design and your trust."

Both are really saying the same thing – great, consistent, beautiful designs always come from a single mind expressing himself or herself.

For a consulting project, I recently had to join my laptop to an Active Directory domain at a client's workplace. Suddenly, my home computers can no longer see the laptop. I found that the laptop could see shared items on my home network, though. I went to the "Network and Sharing Center" to see if there was some setting I had to tweak, and found this: Hmmm... so I'm kinda/sorta still part of a homegroup, it sounds like. Searching for the highlighted message, I found this Windows Online help topic that explains it. So it turns out that yes, you can see other computers from the domain-joined laptop, but the other computers in the homegroup can't see the domain-joined laptop any longer. I suspect this is a security thing.

I found this post. There is a $50 tool called OutlookSpy (with a free 30-day trial) that was also mentioned and might help some folks who find the tool I used a bit too geeky.

To gather the data, I use a timer which I put into a simple value class. Then I just have a list of these which I add to as I run each test:

static private List<TimerInstance> _Timers = new List<TimerInstance>();

and I used this code to serialize it into a MemoryStream:

// Return items that should be persisted. By convention, we are eliminating the "outlier"
// values which I've defined as the top and bottom 5% of timer values.
private static IEnumerable<TimerInstance> ItemsToPersist() { /* ... */ }

x.Serialize(s, ItemsToPersist().ToList());
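The class bodies above didn't survive extraction. As a hedged reconstruction of the pattern being described - a value type per timed run, a static list of them, a helper that drops the top and bottom 5% of values, and XML serialization into a MemoryStream - here is a sketch. TimerInstance, _Timers and ItemsToPersist appear in the original fragments; every other member name is an assumption:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Serialization;

public struct TimerInstance
{
    public string Name;        // which test run this timing belongs to (assumed)
    public long ElapsedTicks;  // elapsed time of the run (assumed)
}

public static class TestTimings
{
    static private List<TimerInstance> _Timers = new List<TimerInstance>();

    public static void Record(TimerInstance t) { _Timers.Add(t); }

    // Return items that should be persisted. By convention, we eliminate the
    // "outlier" values -- the top and bottom 5% of timer values.
    private static IEnumerable<TimerInstance> ItemsToPersist()
    {
        int trim = _Timers.Count / 20; // 5% at each end
        return _Timers.OrderBy(t => t.ElapsedTicks)
                      .Skip(trim)
                      .Take(_Timers.Count - 2 * trim);
    }

    public static MemoryStream SerializeToStream()
    {
        var s = new MemoryStream();
        var x = new XmlSerializer(typeof(List<TimerInstance>));
        x.Serialize(s, ItemsToPersist().ToList());
        s.Position = 0; // rewind so the stream can be read from the start
        return s;
    }
}
```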
Once I have this, it's a pretty straightforward thing to store the memory stream in an Azure Blob using UploadFromStream.

There's an interesting article about performance of server apps in the July 2010 Communications of the ACM somewhat provocatively titled "You're Doing It Wrong". In it, Poul-Henning Kamp, the architect of an HTTP cache called Varnish, describes the "ah ha!" moment (on a night train to Amsterdam, no less) where he realized that traditional data structures "ignore the fact that memory is virtual".

To fix a problem with a corrupted Outlook profile: after living with this for a bit, I pulled out my old "VBA for Microsoft Office 2000 Unleashed" (yup, it's been a while…) and wrote a little Outlook macro to do the right thing here. So for anyone else who runs into this:

a. You can contact support – for me at least it was no charge and pretty quick.
b. You can try to create a new profile yourself.
c. If you add a second account to the profile (e.g. an SMTP account), have it deliver to somewhere other than the same Inbox as the MSN/Hotmail account (e.g. "Work Inbox"). You can then create an Outlook Search Folder to consolidate mail from the two separate delivery folders. By isolating the SMTP inbox from the MAPI inbox that MSN/Hotmail is using, you apparently work around a problem that can lead to this.

Good luck!

UPDATE – June 30, 2010: After contacting MS support again and this time talking with Dinker, we figured it out. When creating a new profile, Outlook has a setting that (in Outlook 2010) you access through File / Account Settings. There you'll find a tab for "Data Files". Windows Mobile Device Center syncs items in the default data file; below I have the dialog as it appears after I set the MSN data file as the default one, not the Outlook Data File. Before I did this, "Outlook Data File" was set as the default data file, so WMDC was syncing calendar and contact items from there.
Note that this shows up in Outlook as separate calendars ("My Calendar" is the one syncing to Windows Live Calendar) and as separate Contacts lists as well (obviously, "Contacts – mikekelly@msn.com" is the one synced with Windows Live). Changing the default caused the WMDC to correctly sync.

I've come across a good new blog, Coding Horror, written by Jeff Atwood, one of the founders of one of my favorite coding sites, Stack Overflow. In reading through some of the older posts, I came across one that is close to my heart, which is on working remotely. Microsoft does development in many places: Hyderabad, India; Beijing and Shanghai, China; Haifa, Israel; Dublin, Ireland; and here in the United States in Silicon Valley, Boston and North Carolina; there also is a center in Vancouver, British Columbia. Due to acquisitions, there are also a number of smaller sites doing product development all around the world, including in Portugal, France, Singapore, Germany, Norway, and Switzerland. Jeff's post offers a number of good solutions, as does my former colleague, Eric Brechner.

Ultimately, though, the goal isn't pain but gain. Jeff's and Eric's posts referenced above have a number of good suggestions on how to achieve that. These come down to just a few basic rules:

- Communicate, communicate, communicate.
- Virtual tools are no replacement for real relationships. A friend once wrote a book on software development with a memorable rule: "Don't flip the Bozo bit".
- Follow the basic rules of distributed systems.

Windows makes available a wide variety of performance counters which of course are available to your Azure roles and can be accessed using the Azure Diagnostics APIs, as I described in my recent MSDN article on Windows Azure Diagnostics. However, it can be useful to create custom performance counters for things specific to your role workload. For instance, you might count the number of images or orders processed, or totals of different types of data entities stored in blob storage, etc.
Custom Performance Counters aren’t yet supported in the Azure Cloud fabric due to restricted permissions on Cloud-based Azure roles (see this thread for details) so this post describes how it will work when it is supported. Create the Custom Performance Counters - You create your counters within a custom category. You cannot add counters to a category, so you have to create a new performance counter category using PerformanceCounterCategory.Createin System.Diagnostics. The code below shows how to create a few different types of counters. Note that there are different types of counters: you can count occurrences of something ( PerformanceCounterType.NumberOfItems32); you can count the rate of something occurring( PerformanceCounterType.RateOfCountsPerSecond32); or you can count the average time to perform an operation( PerformanceCounterType.AverageTimer32). This last one requires a "base counter" which provides the rate to calculate the average. //()); } - When you create a performance counter category, you need to specify whether it is single-instance or multi-instance. MSDN helpfully explains that you should choose single-instance if you want a single instance of this category, and multi-instance if you want multiple instances. :) However, I found a blog entry from the WMI team that actually explains the difference - it is whether there is a single-instance of the counter on the machine (for Azure, virtual machine) or multiple instances; for example, anything per-process or per-thread is multi-instance since there is more than one on a single machine. As you can see, since I expect multiple worker thread role instances, I made my counters multi-instance. - After creating the counters, you need to access them in your code. I created private members of my worker thread role instance: public class WorkerRole : RoleEntryPoint { ... 
// Performance Counters
PerformanceCounter _TotalOperations = null;
PerformanceCounter _OperationsPerSecond = null;
PerformanceCounter _AverageDuration = null;
PerformanceCounter _AverageDurationBase = null;
...

- Then in the code, after I create the counter category if it doesn't exist, I create the members.
- Note that I use "." for machine name, which just means the current machine, and I use the CurrentRoleInstance.Id from the RoleEnvironment to distinguish the instance of each counter. If instead you wanted to aggregate these across the role rather than per role instance, you could just use RoleEnvironment.CurrentRoleInstance.Name.

Using the Custom Performance Counters

You use the Run method of your role to do the work, and this method should not return - it instead runs an infinite loop waiting for work to do and then doing it. Let's look at some simple code to use the performance counters defined above; the duration samples are based on DateTime.Ticks (see this blog post for more information). This code also shows how to take two samples of a counter and calculate the difference - the counter I'm showing is rather uninteresting, but it illustrates the approach.

Monitoring the Performance Counters

You tell Azure Diagnostics which counters to collect through a DiagnosticMonitorConfiguration, with code like this (note this is from my role OnStart method). This code also adds a standard system performance counter for % CPU usage. Note that you can either pass the DiagnosticMonitorConfiguration to DiagnosticMonitor.Start or you can change the configuration after the DiagnosticMonitor has been started.
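The code blocks in this section lost their bodies in extraction. As a hedged reconstruction of the create-and-use pattern being described (the category and counter names here are illustrative, not the post's actual names; the _TotalOperations-style members mirror the fragments above):

```csharp
using System.Diagnostics;

public class WorkerCounters
{
    // Illustrative category name -- the post's own name was lost.
    const string Category = "MyWorkerRole";

    PerformanceCounter _TotalOperations;
    PerformanceCounter _OperationsPerSecond;
    PerformanceCounter _AverageDuration;
    PerformanceCounter _AverageDurationBase;

    public static void CreateCategoryIfMissing()
    {
        if (PerformanceCounterCategory.Exists(Category)) return;
        var counters = new CounterCreationDataCollection
        {
            new CounterCreationData("Total Operations", "Count of operations",
                PerformanceCounterType.NumberOfItems32),
            new CounterCreationData("Operations/sec", "Operation throughput",
                PerformanceCounterType.RateOfCountsPerSecond32),
            new CounterCreationData("Average Duration", "Average operation time",
                PerformanceCounterType.AverageTimer32),
            // The AverageBase counter must immediately follow its AverageTimer32 counter.
            new CounterCreationData("Average Duration Base", "",
                PerformanceCounterType.AverageBase)
        };
        PerformanceCounterCategory.Create(Category, "Per-instance role counters",
            PerformanceCounterCategoryType.MultiInstance, counters);
    }

    public void Open(string instanceId) // e.g. RoleEnvironment.CurrentRoleInstance.Id
    {
        // These constructors target the local machine (the post's "." machine name);
        // readOnly: false allows writing to the counters.
        _TotalOperations     = new PerformanceCounter(Category, "Total Operations", instanceId, false);
        _OperationsPerSecond = new PerformanceCounter(Category, "Operations/sec", instanceId, false);
        _AverageDuration     = new PerformanceCounter(Category, "Average Duration", instanceId, false);
        _AverageDurationBase = new PerformanceCounter(Category, "Average Duration Base", instanceId, false);
    }

    public void RecordOperation(long elapsedTicks) // e.g. from a Stopwatch
    {
        _TotalOperations.Increment();
        _OperationsPerSecond.Increment();
        _AverageDuration.IncrementBy(elapsedTicks);
        _AverageDurationBase.Increment();
    }
}
```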
This tool reads the raw data from the Azure table storage and allows you to download it, graph it, etc. References Thanks to the following which provided invaluable information along the way to figuring this out: - Michael Groeger on CodeProject – An Introduction to Performance Counters - Channel 9 – Monitoring Applications in Windows Azure - Stack Overflow – Getting High Precision Timers in C#/.NET One of the practices I advise with Windows Azure services (and really any service) is self-monitoring to find problems that may not be fatal but are indications of serious problems developing. I happened to run across a good example of how to do this on the Azure Miscellany blog..) You could even have a separate role monitoring the role and then working through the Azure Service Management API to tweak the role that is experiencing problems. While at Microsoft, I worked with a Microsoft Research developer on a prototype tool called HiLighter. Think of Diagnostics broadly – not just as logging, but also as active monitoring of problems in your roles. My. Debugger.Log)..: how much overhead does tracing add to my application if I’ve disabled at run-time the level of tracing being invoked,.. For a consulting project, I’ve been playing around with getting the Skype public API to work with Microsoft Robotics Developer Studio (RDS) by building a DSS service that communicates with Skype.. I found, though, a problem that is alluded to but not really explicitly stated in the Skype API documentation. For an app-to-app connection to work, both sides have to establish the Application (in Skype terminology). In other words, both sides of the conversation have to have done a CREATE APPLICATION Xyzzy using the Skype API to create the “Xyzzy” application (note: application is Skype’s name for an app-to-app channel). 
Once both sides have created the channel, one side (typically the originating caller) has to do a:

ALTER APPLICATION Xyzzy CONNECT otheruser

where Xyzzy is the application created and otheruser is the user you're calling. Once the connection is established, you get back:

OnReply to Command 0(): APPLICATION Xyzzy STREAMS otheruser:1

which tells you that "otheruser" can now receive communications through the app-to-app channel. Like a lot of things in programming, this is obvious once you realize it – you can't send through a channel unless both sides have established the channel. I was hoping I could test one side, but no: 'Robotics' does not exist in the namespace 'Microsoft'!
Point Cloud Integration with Vizard

Vizard, WorldViz's VR development platform, can take just about any industry standard 3D model and render it in real-time, stereoscopically, and fully immersively using virtual reality technologies. It does this with such precision and speed that it can be used for highly sensitive tasks like analyzing as-built assets, conducting risk evaluation, or planning building or system modifications. Professionals in oil and gas, defense, engineering, and construction already use 3D scanning technology to facilitate these sorts of projects, and now some are taking those scans, or point cloud files, and putting them into immersive viewing technology like our VizMove VR systems.

It is incredibly easy to incorporate VR into existing point-cloud workflows. Companies like DotProduct offer handheld scanning devices, like DotProduct's DPI-8, which requires only minutes to fully scan an environment. It exports these 3D scans in .dp format and can be stored in the cloud immediately for remote access. These .dp files, as well as other point cloud file types like .3dc, .asc, and .ply, can be imported into Vizard and viewed in whatever VR hardware setup you have available to you at the exact scale that the environments exist in physical reality. If the particular task at hand calls for your point clouds to be meshed into surfaced 3D models, you can convert those files using one of many third party programs, like CloudCompare, which is free to use. These programs can export the resulting meshed 3D models in any of several file types that Vizard can also render in real time.

Steps for Importing a Point Cloud into Vizard:

Point clouds of the format .dp, .ply, .3dc and .asc can be opened with Vizard's "Inspector" program and then converted to an OSGB file. If you have a text file with a different extension and the data is in XYZ format, rename the extension to .3dc and then load it in Inspector or Vizard.
XYZ data is required, while RGB and XYZ normal data are optional. The data should be space separated and RGB values must be in the 0-255 range. For other file types that can't be renamed and loaded, try converting them to a Vizard-compatible format using CloudCompare. CloudCompare is an open source point cloud and mesh processing application that imports/exports to many different formats. After opening up Inspector, go to File > Open and navigate to where you saved your point cloud file. One benefit of saving out to Vizard OSGB format is that it's binary and will load much faster than text formats. In Inspector you can use the Translate, Rotate and Scale tools to manipulate your point cloud. Create a new "Transform" instance by going to the Create tab in the toolbar and selecting Transform. From there, drag and drop the original point cloud file into the Transform instance. To match up the point cloud with Vizard's point of origin in Inspector you can reference the axis seen here:
(where “art” is the location of your resources folder) To help in matching up the point cloud to the origin in Vizard you can show the axes by using the following command: import vizshape vizshape.addAxes() Tips for optimizing and meshing Point Clouds: Point clouds with millions of points might be too heavy to render in real-time and therefore aren't usable for virtual reality applications. There are several ways of optimizing the content. Most efficiently, you can remove a certain amount of points without losing detail. Vizard can support a couple million points, exact numbers depend on hardware, display output, and other factors. Below are some tools that can be used to optimize your point cloud. Point clouds with millions of points might be too heavy to render in real-time and therefore aren't usable for virtual reality applications. There are several ways of optimizing the content. Most efficiently, you can remove a certain amount of points without losing detail. Vizard can support tens of millions of points points viewed simultaneously at full VR frame raters (e.g., 60Hz or higher). The exact numbers depend on hardware, display output, and other factors. Below are some tools that can be used to optimize your point cloud. To optimize and mesh point clouds you can use a third party programs, such as CloudCompare, Autodesk ReCap, Autodesk Memento, or MeshLab. If you are using the DotProduct scanner, the included application Phi3D will automatically optimize the point clouds for you. If you do require a meshed model, turning to one of the above mentioned tools will get you there. 
To reduce the number of points in a point cloud without losing detail using MeshLab, refer to the following video tutorial: To limit how much of a point cloud is displayed using Recap’s limit box, refer to the following documentation: This tutorial shows how to mesh a point cloud using MeshLab: This page from our documentation will give some more information on accepted 3D model formats (not including the point cloud formats from the earlier links) For additional help, our forums are also a great resource which is closely monitored by our development team and is free for anyone to use: For handlingLASfiles, werecommendthelaspyPython library byGrantBrown, and can beinstalledautomaticallyusingVizard's Package Manager feature under the tools menu.
Structuring a new Phoenix Project

This blog post is part of a series of posts detailing the development process of AlloyCI. The previous entries have been:

If you are coming from a Rails background, you might find the folder structure of a Phoenix project a bit weird, especially after the changes in 1.3. But don't be put off by this; the structure actually makes a lot of sense, especially the new 1.3 structure. Unlike Rails, there is no app/ folder where most of your code will live. Phoenix 1.3 projects follow the structure of pretty much any regular Elixir project; this means that most of your code will live inside the lib/my_app folder.

As you might know, Phoenix 1.3 changed the way projects are structured. Before this change most of your code would be under web/, with little to no further organization. You would have a models directory, where you would put all your business logic, without much consideration as to how the pieces interact together. This folder would get pretty large, pretty fast. Also, the name "model" seems to imply an object, but in Elixir there are no objects, so storing your data access layer under a folder named "models" makes little contextual sense.

Data & Contexts

Phoenix 1.3, by default, guides you towards a better way of organizing your code. Controllers, Views, and Templates go under lib/my_app_web; database migrations and related files go under priv/repo; and the rest of your Elixir code will live under lib/my_app. This is the basic setup, and you can tweak it and change it as much as you like. You have complete liberty as to how to organize your code. Since I started writing AlloyCI before Phoenix 1.3 was fully released, some of the folder structure is different than the one the latest 1.3 generators create. I prefer the way AlloyCI is structured right now, because I really don't like the way they renamed the web/ folder to alloy_ci_web/ and everything inside it from AlloyCi.Web.XXX to AlloyCiWeb.XXX.
I really prefer the separation in the module name, and the fact that the app name is not repeated in the web folder name. Thanks to the flexibility Phoenix provides, I don't need to follow the conventions, though. Anyways, the most important part about the structure changes is that Phoenix now guides you towards using contexts for structuring your data access layer. Using AlloyCI as an example, we have the Accounts context (which is under the lib/alloy_ci/accounts folder), where the User, Authentication, and Installation schemas live. These 3 schemas are closely related, and belong to the same business logic, namely the handling of Accounts. If you look closely at the files under the accounts folder, you will see that there are no functions in the schema files, other than the changeset function. This means that I would need to either go straight through Ecto to manipulate the database data (not recommended) or that I need an API boundary that will let me perform actions on the accounts related schemas. This is where the AlloyCi.Accounts module comes into play. This module is the boundary with which AlloyCI will interact if it needs to perform an action on any of the schemas related to an Account. All public functions of this module provide an easy way to manipulate the data, while providing security through a layer of indirection. This is the purpose of contexts. They provide a boundary between your data layer and your business logic, and allow you to have an explicit contract that tells you how you can manipulate your data. It also allows you to stay independent from Ecto. Let's say, in the future, you'd like to switch from Ecto to the "latest, coolest DB driver". If you didn't use an abstraction layer, like the contexts, you would have to refactor every function across the codebase that used Ecto to communicate to the data layer. But since we are using contexts, we would only need to refactor the code inside the context itself.
Data Presentation

The code that will actually present your data to the user can live under the lib/my_app/web or lib/my_app_web folders, depending on how you want to structure it (the automatic generator will default to lib/my_app_web but I prefer the former). In here you will find the folders where your controllers, views, templates and channels will live. Let's start with the presentation layer.

Views & Templates

If you come from a Rails background, you might wonder why there are two components to presenting the data, when in Rails all you need is the views folder. In Phoenix, the "views" are not composed of templated HTML files, but rather they are regular Elixir modules. These modules are there to help you share code with the template, and fulfill a similar purpose as the Rails "View Helpers", but are, by default, specific to a single controller (other views are not loaded, unlike Rails, which loads all view helpers regardless of the controller being called). This separation makes it easier to use the same signature on similar helper functions needed to present data (without really overloading them), depending on which controller is being called, thus simplifying your code. The templates are, then, where your HTML code lives. The template files are saved as *.html.eex files (meaning embedded Elixir), and are very similar to erb files. The syntax is exactly the same, but instead of Ruby code inside, you write Elixir code 😄 A very important distinction between Phoenix and Rails is how you share information between the controller and the template. In Rails, it is enough to declare an instance variable with @something and it will be available to the template/view. Given the functional nature of Elixir, in Phoenix you need to explicitly pass the information you wish to be available to the views in the render function. These are called assigns.
As an example, here is the show action of the PipelineController:

def show(conn, %{"id" => id, "project_id" => project_id}, current_user, _claims) do
  with %Pipeline{} = pipeline <- Pipelines.show_pipeline(id, project_id, current_user) do
    builds = Builds.by_stage(pipeline)

    render(conn, "show.html", builds: builds, pipeline: pipeline, current_user: current_user)
  else
    _ ->
      conn
      |> put_flash(:info, "Project not found")
      |> redirect(to: project_path(conn, :index))
  end
end

Everything that comes after "show.html" are the assigns, so the variables available to the templates related to the show action are builds, pipeline, and current_user. We can see an example of how to use them in this snippet from the pipeline info header:

<div class="page-head">
  <h2 class="page-head-title">
    <%= @pipeline.project.owner %>/<%= @pipeline.project.name %>
  </h2>
  <nav aria-label="breadcrumb">
    <!-- Breadcrumb -->
    <ol class="breadcrumb page-head-nav">
      <li class="breadcrumb-item">
        <%= ref_icon(@pipeline.ref) %>
        <%= clean_ref(@pipeline.ref) %>
      </li>
      <li class="breadcrumb-item">
        <%= icon("github") %>
        <%= sha_link(@pipeline) %>
      </li>
      <li class="breadcrumb-item">
        <%= icon("book") %>
        <%= pretty_commit(@pipeline.commit["message"]) %>
      </li>
      <%= if @pipeline.commit["pr_commit_message"] do %>
        <li class="breadcrumb-item">
          <%= icon("code-fork") %>
          <%= @pipeline.commit["pr_commit_message"] %>
        </li>
      <% end %>
      <li class="breadcrumb-item">
        <%= icon("tasks") %>
        <%= String.capitalize(@pipeline.status) %>
        <%= status_icon(@pipeline.status) %>
      </li>
    </ol>
  </nav>
</div>

Once a variable has been assigned, it is available to the template via @var_name, just like with Rails. Functions defined in the view file of the same name as the controller (in this example pipeline_view.ex) are immediately available to the template. In the above example, sha_link/1 creates an HTML link to the specific commit on GitHub.

Controllers

In structure, Phoenix Controllers are very similar to Rails Controllers, with the main difference being described above.
When generated by the helper tools, they will have the same index, show, edit, update, and delete actions as their Rails counterparts. And just as with Rails Controllers, you can define any action you desire by defining a function, and connecting a route to it.

Channels

Phoenix Channels are used to communicate with the web client via Web Sockets. They are similar to ActionCable in Rails, but in my opinion, much more powerful, and performant. In AlloyCI, they are used to push the output of the build logs in real time, and to render a pre formatted piece of HTML code to show the user's repositories (more on how AlloyCI uses Channels will be discussed in another post).

Routes

Routes in Phoenix are defined in a somewhat similar way as Rails Routes. Some of the syntax is different, but it is immediately recognizable and feels familiar. Have a look at how the project routes are defined on AlloyCI:

scope "/", AlloyCi.Web do
  ...

  resources "/projects", ProjectController do
    resources("/pipelines", PipelineController, only: [:create, :delete, :show])

    resources("/builds", BuildController, only: [:show, :create]) do
      get("/artifact", BuildController, :artifact, as: :artifact)
      post("/artifact/keep", BuildController, :keep_artifact, as: :keep_artifact)
    end

    resources("/badge/:ref", BadgeController, only: [:index])
  end
  ...
end

The real difference when it comes to routes, between Phoenix and Rails, is the power that you get when using plugs. We will discuss them in detail in a future post. And there you have it. That is the basic structure of a Phoenix project. There are other components that we haven't covered here, like Plugs, or Background Jobs. We will discuss these advanced topics in a future blog post.
https://dev.to/suprnova32/alloyci-dev-diarypart2-1jfl
hi frnds.. help me.. I have a doubt about an "incompatible types" error in this block:

double velo[][] = new double [task][resource];
...

incompatible error ------> I don't know how to solve it, please help.
http://www.roseindia.net/tutorialhelp/comment/8278
In case any of the 7 regular readers here aren’t following xml-dev, check out and add to the discussion about Pragmatic Namespaces, proposed as a solution for the “distributed extensiblity” problem in HTML5. For years people have been pointing to Java as the model for how XML namespaces should work, so this proposal goes that direction. Either it will work, or else it will get people to finally shut up about the whole idea. :) It’s heavily based on Tom Bradford’s Clean Namespaces proposal, which doesn’t have a living URL anymore but is available on archive.org. -m
http://dubinko.info/blog/2009/07/
Continuations (which we'll study more in the coming weeks) and exceptions (like OCaml's `failwith "message"` or `raise Not_found`) are two kinds of side effects a language can have; **mutation** is another. This last notion is our topic this week.

## Mutation##

What is mutation? It's helpful to build up to this in a series of fragments. For present pedagogical purposes, we'll be using a made-up language that's syntactically similar to, but not quite the same as, OCaml. (It's not quite Kapulet either.) This should seem entirely familiar:

    [A] let y = 1 + 2 in
        let x = 10 in
        (x + y, 20 + y)         ; evaluates to (13, 23)

In our next fragment, we re-use a variable that had been bound to another value in a wider context:

    [B] let y = 2 in            ; will be shadowed by the binding on the next line
        let y = 3 in
        (10 + y, 20 + y)        ; evaluates to (13, 23)

Here the narrower binding of `y` to `3` *shadows* the wider binding of `y` to `2`; people sometimes loosely describe this as the variable `y` being "changed." But what we'll see below is a more exotic phenomenon that merits that description better.

In the previous fragments, we bound the variables `x` and `y` to `int`s. We can also bind variables to function values, as here:

    [C] let f = (\x y. x + y + 1) in
        (f 10 2, f 20 2)        ; evaluates to (13, 23)

If the expression that evaluates to a function value has a free variable in it, like `y` in the next fragment, it's interpreted as bound to whatever value `y` has in that expression's lexical context:

    [D] let y = 3 in
        let f = (\x. x + y) in
        let y = 2 in
        (f 10, y, f 20)         ; evaluates to (13, 2, 23)

Other choices about how to interpret free variables are also possible (you can read about "lexical scope" versus "dynamic scope"), but what we do here is the contemporary norm in functional programming languages, and seems to be easiest for programmers to reason about.

Sometimes bindings are shadowed merely in a temporary, local context, as here:

    [E] let y = 3 in
        let f = (\x.
                let y = 2 in    ; here the most local binding for y applies
                x + y) in
        ; here the binding of y to 2 has expired
        (y, f 10, y, f 20)      ; evaluates to (3, 12, 3, 22)

Notice that the `y`s in the tuple at the end use the outermost binding of `y` to `3`, but the `y` in `x + y` in the body of the `f` function uses the more local binding.

OK, now we're ready for our main event, **mutable variables.** We'll introduce new syntax to express an operation where we're not merely *shadowing* a wider binding, but *changing* or *mutating* that binding. The new syntax will show up both when we introduce the variable, using `var y = ...` rather than `let y = ...`; and also when we change `y`'s value using `set`.

    [F] var y = 3 in
        let f = (\x. set y to 2 then x + y) in
        ; here the change in what value y is bound to *sticks*
        ; because we *mutated* the value of the *original* variable y
        ; instead of introducing a new y with a narrower scope
        (y, f 10, y, f 20)      ; evaluates to (3, 12, 2, 22)

Notice the difference in how the second `y` is evaluated in the tuple at the end. By the way, I am assuming here that the tuple gets evaluated left-to-right. Other languages may or may not conform to that. OCaml doesn't always.

In languages that have native syntax for mutation, there are two styles in which it can be expressed. The *implicit style* is exemplified in fragment [F] above, and also in languages like C:

    {
        int y = 3;      // this is like "var y = 3 in ..."
        ...
        y = 2;          // this is like "set y to 2 then ..."
        return x + y;   // this is like "x + y"
    }

A different possibility is the *explicit style* for handling mutation. Here we explicitly create and refer to new "reference cells" to hold our values. When we mutate a variable's value, we leave the variable assigned to the same reference cell, but we modify that reference cell's contents.

    let ycell = ref 3   (* this creates a new reference cell *)
    ...
    in let () = ycell := 2  (* this changes the contents of that cell to 2 *)
                            (* the return value of doing so is () *)
                            (* other return values could also be reasonable: *)
                            (* such as the old value of ycell, the new value, an arbitrary int, and so on *)
    ...

In Scheme, the explicit style is written with "boxes", along roughly these lines:

    (let* ([ycell (box 3)])
      ...
      (set-box! ycell 2)
      (+ (unbox ycell) 3))

and the implicit style mutates a variable directly with `set!`:

    (let* ([y 3])
      ...
      (set! y 2)
      y)

The variables `y` in fragment [F] or in the C snippet above have the type `int`, and only ever evaluate to `int`s. One device for postponing a computation until we choose to perform it, familiar from our previous discussion, is to use **thunks**. These are functions that only take the uninformative `()` as an argument, such as this:

    let f () = ... in
    ...

or this:

    let f = fun () -> ... in
    ...

In Scheme these are written as functions that take 0 arguments:

    (let* ([f (lambda () ...)]) ...)

or:

    (define (f) ...)
    ...

How could such functions be useful? Well, as always, the context in which you build a function need not be the same as the one in which you apply it to arguments. So for example:

    let ycell = ref 1 in
    let incr_y () = ycell := !ycell + 1 in
    let y = !ycell in
    let () = incr_y () in
    y

We don't apply (or call or execute or however you want to say it) the function `incr_y` until after we've extracted `ycell`'s value and assigned it to `y`. So `y` will get assigned `1`. If on the other hand we called `incr_y ()` before evaluating `let y = !ycell`, then `y` would have gotten assigned a different value.

In languages with mutable variables, the free variables in a function definition are often taken to refer back to the same *reference cells* they had in their lexical contexts, and not just their original value. So if we do this for instance:

    let factory (starting_value : int) =
        let free_var = ref starting_value in
        (* `free_var` will be free in the bodies of the next two functions *)
        let getter () = !free_var in
        let setter (new_value : int) = free_var := new_value in
        (* here's what `factory starting_value` returns *)
        (getter, setter)

then the `getter` and `setter` functions share the single reference cell that `free_var` was bound to when they were built. If you've got a copy of *The Seasoned Schemer*, which we recommended for the seminar, see the discussion at pp. 91-118 and 127-137.
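To make the sharing concrete, here's a small runnable sketch of using `factory` (the bindings `before` and `after` are our own names, just for illustration):

    let factory (starting_value : int) =
        let free_var = ref starting_value in
        let getter () = !free_var in
        let setter (new_value : int) = free_var := new_value in
        (getter, setter)

    let () =
        let (getter, setter) = factory 1 in
        let before = getter () in   (* 1 *)
        setter 10;
        let after = getter () in    (* 10: both functions see the same cell *)
        assert (before = 1 && after = 10)

The mutation performed through `setter` is visible through `getter`, because both closures captured the same reference cell.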
If however you call the `factory` function twice, even if you supply the same `starting_value`, you'll get independent `getter`/`setter` pairs, each of which has its own, separate reference cell. Here's another example of a function whose body contains a free variable bound to a reference cell:

    let ycell = ref 1 in
    let plus_y x = x + !ycell in
    let first = plus_y 1 in     (* first is assigned the value 2 *)
    ycell := 2;
    let second = plus_y 1 in    (* second is assigned the value 3 *)
    first = second              (* not true! *)

Notice that the two invocations of `plus_y 1` deliver different results, even though they're given the same argument: the free variable `ycell` in `plus_y`'s body refers to the same reference cell throughout, and we mutated that cell's contents in between. If instead we had extracted the cell's current contents before building the function, the later mutation wouldn't be visible:

    let ycell = ref 1 in
    let y = !ycell in
    let plus_y x = x + y in
    ycell := 2;
    plus_y 0                    (* so evaluates to 1 *)

##How to implement explicit-style mutable variables##

We'll think about how to implement explicit-style mutation first. We suppose that we add some new syntactic forms to a language, let's call them `newref`, `getref`, and `putref`. And now we want to expand the semantics for the language so as to interpret these new forms.

Well, part of our semantic machinery will be an assignment function or environment, call it `e`. Another part will be something we might call a table or **store**. This might be a big heap of memory. For our purposes, we'll suppose that reference cells only ever contain `int`s, and we'll let the store be a list of `int`s. We won't suppose that the metalanguage we use to express the semantics of our mutation-language itself has any mutation facilities. Instead, we'll think about how to model mutation in a wholly declarative or functional or *static* metalanguage.

Previously, the interpretation of an expression was relativized just to an environment:

> \[[expression]]e = result
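In OCaml terms, that older store-less interpreter looked roughly like this (the `term` type here is our own simplification, just enough to run an example):

    type term = Num of int | Var of string | Add of term * term
    type env = (string * int) list

    (* interpretation relativized only to an environment e *)
    let rec eval term (e : env) = match term with
      | Num n -> n
      | Var var -> List.assoc var e
      | Add (t1, t2) -> eval t1 e + eval t2 e

    let () = assert (eval (Add (Var "y", Num 10)) [("y", 3)] = 13)

No store is threaded anywhere; `eval` just consults `e` and returns a bare result.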
We'll suppose that our interpretation function does this quite generally, even though for many expressions in the language, the store that's returned will be the same one that the interpretation function started with: > \[[expression]]e s = (result, s') For expressions we already know how to interpret, you can by default term e s = match term with ... | Let (var, expr1, expr2) -> let (res1, s') = eval expr1 e s (* s' may be different from s *) (* now we evaluate expr2 in a new environment where var has been associated with the result of evaluating expr1 in the current environment *) eval expr2 ((var, res1) :: e) s' ... Similarly: ... | Apply (Apply(PrimitiveAddition, expr1), expr2) -> let (res1, s') = eval expr1 e s in let (res2, s'') = eval expr2 e s' in (res1 + res2, s'') ... Let's consider how to interpet our new syntactic forms `newref`, `getref`, and `putref`: 1. When `expr` evaluates to `starting_val`, then * result term e s = match term with ... | Newref (expr) -> let (starting_val, s') = eval expr e s in (* note that s' may be different from s, if expr itself contained any mutation operations *) (* now we want to retrieve the next free index in s' *) let new_index = List.length s' in (* now we want to insert starting_val there; the following is an easy but inefficient way to do it *) let s'' = List.append s' [starting_val] in (* now we return a pair of a wrapped new_index, and the new store *) (Index new_index, s'') ... 2. When `expr` evaluates to a `store_index`, then **getref expr** should evaluate to whatever value is at that index in the current store. (If `expr` evaluates to a value of another type, `getref expr` is undefined.) In this operation, we don't change the store at all; we're just reading from it. So we'll return the same store back unchanged (assuming it wasn't changed during the evaluation of `expr`). let rec eval term e s = match term with ... 
        | Getref (expr) ->
            let (Index n, s') = eval expr e s in
            (* s' may be different from s, if expr itself contained any mutation operations *)
            (List.nth s' n, s')
        ...

3. When `expr1` evaluates to a `store_index` and `expr2` evaluates to an `int`, then **putref expr1 expr2** should have the effect of changing the store so that the reference cell at that index now contains that `int`. We have to make a decision about what result the `putref ...` call should itself evaluate to; OCaml makes this `()` but other choices are also possible. Here I'll just suppose we've got some appropriate value in the variable `dummy`.

        let rec eval term e s = match term with
            ...
            | Putref (expr1, expr2) ->
                let (Index n, s') = eval expr1 e s in
                (* note that s' may be different from s, if expr1 itself contained any mutation operations *)
                let (new_value, s'') = eval expr2 e s' in
                (* now we create a list which is just like s'' except it has new_value in index n *)
                (* the following could be expressed in Juli8 as `modify m (fun _ -> new_value) xs` *)
                let rec replace_nth xs m = match xs with
                    | [] -> failwith "list too short"
                    | x::xs when m = 0 -> new_value :: xs
                    | x::xs -> x :: replace_nth xs (m - 1) in
                let s''' = replace_nth s'' n in
                (dummy, s''')
            ...

##How to implement implicit-style mutable variables##

In the implicit style, we don't have separate syntactic forms `newref`, `putref`, and `getref`. Instead, we just treat ordinary variables as being mutable. You could if you wanted to have some variables be mutable and others not; perhaps the first sort are written in Greek and the second in Latin. But for present purposes, we'll suppose that all of our variables are mutable, and that they're introduced with `var x = expr1 in expr2`. We will also have just one new syntactic form, `set x to expr1 then expr2`. (The `then` here is playing the role of the sequencing semicolon in OCaml.)

Here's how to implement these. We'll suppose that our assignment function is a list of pairs, as above and as in [week7](/reader_monad_for_variable_binding).

    let rec eval term e s = match term with
    ...
    | Var (var : identifier) ->
        let index = List.assoc var e in
        (* retrieve the value at that index in the current store *)
        let res = List.nth s index in
        (res, s)

    (* instead of `let x = ...` we now have `var x = ...`, for which I'll use the `Letvar` tag *)
    | Letvar ((var : identifier), expr1, expr2) ->
        let (starting_val, s') = eval expr1 e s in
        (* get next free index in s' *)
        let new_index = List.length s' in
        (* insert starting_val there *)
        let s'' = List.append s' [starting_val] in
        (* evaluate expr2 using a new assignment function and store *)
        eval expr2 ((var, new_index) :: e) s''

    | Set ((var : identifier), expr1, expr2) ->
        let (new_value, s') = eval expr1 e s in
        (* lookup which index is associated with Var var *)
        let index = List.assoc var e in
        (* now we create a list which is just like s' except it has new_value at index *)
        let rec replace_nth xs m = match xs with
            | [] -> failwith "list too short"
            | x::xs when m = 0 -> new_value :: xs
            | x::xs -> x :: replace_nth xs (m - 1) in
        let s'' = replace_nth s' index in
        (* evaluate expr2 using original assignment function and new store *)
        eval expr2 e s''

##How to implement mutation with a State monad##

It's possible to do all of this monadically, instead of adding new syntactic forms and new interpretation rules to a language. Recall how we implemented environments with the Reader monad:

    type env = (identifier * int) list
    (* alternatively, an env could be implemented as type identifier -> int *)

    type 'a reader = env -> 'a

    let reader_mid (x : 'a) : 'a reader = fun e -> x

    let reader_mbind (xx : 'a reader) (k : 'a -> 'b reader) : 'b reader =
        fun e -> let x = xx e in
                 let yy = k x in
                 yy e

    type store = int (* very simple store, holds only a single int *)
                     (* this corresponds to having only a single mutable variable *)

    type 'a state = store -> 'a * store

    let state_mid (x : 'a) : 'a state = fun s -> (x, s)

    let state_mbind (xx : 'a state) (k : 'a -> 'b state) : 'b state =
        fun s -> let (x, s') = xx s in
                 let yy = k x in
                 yy s'

Notice the similarities (and differences) between the implementation of these two
monads.

With the Reader monad, we also had some special-purpose operations, beyond its general monadic operations. Two to focus on were `asks` and `shift`. We would call `asks` with a helper function like `lookup "x"` that looked up a given variable in an environment. And we would call `shift` with a helper function like `insert "x" new_value` that operated on an existing environment to return a new one.

With the State monad, we'll also have some special-purpose operations. We'll consider two basic ones here. One will be to retrieve what is the current store. This is like the Reader monad's `asks (lookup "x")`, except in this simple implementation there's only a single location for a value to be looked up from. Here's how we'll do it:

    let state_get : store state = fun s -> (s, s)

This passes through the current store unaltered, and also returns a copy of the store as its payload. (What exactly corresponds to this is the simpler Reader operation `ask`.) We can use the `state_get` operation like this:

    some_existing_state_monad_value >>= fun _ -> state_get >>= (fun cur_store -> ...)

The `fun _ ->` part here discards the payload wrapped by `some_existing_state_monad_value`. We're only going to pass through, unaltered, whatever *store* is generated by that monadic box. We also wrap that store as *our own payload*, which can be retrieved by further operations in the `... >>= ...` chain, such as `(fun cur_store -> ...)`. As we've mentioned elsewhere, `xx >>= fun _ -> yy` can be abbreviated as `xx >> yy`.

The other operation for the State monad will be to update the existing store to a new one. This operation looks like this:

    let state_put (new_store : store) : dummy state = fun s -> (dummy, new_store)

If we want to stick this in a `... >>= ...` chain, we'll need to prefix it with `fun _ ->` too, like this:

    some_existing_state_monad_value >>= fun _ -> state_put 100 >>= ...

Or:

    some_existing_state_monad_value >> state_put 100 >>= ...
In this usage, we don't care what payload is wrapped by `some_existing_state_monad_value`. We don't even care what store it generates, since we're going to replace that store with our own new store.

A more complex kind of `state_put` operation might insert not just some constant value as the new store, but rather the result of applying some function to the existing store. For example, we might want to increment the current store. Here's how we could do that:

    some_existing_state_monad_value >>
        state_get >>= (fun cur_store -> state_put (succ cur_store)) >>= ...

We can define more complex functions that perform the `state_get >>= (fun cur_store -> state_put (succ cur_store))` part as a single operation. In the Juli8 and Haskell monad libraries, this is expressed by the State monad operation `modify succ`.

In general, a State monadic **value** (type `'a state`, what appears at the start of a `... >>= ... >>= ...` chain) is an operation that accepts some starting store as input --- where the store might be simple as it is here, or much more complex --- and returns a payload plus a possibly modified store. This can be thought of as a static encoding of some computation on a store, which encoding is used as a box wrapped around a value of type `'a`. (And also it's a burrito.)

State monadic **operations** or Kleisli arrows (type `'a -> 'b state`, what appears anywhere in the middle or end of a `... >>= ... >>= ...` chain) are operations that generate new State monad boxes, based on what payload was wrapped by the preceding elements in the `... >>= ... >>= ...` chain. The computations on a store that such operations encode (which their payloads may or may not be sensitive to) will be chained in the order given by their position in the `... >>= ... >>= ...` chain.
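To make the chaining concrete, here is a small self-contained sketch (not from the text) that re-declares the State monad pieces and runs one chained computation. It takes two liberties for illustration: `>>=` is declared directly as an infix operator with `state_mbind`'s behavior, and `unit` is used in place of the text's `dummy` type.

```ocaml
type store = int
type 'a state = store -> 'a * store

let state_mid (x : 'a) : 'a state = fun s -> (x, s)
let ( >>= ) (xx : 'a state) (k : 'a -> 'b state) : 'b state =
  fun s -> let (x, s') = xx s in k x s'
let state_get : store state = fun s -> (s, s)
let state_put (new_store : store) : unit state = fun _ -> ((), new_store)

(* read the store, overwrite it with ten times itself, then wrap a payload *)
let computation : int state =
  state_get           >>= fun s0 ->
  state_put (s0 * 10) >>= fun () ->
  state_get           >>= fun s1 ->
  state_mid (s1 + 1)

let () =
  let (payload, final_store) = computation 5 in
  Printf.printf "payload = %d, store = %d\n" payload final_store
```

Evaluating `computation 5` threads the store left to right: `state_get` sees `5`, `state_put` replaces it with `50`, and the final payload is `51`.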
That is, the computation encoded by the first element in the chain will accept a starting store `s0` as input, and will return (a payload and) a new store `s1` as output, the next computation will get `s1` as input and will return `s2` as part of its output, the next computation will get `s2` as input, ... and so on. To get the whole process started, the complex computation so defined will need to be given a starting store. So we'd need to do something like this:

    let computation = some_state_monad_value >>= operation >>= operation
    in computation initial_store

*   See also our [[State Monad Tutorial]]. LINK TODO

##Some grades of mutation involvement##

Programming languages tend to provide a bunch of mutation-related capabilities at once, if they provide any. For conceptual clarity, however, it's helped me to distill these into several small increments. This is a list of some different ways in which languages might involve mutation-like idioms. (It doesn't exhaust all the interesting such ways, but only the ones we've so far touched on.)

*   At the zeroth stage, we have a purely functional language, like we've been working with up until this week.

*   One increment would be to add implicit-style mutable variables, as we explained above. An expression like `var x = 1 in ...` will associate a reference cell with `x`. That won't be what `x` evaluates to, but it will be what the assignment function *binds* `x` to, behind the scenes. However, in a language with implicit-style mutation, what you're clearly not able to do is to return a reference cell as the result of a function call, or indeed of any expression. This is connected to --- perhaps it's the same point as --- the fact that `x` doesn't evaluate to a reference cell, but rather to the value that the reference cell it's implicitly associated with contains, at that stage in the computation.

*   Another increment would be to add explicit-style mutable variables, where `var x = value in ...` (or `set x = value`) works with reference cells as first-class values, which can be returned from expressions and passed around.

    **An aside about identity**: once reference cells are in the picture, we need to distinguish *numerical identity* (sometimes called "physical" identity), which OCaml writes `==` (negated `!=`), from *qualitative indiscernibility*, which OCaml writes `=` (negated `<>`; some languages write inequality as `~=`).
In the following fragment:

    let ycell = ref 1 in
    let xcell = ref 1 in
    let zcell = ycell in
    ...

If we express numerical identity using `==`, as OCaml does, then this (and its converse) would be true:

    ycell == zcell

but these would be false:

    xcell == ycell
    xcell == zcell

If we express qualitative indiscernibility using `=`, as OCaml does, then all of the salient comparisons would be true:

    ycell = zcell
    xcell = ycell
    xcell = zcell        (* of course true *)
    ycell != ref !ycell  (* true, these aren't numerically identical *)
    ycell = ycell        (* of course true *)

The same odd pattern shows up in many other languages, too. In Python, `y = []; (0, 1, y) is (0, 1, y)` evaluates to false. In Racket, `(define y (box 1)) (eq? (cons 0 y) (cons 0 y))` also evaluates to false (and in Racket, unlike traditional Schemes, `cons` creates immutable pairs). All these languages chose an implementation for their numerical identity predicates that is especially efficient and does the right thing in the common cases, but doesn't quite match our mathematical expectations.

Something similar holds for function values. Suppose we build two pairs of closures `(getter, setter)` and `(getter', setter')` over two different reference cells holding the same starting value, and define `adder` and `adder'` from them in the same way. At the outset, `getter` and `getter'` may well be qualitatively indiscernible; you just wouldn't be able to establish so. However, they're not numerically identical, because by calling `setter 2` (but not calling `setter' 2`) we can mutate the function value `getter` (and `adder`) so that it's *no longer* qualitatively indiscernible from `getter'` (or `adder'`).

There are several more layers of complexity to the way different languages engage with mutation. But this exhausts what we're in a position to consider now.

##Miscellany##

*   When using mutable variables, programmers will often write computations in an *imperatival* style. In a functional language, the same work is usually done with an accumulator instead, as in:

        let factorial n =
            let rec aux n sofar =
                if n = 0 then sofar
                else aux (n - 1) (n * sofar)
            in aux n 1

    This is often referred to as an *iterative* as opposed to a *recursive* algorithm.

*   If you've got a copy of *The Seasoned Schemer*, which we recommended for the seminar, see the discussion at pp. 118-125.
Is it possible to convert a string to a function? If so, how? A simple snippet would be helpful. Thanks.

convert a string to a function... No. Maybe if you explained yourself a little better, that might actually make sense.

<... explain yourself better..>

OK, if I have:

Code:
string sf1="1+2-3";
string sf2="1-2+3";
// maybe I'll make these arrays of functions
// how do I take sf1 and sf2 usable functions?
f1=sf1;
f2=sf2;

The problem is that I'm given a string of numbers "123". I must find out how many different combinations of '+' and '-' will be less than or equal to a target number, say '9'. I can easily insert spaces or even '+' or '-' between the numbers, eg.

Code:
string sf1="123",sf2,sf3;
int noNumbs=0;
noNumbs=sf1.size();
for(int i=0; i<noNumbs-1; i++){
    sf2=sf1[i]+"+"; // or something like this
}

Now the only problem is to change the numbers back to integers and the '+' and '-' back to operators. I can use strtol to convert the integer characters to integers, but how do I convert the '+' and '-' to operators?

Last edited by kes103; 01-23-2003 at 12:53 PM.

I'm still confused. You cannot make a function out of a string. Now if you wanted to make an array of strings, that would be something different altogether. C++ is compiled, it has no eval() function (like in perl). Parsing and evaluating expressions in your strings would have to be done by hand.
Doing so is quite beyond the scope of what can be easily taught over a message board.

See changes made to my second post above.

You'll need to parse the string to get your results. One way is to use a recursive function, and I've written this problem once this way but I need to find it first before I can show it. Another way, if double digits are not allowed (ie "123" cannot be 12-3), is to parse the digits into a vector or array, then use those numbers to solve the problem and not worry about the string anymore.

I programmed something like that in python. (given a set of numbers, find some sequence of operations on the given numbers (+ - * /) that produces the answer 24). If you are not dead set on C/C++ for this problem, 'scripting' languages (python, perl, etc) offer much cleaner solutions to this problem. Basically, the problem breaks down into enumerating all possibilities (somewhat easier with a scripting language with builtin lists), and then testing them all with eval (non-existent in C++). The python code for my somewhat similar problem is below; I don't know if it will help, however.

Code:
import probstat
import time

QUESTION=0
START_GAME=1
END_GAME=2
OTHER_TYPE=3

def identify_line(line):
    """ Identifies a line of IRC text as either a QUESTION, START_GAME,
    END_GAME or OTHER_TYPE """
    splitlines = line.split(' ')
    if len(splitlines) == 15:
        question_test = "Calculate 24 with + - * / () and the numbers".split(' ')
        for i in range(11):
            if splitlines[i] != question_test[i]:
                return OTHER_TYPE
        return QUESTION
    elif len(splitlines) == 11:
        if line == "Starting a game with 5 turns to win needed to win":
            return START_GAME
    elif line[0:3] == '-=-' and line[-3:] == '-=-':
        return END_GAME
    return OTHER_TYPE

def solve_24line(line):
    """ Solve line from 24 bot.
    Assumes line is valid question, and does no checking """
    splitlines = line.split(' ')
    target = 24
    numlist = []
    try:
        for i in range(1,5):
            numlist.append(float(splitlines[-i][0]))
    except ValueError:
        return ''
    return find_solution(numlist, target)

def time_test(line):
    start = time.clock()
    if identify_line(line) == QUESTION:
        print solve_24line(line)
    print time.clock() - start

def main():
    time_test("Calculate 24 with + - * / () and the numbers 2, 7, 4, 1")
    time_test("Calculate 24 with + - * / () and the numbers 4, 3, 4, 1")
    time_test("Calculate 24 with + - * / () and the numbers 6, 2, 8, 2")

def build_expr(list_of_num, list_of_ops, cur, dec_format='%f'):
    """ Assumed len(list_of_num) == len(list_of_ops) + 1 """
    if cur==0:
        return ((dec_format+'%s'+dec_format)) % (list_of_num[1], list_of_ops[0], list_of_num[0])
    else:
        return ((dec_format+'%s(%s)') % (list_of_num[cur+1], list_of_ops[cur],
                build_expr(list_of_num, list_of_ops, cur-1, dec_format)))

def find_solution(numlist, target, operators="+-*/", dec_format="%d"):
    """ Return an expression as a string containing every number in numlist
    using only the basic arithmetic operators (+,-,/,*) and parenthesis that
    evaluates to target, else return "" """
    # there are n - 1 binary operators for n numbers
    all_op_possibilities = direct_product(operators, len(numlist)-1)
    for oper_seq in all_op_possibilities:
        for num_seq in probstat.Permutation(numlist):
            # build and evaluate the function with floats, so the non C based division isn't used
            # IE, the bot thinks 7/3 is not 2, but rather 2 and 1/3... lame bot :)
            # The work around is to calculate with floats, use some small range for error
            # and see if the expression is within the error.
            float_expr = build_expr(num_seq, oper_seq, len(oper_seq)-1)
            try:
                float_result = eval(float_expr)
                if float_result > target - .1 and float_result < target + .1:
                    return build_expr(num_seq, oper_seq, len(oper_seq)-1, dec_format)
            except ZeroDivisionError:
                pass
    return ""

def direct_product(source,seq_len):
    """ Return list of sequences containing all possible 'enumerations' of
    list with the given components. The returned list will contain
    len(source)^seq_len sublists of length seq_len
    IE. direct_product([1,2],2) should return [[1, 1], [1, 2], [2, 1], [2, 2]]
    """
    if seq_len == 1:
        return [[item] for item in source]
    else:
        sub_product = direct_product(source, seq_len-1)
        return [[item]+tail for item in source for tail in sub_product]

if __name__ == '__main__':
    main()

Thanks, PJYelton!

SilentStrike, is Java considered to be a scripting language?

Well, apparently, Java has no eval. Writing your own eval is likely harder than any other part of the solution.

SilentStrike, what about C#?

>>Is it possible to convert a string to a function?
Not in C++. Why not use a language that makes such a feat trivial... Perl comes to mind. :-)

Code:
#!/usr/bin/perl -w

print "Enter a function: ";
chomp($sfunc = <STDIN>);
print eval $sfunc, "\n";

*Cela*
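For the record, kes103's original '+'/'-' case doesn't need an eval at all; the string can be walked by hand in a few lines of C++. Here is a sketch of that idea (the function name is invented for illustration; it assumes single-digit operands, as in "1+2-3"):

```cpp
#include <cassert>
#include <string>

// Evaluate a left-to-right chain of single-digit '+'/'-' operations,
// e.g. "1+2-3" -> 0. No precedence handling is needed because only
// + and - appear.
int evalPlusMinus(const std::string& expr)
{
    int result = expr[0] - '0';           // first digit
    for (std::string::size_type i = 1; i + 1 < expr.size(); i += 2) {
        int operand = expr[i + 1] - '0';  // digit after the operator
        if (expr[i] == '+')
            result += operand;
        else
            result -= operand;
    }
    return result;
}
```

With this in place, counting how many '+'/'-' arrangements of "123" stay at or under a target is just a matter of generating the 2^(n-1) operator combinations and testing each one.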
This article will introduce you to Behaviors in Silverlight. We will also create a small demo of the 'MouseDragElementBehavior' behavior using Expression Blend 4. So let's get started.

We will first create a Silverlight project using 'Microsoft Expression Blend 4.0'. Click on Start > Microsoft Expression > Microsoft Expression Blend 4 as shown below:

Create a new project and give it the name 'BehaviorTesting' as shown below:

Now let's start with changing the background color of the layout. From the Objects and Timeline window, choose 'LayoutRoot'. Now from the Properties window, change the following properties:

1) Background color – Black.
2) Width – 600.
3) Height – 500.

Now right click 'LayoutRoot' from the 'Objects and Timeline' window and click on Change Layout Type to 'Canvas'.

Now let's take a quick overview of the out-of-box 'Behaviors' in Silverlight 4.0. Shown below is a list of behaviors:

These behaviors are available out-of-the-box in Silverlight 4.0, within the DLL Microsoft.Expression.Interactions.dll. Now let's see what a Behavior is, and how effectively we can use the behaviors listed above in our Silverlight applications.

What is a Behavior?

A Behavior is a new way of adding interactivity without writing code: behaviors are typically written by developers and then used by designers in Microsoft Expression Blend 3.0/4.0. Behaviors make interactivity much simpler for designers, and developers can write their own custom behaviors which can be reused across different Silverlight and MVVM applications.

So let's try out 'MouseDragElementBehavior' by creating a simple demo. For this we need an image which we will slice into a number of pieces using Microsoft Expression Designer. I am using an image of the king of the jungle, the LION, as shown below:

Now let's open Microsoft Expression Designer and use the slice tool to cut our image into a number of pieces.

Step 1: Open Expression Designer from Microsoft Expression Studio.
Click on the File > New menu and give a size to the image as shown below:

Click the 'OK' button. Now drag and drop the image onto Expression Designer and adjust its height and width to the height and width of the document which we created above.

Step 2: Now let's use the 'Slice' tool from the tool box as shown below:

Now slice your image into 50 by 50 pieces as shown below:

Once you slice your image into a number of pieces, select all the pieces by pressing 'Ctrl + A'. Then go to the 'File' menu and click on 'Export', which will bring up an 'Export' dialog box. Make sure that the image format is 'JPEG' and that the path for the images is set properly. Now click on the 'Export All' button as shown below:

Step 3: Now let's get back to the Silverlight project which we created above. Draw lines which will contain our sliced images on the 'canvas' by specifying the 'Left' and 'Top' properties of the Line control, as shown below.

Once you are done with the above design, create a folder under the Silverlight project and add all the sliced images into that folder. Now let's add all the images, with their source (the sliced image) and the 'MouseDragElementBehavior' behavior, dynamically on the above screen. For this, you will have to write code in the constructor of our code file, that is, the 'MainPage.xaml.cs' file.

First of all, import the required namespaces. Then declare two variables at the class level:

int k = 81;
int m = 25;

Now add some code in the constructor of the 'MainPage' class. The above code positions all the images dynamically into each square drawn on the canvas layout, as shown above. It also adds 'MouseDragElementBehavior' on each image so that we can set the positions of our images by dragging and dropping the images at accurate places. We have also set the z-index of each image. Now when you run the application, it will look similar to the following:
Make a note that we are not writing a single line of code to drag and drop the images. Before Silverlight 3.0 or 4.0, we needed to write a couple of extra lines to achieve this goal. Some Other Out-of-box behaviours Now let’s see some other out-of-box behaviors. I have created a separate Silverlight project for the demonstration. Let’s draw a rectangle (Width – 255 and Height – 180) and fill it with blue color as shown below Now let’s create an animation. Choose a rectangle and go to properties window. From the properties window > ‘Transform’ section. Now from the ‘Projection’ group change the ‘Center Of Rotation’ as shown below Now let’s create a ‘Storyboard’ from the ‘Objects and Timeline’ window as shown below: Now extend the ‘Record Key frame’ till 5 seconds. By keeping rectangle selected, from the ‘Projection’ section of properties window change the ‘Y’ angle from the ‘Rotation’ section till 180 degree as shown below: You can test your animation by playing it from objects and Timeline window as shown below: After testing the animation, drag and drop ‘ControlStoryBoardAction’ behavior on the rectangle. This behavior can be used to Play, Pause, Resume, Stop the animations on the triggers performed against the objects. Now your properties window will look similar to the following: If you observe, ‘EventName’ property has a dropdown list from where we can make a choice of event which can be fired against rectangle. Now if you observe the second property ‘ControlStoryBoardOption’ property, you can choose Play, Pause, Resume, Stop etc. actions. The third property will decide which storyboard to be played against the action of a rectangle, that is ‘Storyboard’ property. Now let’s hit ‘F5’ and test our behavior by clicking the rectangle. It should start our ‘Rect3D’ animation. Likewise there are multiple behaviors available out-of-box with Silverlight 4.0 and Expression Blend 4.0. 
Conclusion In this article, we took an overview of the different types of behaviors and how we can use them using Microsoft Expression Blend 4.0. The entire source code of this article can be downloaded over here
I am trying to set up a heuristics file for a study with field maps, but I'm still confused after looking through the heudiconv example files and relevant parts of the BIDS 1.0.1 specification paper. The study acquires a field map before each functional/dwi scan type (i.e., one for each of four tasks and one for the dwi). The field maps correspond to case 4 in the BIDS specification, with two maps with different phase encoding directions.

First, should I use the "acq" field to denote the scan type the field map corresponds to? E.g., sub-01_ses-01_acq-rest_dir-PA_run-01_epi.nii.gz instead of sub-01_ses-01_dir-PA_run-01_epi.nii.gz

Second, the different directions are showing up as separate runs. Is that expected? E.g., sub-01_ses-01_acq-rest_dir-PA_run-01_epi.nii.gz and sub-01_ses-01_acq-rest_dir-AP_run-02_epi.nii.gz

Finally, and most importantly, how exactly do I add the "IntendedFor" field to the json? Can I do it in the heuristics file?

1) Personally I say yes, the acq field is useful to identify which scan each fieldmap corresponds to - but it's not necessary as the IntendedFor in the json is the ultimate identifier.

2) This may be happening because of double substitution in your heuristic file. In this heuristic, item is incremented for every fieldmap, regardless of whether the run is PA/AP. In another example, these keys are separated in order to generate sequential runs for each direction. AFAIK, both ways are fine.

3) This hasn't been implemented in heudiconv yet, we're working on finding the best way to add this. Within the heuristics file is a great idea though! For my datasets, I've been adding it to the JSON after conversion.

Hope this helps!

Thank you. Your answers are very helpful. 1 and 2 are pretty clear now, but I'm still stuck on 3. When you figure out "IntendedFor" post-conversion, how do you know which functional files correspond to each field map from the converted files? Do you compare the "AcquisitionDateTime" values from the jsons?
I've only worked with two acquisitions of field maps, but took advantage of the acq-label from 1) to differentiate the two.

I have cobbled together something that should assign the appropriate scans for each field map based on the acquisition times, but I have a list of the absolute paths to the selected scan files. Do you know of a way to convert the absolute path to the necessary relative path with grabbids?

@tsalo I'm not sure there is a grabbids function for that - what I have been doing is splitting the path I defined as the root of my project.

import os

base = '/path/to/root'
rel_fmap = [x.split(base)[-1] for x in fieldmaps] # where fieldmaps is list of abspaths

Thanks. That works. I've created a gist that adds the "IntendedFor" field to the fmap jsons based on acquisition time. I've tested it out on a couple of subjects, but I don't know how well it will work on others. I'll try to convert it to a real function at some point.
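Building on the path-splitting trick above, a minimal sketch of the post-conversion step might look like the following (the function names are assumptions, not heudiconv's API; the BIDS spec expects IntendedFor paths to be relative to the subject directory, with forward slashes):

```python
import json

def make_intended_for(func_abspaths, subject_dir):
    """Turn absolute paths to functional runs into the relative paths
    BIDS expects in a fieldmap sidecar's "IntendedFor" field
    (relative to the subject directory, e.g. "ses-01/func/...")."""
    return [p.split(subject_dir)[-1].lstrip('/') for p in func_abspaths]

def add_intended_for(sidecar_path, func_abspaths, subject_dir):
    """Insert the "IntendedFor" field into an existing fieldmap sidecar JSON,
    preserving all of its other metadata."""
    with open(sidecar_path) as f:
        meta = json.load(f)
    meta['IntendedFor'] = make_intended_for(func_abspaths, subject_dir)
    with open(sidecar_path, 'w') as f:
        json.dump(meta, f, indent=4, sort_keys=True)
```

Pairing each fieldmap with the right runs (e.g. by comparing acquisition times, as discussed above) is still up to the caller.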
As part of a recent personal journey to better understand databases and better learn Rust, I have recently taken on the project of writing a simple key-value storage engine. Crazy, right? Let's get started!

The first thing one must think of when writing a key-value storage engine is: "How should I store my data so that I can perform reads and writes efficiently?" The two most common approaches for storing data on disk are B-Trees and LSM Trees. B-Trees have been around for five decades and are used in "traditional" SQL databases such as PostgreSQL as an index on tables…

I have been writing Go professionally for about two years now as part of a large corporation. Recently I have decided to pick up Rust; I started the way I usually approach picking up new languages — writing a small project while reading the documentation (in this case — the Rust book). My first impression of the language was how "large" and verbose it is compared to Go. It took me a month and a half to finish "The Book". For comparison, "A Tour of Go" took me about a week. Go is simply a "smaller" language with fewer features…

The simplest way to set up Openshift locally is using "CodeReady Containers". It provides a minimal "Openshift 4" installation on a VM running locally. Follow the steps in the link above to set up your cluster.

To set up Prometheus, let's first create a namespace for it:

oc new-project monitoring && oc project monitoring

Next, we need to deploy Prometheus:

oc new-app prom/prometheus

The above command will pull the latest Prometheus image to our cluster's registry and create a "Deployment" of a single pod running Prometheus. There are several ways to expose an application running on Openshift, I find the simplest way…
https://nimrodshn.medium.com/?source=post_internal_links---------4----------------------------
CC-MAIN-2021-17
refinedweb
312
66.27
Progressively Enhance a Form to a Modal Form With jQuery

Step 1: Decide on the Project Goals

Before starting any journey, it helps (most times) to have a destination. The goal of this project is to take a standard link to a page containing a contact form and enable that form to pop up on the current page in a modal dialog. There are several reasons for this approach:

- If the user has JavaScript disabled, they are sent to the contact form page as usual.
- Only one version of the form must be maintained.
- The additional content (the form) can be loaded asynchronously.

Step 2: List the Tools

To write this from scratch in raw JavaScript would be a lot of code. Fortunately for us, there are existing tools we can leverage to make the task easier. This tutorial relies on:

- jQuery
- jQuery UI (for its $.dialog widget)

To make this code as reusable as possible, we'll write a plug-in. If you are unfamiliar with authoring a plug-in, you can get an introduction from Jeffrey Way's article here on Nettuts+. The modal functionality will come from jQuery UI's $.dialog.

Step 3: Design the Plug-in Interface

We're going to follow the normal pattern for a jQuery plug-in: calling the plug-in on a selector and setting options via array. What options are needed? There will be options both for the modal window and for the plug-in itself. We're going to expect the plug-in to be called on an anchor, and enforce that in the code.

$('a.form_link').popUpForm({
    container : '',
    modal : true,
    resizeable : false,
    width : 440,
    title : 'Website Form',
    beforeOpen : function(container) {},
    onSuccess : function(container) {},
    onError : function(container) {}
});

Examining the options

Container: This is how the plug-in user will specify the ID of the form on the remote page. The link itself specifies the page, but the container option will allow us to fetch the relevant part. This will be the only required option when calling the plug-in.

Modal, Resizeable, Width, Title: These options are all going to be passed along to jQuery UI's
The values above are defaults and the plug-in will run just fine without any of these being set when $.popUpForm is called. beforeOpen, onSuccess, onError: These are all callbacks, and expect a function. The function will be passed the object for the link that was clicked as 'this' and the container to which that link is targeted. Callbacks are designed to allow custom functionality for the users of a plug-in. The default for these callbacks will be an empty function. The minimum code required to use the plug-in would then look like this: $('a.form_link').popUpForm({ container : '#form_id' }); That seems simple, doesn't it? When you call a plug-in like this, the plug-in's code is called with a jQuery collection of all the DOM elements matching the selector, which will be available in the special variable 'this'. Step 4: The Plug-In's Skeleton Most jQuery plug-ins follow a very similar pattern. They iterate over the group of selectors and do whatever it is they do. I've got a basic plug-in "outline" I generally work from, and it will fit in here nicely. This would be the start of your plug-in file, popUpForm.jquery.js. (function($) { $.fn.popUpForm = function(options) { // Defaults and options var defaults = { container : '', modal : true, resizeable : false, width : 440, title : 'Website Form', beforeOpen : function(container) {}, onSuccess : function(container) {}, onError : function(container) {} }; var opts = $.extend({}, defaults, options); self.each(function() { // The REAL WORK happens here. // Within the scope of this function 'this' refers to a single // DOM element within the jQuery collection (not a jQuery obj) }); } })(jQuery); The code is wrapped in a self-executing function, and adds itself to jQuery using the $.fn namespace. The identifier following $.fn is the method name you'll use to invoke it. We're also following good coding practices by passing in the jQuery variable explicitly. 
This will keep us from getting into trouble if the plug-in is used on a page with other JavaScript frameworks, some of which use $ as a variable.

Next, an array of default values is created, and these defaults will be used if they aren't defined when the plug-in is called. The line immediately following the defaults array merges the passed in options with the defaults and stores them all in the opts array.

Finally, a loop is created for iterating over the jQuery collection identified by the selector when the plug-in is called. While chances are in most situations it will be a single item (an anchor), it will still handle multiple links with a single call - assuming they all load the same form.

An important thing to understand is that the value of the special variable 'this' changes when we enter the self.each loop; each() is a special jQuery method designed to make looping DOM collections easier. The callback function uses the context of the current DOM element, so the variable 'this' refers to that element within the loop. You can see in a very simple example how 'this' refers to a jQuery collection of jQuery objects in the plug-in function scope, but inside the each loop, 'this' refers to a single, non-jQuery DOM element.

Step 5: Starting the Guts

The code for the next few sections is all contained within the self.each block of our skeleton. What do we do now?
For each jQuery element passed in, there are going to be several steps to take:

- Make sure it is a link, and that it goes somewhere
- Fetch the part of the remote page specified
- Attach the remote form to the page, and create a hidden dialog for it
- Steal the link so it creates our pop-up
- Handle form submissions AJAX style

Before doing any of that, however, we're going to add one line of code inside the callback, at the very top:

var $this = $(this);

This is more than just convenience; the variable 'this' will go out of scope in any closures within the each loop, and we're going to need access to the current object later. Since we'll almost always want it as a jQuery object, we're storing it as one.

Step 6: Make Sure the Element Is Valid

$.popUpForm is only going to operate on anchor tags, and the anchor tag must have a href value so we know where to fetch the form from. If either of those conditions is not met, we're going to leave the element alone. The second line of our 'guts' will be:

if (!$this.is('a') || $this.attr('href') == '') { return ; }

Some people hate multiple return points in a function, but I've always found having one at the start can make a function more readable, as opposed to using an if(condition) to wrap the rest of the function. Performance-wise, they're identical.

Step 7: Fetch the Form From the Remote Page

The $.load method has nice functionality that allows a call to specify an ID in order to only attach part of a fetched document. The script won't attach the returned HTML directly to the DOM, because $.load only overwrites, it doesn't append.

var SRC = $this.attr('href') + ' ' + opts.container;
var formDOM = $("<div />").load(SRC, function() {

The variable opts.container has the ID of the form element on the remote page. The second line loads this remote page, and attaches the form and its contents to a div, the entirety of which is stored in the variable formDOM.
Notice that $.load includes a callback (the function) -- we'll use formDOM inside that callback. Step 8: Attach the HTML and Create the Dialog Inside the $.load callback, the code is going to attach the form, override the click event of the anchor, and override the submit event of the form. The form's HTML is stored in the formDOM variable at this point, and attaching it to the existing page is easy. $('#popUpHide').append(formDOM); The id #popUpHide refers to a hidden div that will attached to the page by the plug-in. In order to provide that div, the following line will be added at the top of the plug-in. If it already exists, we don't recreate it. $("#popUpHide").length || $('<div id="popUpHide" />').appendTo('body').css('display','none'); Now that the form is hidden safely away on our page, it is time to use a call to the $.dialog method to create the form. Most of the set-up params are taken from our plug-in. The 'autoopen' option is hard coded since we want the dialog to open when the link is clicked, and not when the dialog is created. // Create and store the dialog $(opts.container).dialog({ autoOpen : false, width : opts.width, modal : opts.modal, resizable : opts.resizeable, title : opts.title }); Step 9: Override Default Event Handling If we stopped here, the plug-in wouldn't be doing much. The link would still take us to the next page. The behavior we desire is for the link to open the dialog. $this.bind('click', function(e) { e.preventDefault(); opts.beforeOpen.call($this[0], opts.container); $(opts.container).dialog('open'); }); The first line of this click handler is very important. It stops the link from loading the new page when it is clicked. The second line is our 'beforeOpen' callback. The variable opts.beforeOpen contains a function reference - that much is obvious. The .call method is used to invoke the function in a way where we can provide context -- the 'this' variable for that function. 
The first argument passed becomes 'this' to the called function. When a function has access to the variable 'this' there are some contracts JavaScript has with the programmer that we should maintain:

- The 'this' variable should be the object the function acts on
- The 'this' variable is a single DOM object

In order to maintain that contract, we pass $this[0] instead of $this. $this[0] represents a single, non-jQuery DOM object. To help understand this a little better, imagine the following callback function:

opts.beforeOpen = function(container) {
    // Gives the value of the link you just clicked
    alert('The remote page is ' + this.href);
    // Gives the id container assigned to this link
    alert('And the container is ' + container);
}

The link click isn't the only default behavior to override. We also want the form to submit via AJAX, so the normal form onsubmit event needs to be prevented and new behavior coded.

$(opts.container).bind('submit', function(e) {
    e.preventDefault();
    ajaxSubmit();
});

Again, we use preventDefault() to stop the event, and in this case add a new function to handle the form submission. The ajaxSubmit() code could go directly in the callback, but it has been moved to a new function for readability.

Step 10: Handle Form Submissions, AJAX-Style

This function would be added immediately after the end of the self.each loop (don't worry, you'll see the entire plug-in code in one shot in just a bit). It takes the form, submits it to a remote script, and fires the appropriate callbacks. The first step is to get the form as a jQuery object, and to determine the form's method, either GET or POST.

function ajaxSubmit() {
    var form = $(opts.container);
    var method = form.attr('method') || 'GET';

If you remember, we stored the form's ID in opts.container. The next line checks the form for a method, and assigns 'GET' if no method is present. This is consistent with HTML, which uses GET by default on forms if no method is specified.
Use the $.ajax method to submit the form:

$.ajax({
    type : method,
    url : form.attr('action'),
    data : form.serialize(),
    success : function() {
        $(opts.container).dialog('close');
        opts.onSuccess.call($this[0], opts.container);
    },
    error : function() {
        $(opts.container).dialog('close');
        opts.onError.call($this[0], opts.container);
    }
});

The URL option is determined from the action attribute of the form tag. The data is produced by using the serialize method on the jQuery object containing the form. The success and error options are $.ajax callbacks, which we're in turn using to call our callbacks, in the same way the beforeOpen callback was invoked. We're also closing the dialog for both the success and error handlers.

Step 11: The Entire Plug-In

As a review, let's look at the code we've written so far as a whole, including some helpful code comments:

(function($) {
    $.fn.popUpForm = function(options) {
        // REQUIRE a container
        if(!options.container) {
            alert('Container Option Required');
            return;
        }

        // Give us someplace to attach forms
        $("#popUpHide").length || $('<div id="popUpHide" />').appendTo('body').css('display','none');

        // Defaults and options
        var defaults = {
            container : '',
            modal : true,
            resizeable : false,
            width : 440,
            title : 'Website Form',
            beforeOpen : function(container) {},
            onSuccess : function(container) {},
            onError : function(container) {}
        };
        var opts = $.extend({}, defaults, options);

        // The "this" within the each loop refers to the single DOM item
        // of the jQuery collection we are currently operating on
        this.each(function() {
            /* We want to keep the value 'this' available to the $.load
             * callback */
            var $this = $(this);

            /* we only want to process an item if it's a link and
             * has an href value */
            if (!$this.is('a') || $this.attr('href') == '') { return ; }

            /* For a $.load() function, the param is the url followed by
             * the ID selector for the section of the page to grab */
            var SRC = $this.attr('href') + ' ' + opts.container;

            /* the event binding is done in the call back in case the
             * form fails to load, or the user clicks the link before
             * the modal is ready */
            var formDOM = $("<div />").load(SRC, function() {
                // Append to the page
                $('#popUpHide').append(formDOM);

                // Create and store the dialog
                $(opts.container).dialog({
                    autoOpen : false,
                    width : opts.width,
                    modal : opts.modal,
                    resizable : opts.resizeable,
                    title : opts.title
                });

                /* stops the normal form submission; had to come after
                 * creating the dialog otherwise the form doesn't exist
                 * yet to put an event handler to */
                $(opts.container).bind("submit", function(e) {
                    e.preventDefault();
                    ajaxSubmit($this[0]);
                });

                // create a binding for the link passed to the plug-in
                $this.bind("click", function(e) {
                    e.preventDefault();
                    opts.beforeOpen.call($this[0], opts.container);
                    $(opts.container).dialog('open');
                });
            });
        });

        function ajaxSubmit(anchorObj) {
            var form = $(opts.container);
            var method = form.attr('method') || 'GET';

            $.ajax({
                type : method,
                url : form.attr('action'),
                data : form.serialize(),
                success : function() {
                    $(opts.container).dialog('close');
                    opts.onSuccess.call(anchorObj, opts.container);
                },
                error : function() {
                    $(opts.container).dialog('close');
                    opts.onError.call(anchorObj, opts.container);
                }
            });
        }
    }
})(jQuery);

This code should all be saved in a file called popUpForm.jquery.js

Step 12: Setting Up the Plug-In

The first step in plug-in usage would be to include all the required dependencies on your HTML page. Personally I prefer to use the Google CDN. The files being on a separate domain can help page load speed, and the servers are fast. Also, it increases the chances that a visitor will already have these files cached. In the HEAD of the HTML document, add the following:

<link rel="stylesheet" href="" type="text/css" />
<link rel="stylesheet" href="css/main.css" type="text/css" />
<script src=''></script>
<script src=''></script>

The main.css file is for our site specific styles, everything else is from Google's CDN.
Notice you can even use jQuery-UI themes from the CDN in this fashion. Step 13: Invoking the Plug-In Remember, we only want to invoke the plug-in on links that go to a form page. In the online demo, the forms are contained in form.html, and only two links go to that page. <script> $(document).ready(function() { $('.contact a').popUpForm({ container : '#modalform', onSuccess : function() { alert('Thanks for your submission!'); }, onError : function() { alert('Sorry there was an error submitting your form.'); } }); $('.survey a').popUpForm({ 'container' : '#othercontainer' }); }); </script> The calls are wrapped in a document.ready block so we can be sure the anchor elements exist before trying to act upon them. The second call, $('.survey a') is an example of the minimum amount needed to use our new plug-in. The first example sets a callback for both onSuccess and onError. Step 14: Styling the Modal If you've gotten this far, and you created examples forms and a page to call them from, you'd notice the form in the modal is probably, well, ugly. The modal itself isn't bad, because we're using a jQuery-UI theme. But the form inside the modal is mostly unstyled, so we should make some efforts to pretty it up. There are some things to keep in mind when creating styles for use in a jQuery-UI modal: - The modal itself is only a child of the page's BODY element - The contents of the modal are all children of a div of class 'ui-dialog' Using these small bits of information we can begin applying styles to the form in the modal. First we give the modal a background color we're happy with, and also modify the font for the title bar. .ui-dialog { background: rgb(237,237,237); font: 11px verdana, arial, sans-serif; } .ui-dialog .ui-dialog-titlebar { font: small-caps bold 24px Georgia, Times, serif; } Next, we want to separate each item in the form with lines. 
Since the form structure alternates h3s with divs containing form elements, we add the following rules: .ui-dialog h3, .ui-dialog div { border-top:1px solid rgb(247,247,247); border-bottom:1px solid rgb(212,212,212); padding:8px 0 12px 10px; } And we only want lines between the sections, not at the very top or very bottom. .ui-dialog .puForm div:last-child { border-bottom:none; } .ui-dialog .puForm h3:first-child { border-top:none; } Lets not forget to style the h3s, and the form elements. The radio buttons need to display inline so they are all in a row. .ui-dialog h3 { font: 18px Georgia, Times, serif; margin: 0; } .ui-dialog select, .ui-dialog textarea, .ui-dialog input { width:76%; display: block; } .ui-dialog #rating input, .ui-dialog #rating label { display: inline; width:auto; } Remember, these styles are specific to this project, you'll have to style your own forms depending on what structure you use. To target the form elements specifically, you can either target descendants of .ui-dialog, or to style each form individually, include styles descending from the form ID you've included. The styled form: Step 15: Conclusion So what have we really done? We've taken a normal link leading to a contact form (or forms) and caused that form to load up in a modal dialog, and submit via ajax. For users without javascript, nothing happens and the links behave normally, so we haven't stopped anyone from filling out your forms. If you click on the survey link in the demo, be sure to submit something. I'll post the results in the comments for fun after a week or so!
http://code.tutsplus.com/tutorials/progressively-enhance-a-form-link-to-modal-form--net-14558
NAME

posix_openpt - open a pseudoterminal device

SYNOPSIS

#include <stdlib.h>
#include <fcntl.h>

int posix_openpt(int flags);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

posix_openpt(): _XOPEN_SOURCE >= 600

DESCRIPTION

The posix_openpt() function opens an unused pseudoterminal master device, returning a file descriptor that can be used to refer to that device. The flags argument is a bit mask that ORs together zero or more of the following flags:

O_RDWR
Open the device for both reading and writing.

O_NOCTTY
Do not make this device the controlling terminal for the process.

RETURN VALUE

On success, posix_openpt() returns a file descriptor (a nonnegative integer) which is the lowest numbered unused file descriptor. On failure, -1 is returned, and errno is set to indicate the error.

ERRORS

See open(2).

VERSIONS

Glibc support for posix_openpt() has been provided since version 2.2.1.

ATTRIBUTES

For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO

POSIX.1-2001, POSIX.1-2008. posix_openpt() is part of the UNIX 98 pseudoterminal support (see pts(4)).

NOTES

Some older UNIX implementations that support System V (aka UNIX 98) pseudoterminals don't have this function, but it can be easily implemented by opening the pseudoterminal multiplexor.

SEE ALSO

open(2), getpt(3), grantpt(3), ptsname(3), unlockpt(3), pts(4), pty(7)

COLOPHON

This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://man.archlinux.org/man/posix_openpt.3.en
[This is in reply to Andrew Tolmach's mail to the Haskell list. He asked that replies should be sent to ghc-users.] > Many months after the topic was first raised, there is finally a draft > document describing a formal external syntax for GHC's "Core" > intermediate language. Wow! This is very cool - it answers so many questions that I previously had to examine the source code to figure out! Two meta-comments before I go into detailed comments. 1) GHC documentation tends to have a very short half-life. What can be done to minimize this problem? For example can the tables of primops in section 4 be automatically generated? 2) I haven't seriously hacked on GHC or the the GHC frontend for several years so some of the following comments will be a little off. I'm hoping that people who are more up to date will correct me and that their corrections will be useful to anyone like me who used to know GHC but have lost track of the details. OK, onto the detailed comments. 1) GHC identifiers have SrcLocs (filename/linenumber) attached to them. It might be useful to be able to pass those into your parser. (But what are the SrcLoc's used for in Core code? Would they be useful?) 2) The names you use for operations all use the somewhat unreadable Z encoding. I think you should Z-decode these names because: 1) They're a lot more readable. e.g., decode "ZLZmZgZR" = "(->)" 2) It's possible that GHC will get confused if you fail to Z-encode a name. For example, it isn't possible to have the name "Z" in Core because GHC encodes the Haskell identifier "Z" as "ZZ". From memory, the Z encoding is something like this: ZL -> ( ZR -> ) Zm -> - Zg -> > Zl -> < Ze -> = Zc -> : Zp -> . Zxxx -> chr(xxx) where xxx is a sequence of digits 3) The restriction that Main.main have type IO a is unfortunate and, I think, unnecessary. 
It shouldn't be that hard to change it so that its type is more like: State# RealWorld -> (# State# RealWorld, Void# #) 4) While explaining namespaces, it'd be convenient to point out that you put a % in front of all keywords. Hmmm, I guess this is one reason to use Z-encoded names: % is a legal Haskell identifier. 5) The @a in datatype declarations threw me for a while. Later I saw you using it like BigLambda in an explicitly typed lambda calculus and understood what you were doing. It'd be worth making the connection explicit. 6) An alternative way of defining data constructors would be like this: %data BinTree :: * -> * = { Fork :: %forall a . BinTree a -> BinTree a -> BinTree a; Leaf :: %forall a . a -> BinTree a } That is, specify the actual type of data constructors instead of using Haskell datatype declaration syntax. This makes the language a little easier to specify. Notice that this syntax is a little more liberal than standard Haskell syntax because you could write types like: %data Expr :: * -> * = { Int :: Int -> Expr Int; App :: %forall a, b . Expr (a -> b) -> Expr a -> Expr b; Lam :: %forall a, b . Var a -> Expr b -> Expr (a -> b) } which cannot be expressed using normal Haskell syntax because the result types of each constructor are different. I saw Lennart mention this idea in a talk about 8 years ago and I've always wanted to play with it. :-) On the other hand, you might avoid this generalisation in case GHC does something weird with it. 7) In section 3.6, you say: "Value applications may be of user-defined functions, data constructors or primitives. Application of the latter two sorts need _not_ be saturated." I think you mean "none of these applications need to be saturated although both previously published descriptions of Core required that the latter two be saturated." 8) You say that the list of case alternatives "need not be exhaustive, even if no default is given; it is a disastrous run-time error if a needed case arm is missing." 
1) Just how disastrous? Is an exception raised or does the RTS crash? 2) I feel a little uneasy with this design decision. 9) You don't mention the _scc_ operation used for profiling. 10) There's a move on to define %ccall in a more generic way that would apply to calling Java functions, .net functions, etc. This is bound to result in a change of the "ccall" name. It'd be worth adding a Working note to that effect. 11) The discussion of the terms "unboxed", "heap allocated" and "unlifted" (page 9) doesn't seem quite right. I believe it is: Lifted types must be heap allocated. Unlifted types may be heap allocated (e.g., Array#s) or unboxed (not heap allocated) (e.g., Int#). 12) The discussion of operations on MVar# (page 10) says that they take an initial type argument of the form (State# t) for some thread "t". Is this true? Does "t" not have to be RealWorld#? What does it mean if it is not RealWorld#? 13) Are the CCallable and CReturnable classes still there? Blech! [I'm the one who implemented them - I'd hoped they'd gone by now.] 14) If strings are represented by the "address of a C-format string" (section 4.2), how do we represent strings with embedded \0 characters in them? 15) dataToTag#, tagToEnum# and getTag# (section 4.4.1) might be used to implement the to/fromEnum operations but they may also be remnants of GHCi version 0.0 - a metacircular interpreter that was in GHC version 0.18 (or thereabouts). If the latter, someone ought to give them a decent burial. 16) You document unsafeCoerce# but not reallyUnsafePtrEq# :: a -> a -> Bool did someone finally kill that? 17) decodeDouble# and decodeFloat# are used to extract the exponent and mantissa of a floating point number. Ignoring unboxity, it returns an Int (the exponent) and an Integer (the mantissa). Representing that as unboxed types, you get (# Int#, Int#, ByteArray# #) 18) Section 4.4.8 asks what the relationships are between quotient, remainder, div and mod. 
This ought to be the same as that specified in the Haskell report for their boxed equivalents. 19) Your question about what indexArray# does and what alignment restrictions apply reminds me of a subtlety: When indexing into a ByteArr# or a Addr#, is the index scaled by the size of the object being read/written (as in C) or not? I am pretty sure that the index on an Addr# is not scaled (and the value of the (Addr# + Int#) combination is subject to exactly the same alignment restrictions as imposed by the hardware architecture. I don't recall the story for ByteArr# but a ByteArr# object is always aligned to the nearest 4 (maybe 8) byte boundary. 20) Section 4.4.15 says that takeMVar# and putMVar# "die" if the MVar is empty (respectively, full). 1) What does "die" mean? More generally, how is failure implemented in the primops? Do they raise exceptions? Do they return error codes? Do they call exit? Do they crash the RTS? 2) Are you sure these ops "die"? Their unboxed Haskell brethren will block the thread under the same circumstances. 21) You repeatedly mention the sequential implementation of the RTS. Last time I looked, it wasn't possible to build the RTS with concurrency turned off. If one were to build the GHC RTS with concurrency turned off, I think the concurrency primitives should not be available rather than providing versions that always/mostly fail. Or maybe one could implement non-preemptive "concurrency" in much the same way as we did in Hugs? (Not sure I recommend this route - the interactions between threads and exception handlers in Hugs are somewhat tricky.) -- Alastair Reid reid@cs.utah.edu
http://www.haskell.org/pipermail/glasgow-haskell-users/2001-June/001962.html
Bsplayer is a videoplayer. I want to start bsplayer with the parameter -stime=x, where x is the amount of seconds. This results in playing a movie from say 23 minutes 10 seconds with the parameter -stime=1390. The whole thing I can manage to do in DOS and looks like:

@echo off
"c:\program files\webteh\bsplayerpro\bsplayer.exe" "D:\rodin\rodin.mp4" -stime=1390

but whatever I try and search for on this forum and google, of course, I cannot settle this in python code. I want to hop to interesting time positions in the vid with a menu. What I have is this:

import webbrowser
import os
import win32api

print 'Marko Rodin\n'
print 'what starting time? '
a=raw_input('hour:')
b=raw_input('minutes, ')
c=raw_input('seconds ')
d=(int(a)*3600)+int(b)+int(c)
#win32api.ShellExecute(0,"open", "Vortex Based Mathematics by Marko Rodin.mp4","-stime=1390","D:\\rodin",1) DOES NOT WORK
os.execve("c:\\program files\\webteh\\bsplayerpro\\bsplayer.exe", ["-stime=str(d)"], "d:\\rodin\\rodin.mp4")
# TypeError: execve() arg 3 must be a mapping object

What's wrong with this and is there another shorter way to start the vid in this manner? Thanks!
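For what it's worth, the usual way to launch a program with arguments from Python is the subprocess module, which takes the program and its arguments as a list (no shell quoting headaches, and no mapping argument like execve's third parameter). A rough sketch — the paths and the -stime flag come from the post above, and note the minutes also need to be multiplied by 60:

```python
import subprocess

def to_seconds(hours, minutes, seconds):
    # 0 h, 23 min, 10 s -> 1390, matching the -stime=1390 example
    return hours * 3600 + minutes * 60 + seconds

def build_command(player, movie, start_seconds):
    # Each argument is its own list element, so spaces in paths are fine.
    return [player, movie, "-stime=%d" % start_seconds]

cmd = build_command(r"c:\program files\webteh\bsplayerpro\bsplayer.exe",
                    r"D:\rodin\rodin.mp4",
                    to_seconds(0, 23, 10))
print(cmd)
# subprocess.call(cmd)  # uncomment to actually launch the player
```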
https://www.daniweb.com/programming/software-development/threads/238507/starting-video-with-parameters-in-python
New user here. Just wrote my first plugin (fun API!); but I'm having an issue writing tests for the plugin. It could be a simple Python issue, too (I'm relatively new to Python, as well). This is the error:

ImportError: No module named sublime

Here's the distilled unit test, which resides in my-plugin/tests:

import unittest
import json
from my_plugin import MyCommand

class MyUnitTest(unittest.TestCase):
    def test_read_json_data(self):
        ...

Very abbreviated plugin:

This might help for making your plugin: While I don't know much about unit testing, I might be able to help with the plugin. Could I ask what you're trying to do?

Thanks. I've built a working plugin with a handful of commands, keyboard shortcuts, etc; but I can't figure out how to write even basic unit tests for it, which is how I'm used to building software (ala TDD). I'm building unit test runner plugins.

Are you trying to run the unit test with your system's python? The sublime module doesn't exist outside of the application, so you need to run your python code with the built in interpreter.

Ah, now it's starting to make sense--thanks! I'm using nose as the runner, which uses my system's python. So, the built-in interpreter is lib/python26 ?

I've found an approach that seems to work acceptably for testing plugins: github.com/SublimeText/VintageE ... ster/tests There's more testing-related code in here: github.com/SublimeText/VintageE ... _runner.py It still needs some work, though.
I put all of this together quite quickly without too much thinking, so I'll have to make some changes to the code.
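Building on the point earlier in the thread that the sublime module only exists inside the editor, one workaround for running tests under the system Python is to install a stub module into sys.modules before the plugin is imported. A rough sketch — stub only the attributes your plugin actually touches; status_message here is just an illustrative example:

```python
import sys
import types

# Fake 'sublime' module so plugin imports succeed outside the editor.
fake_sublime = types.ModuleType("sublime")
fake_sublime.status_message = lambda msg: None  # stub what the plugin calls
sys.modules["sublime"] = fake_sublime

# From here on, 'import sublime' (including inside plugin modules)
# resolves to the stub instead of raising ImportError.
import sublime
print(sublime.status_message("hello"))  # None - the stub is in place
```

With the stub registered first, `from my_plugin import MyCommand` no longer fails on `import sublime`, and the test can drive the plugin's pure-Python logic.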
https://forum.sublimetext.com/t/unit-testing-plugins/3641/5
SYNOPSIS

#include <xti.h>

int t_snd(int fd, void *buf, unsigned int nbytes, int flags);

DESCRIPTION

T_EXPEDITED
If set in flags, the data will be sent as expedited data and will be subject to the interpretations of the transport provider.

If O_NONBLOCK is set (via t_open(3NSL) or fcntl(2)), t_snd() will execute in asynchronous mode, and will fail immediately if there are flow control restrictions. The process can arrange to be informed when the flow control restrictions are cleared by means of either t_look(3NSL) or the EM interface.

RETURN VALUES

On successful completion, t_snd() returns the number of bytes accepted by the transport provider. Otherwise, –1 is returned on failure and t_errno is set to indicate the error.

ERRORS

On failure, t_errno is set to one of the following:

TBADF
The specified file descriptor does not refer to a transport endpoint.

TBADFLAG
An invalid flag was specified.

TLOOK
An asynchronous event has occurred. The error TLOOK is returned for asynchronous events; it is required only for an incoming disconnect event but may be returned for other events.

TLI COMPATIBILITY

The t_errno values that this routine can return under different circumstances than its XTI counterpart are: TBADDATA. In the TBADDATA error cases described above, TBADDATA is returned only for illegal zero byte TSDU (ETSDU) send attempts.

ATTRIBUTES

See attributes(5) for descriptions of the following attributes:

SEE ALSO

fcntl(2), t_getinfo(3NSL), t_look(3NSL), t_open(3NSL), t_rcv(3NSL), attributes(5).
https://docs.oracle.com/cd/E36784_01/html/E36875/t-snd-3nsl.html
CC-MAIN-2019-30
refinedweb
217
54.22
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives Here is a simple software engineering problem that I have encountered the last few days. It's not something dramatic, but one that has made me stop development in search of elegant solutions. I am posting this here because I would like to see how functional programming languages would solve this problem. Here is the problem: I am writing a GUI toolkit that reuses the Win32 API wrapped up in a set of C++ classes (the reason is that there is no GUI library that completely hides Win32 will reusing it - existing libraries either follow Win32 logic (ala WxWidgets) or provide their own implementation (ala Qt/Swing)). As you may know, Win32 is not object-oriented, nor does it have a well thought out/consistent interface. For example, although the menu bar is a screen object, it is not a window: all there is is a bunch of functions for creating a menu, redrawing it, adding menu items etc. But I want to have menus and other non-window Win32 items as widgets in the toolkit, for consistency reasons. The problem lies in the organization of classes. I have 4 types of widgets: The object-oriented design solutions would be: Widget Component Container Window WindowedComponent WindowedContainer So what I am asking is how functional programming languages solve an issue like the above, which is an issue of code organization/clarity/reuse/taxonomy. I can't seem to find a good object-oriented solution to the problem, nor any of my colleagues/friends can. So I am asking if other programming paradigms have a better solution for this problem. ... there is no GUI library that completely hides Win32 will reusing it - existing libraries either follow Win32 logic (ala WxWidgets) or provide their own implementation (ala Qt/Swing). 
Tk hides the details of the Win32 programming model, but uses native widgets on both Windows and Mac. If you grab the Tile extension it will also pick up Win XP themes. It's also a darn sight easier to use than Qt, Swing or WxWidgets. Is there any specific reason why you wouldn't want to use it? Other than that, pretty much every FP language will have some GUI toolkit, so I'd suggest having a look at what is out there. You also might want to look at Haskell's type-classes.
You cannot build a *transparent* object model of what is an essentially ad-hoc component-based architecture without exposing the ad-hoc component-based nature of the underlying architecture. As such, if you want to provide a nice OO structure containing inheritance for this collection of components, you'll need to use the underlying system only as an artifact of implementation, plastering over the "transparent model" to provide your own abstractions. This will require some cruft in the implementation regardless of how you model the object hierarchy. If I were designing a model to expose to the user, I would have a base Component class and a subclass of that called CompoundComponent which added a collection of sub-components. I'd then have a Window class (probably inheriting from CompoundComponent, because it would have it's own set of child components - e.g., scroll bar widgets, title bars, etc.) with a secondary collection of Components called "contents" that holds what's contained in the window. I'd then place Menus, Buttons, Forms, etc., in the external facing object model in a way that makes sense in this model. So what if a Button is a Window in Win32? It doesn't need to be one for 99% of your users, so don't expose that use fo the component. Do the same analysis for the other primitive Win32 components. Build a sane object model for the user and leave the Win32 cruft where it belongs - the implementation world. Again, you cannot build a *transparent* object model of what is an essentially ad-hoc component-based architecture without exposing the ad-hoc component-based nature of the underlying architecture. (If you want an overview of the main message of this long post, read fadrian's post which was posted while I was writing the following mini-essay.) 
First, I'll note that this is not actually a programming language question, and despite appearances, it's not even a question about comparison between paradigms — it's really about How to Design Programs, a subject for which many of the principles transcend any specific paradigm. It sounds as though the best approach here will be somewhere between #3 and #4, but that the problems with such approaches has more to do with limitations of C++ than the appropriateness of the conceptual designs. For example, all of the problems mentioned with approach #4 would be non-issues in a language with a good polymorphic type system (e.g. ML, OCaml, Haskell). In addition, some of the complexity in #3 comes from the need to use multiple inheritance merely to provide common interfaces across a set of classes. Unfortunately, C++ fails to explicitly separate interface inheritance from implementation inheritance, which means that it's up to the programmer to organize code in a way that hides implementation details, as opposed to relying on the language to do so. This means that a price is paid in terms of implementation complexity, merely for hiding implementation details. (In C++, Real Programmers implement their own implementation hiding mechanisms, and some of the most famous books about C++ focus heavily on such techniques.) Regardless of what language or paradigm you're using, you should start out by being careful to distinguish between the design of the interface to your library, and its implementation. In C++, it can be difficult to avoid confusing the two, and this often results in confused design. With proper separation, the problems of complexity and namespace pollution don't arise, because the interface will only provide what's actually needed by clients of the class to get the job done, and nothing more. This means every feature of the interface can be justified by saying "you need that to be able to...", not "we had to put that in because the implementation..." 
So, the first task is to forget entirely about the implementation, and look at what makes sense for clients of these classes. Don't think about code reuse or the organization of the implementation; think only about interfaces. The interface should not contain anything which has to be explained in terms of the implementation — that's what leads to complexity and namespace pollution.

As part of this process, you should be careful to avoid allowing the bad design of Windows to contaminate your interface design. Factoring the design around an abstraction called a "Window", so that e.g. a Button ends up being some kind of Window, presupposes a great number of design choices related to what a Window is. If the goal is to require that users think in terms of Microsoft's abstractions, fine, but that's an external limitation on the design which constrains the possible solutions. At the very least, in the design phase you will be better off treating a Window as a hidden implementation detail, so that if you do decide to expose some version of that abstraction to your users, you do so with a better understanding of its relationship to your design, which should be more clearly factored than that of Windows.

As another example here, the idea in #4 of having a Window class as a template parameterized by Container or Component seems wrong from the library user's perspective. How would you explain to a library user why such a design was chosen, without resorting to talking about implementation details that should be hidden? Of course, it's possible that you might use such a design internally in order to achieve code reuse, but if there's a reason to expose that detail to library users, it hasn't been made clear.
What needs to be examined are the operations that need to be performed on concrete objects such as buttons, menus, menu items, etc., and how best to factor those operations for users of the library, so that they can use common interfaces to access the different kinds of objects, where that makes sense. This is actually a standard OO design exercise, which seems to have been confused in the question by implementation details relating to C++ and Windows. From the problem description given, it seems that a likely solution at the interface level might involve as few as three interfaces: in the language of approach #3, these would be some variation of Component, Container, and Window (bearing in mind that "Window" is an interface whose purpose hasn't yet clearly been defined). It seems likely that Container and Window would be subtypes of Component. Once the interface design work has been done, the result should be a suitable interface for users of the library. The next task is figuring out how to express that interface design in the target language. Even in C++, expressing an interface design is not that difficult. If the result does end up with interfaces that cut across classes (as seems likely), it might very well make sense to use multiple virtual inheritance to implement that, but that should be expressed purely at the interface level, e.g. using multiple virtual inheritance from pure abstract classes (a.k.a. interfaces). In Java, you'd use Java's explicit interface mechanism to implement something like this; in Haskell, you might use typeclasses; in OCaml, you might use structural subtyping. The functional languages also tend to have a number of other ways of expressing this sort of thing (e.g. functors, units), because of the flexibility of higher order functions and their ability to parameterize behavior. The part where things will start to get messy in C++ is in implementation. 
That's nothing to do with object-orientation, it's purely to do with the limitations of C++. Having to suck up messy implementation details is part of the deal with the devil you make when using C++. (Like most deals with the devil, any perceived upside is purely illusory.) Again in terms of the implementation described in approach #3, the widget implementations might map to interfaces as follows: Note that although it might be useful to have implementation classes such as WindowedContainer or WindowedComponent to support reuse of code in the implementation of some widgets, that doesn't necessarily mean that these classes should be exposed to users of the library, except possibly to help them achieve code reuse in implementing new widgets of their own. In that case, it should be recognized and made clear that the purpose of such classes is reuse of implementation, and that they do not represent part of the client code interface to widgets. Note further that if a class such as Form is implemented by inheriting from a WindowedContainer class, then ideally, that implementation inheritance should be hidden from users of the Form class. Strategies to do this sort of thing are standard fare in the C++ literature, see e.g. Coplien or Alexandrescu. If you choose to forgo such strategies, then you have to recognize that you're making an optimization choice which results in less clearly delineated boundaries between interface and implementation, which results in leakage of implementation concerns into the client's code. In C++, this has consequences related to header file dependencies and so on, so forgoing such encapsulation puts the usability of the resulting library very much at risk. You wouldn't need to worry about this sort of thing in a higher-level language. I've provided some suggestions phrased in fairly OO-sounding terms. However, don't be misled into thinking that such solutions are unique to OO. 
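As a concrete illustration of the approach described above — a few client-visible interfaces, with implementation-reuse classes kept out of the public interface — here is a hypothetical Java sketch. The names and methods are invented for illustration only; this is not any poster's actual design:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// The client-facing interfaces: this is all library users see.
interface Component {
    String name();
}

interface Container extends Component {
    void add(Component child);
    List<Component> children();
}

interface Window extends Container {
    void setTitle(String title);
}

// Implementation-reuse class; in a real library this would be hidden
// from clients (package-private here as a stand-in for that hiding).
class ContainerBase implements Container {
    private final String name;
    private final List<Component> kids = new ArrayList<>();

    ContainerBase(String name) { this.name = name; }

    public String name() { return name; }
    public void add(Component child) { kids.add(child); }
    public List<Component> children() {
        return Collections.unmodifiableList(kids);
    }
}

public class GuiSketch {
    // A Form reuses ContainerBase's implementation, but clients only
    // ever see it through the Window interface.
    static class Form extends ContainerBase implements Window {
        private String title = "";
        Form(String name) { super(name); }
        public void setTitle(String title) { this.title = title; }
        String title() { return title; }
    }

    public static int demoChildCount() {
        Form form = new Form("main");
        form.setTitle("Demo");
        form.add(new ContainerBase("panel"));
        form.add(new ContainerBase("toolbar"));
        return form.children().size(); // 2
    }
}
```

The point of the sketch is the separation: clients depend only on the three interfaces, so the inheritance used for code reuse (ContainerBase) stays an implementation artifact.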
Separating interface from implementation is a basic design strategy, which, ironically, most OO languages do a particularly poor job of: they conflate interface and implementation at multiple levels, most damagingly in the inheritance mechanism. If you really want to understand these issues, your best bet is to forget about OO, and look at the ways in which truly abstract interfaces can be created with higher-order functions — both "manually", as well as via related mechanisms such as functors (e.g. in ML) and units (in PLT Scheme; see here for a gentler introduction). Given familiarity with such systems, the shortcomings in OO languages in these areas become clear, and it is easier to avoid the traps which the design flaws in OO languages create.

The overriding point in all of this is that designs should first be done conceptually, and abstractions should be designed, well, abstractly. Once you've done that, the rest is just a matter of expressing the abstractions in your chosen language. The functional languages have an advantage here for many reasons — they're higher level, and more expressive both in terms of abstraction capabilities and their type systems; but even if it weren't for that, the mere fact that they don't have fundamental errors in the design of their core abstraction mechanisms is a huge benefit. [Edit: another nice introduction to some of the issues I've touched on above can be found in Matthias Felleisen's Components and Types.]

To second this comment. Thanks a lot for the replies, they are really appreciated.

Since the specified problem states that you're building on top of the Win32 API, I really doubt that you're going to be able to avoid imperative programming. My advice in this specific instance would be to use a language like OCaml, which supports both imperative and OO programming, and do things the obvious way. That being said, I think there's an interesting area of debate: how could/should you design a proper functional GUI library?
I'm assuming that you'd have a fundamental imperative monad to do the actual I/O, something of the form (in OCaml's types):

    val putpixel: x:int -> y:int -> color:int -> unit

I'm thinking of splitting the drawing of the various GUI widgets from handling their behavior. So take the scroll bar widget. There would be two "objects" associated with the scroll bar. One would be the function to draw the scroll bar in its current position. The other object would be a listener - so when you clicked on the scroll bar to move it, the listener would change the current set of drawable objects, replacing the current scroll bar drawing function with a new one that draws the scroll bar in its new place. This matters especially because the scroll bar listener also changes how other drawing functions behave - for example, the drawing function of the text field associated with the scroll bar.

Fudgets are one example of a functional approach to a GUI API (for X Windows, not MS Windows, but the principles are similar). Another is Yampa and Fruit, which is built on top. They are basically the same, with slightly different formal semantics. However, they both lack a solid set of real-world combinators that actually make GUI programming easy. A similar approach is being pursued by Adam and Eve (mentioned here on LtU at some point), but staying a little closer to the tried and tested GUI models. It's also worth looking at the Qtk library. It's wonderfully simple, though perhaps not as elegant as Fruit.
It seems that Red Hat's latest glibc has introduced a strict interpretation of the ISO C mktime() definition, such that dates before 1970 are now considered to be out of range. This has caused breakage of any application that relies on the old behaviour, such as PostgreSQL. It is also the case that neither Debian's nor SuSE's glibc shows this change; nor is it mentioned in their changelogs. Do you know why this change has occurred only in Red Hat's version? Are the distributions' version numbers out of sync?

A small program for testing is attached. On Debian's latest libc6 it reports a timestamp of -31712400, but on latest Red Hat it apparently reports -1.

-- "Jesus answering said unto them, They that are whole need not a physician; but they that are sick. I come not to call the righteous, but sinners to repentance." Luke 5:31,32

    #include <stdio.h>
    #include <time.h>

    int main(int argc, char *argv[])
    {
        int failout;
        struct tm fails;

        fails.tm_sec = 0;
        fails.tm_min = 0;
        fails.tm_hour = 0;
        fails.tm_isdst = -1;
        fails.tm_year = 68;
        fails.tm_mon = 11;
        fails.tm_mday = 30;

        failout = mktime(&fails);
        printf("The system thinks 12/30/1968 is a timestamp of %d \n", failout);
        return 0;
    }
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

Constraint not working properly

Hi... I am new to OpenERP. Here is my code. When the condition is met, it throws the exception, which is correct. But when the condition is not met, it still throws the exception. Why?

    def _check_resign_date(self, cr, uid, ids, context=None):
        for rec in self.browse(cr, uid, ids):
            if rec.status == 'resign':
                if rec.resign_date > rec.separation_date:
                    raise osv.except_osv(('Alert!'), ('Resignation date should not be greater than Separation Date.'))
            elif rec.status == 'transfer':
                if rec.transfer_date > rec.separation_date:
                    raise osv.except_osv(('Alert!'), ('Transfer date should not be greater than Separation Date.'))
        return True

    _constraints = [
        (_check_resign_date, 'a', ['id']),
    ]

Hello Vikram,

Correct the argument list in your constraint definition. A brief description of the arguments:

    _constraints = [(_check_resign_date, 'Your warning message!', ['field_name'])]

1) The first argument is your method name.
2) The second argument is the warning message shown when your method returns False.
3) The third argument must be the field name(s) of your model; the constraint is triggered when those fields change, so make sure you give the exact field names.

There is no need to raise an exception from the method — return True or False, and the warning message will be triggered by the constraint. I suggest you pass both dates in your constraint, like:

    _constraints = [(_check_resign_date, 'Your warning message!', ['status', 'resign_date', 'separation_date'])]

Return True or False from your method; the message will be triggered by the constraint.

Hope this helps. Regards!
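As a standalone sketch of the answer's point — return a boolean instead of raising — here is the same check outside Odoo, using a plain Record class as a stand-in for a browse record (the class and field defaults are invented for illustration):

```python
from datetime import date

class Record:
    """Stand-in for an OpenERP browse record (illustration only)."""
    def __init__(self, status, separation_date, resign_date=None, transfer_date=None):
        self.status = status
        self.separation_date = separation_date
        self.resign_date = resign_date
        self.transfer_date = transfer_date

def check_resign_date(records):
    # Return False so the framework shows the constraint's warning message;
    # return True when every record passes the check.
    for rec in records:
        if rec.status == 'resign':
            if rec.resign_date and rec.resign_date > rec.separation_date:
                return False
        elif rec.status == 'transfer':
            if rec.transfer_date and rec.transfer_date > rec.separation_date:
                return False
    return True

ok = Record('resign', separation_date=date(2016, 6, 1), resign_date=date(2016, 5, 1))
bad = Record('resign', separation_date=date(2016, 6, 1), resign_date=date(2016, 7, 1))
print(check_resign_date([ok]))   # True
print(check_resign_date([bad]))  # False
```

The method raises nothing itself; the True/False result is what drives whether the warning message is shown.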
Kubernetes to the Cloud with Spring Boot and JHipster

When your business or application is successful, it needs to scale. Not just technology-wise, but human-wise. When you're growing rapidly, it can be difficult to hire developers fast enough. Using a microservices architecture for your apps can allow you to divide up ownership and responsibilities, and scale teams along with your code.

Kubernetes is an open-source platform for managing containerized workloads and services. Kubernetes traces its lineage directly from Borg, Google's long-rumored internal container-oriented cluster-management system.

Spring Boot and Spring Cloud were some of the pioneering frameworks in Javaland. However, even they stood on the shoulders of giants when they leveraged Netflix's open-source projects to embrace and extend. In 2018, Netflix OSS announced they'd come full circle, and adopted Spring Boot.

Today, I'd like to show you how to build and deploy (with Kubernetes) a reactive microservice architecture with Spring Boot, Spring Cloud, and JHipster. Why reactive? Because Spring Cloud Gateway is now the default for JHipster 7 gateways, even if you choose to build your microservices with Spring MVC. Spring Cloud Gateway is a library for building an API Gateway on top of Spring WebFlux. It easily integrates with OAuth to communicate between microservices. You just need to add a TokenRelay filter.
    spring:
      cloud:
        gateway:
          default-filters:
            - TokenRelay

Prerequisites

- A Google Cloud Account

Table of Contents

- A Brief Intro to Kubernetes (K8s)
- Create a Kubernetes-Ready Microservices Architecture
- Generate Kubernetes Deployment Descriptors
- Install Minikube to Run Kubernetes Locally
- Create Docker Images with Jib
- Register an OIDC App for Auth
- Start Your Spring Boot Microservices with K8s
- Encrypt Your Secrets with Spring Cloud Config
- Deploy Spring Boot Microservices to Google Cloud (aka GCP)
- Encrypt Your Kubernetes Secrets
- Scale Your Reactive Java Microservices
- Monitor Your Kubernetes Cluster with K9s
- Continuous Integration and Delivery of JHipster Microservices
- Spring on Google Cloud Platform
- Why Not Istio?
- Learn More About Kubernetes, Spring Boot, and JHipster

    minikube --cpus 8 start

🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default.

The Okta CLI streamlines configuring a JHipster app and does several things for you:

- Creates an OIDC app with the correct redirect URIs:
- Creates ROLE_ADMIN and ROLE_USER groups that JHipster expects
- Adds your current user to the ROLE_ADMIN and ROLE_USER groups
- Creates a groups claim in your default authorization server and adds the user's groups to it

NOTE: The redirect URIs are for the JHipster Registry, which is often used when creating microservices with JHipster. The Okta CLI adds these by default.

You will see output like the following when it's finished:

    Okta application configuration has been written to: /path/to/app/.okta.env

Run cat .okta.env (or type .okta.env on Windows) to see the issuer and credentials for your app.
It will look like this (except the placeholder values will be populated):

    export SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_OIDC_ISSUER_URI=""
    export SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_ID="{clientId}"
    export SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_SECRET="{clientSecret}"

NOTE: You can also use the Okta Admin Console to create your app. See Create a JHipster App on Okta for more information.

    ./kubectl-apply.sh -f

You can see if everything starts up using the following command:

    kubectl get pods -n demo

Restart your JHipster Registry containers from the k8s directory:

    ./kubectl-apply.sh -f

    minikube stop

    ./kubectl-apply.sh -f

You can monitor the progress of your deployments with kubectl get pods.

You can learn more about base64 encoding/decoding in our documentation:

    echo -n <paste-value-here> | base64 --decode

Put the raw value in a tls.crt file. Next, install Kubeseal. On macOS, you can use Homebrew. For other platforms, see the release notes.

    brew install kubeseal
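The base64 round-trip used for Kubernetes Secret values can be tried from any shell (the secret value here is just a placeholder):

```shell
# Encode a value the way Kubernetes stores Secret data
encoded=$(printf '%s' 'my-client-secret' | base64)
echo "$encoded"

# Decode it back to the raw value (the same operation you'd use to
# recover a certificate value before saving it to tls.crt)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"   # my-client-secret
```

Using printf instead of echo avoids encoding a trailing newline into the stored value, which is a common source of hard-to-debug secret mismatches.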
Okta Developer Blog Comment Policy We welcome relevant and respectful comments. Off-topic comments may be removed.
https://developer.okta.com/blog/2021/06/01/kubernetes-spring-boot-jhipster?utm_campaign=sponsorship_oauth_all_multiple_dev_dev_oauth-banner_null&utm_source=oauthio&utm_medium=cpc
CC-MAIN-2022-21
refinedweb
787
54.73
Another issue which I found really interesting... We had an ASP.NET 2.0 application and in one of the pages (called Login.aspx) we used the Login control. It worked beautifully from the IDE (F5 or CTRL+F5), but when we deployed it on the webserver (precompiled), it showed the following error. Compiler Error Message: CS0030: Cannot convert type 'ASP.login_aspx' to 'System.Web.UI.WebControls.Login'Source Error:Line 112: public login_aspx() {Line 113: string[] dependencies;Line 114: ((Login)(this)).AppRelativeVirtualPath = "~/Login.aspx";Line 115: if ((global::ASP.login_aspx.@__initialized == false)) {Line 116: dependencies = new string[1];Source File: c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\cpi.operations.field.assv\32f5e4c7\5ac0558e\App_Web_login.aspx.cdcab7d2.crjut9bu.0.cs Line: 114 Clearly, it was looking like a name conflict. We changed the name of the page class from Login to Login1 and re-deployed on the webserver and it worked fine, as expected. Moral of the story... in 2.0, don't use Login as a class name if you intend to use the Login Control. It is not really that big a compromise, specially if you consider the functionality of Login control!! Besides, you can always name your page as Login.aspx. All you need to ensure is that, the name of the class of that page is NOT called Login. PS. It is not that naming a class as Login WILL run into problems, but just in case you get the above error and your symptoms match, I hope this might help you... Cheers!Rahul Same problem, same solution - rename the class. How many times will I fall into this trap !!!! Thanks for the post :) I had the same error with a page named menu.aspx so don´t use menu as page name I had the same problem with a page called View.aspx, so I guess that is also a name to avoid... It disappeared when I renamed to ViewSpecimen.aspx Thanks Rahul I also found that changepassword.aspx has the same issue. Thanks, Rahul! This really helped me out. We had the same issue. 
Renaming of the class workded like charm. But what if you need to have class name that get in conflict? (in our case the class was named Content). This is where namespaces become handy! Cheers, George S. We had this error. We cleared the problem up by examing the page directive in Login.aspx. We noticed that the Inherits="Login" had a specific reference to an assembly, as in Inherits="Login, blah-blah". We deleted the ", blah-blah" and that fixed the problem. I have same problem, thank 4 u solution Had the same problem.Solved now.Renamed the class and working. Thanks a lot. thanks for this artical , i was facing same problem for fileupload control , but now it sloved Its same as name conflict, just change Page Class ReportViewer to ReportViewer1 same problem using Login.aspx with class name Login. changed class name. it works. Thanks. the "Login" is saved by c#, so change it. We renamed our class to _Login instead of Login1. I find the naming of ... well anything... with a number suffix (like Login1, TextBox1, DropDownlist1, etc) simply silly. But VS needs to name it something else, but also you better believe I then rename it to something else ;-) And yeah; the Login control does really save us a lot of work! You've been kicked (a good thing) - Trackback from DotNetKicks.com Just use namespace on each page of ur website namespace MyNamespace { public partial class Login } it will work Regards Alochan Yeah, i got this problem while deploying my application in my localhost. The treatment I did is to add a namespace in the Login.aspx.cs file and update it from the Login.aspx's Inherits property. Thank you. every page has default class (pagename) but how it become automatically login . Now How i chane it login to login1 it gives error Thanks Rahul... It worked for me toooo:) If you would like to receive an email when updates are made to this post, please register here RSS Trademarks | Privacy Statement
Creates a temporary file.

Creates a file with the specified name (or a pseudo-random name in the system temp directory), and makes sure it gets deleted from the file system upon object destruction.

Definition at line 25 of file Sawyer/FileSystem.h.

#include <FileSystem.h>

Create a temporary file in the system temp directory. Definition at line 32 of file Sawyer/FileSystem.h.

Create a temporary file with the specified name. Definition at line 38 of file Sawyer/FileSystem.h.

Unlink the temporary file from the filesystem. This also closes the stream if it's open. Definition at line 46 of file Sawyer/FileSystem.h.

Path of the temporary file. Definition at line 53 of file Sawyer/FileSystem.h.

Output stream for the temporary file. Definition at line 56 of file Sawyer/FileSystem.h.
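The RAII pattern this class implements — create on construction, unlink on destruction — can be sketched in standalone C++. This is not Sawyer's actual implementation, just an illustration of the same idea using only the standard library:

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Minimal stand-in for Sawyer::FileSystem::TemporaryFile:
// the destructor closes the stream and unlinks the file.
class TemporaryFileSketch {
    std::string path_;
    std::ofstream stream_;
public:
    explicit TemporaryFileSketch(const std::string &path)
        : path_(path), stream_(path) {}

    ~TemporaryFileSketch() {
        if (stream_.is_open())
            stream_.close();
        std::remove(path_.c_str());          // delete on destruction
    }

    const std::string &name() const { return path_; }
    std::ofstream &stream() { return stream_; }
};

// Returns true iff the file exists while the object lives
// and is gone after the destructor runs.
bool demoRoundTrip() {
    const std::string p = "sketch_tmp.txt";
    {
        TemporaryFileSketch tmp(p);
        tmp.stream() << "hello\n";
        tmp.stream().flush();
        std::ifstream check(p);
        if (!check.good())
            return false;
    } // destructor closes and unlinks here
    std::ifstream gone(p);
    return !gone.good();
}
```

Tying the file's lifetime to a stack object is what guarantees cleanup even on early returns or exceptions, which is the whole point of the Sawyer class.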
Creating Asynchronous Actions in ASP.NET MVC

Introduction

Asynchronous actions allow you to handle more concurrent requests and can be implemented using the async / await keywords. Asynchronous actions are useful in situations where you are performing some network operation, such as calling a remote service. This article discusses asynchronous actions and also shows how to create them in ASP.NET MVC.

Overview of Asynchronous Actions

Before you create an asynchronous action, let's quickly understand what asynchronous processing is with respect to ASP.NET MVC and how it benefits your application.

    public ActionResult Index()
    {
        DbHelper helper = new DbHelper();
        List<Customer> data = helper.GetCustomerData();
        return View(data);
    }

The above code shows an Index() action method that returns an ActionResult. Inside, it creates an instance of a helper class (DbHelper) and calls its GetCustomerData() method. The GetCustomerData() method wraps the remote service call and returns the data returned by the service as a List of Customer objects. The data is then passed to the Index view as its model.

The Index() action method shown above executes in a synchronous manner. When a request lands at the Index() action, ASP.NET picks up a thread from its thread pool and runs the Index() method on the allotted thread. Since the Index() action is synchronous, all the operations, including the remote service call, happen sequentially (one after the other). Once all the operations are over, the thread running the code can be reused for some other execution. Thus a thread from the thread pool is blocked for the entire duration of the execution - start to end. Let's say this execution takes 10 seconds (just a hypothetical value).

Now assume that the Index() and GetCustomerData() methods have been modified to work in an asynchronous manner. When a request lands at the Index() action, ASP.NET picks up a thread from the thread pool as before.
The Index() method starts running on a thread allotted to it. However, the allotted thread invokes GetCustomerData() asynchronously and is immediately returned to the thread pool to serve other requests. When the remote service call returns the required data, another thread from the thread pool is allotted to finish the remainder of the Index() method's code. Thus, instead of blocking a thread for the entire duration of the processing, a thread is released as soon as the network operation starts, and the processing resumes on some other thread when the network operation returns. Even in this case the total time taken for the processing is 10 seconds, but the thread is freed to serve other requests. This results in improved handling of concurrent requests. Although there may not be any performance improvement as far as a single request's processing time is concerned, the overall performance of the application may be better than before because there is less queuing of requests.

Synchronous operations are good when you wish to stick to a simple programming model and the operations are short running and CPU centric. On the other hand, asynchronous operations are good when concurrent request handling is more important than simplicity and the operations are long running and network centric (such as remote Web API or service calls).

Creating Asynchronous Actions

Now that you know the basics of asynchronous action methods, let's create an asynchronous action method using the async / await keywords of C#. To begin developing this sample application, create a new ASP.NET MVC application using the empty project template. Then add an ADO.NET Entity Data Model for the Customers table of the Northwind database. The following figure shows this data model:

Customers table

Then add a class to the project and name it DbHelper.
This class contains the GetCustomerDataAsync() method, as shown below:

    public class DbHelper
    {
        public async Task<List<Customer>> GetCustomerDataAsync()
        {
            NorthwindEntities db = new NorthwindEntities();
            var query = from c in db.Customers
                        orderby c.CustomerID ascending
                        select c;
            List<Customer> data = await query.ToListAsync();
            return data;
        }
    }

This application doesn't use any real network operation such as a service call. Just for the sake of testing, it fetches all the customers from the Customers table and returns them to the caller. The GetCustomerDataAsync() method returns a Task object that wraps a generic List of Customer entities. GetCustomerDataAsync() is marked with the async keyword, indicating that it is to be called in an asynchronous manner. Since the method is asynchronous, the method name ends with "Async". Inside, a LINQ to Entities query is formed that fetches all the customers from the database. Notice that the data is realized by calling the ToListAsync() method. The await keyword used in the ToListAsync() statement indicates that the execution should wait for ToListAsync() to complete. The List of Customer entities is then returned to the caller.

Now, add a controller to the Controllers folder and add the following code to it:

    public async Task<ActionResult> IndexAsync()
    {
        DbHelper helper = new DbHelper();
        List<Customer> data = await helper.GetCustomerDataAsync();
        return View(data);
    }

This is the same Index() action you saw earlier, but now it has been converted to its asynchronous version. The IndexAsync() method returns a Task object that wraps the ActionResult. It is also marked with the async keyword. Inside, it creates an instance of the DbHelper class and calls the GetCustomerDataAsync() method. Notice the use of the await keyword. You will find that async and await always go hand in hand.
Now, add an IndexAsync view to the project and add the following markup to it:

    @model List<AsyncMVCDemo.Models.Customer>
    @{
        Layout = null;
    }
    <!DOCTYPE html>
    <html>
    <head>
        <meta name="viewport" content="width=device-width" />
        <title>Index</title>
    </head>
    <body>
        <h1>List of Customers</h1>
        <table border="1" cellpadding="6">
        @foreach(var customer in Model)
        {
            <tr>
                <td>@customer.CustomerID</td>
                <td>@customer.CompanyName</td>
            </tr>
        }
        </table>
    </body>
    </html>

If you run the IndexAsync() action method, you should get a list of customers displayed in a table (see below).

A list of customers

Dealing with Timeouts

While working with asynchronous operations, you should take into account the possibility that a call is made that never returns. To tackle such situations, ASP.NET MVC provides the [AsyncTimeout] attribute. The [AsyncTimeout] attribute specifies a timeout value in milliseconds for the asynchronous operation. The default timeout value is 45 seconds. Although we won't go into more detail on [AsyncTimeout] here, you can use it as follows:

    [AsyncTimeout(2000)]
    public async Task<ActionResult> IndexAsync()
    {
        ...
    }

The above code sets the timeout value to 2000 milliseconds. Just in case you don't want any timeout for the asynchronous operation, you can use the [NoAsyncTimeout] attribute:

    [NoAsyncTimeout]
    public async Task<ActionResult> IndexAsync()
    {
        ...
    }

Summary

ASP.NET MVC makes it easy for you to create asynchronous action methods by following the async / await pattern of .NET Framework 4.5. Asynchronous operations allow action methods to cater to more concurrent requests than otherwise. Asynchronous action methods are useful for tasks involving network operations such as calling a remote service.

Is it necessary if it is only one call from the controller? — Posted by Aswin on 08/15/2016 07:49pm

If your controller has only one service method call, then is it necessary to make an async call? For e.g.
    List data = helper.GetCustomerData();

Since we have only one call, what is the point of going async? I understand if we have multiple backend calls, like

    List data = helper.GetCustomerData();
    List data1 = helper.GetCustomerData1();

then it makes sense to go async so that the second call doesn't have to wait until the first is done.

Thank you — Posted by pranit on 04/13/2015 10:52am

Thanks for the blog. It clears up the basic concept of async await.

Thanks — Posted by Francisco on 04/10/2015 02:43pm

Thanks dude, this helped me. For the ones who can't get the .ToListAsync() method, you should add the library System.Data.Entity; so you add in your project/Controller: using System.Data.Entity;

Chain of Commands for Task<> - Is it necessary? — Posted by Amit Karmakar on 03/27/2015 05:06pm

Great article!!! I was wondering if we really need to complicate the helper method itself to return Task. Can we not keep the helper layer simple and instead make the asynchronous call at the Controller Action only? Something like:

    public async Task IndexAsync()
    {
        return await Task.Run(() => {
            DbHelper helper = new DbHelper();
            List data = helper.GetCustomerData();
            return View(data);
        });
    }

This way, the subsequent layers can be implemented as normal object types and the complexity of the asynchronous call is left alone only at the Controller class.

THANK YOU — Posted by SIAMAK on 02/20/2015 11:23am

Thank you, really great.

Simple explanation for Async programming — Posted by Dhanuka on 01/02/2015 06:23am

Thanks a lot, this is very simple and very useful. All my team mates referred to this article. Keep it up!

Nice article (a suggestion) — Posted by SpiderCode on 11/20/2014 04:59am

Hello Bipin Joshi, it's a very nicely written article. Enjoyed it while reading. Apart from this, I found there is a small mistake (not sure).
In your blog, you have written that "The Index() action method shown above executes in asynchronous manner", it does not execute in asynchronous manner, it will be executed in synchronous manner :)Reply Calling async methodPosted by Guest1 on 11/11/2014 11:22am lets say i have an action public ActionResult Submit(EmpModel formModel, string SubmitForm) { IndexSync(); // how to call the async method from another action. }Reply ConfusedPosted by Logan on 07/05/2014 03:21pm Where does .ToListAsync() come from? I'm getting a "does not contain a definition for .ToListAsync" error... Also, where I declare my actions (ie. public async Task GetUsers()), its telling me "Cannot find all types required by the 'async' modifier...Reply
https://www.codeguru.com/csharp/.net/net_asp/mvc/creating-asynchronous-actions-in-asp.net-mvc.htm
Comparable vs Comparator in Java

In this article, I will explain the difference between Comparable and Comparator, and when to use each. First, let's identify the differences between these two interfaces.

Comparable

- Comparable provides a single sorting sequence. In other words, we can sort the collection on the basis of a single element such as id, name, or price.
- Comparable affects the original class (the actual class is modified).
- Comparable provides the compareTo() method to sort elements.
- Comparable is present in the java.lang package.
- We can sort a list of Comparable elements with the Collections.sort(List) method.

Comparator

- The Comparator provides multiple sorting sequences. In other words, we can sort the collection on the basis of multiple elements such as id, name, price, etc.
- Comparator provides the compare() method to sort elements.
- Comparator is present in the java.util package.
- We can sort a list with a Comparator using the Collections.sort(List, Comparator) method.

Code explanation

Here I have created a Student class to explain these ideas. First, implement the Comparable interface in the Student class. Then you have to override the compareTo method and write the comparison logic inside it. This is a single sorting implementation, so you can only sort by one field, either id or name. I am going to implement a sorting mechanism based on the id.

import java.util.Objects;

public class Student implements Comparable<Student> {

    private int id;
    private String name;

    public Student(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null || getClass() != obj.getClass()) {
            return false;
        }
        Student student = (Student) obj;
        return id == student.id && Objects.equals(name, student.name);
    }

    @Override
    public int compareTo(Student student) {
        if (id == student.id) {
            return 0;
        } else if (id > student.id) {
            return 1;
        } else {
            return -1;
        }
    }
}

Now create 3 different Student objects using the Student class and put them into a List. Let's try to sort these students using the student id. First of all, print the List; you can see they are not in order.

public static void main(String[] args) {
    List<Student> studentList = new ArrayList<>();

    Student student1 = new Student(327, "Kasun");
    Student student2 = new Student(100, "Dasun");
    Student student3 = new Student(167, "Thisun");

    studentList.add(student1);
    studentList.add(student2);
    studentList.add(student3);

    for (int i = 0; i < studentList.size(); i++) {
        System.out.println(studentList.get(i).getId());
    }
}

To sort these custom Student objects, you need to use the sort method of the Collections framework:

Collections.sort(studentList);

Now suppose that in the future I get a requirement to sort not by id but by name. Then I need to change the code again, because I implemented the sorting mechanism for the student id only. In the case of Comparable, the sort order is not dynamic; it is always hardcoded into the class, so every new requirement means changing the class again and again. That is not recommended, and it is where a Comparator comes in: when you need only a single sorting order, go for Comparable; otherwise use a Comparator.

Now I am going to create a class for IdComparator and one for NameComparator. Why a Comparator class for each field? Rather than hardcoding the order in the same class, you can create several Comparator classes and use whichever one you want in your main code. Create an IdComparator class that implements the Comparator interface, and override the method you need:

public class IdComparator implements Comparator<Student> {

    @Override
    public int compare(Student student1, Student student2) {
        if (student1.getId() == student2.getId()) {
            return 0;
        } else if (student1.getId() > student2.getId()) {
            return 1;
        } else {
            return -1;
        }
    }
}

Like this, you can add a NameComparator also:

public class NameComparator implements Comparator<Student> {

    @Override
    public int compare(Student student1, Student student2) {
        return student1.getName().compareTo(student2.getName());
    }
}

Now in the main method you can use the sort method to sort any custom objects you want; just pass a comparator instance as the second argument. The first call below sorts by name, the second by id:

Collections.sort(studentList, new NameComparator());
// or
Collections.sort(studentList, new IdComparator());

If we sort by name using NameComparator, the output shows the student names in ascending order: Dasun, Kasun and Thisun.

Interview Question

What happens if two objects have the same sorting values? Let's assume 2 students have the same student id (normally student ids are unique values; for this example, let's assume there are 2 students who happen to share the same id number) and the names are different. What can we do about this? Basically, we need to sort by the students' ids first, and if they are the same, go for a name comparison. Let's change the code to compare the id as well as the name: check the id first, and if the ids are the same, compare the students' names.

Go to the IdComparator compare method and add the name comparison code inside the first if branch (the branch where the two ids are the same):

if (student1.getId() == student2.getId()) {
    return student1.getName().compareTo(student2.getName());
} else if (student1.getId() > student2.getId()) {
    return 1;
} else {
    return -1;
}

Here I added student ids 327, 100, 167, and 327. First the list is sorted by student id, and you can see the id 327 is shared by two students (Kasun and Aruna). Now we compare those two by name: Aruna starts with 'A' and Kasun starts with 'K', so obviously Aruna should come first, then Kasun. So you can provide your own custom sorting using a Comparator.

I hope this concept, and when to use Comparator versus Comparable, is now clear. That's all about this tutorial. See you in the next tutorial. Thank You!
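A closing footnote, not part of the original tutorial: since Java 8, both halves of this article can be written far more compactly. Integer.compare covers the if/else chain in compareTo, and Comparator.comparing with thenComparing composes the id-then-name tie-break without hand-written comparator classes. A sketch (using a Pupil class here so it does not clash with the article's Student):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Pupil implements Comparable<Pupil> {
    final int id;
    final String name;

    Pupil(int id, String name) {
        this.id = id;
        this.name = name;
    }

    int getId() { return id; }
    String getName() { return name; }

    @Override
    public int compareTo(Pupil other) {
        // Same contract as the article's if/else chain:
        // negative, zero, or positive.
        return Integer.compare(id, other.id);
    }

    // Sort by id first; when two ids are equal, fall back to the name.
    static final Comparator<Pupil> BY_ID_THEN_NAME =
            Comparator.comparingInt(Pupil::getId)
                      .thenComparing(Pupil::getName);

    public static void main(String[] args) {
        List<Pupil> pupils = new ArrayList<>();
        pupils.add(new Pupil(327, "Kasun"));
        pupils.add(new Pupil(100, "Dasun"));
        pupils.add(new Pupil(167, "Thisun"));
        pupils.add(new Pupil(327, "Aruna"));

        pupils.sort(BY_ID_THEN_NAME);

        for (Pupil p : pupils) {
            System.out.println(p.id + " " + p.name);
        }
        // prints:
        // 100 Dasun
        // 167 Thisun
        // 327 Aruna
        // 327 Kasun
    }
}
```

list.sort(comparator) and Collections.sort(list, comparator) are equivalent here; the composed comparator expresses the tie-break in one line instead of a nested if/else.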
https://kasunprageethdissanayake.medium.com/comparable-vs-comparator-in-java-623b7435e06?source=user_profile---------7----------------------------
What's New in JavaScript 1.7

Using JavaScript 1.7

To use the new features of JavaScript 1.7, you must explicitly declare that this version of JavaScript is being used. In HTML or XUL code, use the following construct:

<script type="application/javascript;version=1.7"/>

When working in the JavaScript shell, specify the version with the version() function:

version(170);

Generators and iterators

When developing code that involves an iterative algorithm (such as iterating over a list, or repeatedly performing computations on the same data set), there are often state variables whose values need to be maintained for the duration of the computation process. Traditionally, you have to use a callback function to obtain the intermediate values of an iterative algorithm.

Generators

Consider this iterative algorithm that computes Fibonacci numbers:

function do_callback(num) {
  document.write(num + "<BR>\n");
}

function fib() {
  var i = 0, j = 1, n = 0;
  while (n < 10) {
    do_callback(i);
    var t = i;
    i = j;
    j += t;
    n++;
  }
}

fib();

The same algorithm can be rewritten as a generator: the yield keyword suspends the function and hands a value back to the caller, which pulls successive values with next() instead of receiving them through a callback:

function fib() {
  var i = 0, j = 1;
  while (true) {
    yield i;
    var t = i;
    i = j;
    j += t;
  }
}

var g = fib();
for (var i = 0; i < 10; i++) {
  document.write(g.next() + "<BR>\n");
}

Iterators

An iterator is a special object that lets you iterate over data. In normal usage, iterator objects are "invisible"; you won't need to operate on them explicitly, but will instead use JavaScript's for...in and for each...in statements to loop naturally over the keys and/or values of objects.

var objectWithIterator = getObjectSomehow();

for (var i in objectWithIterator) {
  document.write(objectWithIterator[i] + "<BR>\n");
}

If you are implementing your own iterator object, or have another need to directly manipulate iterators, you'll need to know about the next method, the StopIteration exception, and the __iterator__ property. You can create an iterator for an object by calling Iterator(objectname); the iterator for an object is found via the object's __iterator__ property, which by default implements iteration according to the usual for...in and for each...in model.

If you wish to provide a custom iterator, you should override the getter for __iterator__ to return an instance of your custom iterator. To get an object's iterator from script, you should use Iterator(obj) rather than accessing the __iterator__ property directly. Once you have an iterator, you can easily fetch the next item in the object by calling the iterator's next() method. If there is no data left, the StopIteration exception is thrown. Here's a simple example of direct iterator manipulation:

var obj = {name:"Jack Bauer", username:"JackB", id:12345, agency:"CTU", region:"Los Angeles"};

var it = Iterator(obj);

try {
  while (true) {
    document.write(it.next() + "<BR>\n");
  }
} catch (err if err instanceof StopIteration) {
  document.write("End of record.<BR>\n");
} catch (err) {
  document.write("Unknown error: " + err.description + "<BR>\n");
}

You can also iterate over just the keys of an object by passing true as the second argument, Iterator(obj, true). In both cases, the actual order in which the data is returned may vary based on the implementation. There is no guaranteed ordering of the data.

Iterators are a handy way to scan through the data in objects, including objects whose content may include data you're unaware of. This can be particularly useful if you need to preserve data your application isn't expecting.

Array comprehensions

Array comprehensions are a use of generators that provides a convenient way to perform powerful initialization of arrays. For example:

function range(begin, end) {
  for (let i = begin; i < end; ++i) {
    yield i;
  }
}

range() is a generator that returns all the values between begin and end. Having defined that, we can use it like this:

var evens = [i for each (i in range(0, 21)) if (i % 2 == 0)];

Without an array comprehension, the same array would be built like this:

var evens = [];
for (var i = 0; i <= 20; i++) {
  if (i % 2 == 0)
    evens.push(i);
}

Not only is the array comprehension much more compact, but it's actually easier to read, once you're familiar with the concept.

Scoping rules

Array comprehensions have an implicit block around them, containing everything inside the square brackets, as well as implicit let declarations. Add details.

The let statement

The let statement provides local scoping for variables, constants, and functions. It works by binding zero or more variables in the lexical scope of a single block of code. The completion value of the let statement is the completion value of the block. For example:

var x = 5;
var y = 0;

let (x = x+10, y = 12) {
  document.write(x+y + "<BR>\n");
}

document.write(x+y + "<BR>\n");

The first document.write outputs 27, using the let-bound values (15 + 12); the second outputs 5, because outside the let block x is still 5 and y is still 0.

Scoping rules

The scope of variables defined using let is the let block itself, as well as any inner blocks contained inside it, unless those blocks define variables by the same names.

Scoping rules

Given a let expression:

let (decls) expr

There is an implicit block created around expr.

let definitions

The let keyword can also be used to define variables, constants, and functions inside a block.

** This code doesn't run in FF 2.0 b1. **

if (x > y) {
  let const k = 37;
  let gamma : int = 12.7 + k;
  let i = 10;
  let function f(n) { return (n/3)+k; }
  return f(gamma) + f(i);
}

let-scoped variables in for loops

The let keyword can also be used to bind variables in the head of a for loop:

var i = 0;
for (let i = i; i < 10; i++)
  document.write(i + "<BR>\n");

for (let [name, value] in obj)
  document.write("Name: " + name + ", Value: " + value + "<BR>\n");

Scoping rules

for (let expr1; expr2; expr3) statement

In this example, expr2, expr3, and statement are enclosed in an implicit block that contains the block local variables declared by let expr1. This is demonstrated in the first loop above.

for (let expr1 in expr2) statement
for each (let expr1 in expr2) statement

In both these cases, there are implicit blocks containing each statement. The first of these is shown in the second loop above.

Destructuring assignment

Destructuring assignment makes it possible to extract data from arrays or objects using a syntax that mirrors the construction of array and object literals.

Examples

Destructuring assignment is best explained through the use of examples, so here are a few for you to read over and learn from.

Swapping values

You can use destructuring assignment, for example, to swap values:

var a = 1;
var b = 3;

[a, b] = [b, a];

After executing this code, b is 1 and a is 3.

Or to rotate values (a reader-contributed example; note the poor code format):

<body bgcolor="black">
<script type="application/javascript;version=1.7">
var a = "<font color='red'>o</font>";
var b = "<font color='orange'>o</font>";
var c = "<font color='yellow'>o</font>";
var d = "<font color='green'>o</font>";
var e = "<font color='blue'>o</font>";
var f = "<font color='purple'>o</font>";
var g = 'o';
var h = 'o';
for (lp = 0; lp < 40; lp++) {
  [a, b, c, d, e, f, g, h] = [b, c, d, e, f, g, h, a];
  document.write(a+''+b+''+c+''+d+''+e+''+f+''+g+''+h+''+"<br />");
}
</script>
</body>

After executing this code, a visual colorful display of the variable rotation will be displayed.

Multiple-value returns

Thanks to destructuring assignment, functions can return multiple values. While it's always been possible to return an array from a function, this provides an added degree of flexibility.

function f() {
  return [1, 2];
}

As you can see, returning results is done using an array-like notation, with all the values to return enclosed in brackets. You can return any number of results in this way. In this example, f() returns the values [1, 2] as its output.

var a, b;
[a, b] = f();

After this statement, a is 1 and b is 2.

Looping across objects

You can also use destructuring assignment to pull data out of an object:

var obj = { width: 3, length: 1.5, color: "orange" };

for (let [name, value] in obj) {
  document.write("Name: " + name + ", Value: " + value + "<BR>\n");
}

This loops over all the key/value pairs in the object obj and displays their names and values. In this case, the output looks like the following:

Name: width, Value: 3
Name: length, Value: 1.5
Name: color, Value: orange

Looping across values in an array of objects

You can loop over an array of objects, pulling out fields of interest from each object:

var people = [
  {
    name: "Mike Smith",
    family: { mother: "Jane Smith", father: "Harry Smith", sister: "Samantha Smith" },
    age: 35
  },
  {
    name: "Tom Jones",
    family: { mother: "Norah Jones", father: "Richard Jones", brother: "Howard Jones" },
    age: 25
  }
];

for each (let { name: n, family: { father: f } } in people) {
  document.write("Name: " + n + ", Father: " + f + "<BR>\n");
}

This pulls the name field into n and the family.father field into f, then prints them. This is done for each object in the people array. The output looks like this:

Name: Mike Smith, Father: Harry Smith
Name: Tom Jones, Father: Richard Jones
crawl-001
refinedweb
1,291
54.63
I can't remember if I posted this question earlier so I'm sorry if this gets posted twice. I was wondering about C and namespaces. If I create a function: int findIndex( int i ); in a.h and in some other file b.h excatly the same signature is used. The function is implemented in a.c and b.c respectively. Either directly or indirectly file a.h includes b.h. Now I have two identical signatures each one refering to a different implentation of the function int findIndex( int i ). In the file a.c I use the funtion findIndex. How does the compiler/linker now know which implentation I refere to? Is there anyway to avoid this problem, if it is a problem? In larger programs it seems to me that it would be impossible to know the names of all declared functions and some function names would proably be used twice. Is this assumption correct? I have tried testing this with gcc by declaring and implementing a dummy function strlen with excactly the same signature as the strlen function defined in <string.h>. Suprisingly(?) the program changed which function it used depending on where I implemented the dummy function. If implemented the dummy function before I called it, it used the dummy function. If I implemented it after the call the real strlen was used. oyse
http://cboard.cprogramming.com/c-programming/52558-c-namespaces.html
CC-MAIN-2014-23
refinedweb
230
68.77
Red Hat Bugzilla – Bug 221550 g++ and g++4 output bogus warnings on valid C code bracketed with extern "C" when using -Wshadow Last modified: 2007-11-16 20:14:55 EST Description of problem: g++ and g++4 output bogus warnings on valid C code bracketed with extern "C" when using -Wshadow. Version-Release number of selected component (if applicable): gcc-c++-3.4.6-3 gcc4-c++-4.1.0-18.EL4 How reproducible: Always. Steps to Reproduce: 1. Save this to foo.cpp: extern "C" { struct Foo {}; void Foo(void) {} } 2. Compile it with: g++ -Wshadow -c foo.cpp or g++4 -Wshadow -c foo.cpp 3. Actual results: I get the warning message: foo.cpp: In function ‘void Foo()’: foo.cpp:3: warning: ‘void Foo()’ hides constructor for ‘struct Foo’ Expected results: No warnings. Additional info: I ran into this problem when compiling a C++ file that included /usr/include/orbit-2.0/orbit/dynamic/dynamic-defs.h from ORBit2-devel-2.12.0-3. The message then was: /usr/include/orbit-2.0/orbit/dynamic/dynamic-defs.h:715: warning: `CORBA_TypeCode_struct* DynamicAny_DynAny_type(DynamicAny_DynAny_type*, CORBA_Environment*)' hides constructor for `struct DynamicAny_DynAny_type' There is nothing wrong on the warning. extern "C" doesn't mean compile this chunk of source with a C compiler, extern "C" is solely about external linkage. And in C++, struct/class names are injected into the same namespace as function names. I still think it's strange to give warnings about perfectly correct C code marked as C code. Plus, if I write this: extern "C" { struct Foo {}; void Foo(void) {} } int main() { Foo a; return 0; } I get an error and a warning on line 8: foo.cpp: In function `void Foo()': foo.cpp:3: warning: `void Foo()' hides constructor for `struct Foo' foo.cpp: In function `int main()': foo.cpp:8: error: expected `;' before "a" foo.cpp:8: warning: statement is a reference, not call, to function `Foo' I get the warning on line 8 even without the -Wshadow option. 
So, to me the warning on line 3 seems redundant for code affected by it, and unnecessary and strange for code not affected by it. Please reread what I said above and/or the ISO C++98 standard, extern "C" is not marking something as C code.
https://bugzilla.redhat.com/show_bug.cgi?id=221550
CC-MAIN-2018-47
refinedweb
385
58.99
We've made a lot of progress on the prototype; congrats on making it this far! Now, let's add a spawn manager to will spawn enemies over time. The Spawn Manager First, we need a host GameObject for the spawn manager script. You know how to do that by now, so go ahead and create it and also create a new script named SpawnManager in the scripts folder, and add it as a component. In the spawn manager script, I'll add a field named enemyPrefab and make it configurable via the inspector with the SerializeField attribute. To keep our hierarchy clean, I'll also create an enemyContainer field to be the parent for our enemy prefab instances. I don't need the Update method, so I'll remove it and in it's place I'll add a Coroutine method. I will have an upcoming article to talk about Coroutines in more detail. For now, just know that this will allow us to continually spawn enemy instances. I want to spawn a new enemy every 5 seconds as a default, but also allow for this to be adjusted via the inspector. The coroutine itself is a simple loop that yields for a provided amount of seconds, and then instantiates a new enemy instance at a random horizontal location. public class SpawnManager : MonoBehaviour { [ ] float spawnInterval = 5.0f; [ ] GameObject enemyPrefab; [ ] Transform enemyContainer; bool isStopped; Enemy enemy = null; // Start is called before the first frame update void Start() { enemy = enemyPrefab.GetComponent<Enemy>(); StartCoroutine(SpawnRoutine()); } ... } I've assigned the enemyPrefab and the enemyContainer references from the editor. We've already created variables in the enemy script that define the min and max X and Y values that we will want to use when spawning a new enemy, so in Start I am using the GetComponent method to get the Enemy component. The min and max X and Y variables on the Enemy class are private, so we need to make read-only public accessors so we can use the values, but not be able to change them. 
I'll add the properties just beneath the variables in Enemy.cs. public float BoundaryYMax { get { return boundarYMax; } } public float BoundaryXMin { get { return boundaryXMin; } } public float BoundaryXMax { get { return boundaryXMax; } } And here's the spawn coroutine: IEnumerator SpawnRoutine() { while (!isStopped) { float randomXPosition = Random.Range(enemy.BoundaryXMin, enemy.BoundaryXMax); Vector3 spawnPosition = new Vector3(randomXPosition, enemy.BoundaryYMax); Instantiate(enemyPrefab, spawnPosition, Quaternion.identity, enemyContainer); yield return new WaitForSeconds(spawnInterval); } } Here you can see we are creating a new random position for newly spawned enemies and waiting for the time set in our spawnInterval. After playing the game for a bit, you'll end up with lots of enemies on screen! Summary We now have a spawn manager that we can build upon as our prototype evolves. Take care. Stay awesome.
https://blog.justinhhorner.com/spawning-enemies-using-coroutines
CC-MAIN-2022-27
refinedweb
472
52.7
AxKit combines the power of Perl’s rich and varied XML processing facilities with the flexibility of the Apache web server. Rather than implementing such an environment in a monolithic package, as some application servers do, it takes a more modular approach. It allows developers to choose the lower-level tools such as XML parsers and XSLT processors for themselves. This neutrality with respect to lower-level tools gives AxKit the ability to adapt and incorporate new, better performing, or more feature-rich tools as quickly as they appear. That flexibility costs, however. You will probably have to install more than just the AxKit distribution to get a working system.Installation Requirements To get AxKit up and running, you will need: - The Apache HTTP server (Version 1.3.x) - The mod_perl Apache extension module (Version 1.26 or above) - An XML parser written in Perl or, more commonly, one written in C that offers a Perl interface module - The core AxKit distribution If you are running an open source or open source–friendly operating system such as GNU/Linux or one of the BSD variants (including Mac OS X), chances are good that you already have Apache and mod_perl installed. If this is the case, then you probably will not have to install them by hand. Simply make sure that you are running the most recent version of each, and skip directly to the next section. However, in some cases, using precompiled binaries of Apache and mod_perl proved to be problematic for people who want to use AxKit. In most cases, neither the binary in question, nor AxKit, are really broken. The problem lies in the fact that binaries built for public distribution are usually compiled with a set of general build arguments, not always well suited for specialized environments such as AxKit. If you find that all AxKit’s dependencies install cleanly, but AxKit’s test suite still fails, you may consider removing the binary versions and installing Apache and mod_perl by hand. 
At the time of this writing, AxKit runs only under Apache versions in the 1.3. x branch. Support for Apache 2. x is currently in development. Given that Apache 2 is quite different from previous versions, both in style and substance, the AxKit development team decided to take things slowly to ensure that AxKit for Apache 2. x offers the best that the new environment has to offer. To install Apache and mod_perl from the source, you need to download the source distributions for each from and, respectively. After downloading, unpack both distributions into a temporary directory and cd into the new mod_perl directory. A complete reference for all options available for building the Apache server and mod_perl is far beyond the scope of this book. The following will get you up and running with a useful set of features: $ perl Makefile.PL > EVERYTHING=1 > USE_APACI=1 > DYNAMIC=1 > APACHE_SRC=../apache_1.3.xxx/src > DO_HTTPD=1 > APACI_ARGS=”–enable-module=so –enable-shared=info > –enable-shared=proxy –enable-shared=rewrite > –enable-shared=log_agent” $ make $ make install All lines before the make command are build flags that are being passed to perl Makefile.PL. The characters are simply part of the shell syntax that allows you to divide the arguments across multiple lines. The > characters represent the shell’s output, and you should not include them. Also, be sure to replace the value of the APACHE_SRC option with the actual name of the directory into which you just unpacked the Apache source.XML Processing Options As I mentioned in the introduction to this chapter, AxKit is a publishing and application framework. It is not an XML parser or XSLT processor, but it allows you to choose among these lower-level tools while ensuring that they work together in a predictable way. If you do not already have the appropriate XML processing tools installed on your server, AxKit attempts to install the minimum needed to serve transformed XML content. 
However, more cautious minds may prefer to install the necessary XML parser and any optional XSLT libraries to make sure they work before installing the AxKit core. Deciding which XML parsers or other libraries to install depends on your application’s other XML processing needs, but the following dependency list shows which tools AxKit currently supports and which publishing features require which libraries. Gnome XML parser (libxml2) Requires: XML::LibXML Required by AxKit for: eXtensible Server Pages Available from: Expat XML parser Requires: XML::Parser Required by AxKit for: XPathScript Available from: Gnome XSLT processor (libxslt) Requires: libxml2,XML::LibXSLT Required by AxKit for: optional XSLT processing Available from: Sablotron XSLT processor Requires: Expat, XML::Sablotron Required by AxKit for: optional XSLT processing Available from: You do not need to install all these libraries before installing AxKit. For example, if you plan to do XSLT processing, you need to install either libxslt or Sablotron, not both. However, I do strongly recommend installing both supported XML parsers: Gnome Project’s libxml2 for its speed and modern features, and Expat for its wide use among many popular Perl XML modules. In any case, remember that you must install the associated Perl interface modules for any of the C libraries mentioned above, or AxKit will have no way to access the functionality that they provide. Again, some operating system distributions include one or more of the libraries mentioned above as part of their basic packages. Be sure to upgrade these libraries before proceeding with the AxKit installation to ensure that you are building against the most recent stable code. {mospagebreak title=Installing the AxKit Core} Now that you have an environment for AxKit to work in and have some of the required dependencies installed, you are ready to install AxKit itself. For most platforms this is a fairly painless operation. 
Using the CPAN Shell The quickest way to install AxKit is via Perl’s Comprehensive Perl Archive Network (CPAN) and the CPAN shell. Log in as root (or become superuser) and enter the following: $ perl -MCPAN -e shell > install AxKit This downloads, unpacks, compiles, and installs all modules in the AxKit distribution, as well as any prerequisite Perl modules you may need. If AxKit installs without error, you may safely skip to “Basic Server Configuration.” If it doesn’t, see “Installation Troubleshooting” for more information. From the Tarball Distribution The latest AxKit distribution can always be found on the Apache XML site at. Just download the latest tarball, unpack it, and cd to the newly created directory. As root, enter the following: $ perl Makefile.PL $ make $ make test $ make install This compiles and installs all modules in the AxKit distribution. Just like the CPAN shell method detailed above, AxKit’s installer script automatically attempts to install any module prerequisites it encounters. If make stops this process with an error, skip on to “Installation Troubleshooting” for help. Otherwise, if everything goes smoothly, you can skip ahead to “Basic Server Configuration.” In addition to the stable releases available from CPAN and axkit.org, the latest development version is available from the AxKit project’s anonymous CVS archive: cvs -d :pserver:anoncvs@cvs.apache.org:/home/cvspublic Brave souls who like to live on the edge or who may be interested in helping with AxKit development can check it out. When prompted for a password, enter: anoncvs. You may now check out a piping hot version of AxKit: <![CDATA[cvs – d:pserver:anoncvs@cvs.apache.org:/home/cvspublic co xml- axkit]]> Installing the CVS version of AxKit is otherwise identical to installing from the tarball. {mospagebreak title=Installing AxKit on Win 32 Systems} As of this writing, AxKit’s support for the Microsoft Windows environment should be considered experimental. 
Anyone who decides to put such a server into production does so at her own risk. AxKit will run in most cases. (Win9x users are out of luck.) If you are looking for an environment in which to learn XML web-publishing techniques, then AxKit on Win32 is certainly a viable choice. If you do not already have ActiveState’s Windows-friendly version of Perl installed, you must first download and install that before proceeding. It is available from http://. I suggest you get the latest version from the 5.8. x branch. In addition, you need the Windows port of the Apache web server. You can obtain links to the Windows installer from. Be sure to grab the latest in the 1.3. x branch. Next, grab the official Win32 binaries for libxml2 and libxslt from and follow the installation instructions there. After you install Apache, Perl libxml2, and libxslt, you can install AxKit using ActiveState’s ppm utility (which was installed when you installed ActivePerl). Simply open a command prompt, and type the following: C:> ppm ppm> repository add theoryx ppm> install mod_perl-1 ppm> install libapreq-1 ppm> install XML-LibXML ppm> install XML-LibXSLT ppm> install AxKit-1 Finally, add the following line to your httpd.conf and start Apache: LoadModule perl_module modules/mod_perl.so This combination of commands and packages should give you a workable (albeit experimental) AxKit on your Windows system. If things go wrong, be sure to join the AxKit user’s mailing list and provide details about the versions of various packages you tried, your Windows version, and relevant output from your error logs. {mospagebreak title=Basic Server Configuration} As you will learn in later chapters, AxKit offers quite a number of runtime configuration options that allow fine-grained control over every phase of the XML processing and delivery cycle. Getting a basic working configuration requires very little effort, however. 
In fact, AxKit ships with a sample configuration file that can be included into Apache’s main server configuration (or used as a road map for adding the configuration directives manually, if you decide to go that way instead). Copy the example.conf file in the AxKit distribution’s examples directory into Apache’s conf directory, renaming it axkit.conf. Then, add the following to the bottom of your httpd.conf file: # AxKit Setup Include conf/axkit.conf You now need to edit the new axkit.conf file to match the XML processing libraries that you installed earlier by uncommenting the AxAddStyleMap directives that correspond to tools you chose. For example, if you installed libxslt and XML::LibXSLT, you would uncomment the AxAddStyleMap directive that loads AxKit’s interface to LibXSLT. Example 2-1 helps to clarify this. Example 2-1. Sample axkit.conf fragment # Load the AxKit core. PerlModule AxKit # Associates Axkit with a few common XML file extensions AddHandler axkit .xml .xsp .dkb .rdf # Uncomment to add XSLT support via XML::LibXSLT # AxAddStyleMap text/xsl Apache::AxKit::Language::LibXSLT # Uncomment to add XSLT support via Sablotron # AxAddStyleMap text/xsl Apache::AxKit::Language::Sablot # Uncomment to add XPathScript Support # AxAddStyleMap application/x-xpathscript Apache::AxKit::Language::XPathScript # Uncomment to add XSP (eXtensible Sever Pages) support # AxAddStyleMap application/x-xsp Apache::AxKit::Language::XSP The one hard-and-fast rule about configuring AxKit is that the PerlModule directive that loads the AxKit core into Apache via mod_perl must appear at the top lexical level of your httpd.conf file, or one of the files that it includes. All other AxKit configuration directives may appear as children of other configuration directive blocks in whatever way best suits your server policy and application needs, but the PerlModule AxKit line must appear only at the top level. {mospagebreak title=Testing the Installation} Axif axkit/.. Figure 2-1. 
Proof of a successful demo AxKit installation.

{mospagebreak title=Installation Troubleshooting}

As I mentioned in this chapter’s introduction, AxKit’s core consists largely of code that glues other things together. In practice, this means that most errors encountered while installing AxKit are due to external dependencies that are missing, broken, out of date, or invisible to AxKit’s Makefile. Including a complete list of various errors that may be encountered among AxKit’s many external dependencies is not realistic here. It would likely be outdated before this book is on the shelves. In general, though, you can use a number of compile-time options when building AxKit. They will help you diagnose (and in many cases, fix) the cause of the trouble. AxKit’s Makefile.PL recognizes the following options:

DEBUG=1
This option causes the Makefile to produce copious amounts of information about each step of the build process. Although wading through the sheer amount of data this option produces can be tedious, you can diagnose most installation problems (missing or unseen libraries, etc.) by setting this flag.

NO_DIRECTIVES=1
This option turns off AxKit’s Apache configuration directives, which means you must set these via Apache’s PerlSetVar directives instead. Use this option only in extreme cases in which AxKit’s custom configuration directives conflict with those of another Apache extension module. (These cases are very rare, but they do happen.)

EXPAT_OPTS="..."
This option is relevant only if you do not have the Expat XML parser installed and decide to install it when installing AxKit. This argument takes a list of options to be passed to libexpat’s ./configure command. For example, EXPAT_OPTS="--prefix=/usr" installs libexpat in /usr/lib, rather than the default location.

LIBS="-L/path/to/somelib -lsomelib"
This option allows you to set your library search path.
It is primarily useful for pointing the Makefile to external libraries that you are sure are installed but, for some reason, are being missed during the build process.

INC="-I/path/to/somelib/include"
This option is like LIBS, but it sets the include search path.

{mospagebreak title=Where to Go for Help}

If you get stuck at any point during the installation process, do not despair. There are still other resources available to help you get up and running. In addition to this book, there are other sources of AxKit documentation, as well as a strong AxKit user community that is willing and able to help.

Installed AxKit documentation

Most Perl modules that comprise the AxKit distribution include a level of documentation. In many cases, these documents are quite detailed. You can access this information using the standard perldoc utility typically installed with Perl itself. Just type:

perldoc <modulename>

AxKit
The documentation in AxKit.pm provides a brief overview of each AxKit configuration directive, including simple examples. Example: perldoc AxKit

Apache::AxKit::Language::*
The modules in this package namespace provide support for the various XML processing and transformation languages such as XSLT, XSP, and XPathScript. Example: perldoc Apache::AxKit::Language::XSP provides an XSP language reference.

Apache::AxKit::Provider::*
The modules in this namespace provide AxKit with the ability to fetch and read the sources for the XML content and stylesheets that it will use when serving the current request. Example: perldoc Apache::AxKit::Provider::Filter shows the documentation for a module that allows an upstream PerlHandler (such as Apache::ASP or Mason) to generate content.

Apache::AxKit::Plugin::*
Modules in this namespace provide extensions to the basic AxKit functionality.
Example: perldoc Apache::AxKit::Plugin::Passthru offers documentation for the Passthru plug-in, which allows a “source view” of the XML document being processed based on the presence or absence of a specific query string parameter.

Apache::AxKit::StyleChooser::*
The modules in this namespace offer the ability to set the name of a preferred transformation style in environments that provide more than one way to transform documents for a given media type. Example: perldoc Apache::AxKit::StyleChooser::Cookie shows the documentation for a module that allows stylesheet transformation chains to be selected based on the value of an HTTP cookie sent from the requesting client.

Additional user-contributed documentation is also available from the AxKit project’s web site at. Not only does the project site offer several useful tutorials, it also provides a user-editable Wiki that often contains the latest platform-specific installation instructions, as well as many other AxKit tips, tricks, and ideas.

Mailing lists

The AxKit project sports a lively and committed user base with lots of friendly folks who are willing to help. Even if you are not having trouble, I highly recommend joining the axkit-users mailing list. The amount of traffic is modest, the signal-to-noise ratio is high, and topics range from specific AxKit installation questions to general discussions of XML publishing best practices. You can subscribe online by visiting or by sending an empty email message to mailto:axkit-users-subscribe@axkit.org. You can find browsable archives of axkit-users at:

Topics relating specifically to AxKit development are discussed on the axkit-devel list. Generally, you should post most questions, bug reports, patches, etc., to axkit-users. If you want to contribute to the AxKit codebase, then axkit-devel is the place for you. You can subscribe to the development list by sending an empty message to mailto:axkit-dev-subscribe@xml.apache.org.
In addition to the mailing lists, the AxKit community also maintains an #axkit IRC channel for discussing general AxKit topics. The IRC server hosting the channel changes periodically, so check the AxKit web site for details.
http://www.devshed.com/c/a/Apache/Installing-Axkit/
The ISame interface

The ISame interface is defined as:

// provide a customized comparison of objects in this class
public interface ISame<T>{
  // is this object the same as the given one?
  public boolean same(T that);
}

The following example illustrates the use of tests that apply a user-defined same method. For the class that represents an Author, we define two authors to be same if their last names are the same:

Author

public class Author implements ISame<Author>{
  String lastName;
  String firstName;

  Author(String lastName, String firstName){
    this.lastName = lastName;
    this.firstName = firstName;
  }

  // two authors are same if their last names match
  public boolean same(Author that){
    return this.lastName.equals(that.lastName);
  }
}

We now run the following tests:

// sample authors
public Author sk=new Author("King", "Steven");
public Author dk=new Author("King", "Dan");
public Author db=new Author("Brown", "Dan");

// sample books
public Book book1=new Book("title1",sk,4000);
public Book book2=new Book("title2",db,4000);
public Book book3=new Book("title1",db,4000);
public Book book4=new Book("title1",dk,4000);

void testSame(Tester t){
  t.checkExpect(this.book1, this.book2, "fails: different books, authors");
  t.checkExpect(this.book1, this.book4, "should succeed - same author last names");
  t.checkExpect(this.book3, this.book4, "fails: different author last names.");
  t.checkExpect(this.book2, this.book3, "fails: different titles.");
}

Here is the complete source code for this test suite. You can also download the entire source code as a zip file. Complete test results are shown here.
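The tests above use a Book class that the page does not define. The following self-contained sketch fills that gap; the rule that two books are "same" when their titles, authors, and prices all match is an assumption, chosen so the sample tests behave as their labels describe:

```java
// provide a customized comparison of objects in this class
interface ISame<T> {
    // is this object the same as the given one?
    boolean same(T that);
}

class Author implements ISame<Author> {
    String lastName;
    String firstName;

    Author(String lastName, String firstName) {
        this.lastName = lastName;
        this.firstName = firstName;
    }

    // two authors are same if their last names match
    public boolean same(Author that) {
        return this.lastName.equals(that.lastName);
    }
}

// Assumption: Book is not shown on this page; this version treats two
// books as same when title, author, and price all agree.
class Book implements ISame<Book> {
    String title;
    Author author;
    int price;

    Book(String title, Author author, int price) {
        this.title = title;
        this.author = author;
        this.price = price;
    }

    public boolean same(Book that) {
        return this.title.equals(that.title)
            && this.author.same(that.author)
            && this.price == that.price;
    }
}
```

With this definition, book1 and book4 compare as same (equal titles, both authors named King), while book3 and book4 do not (Brown versus King).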
http://www.ccs.neu.edu/javalib/Tester/UsersGuide/ISame.html
Red Hat Bugzilla – Bug 807383

Review Request: PythonMagick - Interface to ImageMagick for Python written in C++

Last modified: 2015-07-21 10:03:17 EDT

Spec URL: SRPM URL: Description: PythonMagick is an object-oriented Python interface to ImageMagick

Hello, This is my first RPM package for Fedora. The rpm provides an interface to ImageMagick for Python programs. It's _not_ to be confused with python-magic, which is used by the file command to see what type of file a given filename is. As far as I can tell, there is no RPM package of Python ImageMagick for Fedora, although it has been packaged for Debian [1]. I have built the package on Fedora 15 x86-64 since this is the only system I have access to. The source is licensed under the ImageMagick licence. I have named the package "PythonMagick" as this corresponds with the naming used by upstream [2]. I ran the rpmlint tool on the .SPEC file and found no warnings or errors. Regards, [1] [2]

Hi, if this is your first rpm for Fedora, you should find a sponsor, see . Also, I would recommend reading to make sure that your package follows the policy. rpmlint on the srpm reports these errors/warnings on some textual issues, please fix them:

PythonMagick.src: W: name-repeated-in-summary C PythonMagick
PythonMagick.src: E: description-line-too-long C PythonMagick is an object-oriented interface to ImageMagick which makes it possible
PythonMagick.src: E: description-line-too-long C to access the powerful image manipulation features of ImageMagick from Python applications.
PythonMagick.src: E: description-line-too-long C Install this library if you want to create, edit, compose, transform or convert images

rpmlint on the generated rpm file gives these additional warnings:

PythonMagick.x86_64: W: private-shared-object-provides /usr/lib64/python2.7/site-packages/PythonMagick/_PythonMagick.so _PythonMagick.so()(64bit)
PythonMagick.x86_64: W: devel-file-in-non-devel-package /usr/lib64/python2.7/site-packages/PythonMagick/_PythonMagick.a

You can keep a private shared object from being "provided" by an rpm by adding a filter like this:

%{?filter_setup:
%filter_provides_in %{python_sitearch}.*\.so$
%filter_setup
}

see:

A library file is not needed to run the python module. Therefore _PythonMagick.a should be packed into a separate devel package, see:

Packages must NOT contain any .la libtool archives, these must be removed in the spec if they are built. Therefore please remove: _PythonMagick.la

see: list of "MUST" items.

Are you still interested in getting this package into the repo (and thus becoming packager)?

Yes, I'm still interested, just reading the docs again before I post an updated spec file.

I have taken into account the previous comments and corrected the SPEC file. Please find attached src.rpm and spec file.

Thanks for your updated version. Here is my (informal) review: On my Fedora 17 system "rpmbuild -ba" creates 4 rpms now:

PythonMagick-0.9.7-2.fc17.src.rpm
PythonMagick-0.9.7-2.fc17.x86_64.rpm
PythonMagick-devel-0.9.7-2.fc17.x86_64.rpm
PythonMagick-debuginfo-0.9.7-2.fc17.x86_64.rpm

rpmlint results:

rpmlint PythonMagick-0.9.7-2.fc17.src.rpm
1 packages and 0 specfiles checked; 0 errors, 0 warnings.

rpmlint PythonMagick-0.9.7-2.fc17.x86_64.rpm
1 packages and 0 specfiles checked; 0 errors, 0 warnings.

rpmlint PythonMagick-devel-0.9.7-2.fc17.x86_64.rpm
PythonMagick-devel.x86_64: W: no-documentation
1 packages and 0 specfiles checked; 0 errors, 1 warnings.
rpmlint PythonMagick-debuginfo-0.9.7-2.fc17.x86_64.rpm
1 packages and 0 specfiles checked; 0 errors, 0 warnings.

The warning on the devel package is acceptable according to the "no-documentation" section in MUST items as mentioned in:

key: [+] OK [.] OK, not applicable [X] needs work

[.] MUST: Every binary RPM package (or subpackage) which stores shared library files (not just symlinks) in any of the dynamic linker's default paths, must call ldconfig in %post
[+] MUST: All filenames in rpm packages must be valid UTF-8. [24]

All MUST items seem fine to me. The package builds fine in mock using:

mock -r fedora-17-x86_64 --rebuild PythonMagick-0.9.7-2.fc17.src.rpm

Also, the package installs and runs fine in the mock chroot dir using:

mock -r fedora-17-x86_64 --shell
cd /builddir/build/RPMS/
rpm -i PythonMagick-0.9.7-2.fc17.x86_64.rpm
cd
python
>>> import PythonMagick

works fine. This python snippet from the README file also runs fine:

from PythonMagick import *
img=Image('30x30','red')
img.write('test1.png')
data=file('test1.png','rb').read()
img=Image(Blob(data))
img.write('test2.png')
print "now you should have two png files"

The generated png files seem ok. As mentioned earlier, my review is an informal one, meaning that I do not have the rights to sponsor you. You still need to find a sponsor to allow the package to be accepted into Fedora. The best way is to introduce yourself on the devel mailing list (assuming you did not yet do this). The procedure for this is detailed here:

A few mistakes here, including one or two eyebrow-raisers. Let's start with the reviews in comment 3 and comment 7:

> [+] MUST: The package must meet the Packaging Guidelines .

Please be careful here.
Basically, this MUST item is the hardest one to acknowledge with a brief '[+]', since that means you've checked _everything_ written on the following hierarchy of Wiki pages: Not only would you need to try to find a section in the guidelines for every line of the spec file, you would also need to do that for the built rpms and the build job output (as created by Mock or plain rpmbuild). > [.] MUST: Static libraries must be in a -static package. [19] > > [.] OK, not applicable Cannot be true, because the reviewed package places a static lib in the -devel packages: | %files devel | %{python_sitearch}/%{name}/_PythonMagick.a Please revisit and comment on it, if you disagree or if there are questions. > [+] MUST: Development files must be in a -devel package. [20] PythonMagick is a Python module to be used within Python software. Leaving aside the Static Library guidelines for a moment, how does the _PythonMagick.a library fit into all this? $ rpmls -p PythonMagick-devel-0.9.7-2.fc18.x86_64.rpm -rw-r--r-- /usr/lib64/python2.7/site-packages/PythonMagick/_PythonMagick.a > [+] MUST: In the vast majority of cases, devel packages must > require the base package using a fully versioned dependency: > Requires: %{name}%{?_isa} = %{version}-%{release} [21] The reviewed package does Requires: %{name} = %{version}-%{release} so %_isa is not used. Minor issue only, but can lead to trouble in some situations. > %description devel > > %{name}-devel contains the library links you'll need to develop > Python ImageMagick applications. This description would deserve an explanation. Specifically: Which "links"? And when are they needed? > Group: Development/Libraries "Development/Languages" is very common for Python modules. > Requires: boost-python > Requires: ImageMagick-c++ >= 6.4 > Requires: python >= 2.4 In short: Add comments to the spec file giving the rationale for each of those explicit dependencies or drop them as appropriate. 
The section in the guidelines may read as if it's specific to shared libs (here the "ImageMagick-c++" explicit Requires), but basically it applies to all other explicit Requires, too. > Requires: python >= 2.4 Currently the package automatically depends on python(abi) = 2.7 libpython2.7.so.1.0()(64bit) and explicitly on python >= 2.4 so which is right? Preferably, you drop the explicit dep on python >= 2.4, since the automatic dependency is on Python 2.7. > /bin/sh ./libtool --tag=CXX --mode=link g++ -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -DBOOST_PYTHON_DYNAMIC_LIB -avoid-version -module -L/usr/lib -Wl,-z,relro -o _PythonMagick.la -rpath /usr/lib64/python2.7/site-packages/PythonMagick pythonmagick_src/libpymagick.la helpers_src/libhelper.la -L/usr/lib -lboost_python -lMagick++ -lMagickCore -lpython2.7 > This is a line from the x86_64 build job output. The '-L/usr/lib' indicates that somewhere an incorrect libdir value, perhaps a hardcoded one, is used. Tracking down where and telling upstream about it might be worthwhile. Closing due long inactivity. Feel free to reopen if you want to continue.
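For reference, a devel stanza along the lines the reviewer suggests might have looked something like this (a sketch with illustrative values, not the actual submitted spec):

```
%package devel
Summary: Development files for %{name}
Group: Development/Languages
Requires: %{name}%{?_isa} = %{version}-%{release}

%description devel
Header and link-time files needed to build software against %{name}.
```

The %{?_isa} suffix makes the dependency architecture-specific, and Development/Languages matches the convention for Python modules mentioned in the review.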
https://bugzilla.redhat.com/show_bug.cgi?id=807383
hi all, i don't know if this would be the correct place to post but i don't know anywhere else... here goes. I'm having trouble trying to output a variable using graphics window BGI, i'm using Borland C++ Builder.... Basically it's a 5 second countdown/timer, I just can't get the variable to output on screen.

Code:

#include <iostream>
#include <conio.h>
#include "graphics2.h"
#include <windows.h>

using namespace std;

int main(void)
{
    int GraphDriver = 0;
    int GraphMode = 0;
    GraphDriver = 0;
    GraphMode = 0;
    int timer;
    timer = 5;

    initgraph(&GraphDriver, &GraphMode, "", 640, 480);

    do {
        Sleep(999);
        outtextxy(10, 200, timer); /* << the timer variable is what i need to output but can't */
        timer = (timer - 1);
    } while (timer > 0);

    return 0;
}

I'd appreciate any help. Thanks.
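One likely cause, for reference: BGI's outtextxy() expects a char* for its text argument, not an int, and conio's getch() reports Ctrl+X as control code 24. So the value has to be formatted into a string before it can be drawn. A minimal sketch of the helper (the graphics and keyboard calls are assumptions about the Borland BGI/conio API and are left as comments so the snippet stands alone):

```cpp
#include <cstdio>

// getch() reports Ctrl+X as the ASCII control code 24 ('X' - 64).
const int CTRL_X = 24;

// outtextxy() wants a char*, so format the countdown value as text first.
void countdown_label(int timer, char* buf) {
    std::sprintf(buf, "%d", timer);
}

// Inside the loop this would be used roughly like:
//     char buf[16];
//     countdown_label(timer, buf);
//     outtextxy(10, 200, buf);             // now a char*, not an int
//     if (kbhit() && getch() == CTRL_X)    // stop early on Ctrl+X
//         break;
```

The same conversion can also be done with itoa() on Borland compilers; sprintf() is the portable choice.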
http://cboard.cprogramming.com/cplusplus-programming/72220-graphics-problems.html
When Microsoft announced in 1996 that the next version of Windows Help would be based on HTML rather than the older Rich Text Format (RTF), Blue Sky Software (eHelp's original name) began adding HTML support to RoboHelp. That support began as a plug-in that went through a sometimes tortuous evolution before becoming the powerful and largely stable authoring tool it is today. We are now starting another shift—this time from HTML to XML. Once again, we're likely to go through a sometimes tortuous evolution before RoboHelp XML settles down. This article is the first of a two-part series that examines Macromedia RoboHelp support for XML. Part 1 looks at the import and export features, while Part 2 will look at the "handlers." Both articles discuss the features' mechanics, their peculiarities, and some aspects of integrating them in a documentation workflow. These articles don't cover all the details but they will help you evaluate whether you can use RoboHelp in an XML environment. XML support by RoboHelp X5 can be summed up in two significant points: it can import and export XML content (XHTML and DocBook) through a set of predefined handlers, and it can be extended to other formats through custom handlers. After reading this article, continue reading the next part, RoboHelp X5 and XML – Part 2: Using Handlers to Customize RoboHelp XML Features. If you are new to XML, you should read this recap of XML basic concepts. XML is similar to HTML in some basic ways: both descend from SGML, and both mark up content with nested elements, attributes, and text. This section discusses exporting to the predefined formats: DocBook or XHTML. Although you can export to other outputs, you'll have to create custom export handlers to do so (which I cover in Part 2). To export, use the XML Output Options dialog box (see Figure 1). To open it, double-click the XML Output option under the Single Source Layouts folder on the Project tab.

Figure 1. XML Output Options dialog box

Figure 1 shows the dialog box and the predefined export handlers. If you select either DocBook option, the Advanced button becomes active. Clicking it opens the Advanced Options dialog box (see Figure 2).

Figure 2.
Advanced (DocBook Output) Options dialog box Here's what the four handlers and the Advanced Options dialog box let you do: Export Topics to DocBook (Full-Featured Export): Similar to the previous option but also exports the project's CSS file and DHTML JavaScript file, providing all the topics in DocBook form. As with the Export Project to DocBook (Full-Featured Export) option, you must convert the topics to HTML, PDF, or PostScript in order for them to appear with correct formatting. Whichever DocBook option you choose depends on the files you need. For example, if you want to output the entire project for use online, you'd probably select the Export Project to DocBook (Full-Featured Export) option. If you want just the topics for PostScript, you'd probably select the Export Topics to DocBook (Content Only) option. Define which files you need and then try the options to find the appropriate one. Note: For more information about DocBook, see DocBook: The Definitive Guide by Norman Walsh and Leonard Muellner (O'Reilly & Associates, 1999) or read Writing Documentation Using DocBook: A Crash Course, suggested by Dave Beck of the Macromedia development group. Export Project to XHTML: Converts topics to XHTML with an HTM extension. (Note that this can cause trouble. See the note below about changing the extension and the discussion in the "XML Import" and "XHTML Filename Extensions" sections.) This option also exports the graphics, CSS, and JavaScript, and creates XML files for the browse, index, TOC, see also, and glossary control files—plus a log file. This option displays the topics properly formatted in a browser because it also exports the CSS file and adds the following reference in each topic's <head> section: <link rel="stylesheet" href="<nameofcss>.css" type="text/css" /> If you open the HTML files in a text editor, such as Notepad, you'll see the reference to the CSS file and the namespace declaration for XHTML in the <html> tag. 
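A topic exported this way therefore starts out roughly like the following sketch, with the XHTML namespace declared on the <html> element and the project's CSS linked in the <head> (the title, file, and style-sheet names here are placeholders, not RoboHelp's literal output):

```html
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>Sample Topic</title>
  <link rel="stylesheet" href="project.css" type="text/css" />
</head>
<body>
  <h1>Sample Topic</h1>
  <p>Topic text formatted by the exported CSS.</p>
</body>
</html>
```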
(Although the topics are formatted correctly when displayed in Microsoft Internet Explorer, the "start file" that this option creates does not open the output in the tri-pane window, as WebHelp does. This is because the Export Project to XHTML option is one of those open-ended outputs I mentioned in the introduction. It does not display a finished product, like WebHelp, but instead outputs a set of files for you to process further to create your finished product.) Note: You can change the XHTML file extension from HTM to another file extension by opening the XML Handler Manager to the Advanced page for the desired handler and changing the extension in the Export Topic File Extension field. (See the discussion about the XML filename extensions later in this article.) Export Topics to XHTML: Similar to the previous option but only exports the topics, graphics, CSS, JavaScript, and log file. The XHTML export option you use depends on the files you need. For example, if you want to output the project for use online, you'd likely select the Export Project to DocBook (Full-Featured Export) option and plan on additional processing. If you want just the topics for PostScript, you'd likely select the Export Topics to DocBook (Content Only) option. This section discusses importing using the defaults. You can import other formats too, but you'll have to create custom import handlers to do so. To import XML, use the Import XML: Select XML Import Handler dialog box (see Figure 3). To open it, select File > Import and select XML File. Figure 3. Import XML: Select XML Import Handler dialog box Figure 3 shows the dialog box and the predefined handlers. If you select the Import XML (CSS/XSL) option, the Advanced button becomes active. Clicking it displays the Advance XML Import Options dialog box (see Figure 4). Figure 4. 
Advanced XML Import Options dialog box The three handlers and the Advanced XML Import Options dialog box let you do the following: Import XHTML (*.xml): This feature can exhibit some peculiarities relating to the filename extension. The file type in the option name "…(*.xml)" changes depending on the XHTML file's extension. If the file has an XML extension, the option title reads "Import XHTML (*.xml)." If the file has an XHTML extension, the title changes to "Import XHTML (*.xhtml)." The extension does not change if the import file has an XHT extension because the predefined import handlers are not set up to recognize that extension. If you must import files with XHT extensions, you have three options: Still, on the issue of extensions, because XHTML files can have HTM extensions, I once wanted to see if RoboHelp could distinguish between an XHTML file with an HTM extension and a regular HTML file. It didn't. Instead I was able to import an HTML file and an XHTML file, both with HTM extensions, by selecting File > Import > XML Import. This could be an issue if you think you're importing XHTML files with HTM extensions, only to discover that you imported a mix of HTML and XHTML files. This might prompt another standard that prohibits the use of HTM extensions for XHTML files. Import XML (CSS/XSL): Imports an XML file with an associated CSS or XSL style sheet. RoboHelp uses the CSS or XSL referenced in the XML file. If the XML file doesn't reference a CSS or XSL, you can specify one by selecting the Use Customized CSS/XSL File option in the Advanced XML Import Options dialog box and then choosing a CSS or XSL file. If you use this option, the Advanced button becomes active and you can select three options: Treat as XML Tree View: Imports the file as a topic that, in Preview mode, acts like an XML file displayed in a browser. In other words, you can expand or collapse nested elements (see Figure 6). 
RoboHelp uses the <div> tag for the elements and marks them with grid lines. The lines don't appear when you preview the topic or generate the final output. Use this option if you want to see nested parent and child elements in a topic without having to preview it.

Figure 5. XML file imported as text flow

Figure 6. XML file imported as tree view

The Export Topics to XHTML option converts topics to XHTML with an HTM extension. Technically this is fine. However, it makes it impossible to distinguish by eye between HTML and XHTML files. It's also part of a larger question regarding which extension to use for XHTML files. The W3C's (World Wide Web Consortium) XHTML recommendation does not specify which extension to use for XHTML files. This implies that HTM is an acceptable extension. However, as I noted above, this makes it impossible to look at a filename and know whether it is in HTML or XHTML format. This will cause confusion if your company has files in both formats; a better approach is to use XHTML as the extension. In 2000 the IETF (Internet Engineering Task Force) recommended using XHTML or XHT extensions. The IETF recommended against using the XML extension because of the risk of confusion, at the server level, over how to distribute such files (text/XML or application/XML). Finally, one website—whose address I've since lost and have not been able to find again—suggests using XTM as the XHTML file extension. I don't recommend this option because the XTM extension is for files in the Extensible Topic Map format.

RoboHelp X5 does not support native XML authoring. It acts more like a format converter. This is good if you want to share files between RoboHelp and other projects. It's less optimal if you want to create XML files, however, because you must first convert the material. This can add steps to the procedure and create problems such as incompatible or messy code. So whether you use RoboHelp for XML work depends on your documentation workflow.
Also consider whether that documentation workflow is based on one tool. In theory, any tool that creates XML or XHTML should create the same set of code, so tool standards should not be an issue. In reality, you'll get different XML code depending on whether you create the XML using RoboHelp, Microsoft Word, or WebWorks Publisher for Word—just to name three tools. Each tool's output works in a browser, but the codes are different. These inconsistencies may cause odd behavior or conflicts that you may have to correct. For example, I once created a file in Word 2003, saved it as XML, and imported it into RoboHelp. When I previewed the topic, a subhead was superimposed on the title but the topic appeared correctly in the final output. I'm still not sure why this happened because the code looked right. To avoid this problem in a production environment, consider establishing some authoring tool standards in your company. (Enforcing those standards may be very difficult, especially if your company grows by acquisition or has a number of disparate documentation groups.) You may also experience odd conversion results when working with files from subject-matter experts who create Word files with all text set in Normal style, or text applied with incorrect or irrationally chosen styles. I always suggest to clients who use a word processor like Microsoft Word that they train their subject-matter experts to apply styles correctly and make style usage a standard. RoboHelp's XML support is new and still evolving. As you could expect in such a situation, its XML features are still evolving too. However, once you get past a few points of confusion, the mechanics of the features are fairly clear. I would describe the XML feature set in RoboHelp as a good first effort. Continue reading the next part, RoboHelp X5 and XML – Part 2: Using Handlers to Customize RoboHelp XML Features. Thanks to Dave Beck and Raul Ramos of Macromedia for their help with my questions.
http://www.adobe.com/devnet/robohelp/articles/xml_print.html
crawl-002
refinedweb
2,006
60.55
jakzaprogramowac.pl All questions About the project How To Program How To Develop Data dodania Pytanie 2017-10-04 16:10 Create stream lazily » How can a stream be created lazily? During migration of collection based code I have run into this pattern multiple times: Collection collection = ve... (1) odpowiedzi 2017-10-04 14:10 Firestore getProductID cannot be null error » I am trying to use the new Firestore released by Firebase in my android app. Unfortunately I keep getting this error when trying to write to the datab... (3) odpowiedzi 2017-10-04 11:10 Reading Oracle htp.p output into Java app » We have built a dynamic stored proc reader due to the nature of our system. The only issue I have left is I am unable to read the htp.p output back 21:10 Calling Ruby method from Java » I have a Java application that depends on the result from a ruby script. The issue I'm facing is that this ruby script is called many times which requ... (1) odpowiedzi 2017-10-03 21:10 Functional programming: How to carry on the context for a chain of validation rules » I have a set of functions (rules) for validation which take a context as parameter and either return "Okay" or an "Error" with a message. Basically th... (6) odpowiedzi 2017-10-03 15:10 How set initial size connection to postgresql in application.yml » Always when i connect to my database, I see 10 idle connection. How can I set this in application.yml. I use spring boot 1.5.6.RELEASE. It's not wor... (2) odpowiedzi 2017-10-03 12:10 Java Singleton with an inner class - what guarantees thread safety? » One common (1,2) way of implementing a singleton uses an inner class with a static member: public class Singleton { private static class Sin... (2) odpowiedzi 2017-10-03 12:10 Can't replace SAM-constructor with lambda when first argument is a class with one method » I’m puzzled over SAM constructors, I have this Java class: public class TestSam<T> { public void observe(ZeroMethods zero, Observer<T... 
(1) odpowiedzi 2017-10-03 06:10 What's the difference between requires and requires static in module declaration » What's the difference between requires and requires static module statements in module declaration? For example: module bar { requires java.comp... (2) odpowiedzi 2017-10-03 04:10 Error encode/decode Base64 between Java and Android » As my question, I have a big problem when I encode/decode Base64 between Java and Android. Here is my case: I write code to encrypt/decrypt using EC... (1) odpowiedzi 2017-10-02 22:10 Create Module in JShell in Java 9 » Just exploring new release of Java, its new module system, and playing with jshell as well. Probably my question doesn't have too much sense, but I am... (1) odpowiedzi 2017-10-02 21:10 All values in RecycleView and ListView are changing while I'm using dialog » I have a dialog and I want to add a recyclerView or listView and I know that I have to use a List to control items in each row. Now when I want to cha... (2) odpowiedzi 2017-10-02 11:10 Dealing with changed ENUM definitions - database » Introduction The lead architect went and changed the ENUM definition in a spring boot project. From: public enum ProcessState{ C("COMPLETE"), ... (3) odpowiedzi 2017-10-02 10:10 SunPKCS11 provider in Java 9 » Up to Java 8 the SunPKCS11 provider was loaded like this: Provider provider = new sun.security.pkcs11.SunPKCS11 (new ByteArrayInputStream (configFile... (2) odpowiedzi 2017-10-02 07:10 Can I run timer at different intervals of time? » Actually i wanted to ask can i give value from database to a timer delay? Timer timer = new Timer(); TimerTask timerTask = new TimerTask() { ... (2) odpowiedzi 2017-10-02 05:10 What's the difference between Objects.requireNonNullElse() and Optional.ofNullable().orElse()? » Java 9 introduces the requireNonNullElse and requireNonNullElseGet methods to the Objects class. Are these functionally any different to the Optional.... 
(2) odpowiedzi 2017-10-01 17:10 Java 9 - What is the difference between "Modules" and "JAR" files? » I am learning about Java 9 from What's New in Java9 and one of the hot topic in the discussion is The Modular JDK. I have some doubts: Are JAR fil... (3) odpowiedzi 2017-10-01 15:10 How is String concatenation implemented in Java 9? » As written in JEP 280: Change the static String-concatenation bytecode sequence generated by javac to use invokedynamic calls to JDK library funct... (3) odpowiedzi 2017-10-01 10:10 Adding elements to different collections in a single lambda expression » I possibly use the wrong terms, feel free to correct. I have a test method which takes a Runnable: void expectRollback(Runnable r) { .. } I can ca... (2) odpowiedzi 2017-09-30 21:09 Why does short-circuit evaluation work when operator precedence says it shouldn't? » In JavaScript and Java, the equals operator (== or ===) has a higher precedence than the OR operator (||). Yet both languages (JS, Java) support short... (3) odpowiedzi 2017-09-30 17:09 Warning: Failed prop type: The prop `todos[0].title` is marked as required in `TodoList`, but its value is `undefined` » I want to add title to my server as you see in the picture enter image description here its ok to value but its not working with title, title is in ... (1) odpowiedzi 2017-09-30 13:09 What's the difference between requires and requires transitive statements in Java 9 module declaration » What's the difference between requires and requires transitive module statements in module declaration? For example: module foo { requires java.b... (5) odpowiedzi 2017-09-30 11:09 How to deal with java keywords in auto generated module names in Java 9? » My project depends on Netty Epoll transport. Here is dependency: <dependency> <groupId>io.netty</groupId> <artifactId>... (1) odpowiedzi 2017-09-30 10:09 What does "Required filename-based automodules detected." warning mean? 
» In my multi-module project, I created module-info.java only for few modules. And during compilation with maven-compiler-plugin:3.7.0 I'm getting next ... (2) odpowiedzi 2017-09-30 09:09 Java 8, how to group stream elements to sets using BiPredicate » I have stream of files, and a method which takes two files as an argument, and return if they have same content or not. I want to reduce this stream ... (2) odpowiedzi 2017-09-30 01:09 Failing to see the point of the Functional interfaces 'Consumer' and 'Supplier' » I realize the uses of Predicate and Function, used for passing in a loosely-coupled conditional and function to a method respectively. Predicates are... (2) odpowiedzi 2017-09-29 22:09 Algorithms: Hybrid MergeSort and InsertionSort Execution Time » Good day SO community, I am a CS student currently performing an experiment combining MergeSort and InsertionSort. It is understood that for a certai... (1) odpowiedzi 2017-09-29 21:09 Are there dangers in making an existing Java interface functional? » As a rule, in the context of a large project, is it considered safe to take make an existing, ubiquitously used interface into a functional interface?... (4) odpowiedzi 2017-09-29 15:09 Maintaining ordering in multithreaded apache camel application » We use Tibco EMS as our messaging system and have used apache camel to write our application. In our application, messages are written to a queue. A c... (1) odpowiedzi 2017-09-29 07:09 What is an open module in Java 9 and how to use it » What is the difference between module with open keyword before and without it? For instance: open module foo { } module foo { } ... -28 19:09 Java 8 Predicate placement in project structure » I am wondering where should I place java 8 predicates in standard web java project structure. Lets assume that my project structure looks like JRuby: Not found despite being installed and linked » Not sure whats up with this issue... 
Warning: jruby 9.1.13.0 is already installed, it's just not linked. You can use brew link jruby to link this
http://jakzaprogramowac.pl/lista-pytan-jakzaprogramowac-wg-tagow/6/strona/2
Web Services — The number of WS instances: how to change it

Anna Smalska (Greenhorn, posted 10 years ago):
Hello, I have got a problem and I really need your help. I have a small web application with a web service, built like this:

    @WebService()
    @Stateless()
    public class myWS {
        ...
    }

Everything works fine, but it turned out that at any one time the number of instances of this web service can't be more than 5. When more clients want to use this web service at the same time, they have to wait. So I wanted to ask how I can change this. I am using NetBeans with GlassFish as the server. I guess this is connected with the number of beans in the 'pool', but I really don't know where to change this option. Thanks!

Manan Panchal (Greenhorn, posted 10 years ago):
How can you say that there are 5 instances of the WS?

Anna Smalska (Greenhorn, posted 10 years ago):
I put a simple

    System.out.println("running");
    Thread.sleep(20000);
    ...

When I start the application and execute 20 concurrent requests for this web service, I get 'running' only 5 times, and after 20 seconds, when those instances have finished, the other requests start (again 5 of them). So can somebody help me?

Ivan Krizsan (Ranch Hand, posted 10 years ago):
Hi! I suspect that this is not an issue with your application, but rather with the server you are using. I think you need to, for instance, increase the size of the thread pool holding the threads that process requests. Take a look at this article: It mentions the number of threads that process HTTP requests (which includes web service requests). Best wishes!

Anna Smalska (Greenhorn, posted 10 years ago):
Hi, yeah, I was just browsing the GlassFish admin console and I found the option 'Max Thread Pool Size', which was set to 5 ^^. And I wanted to write this, but you were first ^^ and you were right ^^. Thank you anyway. But I have got one more question, if you don't mind.
My web service is simple: it doesn't do anything and it doesn't return anything (an empty WS). Later I made a WS client as a normal J2SE application, and I created 20 threads, each of them connecting to this WS at the same time. Here are the results (the times of each request):

    time: (9.376)
    time: (11.375)
    time: (11.391)
    time: (11.396)
    time: (13.404)
    time: (13.416)
    time: (13.429)
    time: (14.431)
    time: (15.431)
    time: (17.437)
    time: (17.44)
    time: (18.443)
    time: (18.452)
    time: (18.458)
    time: (19.457)
    time: (19.462)
    time: (20.461)
    time: (20.471)
    time: (21.501)
    time: (22.477)

Is it normal that the time is so different for the 1st request and for the last one? After all, this is a simple web service...

Ivan Krizsan (Ranch Hand, posted 10 years ago):
Hi again! Ah, the art of testing! I cannot say that this is my strongest area, but I'll share some of the experiences I've had. Yes, variations in the time it takes to process a request are normal, since there are a lot of factors that affect the response time:
- Are you running anything but GlassFish on the server computer? If so, this will inevitably affect the test, since the OS needs to share resources between the different applications.
- For how long are you running the test? The first time a test executes and the first time the server receives requests, it may have to load resources needed to process the request. Subsequent requests do not require this, and so those requests can be processed more quickly.
- Garbage collection. The JVM may, at any point in time, decide that it is time for a GC. When this happens, it will delay any processing done by the JVM at that point in time. GC may actually freeze the entire JVM for a short duration of time (if I have understood correctly).

Finally, I would recommend you take a look at soapUI - a free program for testing web services. With soapUI you can, among other things, perform load testing on web services.
I just ran a 60-second load test with 20 threads and 20 threads in the GlassFish thread pool against the Calculator example web service from NetBeans. In addition, I configured a high level of monitoring for everything in the GlassFish monitoring panel. soapUI gave me a max response time of 669 ms, a minimum response time of 2 ms and an average response time of 12.87 ms. In the GlassFish monitoring for the web application in which the web service was deployed, I got a max response time of 27 ms and a request count of 89150. The conclusion is that the max response time in soapUI is not entirely correct - it is always good to have more than one single source of the "truth" when load-testing. Hope this is of any help! Best wishes!
https://coderanch.com/t/497888/java/number-WS-instances-change
I have a Python script that needs to execute an external program, but for some reason fails. If I have the following script: import os; os.system("C:\\Temp\\a b c\\Notepad.exe"); raw_input(); Then it fails with the following error: 'C:\Temp\a' is not recognized as an internal or external command, operable program or batch file. If I escape the program with quotes: import os; os.system('"C:\\Temp\\a b c\\Notepad.exe"'); raw_input(); Then it works. However, if I add a parameter, it stops working again: import os; os.system('"C:\\Temp\\a b c\\Notepad.exe" "C:\\test.txt"'); raw_input(); What is the right way to execute a program and wait for it to complete? I do not need to read output from it, as it is a visual program that does a job and then just exits, but I need to wait for it to complete. Also note, moving the program to a non-spaced path is not an option either. This does not work either: import os; os.system("'C:\\Temp\\a b c\\Notepad.exe'"); raw_input(); Note the swapped single/double quotes. With or without a parameter to Notepad here, it fails with the error message The filename, directory name, or volume label syntax is incorrect. subprocess.call will avoid problems with having to deal with quoting conventions of various shells. It accepts a list, rather than a string, so arguments are more easily delimited. i.e. import subprocess subprocess.call(['C:\\Temp\\a b c\\Notepad.exe', 'C:\\test.txt']) Here's a different way of doing it. If you're using Windows the following acts like double-clicking the file in Explorer, or giving the file name as an argument to the DOS "start" command: the file is opened with whatever application (if any) its extension is associated with. filepath = 'textfile.txt' import os os.startfile(filepath) Example: import os os.startfile('textfile.txt') This will open textfile.txt with Notepad if Notepad is associated with .txt files.
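On Python 3.5 and later, the same idea is usually written with subprocess.run, which also blocks until the child process exits (matching the "wait for it to complete" requirement). Passing the command as a list means no shell is involved, so a path with spaces needs no quoting at all. The sketch below is illustrative: it launches the current Python interpreter instead of Notepad so it runs anywhere, but a path like "C:\\Temp\\a b c\\Notepad.exe" would be passed the same way, as a single list element.

```python
import subprocess
import sys

# Each list element reaches the child process as exactly one argument --
# no shell, no quoting rules -- so spaces inside an element are harmless.
# We run the current interpreter and have it echo back a space-containing
# argument to demonstrate that it arrives intact.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "a b c"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # -> a b c
```

Because check=True, subprocess.run raises CalledProcessError if the program exits with a non-zero status, which is often preferable to silently ignoring a failure.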
https://pythonpedia.com/en/knowledge-base/204017/how-do-i-execute-a-program-from-python--os-system-fails-due-to-spaces-in-path
Whether a request is issued by a human or by a robot matters more than it may seem. In the early days of the Internet, even million-dollar sites like Yahoo! and Amazon were not protected against massive poisonous requests and were taken down by a 17-year-old schoolboy. It is around that time that the CAPTCHAs appeared (Completely Automated Public Turing test to tell Computers and Humans Apart), which, as the name says, are a reverse Turing test. The gist of reverse Turing tests is to process the request only if the client can perform a task that is very easy for a human and very difficult for a computer (like recognizing a distorted text, a picture with a house or the meaning of a simple sentence). In short, telling robots apart from humans is important for security reasons. Yet most servers do not want to deny access to every robot, because search engines like Google use robots called web spiders or web crawlers to index web pages. The current agreement is that servers should indicate their policy in a page called robots.txt (out of curiosity you can check the robots.txt page of the blog, but it only contains the address of the site map). The content tells the robots which pages they should not request, but does not prevent them from doing so in any way. Not surprisingly, most spammers or petty hackers do not take the time to read the robots.txt page... perhaps some do not even know it exists. So robots could issue a request to every page anyway, right? Well, let's check that out. In the technical section below I show a very simple Python script to get the reviews of user 2467618 on IMDB, using urllib as shown.

    import urllib
    content = urllib.urlopen(
        ''
    ).read()
    f = open('downloaded_content.html', 'w')
    f.write(content)
    f.close()

You can now open the file downloaded_content.html in your home directory with your favorite browser to see what it contains. In case you are not Python-proficient, you can check out what the script retrieves here. Among others, you will notice that it says "Access denied http: 403".
Sure enough, the robots.txt file of IMDB says that requests to /user are disallowed. So how do servers protect themselves from unwanted queries issued by robots? There is no universal answer, but in the case of IMDB, like in many others, the answer lies in the HTTP headers. You might notice in the "Access denied" page that it says at the bottom "Browser: Python-urllib/1.17". The issue here is that by default, urllib is honest about the user agent, which is easily intercepted and denied by the server. If we decide to lie about our user agent and claim we issue the request through Chrome, we would do as indicated in the following technical part instead.

    import urllib2
    import cookielib

    cookies = cookielib.LWPCookieJar()
    handlers = [
        urllib2.HTTPHandler(),
        urllib2.HTTPSHandler(),
        urllib2.HTTPCookieProcessor(cookies),
    ]
    opener = urllib2.build_opener(*handlers)
    headers = {
        'Accept': 'text/html,application/xhtml+xml,'\
            'application/xml;q=0.9,*/*;q=0.8',
        'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
        'Accept-Encoding': 'gzip,deflate,sdch',
        'Accept-Language': 'en-US,en;q=0.8,fr;q=0.6',
        'Connection': 'keep-alive',
        'User-Agent': 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.19 '\
            '(KHTML, like Gecko) Ubuntu/12.04 '\
            'Chromium/18.0.1025.151 Chrome/18.0.1025.151 '\
            'Safari/535.19',
    }
    request = urllib2.Request(
        url='',
        headers=headers
    )
    connection = opener.open(request)
    content = connection.read()
    # The content is gzip-compressed.
    f = open('downloaded_content.html.gz', 'wb')
    f.write(content)
    f.close()

As you can check by decompressing the file downloaded_content.html.gz, we get the same content as if we had issued the request from Chrome. To download the page, we had to set the HTTP headers to the values they have when we request the page from Chrome, which makes the code substantially more complicated. We can get the values of those headers very easily in Chrome by clicking on the wrench tool in the top-right corner and then choosing Tools > Developer Tools.
This displays a console which has a "Network" item where all requests are analyzed and where you can find the values of all the HTTP headers, data and cookies. You can also use Firebug on Firefox, or a sniffer like Wireshark, to analyze the traffic to and from your browser. By setting those headers to the values they have when the request is issued by a human through a browser, it is much harder to recognize that the request actually comes from a script. One thing that most servers still check is the frequency and the regularity of the requests. I don't know any human who would issue over nine thousand requests with less than a second interval. Neither do system administrators, and this is why they might block the IP address those requests are issued from. Basically, by masquerading HTTP headers and breaking the regularity patterns in the requests, it becomes very difficult to distinguish humans from robots without CAPTCHAs. If you ever wondered, the answer is yes: this is what I did to fetch the reviews from IMDB. But then, why would I ever be interested in user 2467618? This is what I will expand on in my next post.
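For readers on Python 3, where urllib2 no longer exists, the same header-spoofing idea can be sketched with urllib.request. The URL and header values below are illustrative placeholders, and no network request is actually sent: we only build the request object and inspect what would go on the wire.

```python
import urllib.request

# Headers copied from a browser session (the values here are examples).
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.8,fr;q=0.6",
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.19 "
                  "(KHTML, like Gecko) Chrome/18.0.1025.151 Safari/535.19",
}

request = urllib.request.Request("http://www.example.com/", headers=headers)

# urllib.request stores header names in "Capitalized" form internally,
# hence the lookup key "User-agent".
print(request.get_header("User-agent"))
# A real fetch would then be: urllib.request.urlopen(request).read()
```

This only prepares the request; actually sending it is a separate step, so the sketch is safe to run offline.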
http://blog.thegrandlocus.com/2012/07/are-you-human
ProcessReaderLinux: Accesses information about another process, identified by a process ID.

#include "snapshot/linux/process_reader_linux.h"

- Determines the target process' execution time. Returns true on success, false on failure, with a warning logged. On failure, user_time and system_time will be set to represent no time spent executing code in user or system mode.
- Initializes this object. This method must be successfully called before calling any other method in this class and may only be called once. Returns true on success, false on failure with a message logged.
- (Index 0 corresponds to the main executable.)
- Determines the target process' start time. Returns true on success with start_time set; otherwise false with a message logged.
https://crashpad.chromium.org/doxygen/classcrashpad_1_1ProcessReaderLinux.html
Observable Roles

An implementation of the Observable pattern, on steroids and in Dart.

Why? (a usecase)

We all know the observable pattern. An observer watches some object for events; when they happen, it invokes event handlers. Turns out, this isn't enough for some cases. Imagine you have three buttons: red, green and blue, each represented by an Object which is going to emit a 'click' event when the button is clicked - we'll call that object Publisher. Suppose we have an object which listens to those events; let's call it Subscriber. That is, we now have one subscriber, which is subscribed to events from three Publishers.

Obviously, our buttons serve different purposes in our app. The red one deletes files, the green one creates a new document and the blue one really just shows our application user some random motivational video from YouTube. Of course, it is foolish to program those things into the buttons themselves, so we leave this code for the Subscriber's event handlers. Buttons themselves, though, are essentially the same: apart from the color, they seem to behave in exactly the same manner: they show the text "Wait..." after they are clicked, look impacted while they are being clicked and they look flat when disabled. Still we may say their roles are somewhat different.

Here we come across an important, but really very simple concept. Each publisher may be assigned a role (it's optional, though!). So when an event happens, our Subscriber knows the role of the publisher that triggered the event. With that knowledge, we can now assign three different event handlers.

Another important idea is the ability of any Subscriber to stay locked for a while. A so-called listening lock may be set to true, in which case any new event that is emitted by Publishers is captured, but the event handler isn't invoked until the lock is set to false again. As an example of this idea we may say that until a new document is created, we will not process user requests for deleting any files.

Code example

The following would be example code implementing the scenario with the three buttons described above:

    import 'package:observable_roles';

Let's create a class that includes the Subscriber mixin and define event handlers in this class:

    class MySubscriber extends Object with Subscriber {
      var event_handlers = {
        'click' : {
          #all : (self, p) => print("A click event was triggered"),
          'file_terminator'  : (self, p) => print("Deleting files"),
          'document_creator' : (self, p) => print("Creating a new document"),
          'motivator'        : (self, p) => print("Showing a motivational video")
        }
      };
    }

Then we create a Button class. It'll be the same class for all buttons, because remember - buttons behave in the same way (the color implementation is left out of it):

    class Button extends Object with Publisher {
      List roles = [];
      Button([this.roles]); // Set roles when creating an object
    }

Now we create buttons, create a subscriber object and subscribe it to all the publishers (that is, Buttons). Then check what happens when we trigger a click event:

    main() {
      var red_button   = new Button(['file_terminator']);
      var green_button = new Button(['document_creator']);
      var blue_button  = new Button(['motivator']);
      // This button doesn't have any role, so it will pass its
      // class name as a role later.
      var some_button  = new Button();

      var subscriber = new MySubscriber();

      // Start listening for events from each button
      [red_button, green_button, blue_button, some_button].forEach((b) {
        b.addObservingSubscriber(subscriber);
      });

      red_button.publishEvent('click');   // => "Deleting files"
      green_button.publishEvent('click'); // => "Creating a new document"
      blue_button.publishEvent('click');  // => "Showing a motivational video"
      some_button.publishEvent('click');  // => "A click event was triggered"
    }

You're probably wondering if there's a simpler, nicer way of adding event_handlers to the Subscriber, and the answer is, yes there is. Here's an alternative:

    class MyComponent implements Subscriber {
      var event_handlers = new EventHandlersMap();
      MyComponent() {
        event_handlers.add(event: ..., role: ..., handler: ...);
        event_handlers.add_for_role('button', [a Map, event: handler]);
        event_handlers.add_for_event('click', [a Map, role: handler]);
      }
    }

Note that the #add, #add_for_role and #add_for_event methods are called inside a constructor. This is because event_handlers is an instance variable and can only be accessed within the object's context.
https://www.dartdocs.org/documentation/observable_roles/0.1.0/index.html
I've read many answers about this question but nothing was found about the comparison between two files. Actually this is a sample from the book Algorithms, based on binary search. Here is the source code:

    import java.util.Arrays;
    import edu.princeton.cs.algs4.*;

    public class prac1_1_23 {

        public static boolean BinaryLookup(int key, int[] arr) {
            int low = 0;
            int high = arr.length - 1;
            while (low <= high) {
                int mid = low + ((high - low) >> 1);
                if (key < arr[mid])
                    high = mid - 1;
                else if (key > arr[mid])
                    low = mid + 1;
                else
                    return true;
            }
            return false;
        }

        public static void main(String[] args) {
            char symbol = '-';
            int[] whitelist = new In(args[0]).readAllInts();
            Arrays.sort(whitelist);
            while (!StdIn.isEmpty()) {
                int key = StdIn.readInt();
                boolean found = BinaryLookup(key, whitelist);
                if ('+' == symbol && !found)
                    StdOut.println(key);
                if ('-' == symbol && found)
                    StdOut.println(key);
            }
        }
    }

The commands:

    java prac1_1_23 largeW.txt < largeT.txt
    javac-algs4 prac1_1_23.java                     // compile command
    java-algs4 prac1_1_23 largeW.txt < largeT.txt   // run command

Answer:

This is a PowerShell issue, as explained in "The '<' operator is reserved for future use (PowerShell)". As explained in the first answer there, you can run your command like this:

    Get-Content largeT.txt | java prac1_1_23 largeW.txt

Check out the other answers for alternative ways to redirect input in PowerShell.
https://codedump.io/share/qVahZc9qrgDN/1/powershell-the-39lt39-operator-is-reserved-for-future-use-in-java
In this blog, I will be explaining the Akka Typed API. This is going to be my first blog on Akka Typed, so let us name it "Beginner Level: Akka Typed API". Here, I will be telling you the reason for preferring Akka Typed over untyped. Along with that, I will also be demonstrating some implementations with Akka Typed.

Now before heading towards the Akka Typed API, it's important to first discuss Akka untyped. Basically, in Akka untyped we don't know the type of the messages that we are passing. The worst thing about Akka untyped is the receive() method of the untyped actor, which accepts anything and returns nothing. The type of the receive() method of an Akka untyped actor is PartialFunction[Any, Unit], which takes Any as input and returns Unit as output. That is the moment Akka Typed comes into the picture and solves the problem. It provides the types of incoming and outgoing messages.

Here, first of all, we will show the implementation of an untyped actor and then move towards typed Akka. The first thing to do is to quickly add the library dependencies for typed and untyped Akka in the build.sbt file. As a result, our build.sbt will look like:

    name := "Akka-actor-Demo"
    version := "0.1"
    scalaVersion := "2.13.5"

    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-actor" % "2.6.13",
      "com.typesafe.akka" %% "akka-actor-typed" % "2.6.14")

Now let us create an actor which handles messages. We will extend the Actor trait and override the receive method of the Akka untyped API to create this actor.

    import akka.actor.{Actor, ActorSystem, Props}

    object Hotel extends App {
      case class BookRoom(roomType: String)
      case object RoomBooked
      case class BookRoomWithFood(foodAmount: Long)
      case object RoomBookedWithFood

      val system = ActorSystem("hotel")
      val hotelActor = system.actorOf(Props[HotelActor], "hotelActor")
      hotelActor ! BookRoomWithFood(500)
    }

    class HotelActor extends Actor {
      import Hotel._
      def receive: Receive = {
        case BookRoom("AC Room") => println("Book Ac room")
        case RoomBooked => println("Room Booked ")
        case BookRoomWithFood(500) => println("Book Room with Food ")
        case RoomBookedWithFood => println("Room Booked ")
      }
    }

Here in the above code, we can clearly see that the actor can handle messages of any type it wants. Thus, we can send and receive any type of message here. Now let us talk about the Akka Typed API, which provides types for incoming and outgoing messages.

In the Akka Typed API:
- There is no receive() method, no Actor trait, no Props, and no more implicit sender (sender()) or ActorSelection.
- ActorRef is typed, i.e. ActorRef[T] where T is the type.
- Instead of extending Actor, we define a Behavior[T].

Implementing Akka Typed

Now, let us implement the above example using the Akka Typed API. For that, we need to define a behavior, which is enough for creating an actor using the typed API. Behaviors contains many factory methods, e.g. same, receiveMessage, setup, receiveSignal, etc.

    import akka.actor.typed.scaladsl.Behaviors
    import akka.actor.typed.{ActorRef, ActorSystem, Behavior}

    object HotelTyped {
      // Incoming messages need to extend Command
      sealed trait Command
      case class BookRoom(roomType: String, replyTo: ActorRef[RoomBooked]) extends Command
      case class RoomBooked()
      case class BookRoomWithFood(foodAmount: Long, replyTo: ActorRef[RoomBookedWithFood]) extends Command
      case class RoomBookedWithFood()

      def apply(): Behavior[Command] =
        Behaviors.receiveMessage {
          case BookRoom("AC", replyTo) =>
            replyTo ! RoomBooked()
            Behaviors.same
          case BookRoomWithFood(500, replyTo) =>
            replyTo ! RoomBookedWithFood()
            Behaviors.same
          // only change the behaviour if the behaviour for the next message has to change
        }
    }

In the above code, as you can see, the protocol is defined as a sealed trait. This is to make the compiler warn us in case we make any mistakes, such as sending or receiving messages that do not belong to the protocol. The typed actor needs a function (apply in this case) to construct a behaviour using the protocol we created. Also, as you can see, to keep the behaviour the same we have to return Behaviors.same. This is how we create actors using the Akka Typed API.

To conclude, this was just a quick elementary glimpse of typed actors. It appears very intuitive, and the fact that both requests and responses can be typed will most likely result in more type-safe code. I find it a remarkably elegant way of creating actor systems.
https://blog.knoldus.com/beginners-level-akka-typed-api/
Divide-and-conquer algorithm for the sum of an integer array

I'm having problems with divide-and-conquer algorithms and was looking for some help. I am trying to write a sumArray function that calculates the sum of an array of integers. This function should work by dividing the array in half and making recursive calls for each half. I tried to use concepts similar to the ones I used when writing recursive sum algorithms and a divide-and-conquer algorithm to determine the maximum element in an array, but I am struggling to combine the two ideas. Below is the code I wrote for sumArray, which compiles but does not return the correct result.

    int sumArray(int anArray[], int size) {
        int total = 0;
        // base case
        if (size == 0) {
            return 0;
        } else if (size == 1) {
            return anArray[0];
        }
        // divide and conquer
        int mid = size / 2;
        int lsum = anArray[mid] + sumArray(anArray, --mid);
        int rsize = size - mid;
        int rsum = anArray[size - mid] + sumArray(anArray + mid, --rsize);
        return lsum + rsum;
    }

I identified the problem as the function including the lsum value when calculating the rsum. I know the problem is my recursive call to sumArray using rsize (a variable equal to the size of the original array minus the midpoint). For some reason, however, I cannot find a fix. It seems a stupid question to ask, since I know the answer is staring me right in the face, but how do I restructure my function so that it returns the exact result?

UPDATE: Thanks to all the helpful answers, I've corrected my code so that it compiles and works nicely. I'll leave my original code here in case others struggle with divide and conquer and make similar mistakes. For a function that correctly solves the problem, see @Laura M.'s answer. @haris's answer also provides a good explanation of where my code was getting bugs.
Answer:

    int sumArray(int anArray[], int size) {
        // base case
        if (size == 0) {
            return 0;
        } else if (size == 1) {
            return anArray[0];
        }
        // divide and conquer
        int mid = size / 2;
        int rsize = size - mid;
        int lsum = sumArray(anArray, mid);
        int rsum = sumArray(anArray + mid, rsize);
        return lsum + rsum;
    }

Answer:

In your code,

    int mid = size / 2;
    int lsum = anArray[mid] + sumArray(anArray, --mid);
    int rsize = size - mid;
    int rsum = anArray[size - mid] + sumArray(anArray + mid, --rsize);

let me show an example where this makes a mistake. Let's assume the array is {2, 3, 4, 5, 6, 9}, so size = 6. Now when you do mid = size / 2 and then

    int lsum = anArray[mid] + sumArray(anArray, --mid);
    int rsize = size - mid;
    int rsum = anArray[size - mid] + sumArray(anArray + mid, --rsize);

then, because mid == (size - mid), the number 5 is added twice (once to lsum and then to rsum). Further, the call to sumArray() for rsum must have the parameters sumArray(anArray + (mid + 1), --rsize), since the element at mid has already been added to lsum. In another post, you can find much simpler code for this recursion, something like:

    int add(int low, int high, int *a) {
        int mid;
        if (high == low)
            return a[low];
        mid = (low + high) / 2;
        return add(low, mid, a) + add(mid + 1, high, a);
    }

Answer:

    int sumArray(int anArray[], int start, int end) {
        if (start == end)
            return anArray[start];
        if (start < end) {
            int mid = (start + end) / 2;
            int lsum = sumArray(anArray, start, mid - 1);
            int rsum = sumArray(anArray, mid + 1, end);
            return lsum + rsum + anArray[mid];
        }
        return 0;
    }
Your whole algorithm shrinks the size parameter until it reaches 1. Then the base case runs, and as a result, the first element in the original array is returned. What for? You never split an array in your code, and therefore the same array is preserved across all recursive calls (even in the base case). To fix this problem, all sumarray () should be done is to split the array into left half and right half based on the average calculation and recursively pass this new array until the size of the array is 1 (base case) and you return an element in an array. This effectively splits the array into its individual elements, and all functions should be doing at this point is adding lsum and rsum. pseudocode: sumArray(array[]){ if size(array) is 1 return array[0] mid = size(array) / 2 leftArray = splitArrayUpto(mid, array) rightArray = splitArrayAfter(mid+1, array) leftArraySum = sumArray(leftArray) rightArraySum = sumArray(rightArray) return leftArraySum + rightArraySum } source to share using namespace std; int sum(int a[], int l, int r) { if(l==r) return a[l]; int mid = (l+r)/2; int lsum = sum(a,l,mid); int rsum = sum(a,mid+1,r); return lsum+rsum; } int main() { int b[] = {9,7,2,6,5,3}; int fsum = sum(b,0,5); cout<<fsum; return 0; } source to share
https://daily-blog.netlify.app/questions/2170521/index.html
Hello everyone! I've made the mistake of not asking my teacher for help when I had the time. Now I'm stuck and lost. I didn't ask because I felt I should be able to figure this out myself, since this is my major. So, I'm giving up and needing advice. I would love any help.

Write a program that uses a class named Rectangle. The class has floating point attributes length and width. It has member functions that calculate the perimeter and the area of the rectangle. It also has set and get functions for both length and width. The set functions verify that length and width are each floating point numbers larger than 0.0 and no larger than 20.0. If an invalid length or width is given, then length and width will be set to 1.0. A member Boolean function will determine if the rectangle is a square (a square exists if the length and the width differ by less than .0001). The class will have a destructor that displays a message indicating that an object has "gone out of scope". The class will have 3 overloaded constructor functions. The first will have no parameters (in this function set the length and width to 1.0 in the body of the function). The second will have one parameter (length). (In this function set the width to 1.0 in the body of the function.) The third will have two parameters (length and width). This third constructor will set length and width to 1.0 in the body of the function if the values for these members are invalid. Error messages will indicate that an attempt has been made to create an object with invalid parameters.

Test the performance of your class by performing the following tasks in your program in the given order:

Declare object 1 with no parameters.
Declare object 2 with valid parameters for length (7.1) and width (3.2).
Declare object 3 with only a length (6.3).
Declare object 4 with invalid parameters for length and width.
Declare object 5 and initialize it by assigning object 2.
Display the length, width, perimeter, and area of all 5 objects and indicate whether or not they are squares. Write all output data to a file.

So, hopefully my code so far isn't awful. I'm sorry if it is. I understand that area and perimeter need to be returned by value, but I'm unsure of how to do this. I'm also confused by the destructor, and I'm sure there are other things I did wrong that I need pointed out. Here's what I have so far.

This is my header file Rectangle.h:

#ifndef Rectangle_H
#define Rectangle_H

class Rectangle
{
public:
    Rectangle();
    Rectangle(float length);
    Rectangle(float length, float width);
    ~Rectangle();
    void setLengthAndWidth(float, float);
    void setLength(float Length);
    void setWidth(float Width);
    void calculatePerimeter();
    void calculateArea();
    void isSquare();
    void printInfo();
    float getLength();
    float getWidth();
private:
    float length;
    float width;
    float area;
    float perimeter;
};

#endif

This is my member function cpp file:

#include <iostream>
#include <iomanip>
#include <cmath>
#include "Rectangle.h"
using namespace std;

Rectangle::Rectangle()
{ length = width = 1.0; }

Rectangle::Rectangle(float length)
{ setLengthAndWidth(length, 1.0); }

Rectangle::Rectangle(float length, float width)
{ setLengthAndWidth(length, width); }

void Rectangle::setLengthAndWidth(float Len, float Wid)
{
    setLength(Len);
    setWidth(Wid);
}

void Rectangle::setLength(float length)
{
    if (length >= 0 || length <= 20.0)
        length = length;
    else
        length = 1.0;
}

void Rectangle::setWidth(float width)
{
    if (width >= 0 || width <= 20.0)
        width = width;
    else
        width = 1.0;
}

void Rectangle::calculatePerimeter()
{
    (length * 2) + (width * 2) = perimeter;
    return perimeter;
}

void Rectangle::calculateArea()
{
    length * width = area;
    return area;
}

float Rectangle::getLength()
{
    return length;
}

float Rectangle::getWidth()
{
    return width;
}

void Rectangle::isSquare()
{
    return fabs(length - width) < .0001;
}

void Rectangle::printInfo()
{
    cout << "the length is " << length << endl << "the width is " << width << endl;
    cout << "the perimeter is " << perimeter << endl << "the area is " << area << endl;
    if (Rectangle.isSquare)
        cout << "the rectangle is square" << endl;
    else
        cout << "The rectangle is not a square " << endl;
}

Rectangle::~Rectangle()
{
    cout << "the object has gone out of scope. ";
}

And this is my main cpp file:

#include <iostream>
#include <iomanip>
#include <cmath>
#include "Rectangle.h"
using namespace std;

int main()
{
    Rectangle objectOne;
    Rectangle objectTwo(7.1, 3.2);
    Rectangle objectThree(6.3);
    Rectangle objectFour(200, 300);
    Rectangle objectFive = objectTwo;

    cout << "The first objects information is\n ";
    objectOne.printInfo();
    cout << "The second objects information is\n ";
    objectTwo.printInfo();
    cout << "The third objects information is\n ";
    objectThree.printInfo();
    cout << "The fourth objects information is\n ";
    objectFour.printInfo();
    cout << "The fifth objects information is\n ";
    objectFive.printInfo();
}

I would be forever thankful for help. This is due tomorrow and I'm worried I'm light years behind on finishing it, but I guess it's my own fault for thinking I could do it on my own. Thanks guys!
https://www.daniweb.com/programming/software-development/threads/384130/c-homework-help
Java Interview Questions & Answers: Compile-time versus runtime

During development and design, one needs to think in terms of compile-time, run-time, and build-time. Doing so will also help you understand the fundamentals better. These are beginner to intermediate level questions.

Q. What is the difference between line A and line B in the following code snippet?

public class ConstantFolding
{
    static final int number1 = 5;
    static final int number2 = 6;
    static int number3 = 5;
    static int number4 = 6;

    public static void main(String[ ] args)
    {
        int product1 = number1 * number2;  //line A
        int product2 = number3 * number4;  //line B
    }
}

A. Line A evaluates the product at compile-time, and line B evaluates the product at runtime. If you use a Java decompiler (e.g. jd-gui) and decompile the compiled ConstantFolding.class file, you will see why, as shown below.

public class ConstantFolding
{
    static final int number1 = 5;
    static final int number2 = 6;
    static int number3 = 5;
    static int number4 = 6;

    public static void main(String[ ] args)
    {
        int product1 = 30;
        int product2 = number3 * number4;
    }
}

Constant folding is an optimization technique used by the Java compiler. Since final variables cannot change, they can be optimized. A Java decompiler and the javap command are handy tools for inspecting compiled (i.e. byte code) classes.

Q. Can you think of scenarios other than code optimization where inspecting compiled code is useful?

A. Generics in Java are a compile-time construct, and it is very handy to inspect a compiled class file to understand and troubleshoot generics.

Q. Does this happen during compile-time, runtime, or both?

A. Method overloading: This happens at compile-time. This is also called compile-time polymorphism because the compiler must decide how to select which method to run based on the data types of the arguments.
public class MyClass
{
    public static void evaluate(String param1);  // method #1
    public static void evaluate(int param1);     // method #2
}

If the compiler were to compile the statement:

evaluate("My Test Argument passed to param1");

it could see that the argument was a string literal, and generate byte code that called method #1.

Method overriding: This happens at runtime. This is also called runtime polymorphism because the compiler does not and cannot know which method to call. Instead, the JVM must make the determination while the code is running.

public class A
{
    public int compute(int input)  //method #3
    {
        return 3 * input;
    }
}

public class B extends A
{
    @Override
    public int compute(int input)  //method #4
    {
        return 4 * input;
    }
}

The method compute(..) in subclass "B" overrides the method compute(..) in superclass "A". If the compiler has to compile the following method,

public int evaluate(A reference, int arg2)
{
    int result = reference.compute(arg2);
}

the compiler would not know whether the input argument 'reference' is of type "A" or type "B". It must be determined during runtime whether to call method #3 or method #4, depending on what type of object (i.e. an instance of class A or an instance of class B) is assigned to the input variable "reference".

Generics (aka type checking): This happens at compile-time. The compiler checks the type correctness of the program and translates or rewrites the code that uses generics into non-generic code that can be executed in the current JVM. This technique is known as "type erasure". In other words, the compiler erases all generic type information contained within the angle brackets to achieve backward compatibility with JRE 1.4.0 or earlier editions.

List<String> myList = new ArrayList<String>(10);

after compilation becomes:

List myList = new ArrayList(10);

Annotations: You can have either run-time or compile-time annotations.
public class B extends A
{
    @Override
    public int compute(int input)  //method #4
    {
        return 4 * input;
    }
}

@Override is a simple compile-time annotation used to catch little mistakes like typing tostring( ) instead of toString( ) in a subclass.

User defined annotations can be processed at compile-time using the Annotation Processing Tool (APT) that comes with Java 5. In Java 6, this is included as part of the compiler itself.

public class MyTest
{
    @Test
    public void testEmptyness( )
    {
        org.junit.Assert.assertTrue(getList( ).isEmpty( ));
    }

    private List getList( )
    {
        //implementation goes here
    }
}

@Test is an annotation that the JUnit framework uses at runtime, with the help of reflection, to determine which method(s) to execute within a test class.

@Test (timeout=100)
public void testTimeout( )
{
    while(true);  //infinite loop
}

The above test fails if it takes more than 100 ms to execute at runtime.

@Test (expected=IndexOutOfBoundsException.class)
public void testOutOfBounds( )
{
    new ArrayList<Object>( ).get(1);
}

The above test fails if it does not throw IndexOutOfBoundsException, or if it throws a different exception, at runtime.

User defined annotations can be processed at runtime using the AnnotatedElement and Annotation interfaces added to the Java reflection API.

Exceptions: You can have either runtime or compile-time exceptions.

A RuntimeException is also known as an unchecked exception, indicating that it is not required to be checked by the compiler. RuntimeException is the superclass of those exceptions that can be thrown during the execution of a program within the JVM. A method is not required to declare in its throws clause any subclasses of RuntimeException that might be thrown during the execution of a method but not caught.
Examples: NullPointerException, ArrayIndexOutOfBoundsException, etc.

Checked exceptions are verified by the compiler at compile-time. The compiler verifies that a program contains handlers (a throws clause or try { } catch { } blocks) for the checked exceptions, by analyzing which checked exceptions can result from the execution of a method or constructor.

Aspect Oriented Programming (AOP): Aspects can be weaved at compile-time, post-compile time, load-time or runtime.

- Compile-time weaving is the simplest approach. When you have the source code for an application, the AOP compiler (e.g. ajc, the AspectJ compiler) will compile from source and produce woven class files as output. The invocation of the weaver is integral to the AOP compilation process. The aspects themselves may be in source or binary form. If the aspects are required for the affected classes to compile, then you must weave at compile-time.

- Post-compile weaving is also sometimes called binary weaving, and is used to weave existing class files and JAR files. As with compile-time weaving, the aspects used for weaving may be in source or binary form, and may themselves be woven by aspects.

- Load-time weaving is simply binary weaving deferred until the point that a class loader loads a class file and defines the class to the JVM. To support this, one or more "weaving class loaders", either provided explicitly by the run-time environment or enabled through a "weaving agent", are required.

- Runtime weaving is the weaving of classes that have already been loaded to the JVM.

Inheritance happens at compile-time, and is therefore static. Delegation or composition happens at run-time, and is therefore dynamic and more flexible.

Q. Have you heard the phrase "composition should be favored over inheritance"? If yes, what do you understand by it?

A. Inheritance is a polymorphic tool, not a code reuse tool. Some developers tend to use inheritance for code reuse when there is no polymorphic relationship.
The guideline is that inheritance should only be used when a subclass 'is a' superclass.

- Don't use inheritance just to get code reuse. If there is no 'is a' relationship, then use composition for code reuse. Overuse of implementation inheritance (which uses the "extends" keyword) can break all the subclasses if the superclass is modified. This is due to the tight coupling between the parent and the child classes that occurs at compile time.

- Do not use inheritance just to get polymorphism. If there is no 'is a' relationship and all you want is polymorphism, then use interface inheritance with composition, which gives you code reuse and runtime flexibility. This is the reason why the GoF (Gang of Four) design patterns favor composition over inheritance.

The interviewer will be looking for the key terms "coupling", "static versus dynamic", and "happens at compile-time vs runtime" in your answers. The runtime flexibility is achieved in composition because the classes can be composed dynamically at runtime, either conditionally based on an outcome or unconditionally, whereas inheritance is static.

Q. Can you differentiate compile-time inheritance and runtime inheritance with examples, and specify which Java supports?

A. The term "inheritance" refers to a situation where behaviors and attributes are passed on from one object to another. The Java programming language natively supports only compile-time inheritance, through subclassing, as shown below with the keyword "extends".

public class Parent
{
    public String saySomething( )
    {
        return "Parent is called";
    }
}

public class Child extends Parent
{
    @Override
    public String saySomething( )
    {
        return super.saySomething( ) + ", Child is called";
    }
}

A call to the saySomething( ) method on the class "Child" will return "Parent is called, Child is called", because the Child class inherits "Parent is called" from the class Parent. The keyword "super" is used to call the method on the "Parent" class.
Runtime inheritance refers to the ability to construct the parent/child hierarchy at runtime. Java does not natively support runtime inheritance, but there is an alternative concept known as "delegation" or "composition", which refers to constructing a hierarchy of object instances at runtime. This allows you to simulate runtime inheritance. In Java, delegation is typically achieved as shown below:

public class Parent
{
    public String saySomething( )
    {
        return "Parent is called";
    }
}

public class Child
{
    public String saySomething( )
    {
        return new Parent( ).saySomething( ) + ", Child is called";
    }
}

The Child class delegates the call to the Parent class. Composition can be achieved as follows:

public class Child
{
    private Parent parent = null;

    public Child( )
    {
        this.parent = new Parent( );
    }

    public String saySomething( )
    {
        return this.parent.saySomething( ) + ", Child is called";
    }
}

Tricky Java Interview Questions

9 Comments:

Hi Arul, Thanks for another set of good questions. Your diagrams add a lot of value to the post and make the concepts clear. By the way, the concepts of static binding and dynamic binding are also related to compile time and runtime: overloading is resolved by static binding, while overriding is resolved at runtime. thx

Hi Arul, Nice blog... I have one question related to generics. Let's say you have List Base b = new ArrayList Base (); b.add(new Derived1()); b.add(new Derived2()); Why is this allowed? The above code will throw a runtime exception if not casted properly while retrieving.

You have not used generics above. Generics in Java happen at compile time. Look at my generics tutorials to understand the use of wildcards.

Sorry, your angle brackets have not come through. Look at the other generics tutorial with wildcards and when to use them, etc. Use the search at the top.
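The generics question in the comments lost its angle brackets to the blog software. Below is a plausible reconstruction (Base, Derived1, and Derived2 are hypothetical stand-ins for the commenter's classes) showing why adding subclass instances to a List<Base> is allowed, and why retrieving them needs no cast and throws no runtime exception:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the commenter's classes.
class Base { int variable = 1; }
class Derived1 extends Base { Derived1() { variable = 2; } }
class Derived2 extends Base { Derived2() { variable = 3; } }

public class GenericsDemo {
    public static void main(String[] args) {
        List<Base> b = new ArrayList<Base>();
        b.add(new Derived1());  // allowed: a Derived1 IS-A Base
        b.add(new Derived2());  // allowed: a Derived2 IS-A Base

        // No cast and no runtime exception: get() is statically typed as Base,
        // so any member declared on Base is safe to use.
        int sum = 0;
        for (Base item : b) {
            sum += item.variable;
        }
        System.out.println(sum);  // 2 + 3 = 5
    }
}
```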
http://java-success.blogspot.com.au/2011/09/core-java-interview-answers-q1.html
SEGV error when running an application on ARM

Hi, I am trying to run a little application on my target (simply a window, no button, no text, just a window), but when I try to run it like:

./app -qws

nothing appears on screen and I get a "SEGV" message on the terminal. After searching for this kind of error I often read that it can happen because a program tries to access a part of memory that either doesn't exist or that it doesn't have the right to use. (I chmod 777 the program to give it all rights.)

I know the compiler is working because I can generate a little .c file that prints 'hello world' on the terminal, but now I would like to use the screen with my board.

In Qt Creator the files are:

main.cpp:

#include "mainwindow.h"
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();
    return a.exec();
}

and mainwindow.cpp:

#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
}

MainWindow::~MainWindow()
{
    delete ui;
}

[edit: koahnig] code tags adjusted

SGaist (Lifetime Qt Champion) last edited by:
Hi, did you try running your application through the debugger?
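If gdb is available on the target, following that suggestion might look roughly like this (an illustrative session sketch, not verified on this board):

```
# interactive: run under gdb, then print a backtrace at the crash
gdb ./app
(gdb) run -qws
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt

# or non-interactively, in one shot:
gdb -batch -ex run -ex bt --args ./app -qws
```

The backtrace usually points at the function that dereferenced a bad address, which narrows the search considerably.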
https://forum.qt.io/topic/52887/segv-error-when-running-an-application-on-arm
> Looked around but couldn't find the answer. I've got a function in my script which generates a random number. I want to use that number to access a variable in the script of the corresponding object by turning the int into a string. I've got objects obj1 with scriptName1, obj2 with scriptName2, etc., and a list of the objects in my main script. I tried doing

string stringS = "scriptName" + randomNumber.ToString();
var1 = objectList[randomNumber].GetComponent(stringS).variable;

but that does not work unless I reference it without using the string, like this:

var1 = objectList[randomNumber].GetComponent<scriptName1>().variable;

Is there any possible way to use the string for the reference? Thank you for the help.

I think I can help you, but I need some extra info: does each object in the list have a different class (scriptName1, scriptName2, etc.)? If not, use a base class scriptNameBase and make the rest children, so you can use:

var1 = objectList[randomNumber].GetComponent<scriptNameBase>().variable;

@xxmariofer Each object only has one script attached to it, all of them numbered. The main script is in the parent object. If I understood your instructions correctly, when I attempt to assign the variable I get this error on the console:

Using the generic method `UnityEngine.GameObject.GetComponent<T>()' requires `1' type argument(s)

Cheers

I'm saying something like this:

//parent
public abstract class ParentScript : MonoBehaviour
{
    public int variable;
}

//child example
public class scriptName1 : ParentScript
{
    //your stuff
}

//reference that script
var1 = objectList[randomNumber].GetComponent<ParentScript>().variable;

couldn't test it

Answer by tormentoarmagedoom · Jan 15 at 04:24 PM

Good day. I'm sure it is possible, but I've never done it to find a script, only to find variables inside a script using a string. I will do a simple and "stupid" example (C#) to show you how I think it can work. Whether it works or not, please come back and explain what you get.

string RandNumb = "2";

Let's look for the object called "SuperObject2":

GameObject ObjectSelected = GameObject.Find("SuperObject" + RandNumb);

Now let's access its script, which is called "MyScript2":

Object ScriptIWillUse = ObjectSelected.GetType("MyScript" + RandNum);

Bye!

@tormentoarmagedoom Thanks for the reply. Tried this, but when I use GetType I get an error: No overload for method GetType takes 1 argument.

Answer by sean244 · Jan 16 at 03:14 AM

You can do this:

string stringS = "scriptName" + randomNumber.ToString();
var otherScript = objectList[randomNumber].GetComponent(stringS);
var myVariable = otherScript.variable;

Answer by cdr9042 · Jan 16 at 04:07 AM

Can you explain why you have to do it this way and not another way? Normally, if there are many scripts that do similar things, I would make them inherit from one class. For example:

public class Vehicle : MonoBehaviour
{
    public virtual void Go() { }
}

public class Bike : Vehicle
{
    public override void Go() { }
}

public class Car : Vehicle
{
    public override void Go() { }
}

Then if I want to pick randomly between the car and the bike, and then run the function Go():

Vehicle pickVehicle;

void PickRandomThenGo()
{
    GameObject randomObj = PickRandomObj(); //get the random object
    pickVehicle = randomObj.GetComponent<Vehicle>();
    pickVehicle.Go();
}

Like that, if pickVehicle is of the Bike class, the function Go() inside the Bike class will run. Otherwise, Car's function will run.
https://answers.unity.com/questions/1590448/accessing-a-script-only-using-its-string.html
We open the first file, firstFile.txt, then the second file, secondFile.txt, and then read both files line by line, writing a line of the first file to a third file followed by the corresponding line of the second file. Like this:

first line of first file
first line of second file
second line of first file
second line of second file
..
..
..
nth line of first file
nth line of second file

#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *firstFile, *secondFile, *thirdFile;
    int ch[2];  /* int, not char, so the EOF value can be represented */

    firstFile = fopen("firstFile.txt", "r");
    secondFile = fopen("secondFile.txt", "r");
    thirdFile = fopen("thirdFile.txt", "w");

    if ((firstFile == NULL) || (secondFile == NULL) || (thirdFile == NULL)) {
        puts("something went wrong");
        getchar();
        exit(0);
    }

    while (1) {
        ch[0] = fgetc(firstFile);
        ch[1] = fgetc(secondFile);

        /* check for end of file */
        if ((ch[0] == EOF) || (ch[1] == EOF))
            break;

        /* copy one line of the first file to thirdFile */
        do {
            fputc(ch[0], thirdFile);
            ch[0] = fgetc(firstFile);            /* get character from next location */
        } while (ch[0] != '\n' && ch[0] != EOF); /* loop until newline (or EOF) */
        fputc('\n', thirdFile);                  /* write a newline to thirdFile */

        /* copy one line of the second file to thirdFile */
        do {
            fputc(ch[1], thirdFile);
            ch[1] = fgetc(secondFile);
        } while (ch[1] != '\n' && ch[1] != EOF);
        fputc('\n', thirdFile);
    }

    /* close all files */
    fclose(firstFile);
    fclose(secondFile);
    fclose(thirdFile);

    getchar();
    return 0;
}
http://www.loopandbreak.com/merging-two-files-in-one-in-cc/
- Why do we still use C? Isn't it quite old?

Ans. C is considered to be the mother of all modern-day languages. It was initially created for the purpose of writing operating systems; specifically, it was used for the UNIX operating system. It is as fast as assembly language and hence is the first choice for systems development. Other programs that usually use C for their development are assemblers, text editors, databases, utility tools, etc.

- What are storage class specifiers and how many of them do you know?

Ans. A storage class defines the scope and lifetime of variables or functions in C. There are four different storage classes in C:

- auto
- register
- static
- extern

- What do you mean by scope of a variable?

Ans. It is the part of a program from which a variable can be directly accessed. In C, all identifiers are statically scoped. C has basically 4 types of scope rules: file scope, block scope, function prototype scope, and function scope.

- Can you print Hello World without a semicolon?

Ans. In C, printf returns the number of characters written to stdout, so it can be placed inside an if condition and still perform its job:

#include <stdio.h>

int main(void)
{
    if (printf("Hello World"))
    {
    }
}

- Do you know about pointers? Can you explain what a far pointer is?

Ans. Yes. Pointers are variables that hold the address of another value. A pointer which can access all 16 segments of the RAM is known as a far pointer.

- Can you explain what a dangling pointer is and how we can avoid it?

Ans. If a pointer is pointing to the address of a variable, and that variable or memory is then freed (for example through another pointer), the first pointer is left pointing to a memory location that might no longer be accessible. This first pointer is known as a dangling pointer. The problem can easily be avoided by setting the pointer to null after the memory is freed.

- What is static memory allocation?

Ans. Static memory allocation is when the memory is allocated at compile time. This memory cannot be increased while running a program.
E.g., int arr[10]: the size of this array cannot be increased during run time. This is called static memory allocation.

- Then what is dynamic memory allocation?

Ans. This memory allocation occurs during run time and the amount can easily be changed. C uses malloc(), calloc() and realloc() for dynamic memory allocation. This memory is allocated from the heap. Less memory space is wasted with this kind of allocation.

- Can you give the difference between malloc() and calloc()?

Ans. malloc() takes a single argument (the total number of bytes to allocate) and leaves the allocated memory uninitialized, whereas calloc() takes two arguments (the number of elements and the size of each element) and initializes the allocated memory to zero.

- Do you know about enumerations?

Ans. Enumerations are lists of named integer constants. They are defined in C with enum:

enum week { Mon, Tue, Wed, Thur, Fri, Sat, Sun };

- What is a union? Why do we need unions when we already have structures?

Ans. A union is a data type that helps in storing multiple types of data in a single unit. It is different from a structure: in a structure, the size of the memory that is allocated is the sum of the sizes of all the members, which might use a lot of extra space. A union only allocates memory equal to the size of its largest member, although in a union we can only access one member at a time.

union unionstruct
{
    int a;    // union members declaration
    float b;  // the union is only as large as its largest member
    char c;
};

- Can you briefly explain where variables are stored in C?

Ans. The following describes where each type of data is stored in C:

global variables - data
static variables - data
constant data types - code and/or data
local variables (declared and defined in functions) - stack
variables declared and defined in the main function - stack
pointers - data or stack, depending on the context
dynamically allocated space (using malloc, calloc, realloc) - heap

- What is the auto keyword and what is its use?

Ans. Each variable defined inside a function is called a local variable; these local variables are known as automatic variables (auto). We do not need to explicitly declare them as auto. If such a variable is not initialized, it holds a garbage value.
- Can you write a program without the use of the main function?

Ans. Well, I can write a program without a main function, but it will only get compiled. It cannot be executed without a main function.

- Write a C program to check whether a given number is a palindrome or not.

Ans. A number is a palindrome if it is equal to its reverse.

#include <stdio.h>

int main()
{
    int n, r, sum = 0, temp;
    scanf("%d", &n);
    temp = n;

    // build the reverse of the number
    while (n > 0)
    {
        r = n % 10;
        sum = (sum * 10) + r;
        n = n / 10;
    }

    if (temp == sum)
        printf("palindrome");
    else
        printf("not palindrome");

    return 0;
}
https://www.prepbytes.com/blog/c-programming/commonly-asked-interview-questions-on-c/
Second quarter Open Source Awards announced

JohnGrahamCumming writes "The Open Source Initiative has announced its Q2 award winners here. Three people/projects got $500 Merit Awards: Martin Pool for distcc, Tom Lord for GNU Arch and The GIMP. OSI is currently looking for nominations for the Q3 awards to be announced at OSCON."

See!!! (Score:5, Funny)

Re:See!!! (Score:5, Insightful)
On the other hand, the recognition may land them jobs as developers or as managers of a group of developers.

Re:See!!! (Score:5, Funny)

Re:See!!! (Score:1)
Trust me on that one.

Re:See!!! (Score:1)
That depends. Are we talking about Top Ramen? Or one of its close cousins: Bottom, Up, Down, Charm or Strange Ramen? (Mmmmm. Strange Ramen!) My local store sells ramen for about $1 each. If you bought 500 packets of ramen and ate them all, one after the other, what would happen? Well, that is something to which we already have the answer [twiztv.com]. You would enter a sort of mental fugue, and your perception of time would slow down to the po

Re:See!!! (Score:1)

Re:See!!! (Score:5, Funny)
All they need now is another $199.

Re:See!!! (Score:1)
Oh for some mod points...

Congrats (Score:1)

Re:I thought The Gimp was a Tarantino character? (Score:1, Offtopic)
I'm sure many people have downloaded both forms of the GIMP over the years, however....

Re:I thought The Gimp was a Tarantino character? (Score:1, Funny)
That might confuse you even more...

Re:I thought The Gimp was a Tarantino character? (Score:1, Interesting)

awards 4 times a year (Score:4, Interesting)
Re:awards 4 times a year (Score:5, Informative)
John.

Re:awards 4 times a year (Score:3, Insightful)
People will probably send these maintainers the email equivalent of a slap on the back and a thumbs up. Also, it draws attention to the developers. Some of these guys might end up hired as a result of these announcements. Tom Lord especially, since two of his projects won.

Re:awards 4 times a year (Score:3, Insightful)

Re:awards 4 times a year (Score:1)

Re:awards 4 times a year (Score:3, Informative)
See the Open Source Awards Charter [opensource.org] for more details.

Wide open (Score:3, Funny)
"OSI is currently looking for nominations for the Q3 awards to be announced at OSCON." I nominate these (wide) open sourcers [securityfocus.com] from Washington state.

Speech... (Score:5, Funny)

Some worthy projects in my opinion (Score:5, Interesting)
Imgseek classifies bitmap images based on similarity. Both would be awesome if converted into libraries used by other programs.

Actually (Score:1)

Re:Some worthy projects in my opinion (Score:3, Informative)
It's not hard, all it takes is sending an email!
John.

Re:Some worthy projects in my opinion (Score:3, Insightful)
Hear hear! There are so many great programs that are really just front-ends for some service, and yet aren't implemented as such. A classic example is netpbm, a great set of image manipulation programs to crop, rotate, convert formats etc - just the kind of operations that would be perfect in a general-purpose image manipulation library. But alas, all the logic is bound up in the program.

No award for Eric Raymond? (Score:2, Insightful)

I don't know this as fact, but... (Score:3, Insightful)
It would seem to me that the awards go to people/teams that have created great Open Source software, not evangelists. I could be wrong though.

Re:I don't know this as fact, but... (Score:2, Interesting)
Arguably the award for Gnu Arch was made to evangelists. They even go out of their way on their opening page () to slam those who aren't true enough in their beliefs: "It is somewhat well known, these days, that some of the core developers of the Linux kernel are using a revision control system which is not free software. There is a need to create a free" ... Arch is also a great project in its own right.
(Score:2)

You might note, by the way, that the gnu.org Arch site is not the primary Arch site (certainly not the most frequently updated), though that's the one linked by the article. (www/wiki).gnuarch.org are Arch's primary frontends to the world.

...and just to reemphasize... (Score:2)

Re:No award for Eric Raymond? (Score:1, Funny)

Re:No award for Eric Raymond? (Score:1)

OSI Board of Directors [opensource.org]

Re:No award for Eric Raymond? (Score:1)

Pearpc for Q3 (Score:3, Informative)

pearpc.sourceforge.net, because that project accomplished what many people thought to be impossible. I mean, a PPC emulator that runs OSX deserves a prize.

Re:Pearpc for Q3 (Score:2)

If they get the speed up to something a bit more reasonable, it'd definitely be a worthy candidate.

Re:Pearpc for Q3 (Score:1)

GIMP is all you need. (Score:5, Informative)

Re:GIMP is all you need. (Score:2)

The idea of the GIMP is to provide you with the tools to do the job. It is supposed to be extended by plug-ins. Plug-ins that do not necessarily need to be maintained by the few GIMP core developers. "Save for Web" can easily be implemented in a plug-in. Why has such a plug-in not been written yet? Think about it and tell me.

Re:GIMP is all you need. (Score:1)

Photoshop includes those features by default, GIMP dont. Photoshop not leave you wondering if plugin exist, no time waste trying to find plugin.

> The idea of the GIMP is to provide you with the tools to do the job

sure, the GIMP provides basic tools to the job, but that is not the point here "if you're used to the power of Photoshop" to be good enough for someone used to the powerf of Photoshop more needs to be included by default. blame distribut

Re:GIMP is all you need. (Score:3)

Re:GIMP is all you need. (Score:1, Funny)

Re:GIMP is all you need. (Score:2)

Re:GIMP is all you need. (Score:1)

Re:GIMP is all you need. (Score:2)

Re:GIMP is all you need. (Score:1)

GIMP is NOT all you need.
(Score:1)

While it's nice to see GIMP getting an award, GIMP is NOT all you need. It lacks 16-bit-per-color (48-bpp) editing support. "Why is this stupid feature necessary?", you ask? It's needed because of cameras like the Canon EOS-300D/10D (see the other slashdot article [slashdot.org]). Canon's RAW format is wonderful for people who need to squeeze every last ounce o

CoLinux (Score:2, Informative)

It should get nominated.

$500! (Score:3, Funny)

Comma delimited lists (Score:4, Informative)

Re:Comma delimited lists (Score:2)

Re:Comma delimited lists (Score:1)

Re:Comma delimited lists (Score:2)

Much nicer, no?

Re:Comma delimited lists (Score:2)

Bigger! More! (Score:4, Insightful)

I have no idea (and I did read a bit) how they manage their money, other than their 501(c)(3) status and necessary government reporting. Do they have an endowment, or do they rely on annual donations to cover the annual (and quarterly) awards? I would hope they have an endowment. If so, it'd be nice to know how one could make small (less than $100!) donations to the endowment. After all, if lots of little guys would start giving to funds like this*, then they could give out mo'bigger awards, resulting in more media coverage as well as help fund good coders in future projects. So... do they have an endowment? Do they accept small donations to help fund this endowment? Anybody got details?

* as well as the EFF and other "goods"

Re:Bigger! More! (Score:2)

You can always buy some Open Source Swag [cafeshops.com] if you feel like helping out.

John.

Glad to see Tom Lord get the nod (Score:5, Interesting)

Re:Glad to see Tom Lord get the nod (Score:3, Informative)

I thought monotone [venge.net], codeville [codeville.org], and darcs [abridgegame.org] all used the distributed repository model as well as arch & bk. They may be a little further behind in terms of features or surrounding tools, but each one does have some interesting theory/philosophy of version control behind it.
And darcs is written in Haskell, so it wins points for enjoying the soundness and showing once again that pure FP can be and is used in the "real world"... I wouldn't discount any of them yet, but I agree that the subversion fa

My favorite open sourced beverage (Score:2, Funny)

#include "barley"
#include "hops"
#include "water"
#include "yeast"

Submit this story?

Green Party Endorses FOSS (Score:1, Insightful)

An interesting development in the current Canadian election [elections.ca] is that at least one party, The Green Party of Canada [greenparty.ca], seems to be paying attention to geeks this time around. The Green Party of Canada endorses open source software [greenparty.ca] in the Science and Technology section of their platform [greenparty.ca]. Some of their promises include:

Kudos to Martin! (Score:1, Informative)

Say what you will about the open software community. Some people may be hot tempered, some may be exclusionary or quick to criticize, but I've yet to find a group so willing to accept people from all walks of life. Thanks to more than Martin and OSI. Thank you to everyone for making open source a

Re:My work... (Score:2)

Don't do it then. Duh. If it hurts, you're doing it wrong.
https://slashdot.org/story/04/06/01/1454212/second-quarter-open-source-awards-announced