Radu Albastroiu17,498 Points
Hi, the Yelp API doesn't use OAuth for authentication anymore. How should I authenticate my app?
Since March 2018, Yelp doesn't use OAuth for authentication. They say it is easier now, but I'm kind of lost. How should I authenticate now?
jakubhirner14,784 Points
Have the same question
Jhoan Arango13,603 Points
Hello Radu,
I would have to read the documents in Yelp about the new way of authorizing, and to be honest I have never worked with Yelp before or its API.
I will do some research and get back to you with an answer.
I also feel that Pasan Premaratne may have a concrete answer to this question.
1 Answer
Pasan PremaratneTreehouse Teacher
Hey everyone,
I'm working on refreshing this course at the moment because of the API change. With the updates to the Yelp Fusion API, you don't need to authorize yourself anymore. Once you get the API key, all you have to do is sign each request with it by including an Authorization header.
Here's an example. Try this out and let me know if you have any follow up questions
let apiKey = "your_api_key"
var url = URL(string: "")!
var request = URLRequest(url: url)
request.addValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
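In case it helps to see it end to end, here is a self-contained sketch of that same snippet with a placeholder URL (the real Yelp endpoint is omitted above, so substitute your own), plus the URLSession call that would actually send it:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

let apiKey = "your_api_key"
// Placeholder URL for illustration only; substitute the real Yelp Fusion endpoint.
let url = URL(string: "https://example.com/v3/businesses/search?term=coffee")!
var request = URLRequest(url: url)
request.addValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")

// Sending it would look like this (commented out so the sketch runs offline):
// URLSession.shared.dataTask(with: request) { data, response, error in
//     // handle the response
// }.resume()

print(request.value(forHTTPHeaderField: "Authorization")!)
```

The only requirement from the API's side is that the header value has the form "Bearer " followed by your key; everything else about the request is up to you.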
jakubhirner14,784 Points
So if I understand correctly, I can delete the whole authorization code and the UI button. Add my apiKey to the request URL and everything will work fine right?
Pasan PremaratneTreehouse Teacher
Yep! You will still have to register an app and all that with Yelp, but you can get rid of all the rest.
Sean Alves841 Points
So do I delete everything from this video and add the above code to the project? I'm having trouble figuring out where it goes.
Pasan PremaratneTreehouse Teacher
Hey Sean,
You essentially have to do this for every single request you send to the API. You can do it like the example above and add the API key to the Authorization header of every single request; the key part there is this line:
request.addValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
What I would do though is to modify the Endpoint type and the client. Here's a brief example:
import Foundation

protocol Endpoint {
    var base: String { get }
    var path: String { get }
    var queryItems: [URLQueryItem] { get }
}

extension Endpoint {
    var urlComponents: URLComponents {
        var components = URLComponents(string: base)!
        components.path = path
        components.queryItems = queryItems
        return components
    }

    // Note here that instead of a computed property that returns a request,
    // we're defining a function in the protocol extension that takes the
    // apiKey and adds it to the header.
    func request(withApiKey key: String) -> URLRequest {
        let url = urlComponents.url!
        var request = URLRequest(url: url)
        request.addValue("Bearer \(key)", forHTTPHeaderField: "Authorization")
        return request
    }
}

enum Yelp {
    case business(id: String)
}

extension Yelp: Endpoint {
    var base: String {
        return ""
    }

    var path: String {
        switch self {
        case .business(let id):
            return "/v3/businesses/\(id)"
        }
    }

    var queryItems: [URLQueryItem] {
        switch self {
        case .business:
            return []
        }
    }
}
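To show how that Endpoint type would be used, here is a trimmed-down, self-contained sketch: the placeholder host stands in for the real Yelp base URL (which is omitted above), and the business id is made up.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Trimmed-down version of the Endpoint protocol above.
protocol Endpoint {
    var base: String { get }
    var path: String { get }
}

extension Endpoint {
    // Builds a URLRequest and signs it with the API key in one step.
    func request(withApiKey key: String) -> URLRequest {
        var components = URLComponents(string: base)!
        components.path = path
        var request = URLRequest(url: components.url!)
        request.addValue("Bearer \(key)", forHTTPHeaderField: "Authorization")
        return request
    }
}

enum Yelp: Endpoint {
    case business(id: String)

    var base: String { return "https://example.com" } // placeholder host
    var path: String {
        switch self {
        case .business(let id): return "/v3/businesses/\(id)"
        }
    }
}

// Building a signed request for any endpoint is now a one-liner.
let request = Yelp.business(id: "some-business-id").request(withApiKey: "your_api_key")
print(request.url!.absoluteString)
print(request.value(forHTTPHeaderField: "Authorization")!)
```

The advantage of this shape is that the signing logic lives in exactly one place; adding a new endpoint is just a new enum case.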
Sean Alves841 Points
Thanks! So would I be able to copy the below example? If so, would that go into Permissions.swift or the YelpAccount file?
Sean Alves841 Points
Sorry to bother, but were there any updates on this section? Let me know when you get a chance.
Pasan PremaratneTreehouse Teacher
Hey Sean,
It doesn't really matter where you put it; all the code ultimately gets compiled together anyway. For organization purposes, I would put the endpoint bits in Endpoint.swift and the rest inside either the client file or the Yelp API file.
Deysha Rivera5,088 Points
Pasan Premaratne Any idea when the update to this course will be released?
Pasan PremaratneTreehouse Teacher
I don't work at Treehouse anymore and can't provide any info on updates I'm afraid :/
Jordan LeahyCourses Plus Student 25,054 Points
I also agree
https://teamtreehouse.com/community/hi-yelp-api-doesnt-use-oauth-for-authentication-anymore-how-should-i-authenticate-my-app
FSF General Counsel Eben Moglen Talks On Upside 86
ContinuousPark writes, "From an Upside article based on conversations with Eben Moglen, FSF general counsel and author of Anarchism Triumphant: 'In such a context, Moglen says, distribution of a software tool that lets European movie watchers watch American films on DVD before they hit the local theaters or lets Web surfers break through their employer's site censorship policies are about as politically expressive as dumping a boatload of tea into Boston Harbor. Granted, it may not be legal under the current framework. But legal frameworks change. How long before today's encryption crackers become the next generation's heroes? In the face of such potential changes, Moglen says the only mistake for an attorney in his position is standing idly by while new history is in the making.' "
Re:Alright asshole... (Score:1)
What about filtering software in libraries, where it may be the person's only access? Not everyone has a computer or internet access you know...
The problem is that the filtering software doesn't filter on the reasons it says it does. It is being used for political reasons, and that should NEVER be forced on anyone
The difference between anarchists and libertarians (Score:1)
Libertarians don't want laws because it'll take away everything that makes them better than others, like their guns and excess money. They just want everybody unable to survive in their world-without-rules to die. The concept is that everybody should be responsible for himself. The fact that people become Libertarian once they've secured their future shows their cowardice.
Anarchists are actually totally different. The true anarchist respects all forms of life and is disciplined enough to be a responsible person without rules and laws. Applying moral values instead of rules makes the world a much more efficient and pleasant place.
This sort of... (Score:1)
No, criminal not "moral" (Score:1)
Not only is it not wrong to break unjust laws; it is every man's moral responsibility.
No, you are an egotistical fool if you think this. Who are you to decide which laws are unjust and which are just? I mean, we don't let 16-year-old 31337 h4X0rs make the laws, do we, and for a good reason - they are prejudiced and hold different views from the rest of society. Just because you think a law is unjust because it makes it harder for you to pirate music and download pr0n doesn't mean the average American agrees with you.
So by breaking a so-called "unjust" law you are just assuming that your opinion is better than the people which Americans elect to make these laws, which I would call arrogance of the worst kind. Your opinion is no more valid than any others, although in light of your inability to deal with other people's viewpoints I would perhaps say it is *less* valid than most.
Re:Just like MP3s... (Score:1)
It is pitiful for a human to exist as a parasite, or by preying on the weak. And it's a pitiful society that depends on parasitic institutions for essential services (and it's an endangered society that depends on predatory institutions).
These things have evolved, more than been chosen. Can we understand what is happening well enough to guide social evolution to something better? Can improvements be proposed without scaring those now enjoying privileges, engaging them to focus on their humanity so they will want to play a constructive role in changing the institutions that afford them their privileges? Tough, eh? But they don't live by bread alone, either.
What we all need (IMHO) is to have a real choice as to what we want to be a part of. But we also need a vision and hope, otherwise the choices will be petty and defensive at best.
To enhance the possibilities, it's good to remember that democracy is about who gets to choose, and how, not about what is chosen.
Re:I hope he's not really an "anarchist" (Score:1)
No. Anarchism refers to a particular political theory.
One of the most thorough net resources on anarchism is An Anarchist FAQ Webpage [geocities.com].
Re:What if I don't want to be rated by morons? (Score:1)
Well, I used to browse at -1, and sneer at people who complained about the crap and browsed at 0. But guess what? The disruption has just become TOO MUCH. Once upon a time, it was just short "first post" messages, and short dumb "me too" posts that were easy to ignore.
Nowadays, it's 20 page long contentless postings by someone who has a fixation with the paste function of their browser, and a clear intent of _disrupting_ communication. It just has become too much. Now I browse at 0, and sneer at people who browse at 1.
Agreed (Score:1)
The trollers are indeed serving a purpose in trying to point this out. Unfortunately, I don't think it's having much of an effect on most moderators, who seem to have no conscience.
New XFMail home page [slappy.org]
/bin/tcsh: Try it; you'll like it.
The quote listed above.. (Score:1)
I couldn't find it in the article itself or in the anarchy essay!?!?
where is it man?
WHOUPS SORRY!!! WRONG DISCUSSION!! (Score:1)
Wrong again (Score:1)
As far as your claim that most common people merely wanted to remain British subjects, I'll just suggest that if that was the case, it would have been difficult to raise armies. This is far from the case- there were tons of volunteers who suffered through ridiculous conditions and most of whom were never paid for their efforts. While there were many supporters of the crown, if their numbers were as large as you suggest, then I'd think they would have helped the Brits out more- the record would indicate that they were largely ineffective and never tried to organize in any meaningful way.
As far as the corporations... well, wealthy farmers and traders controlled the government long before there was any notion of the modern "corporation." Remember, they held the bonds that financed the government and if they withdrew them they could bring down the whole show. So, they had a lot of say pretty much from day one.
Just a handy history lesson...
~luge(4? informative? please...)
Re:Signal 11, I hope you were trying to be funny (Score:1)
You seem to misinterpret his point. This whole point was that these people were miscreants of their time and extreme, but that we treat them as heros now. He AGREES with you. No need to argue!
Re:Very critical point... (Score:1)
Up!Up!Up! (Score:1)
However, the revolution wasn't just the big names that we're all taught to revere. First, as you yourself point out there must have been enough popular support to raise armies. Second, it seems that most revolutions do have a nucleus which is needed to get things going. Great post.
Re:This ain't no Boston Tea Party, guys (Score:1)
The general public is not yet well aware of the direction in which forces like the DMCA, UCITA, et al. are going.
True.
Americans have a high tolerance for civil disobedience and mischief-makers if it appears they are working for some general good.
Not necessarily true. I think that there is not a hell of a lot of evidence to support this. I think this is a pretty conservative country. This is because of the point that you make about public not being aware of the issues. A widespread ignorance about governance, history and sociology means that people have no mental tools to evaluate these problems. A society without educated citizens is a society that is manipulable by those that run the media. And, like you say it is more important that we get the word out to the common man. Without him, we lose the fight. But we have to get the word out to him about all the other issues that lead us to the conclusion that censorship is bad. That's a relatively large ideological background that has to be communicated. Meanwhile a lot of people are just struggling to live.
Not based on the meta moderation I've seen (Score:1)
Re:Just like MP3s... (Score:1)
--
Re:I hope he's not really an "anarchist" (Score:1)
The reason to "pooh-pooh that idea" is that without a government of some sort there's no one able to prevent a erm.. government emerging.
Getting rid of government, as opposed to trying to limit and control it, just means that whichever group can wield sufficient power locally is the new (local) government. That's what a government is, the group that are strong enough to enforce their will. The only way of creating counterbalances to ensure that nobody can get strong enough to enforce their will is to erm.. have a government enforcing those balances.
What you'd get is a plethora of would-be governments acting on various levels, and more bloodshed as they fight it out amongst themselves or further enforce their power within their own spheres of influence. Just like those current and historic governments you were talking about.
Unless you can explain how to prevent governments arising (well, I guess killing everyone on the planet would do it...) the idea deserves all the "pooh-poohing" it gets.
Re:Agreed (Score:1)
but you are right in that moderation is not really helping. I.e., a person will miss a lot just reading at +4 or +5.
nmarshall
#include "standard_disclaimer.h"
R.U. SIRIUS: THE ONLY POSSIBLE RESPONSE
Re:It's not 2 years late (Score:1)
Instead, the Hurd hasn't, and Linux has been pushed and prodded to get past growing pains. Tanenbaum's point was that these pains could be bypassed using a modern design.
Now, whether he was right or not doesn't matter. That was his point. The success of Linux doesn't make him wrong; it might even support his point.
Re:Just like MP3s... (Score:1)
Re:death? (Score:1)
The Geeks _really do_ run the internet, and it would be wise for some of these lawyer-wielding corporations to understand what that means.
Given the right motivation, an organized group of people could attack and cripple these companies in ways the script kiddies never thought of.
Personally, I think the Boston Tea Party is a very good example of the kind of thing we could see if politically motivated crackers decided to launch a directed and coordinated attack.
Re:No, criminal not "moral" (Score:1)
I don't. I am subjectively criticising them. Values don't exist objectively as properties of actions or situations.
Re:Just like MP3s... (Score:1)
If they were not threatened, we would not even be having this discussion.
It was for Mickey Mouse (Score:1)
IMHO, copyright and patent terms should be shortened not lengthened. Modern society changes much faster than historically, works become dated and no longer commercially viable much quicker, and copying and distribution of works takes much less time. The only exception might be patented drugs, which deserve compensation for the lengthy FDA testing period.
Furthermore, I would propose that copyright lapse restorably when new sales are halted. Out-of-print books/disks should lose copyright, but perhaps the copyright holder should be able to reinstate their copyright by restarting publication. Maybe this would lead to a small industry of micropublishers who would maintain copyrights by publishing on demand. But at least the works would be available.
Private works never sold wouldn't have copyright lapse.
Laws are to protect creators (Score:1)
You've touched on the problem of extending this to intellectual property: free duplication. What does the copyright owner lose when a copy is made? Not the work itself at all, but a potential sale.
A sales contract is not property, it is a mutual agreement. So the seller doesn't own it any more than the buyer. If the buyer rejects the sellers terms there is no sale, so no potential to lose. Prohibiting copying then is merely spite, but may be necessary to avoid losing others who _would_ pay.
Re:Not based on the meta moderation I've seen (Score:1)
I guess the problem is that I feel that often trolling is justified and makes a point. Must we be limited to discussion that is not heated, that is not insulting, that doesn't raise anyone's anger?
When I moderate I only mark something as a troll if it's clearly a troll AND it's clearly off-topic. Maybe I should mark it as off-topic instead of troll or flamebait. I guess without the troll and flamebait Moderations there are those who would fill up the threads with nothing but nasty insults that are only very marginally on-topic.
This whole thread is off-topic, in a way. I wish there was a place here for an ongoing discussion of Moderation and Meta-Moderation. These things should be discussed. This thread is on-topic in that it's in the discussion of an article on which people seem to have a hard time being "objective" with their Moderation.
Objectivity in Moderation is unrealizable. The "Insightful" Moderation implies subjectivity, for example. I do feel that there are many who use Moderation where a reply would be more appropriate. If you don't agree with something, reply, don't Moderate down.
-Jordan Henderson
Ahh.. (Score:1)
Yes, we must exterminate free software, the root of all evil in the world.
</endgatesmindset>
Re:Science is only current paradigm (Score:1)
I am a Christian with a scientific background (read "I took some science in school"). The big difference between religious and scientific world-views is their base. Religious world-views are based in faith, whereas the scientific world-view is based in experimental verification. This is a huge distinction. Science makes predictions that can be tested. Let me state that again.
Science makes predictions that can be tested.
Religion is a wonderful thing, and faith certainly still has a place in the world, but science has supplanted religion and philosophy as a description of the world. (The Bible even admits that religion alone can't accurately describe the world. Read the book of Job for a much more eloquent discussion than I could possibly manage.)
Because of this distinction, many of the more vocal members of Slashdot make the argument that religion must therefore be worthless. I feel that this is an uncharacteristically narrow-minded approach. Religion and science can very happily coexist. It is truly unfortunate that the most vocal members of either viewpoint also tend to be the least tolerant.
Re:You are wrong (Score:1)
>you up. Not only is it not wrong to break unjust
>laws; it is every man's moral responsibility.
agreed. "If you're not part of the solution, you're the problem." That's like saying, well, I know my world sucks but maybe it'll get better some day. Ho hum. Stand up for yourself. Damn, people are boneless and stupid.
Re:Not based on the meta moderation I've seen (Score:1)
There is... It's in one of the 'hidden' discussion areas: Moderation [slashdot.org]
I often get the opportunity to moderate, and I never moderate down. I've also moderated up an opinion that I don't necessarily agree with - if it made me stop and think, then I believe it deserves attention.
Unlike most of /. I am not primarily a *NIX user - I work in a very Microsoft-centric environment, and all the programming that I do is in a Win32 capacity. I come to slashdot for interesting discussion on a wide variety of topics, and I enjoy the disparity of viewpoints. Take that away, and slashdot will be reduced to the equivalent of an AOL chatroom echoing with "ME TOO!"s.
jerdenn
Re:death? (Score:1)
Perl programming is BY FAR best in Unix... learning it in windows must suck. I'm going to add a snooze option next, i personally need it! It will play another song (or the same song) 10 minutes after the first alarm, or whatever. See ya!
Mike Roberto (roberto@soul.apk.net [mailto]) - AOL IM: MicroBerto
Re:death? (Score:1)
For those who don't know what's going on, wakeup is my easy to use mp3 alarm program so you don't have to use at or cron anymore
:)
It's at soul.apk.net/wakeup [apk.net]
Back to the subject... in the wake of civil disobedience, many power mongers cannot control themselves and resort to irrational measures. It happened after the Boston Tea Party. It can happen anywhere. Irrationality is everywhere.
Mike Roberto (roberto@soul.apk.net [mailto]) - AOL IM: MicroBerto
death? (Score:1)
Mike Roberto (roberto@soul.apk.net [mailto]) - AOL IM: MicroBerto
Re:(-1, Idiocy) (Score:1)
Actually, you are the one that needs to do some reading. Libertarian is a word the European anarchists adopted for themselves after the agents provocateurs and the bomb-throwing idiots managed to co-opt the name in popular usage. In the US that usage never became popular, but the word was similarly applied by emerging right-anarchists like Murray Rothbard to distinguish themselves from the typically European left-anarchists, who were called libertarians in Europe but anarchists here.
Yes, it's confusing. But no more so than the fact that liberal is commonly used in this country to refer to the left, whereas in most other countries it refers to the right.
Anarchy - from the Greek "an" meaning not or no, and "archos" meaning ruler, thus a synonym for freedom (not being ruled over.)
What you define as Libertarian is actually Minarchism - a doctrine generally held by those who are anarchists at heart but just don't believe that people are ready for it yet, and so advocate a "night-watchman" minimal state which can prevent worse states from taking over until people are ready for complete anarchy.
Here, read and understand this [gmu.edu] - then come back and tell us about anarchism.
Re:This means (Score:1)
Violating, just this once, my normal practice of not feeding the "trolls"...
Moderation is not censorship. It's not even close. You can say whatever you want to say. No one is forced to listen. Not listening is not the same thing as censoring. Anyone that wants to read about Natalie Portman and Hot Grits and Taco's mom and your silly cries of censorship has only to set their threshold to -1 and they can see it all.
I happen to set my threshold to -1. Because sometimes some interesting stuff gets moderated down, and because I am interested in seeing how well the moderation is working. The number of posts that get moderated down wrongly is pretty small, though, and those mistakes are usually corrected pretty quickly. The biggest problem I have seen is that way too much crap that should be moderated down isn't - presumably because the system is so stingy with moderation points, and a lot of folk won't moderate because it means they can't post. Those factors, combined with the floods of off-topic posts and flamebait of so-called "trolls" (a horrible misuse of the word, by the way; real trolls are great, and usually get moderated to +4 at least if they are well written) actually make it so that you need to read at +2 if you want to avoid the crap. Doing that, of course, guarantees that you will miss some good postings. Very sad. Oh well, enough rambling, back to reading.
Then go away (Score:1)
This is called "voting with your feet." If enough of you do it maybe the rest of us could have decent discussions without 20 page posts about bestiality and similar nonsense wasting the bandwidth once again.
Re:Rating, not censorship (Score:1)
This is probably due to you reading too much. I know that's why I almost never get to moderate (only once, and that was after I was away from my connection for several days.) The way I understand it, they take the bell curve of how often different people request pages, and chop off the ends - i.e. if you are a real "regular" that checks in several times a day, or someone that only drops by once a year, then you are automatically excluded. The idea is to make sure the moderation is done by "average" readers.
Anyway, I think it's a bad idea, but if you really want to moderate, just log out, clear your cookies, and don't log in for several days. When you do log back in *poof* you have moderator access. Kinda funky.
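The "chop off both ends of the bell curve" idea described above could be sketched roughly like this; the 10% cutoff and the user names are invented for illustration, since the actual selection logic isn't public here:

```swift
// Rough sketch of the moderator-selection idea described above:
// rank users by how often they request pages, then drop the most
// and least frequent visitors from the eligible pool, keeping the
// "average" readers in the middle.
func moderatorPool(visitsPerUser: [String: Int]) -> [String] {
    let ranked = visitsPerUser.sorted { $0.value < $1.value }
    let cut = ranked.count / 10          // trim ~10% from each tail (made-up fraction)
    guard ranked.count > 2 * cut else { return [] }
    return ranked[cut..<(ranked.count - cut)].map { $0.key }
}

let visits = ["lurker": 1, "casual": 5, "a": 6, "b": 7, "c": 8,
              "d": 9, "e": 10, "regular": 12, "fan": 20, "addict": 90]
let pool = moderatorPool(visitsPerUser: visits)
// "lurker" (once a year) and "addict" (several times a day) are excluded.
print(pool.count)
```

The point of trimming both tails, per the comment above, is that neither the obsessive refresher nor the once-a-year visitor ends up with moderator access.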
Re:Just like MP3s... (Score:1)
Threatened industry? Please tell me that's sarcasm. Please. Time Warner made a 1.91 Billion [yahoo.com] dollar profit last year, and this is a threatened industry? Granted, there are other record companies (and other movie houses) that made less, but I really don't see any reason to consider them threatened just yet.
or have I just been trolled?
Re:No, criminal not "moral" (Score:1)
I'm a moral relativist of a sort, but this is relativism of the stupidest sort. Imagine this conversation was being conducted in a repressive state like Indonesia.
Your thoughts betray you, my young Jedi. You can't be much of a moral relativist if you really believe you can objectively criticize the Indonesian government's ways of knowing and doing.
Now, strike me down with all of your hatred, and your journey towards the dark side will be complete.
Re:I hope he's not really an "anarchist" (Score:1)
As for Libertarianism being an "adolescent mental malady" I note that very few of my Libertarian friends are either adolescent or of stunted mental ability. Most of us started out as Democrats or Republicans, but then grew out of those blinkered viewpoints into consistent Libertarianism.
If you like having the force and violence of the FBI, the IRS, the DEA and the ATF (etc. etc. ad nauseum) perpetrated against you, continue to vote for the mentally blinkered Demoblicans that litter the political landscape. If, on the other hand, you'd prefer that those bureacracies be done away with, vote Libertarian.
Re:Just like MP3s... (Score:1)
Re:Just like MP3s... (Score:1)
Well there are always thieves in every society. But the point about it being easier to steal is what needs to be addressed. If it's easier to buy it and not be hassled from copyright people (ideally the format won't allow free distribution), then most people will go the legal route. Just like burning CDs from your friends--sure some people do it and it costs software companies some revenue, but in the long run it's a tiny fraction because it's so much easier to buy a cd with documentation, etc. than to spend an hour making a bootleg copy. Most people will pay simply for the convenience.
Re:Intellectual property isn't like Physical Prope (Score:1)
That's why copyrights and patents are of limited duration.
They're supposed to be limited, but the Sonny Bono act (which IMHO is unconstitutional) sets copyright at life plus 70 (sounds like a prison term), which goes entirely against the constitutional reason for copyright, to promote the progress of science and the useful arts.
Re:Just like MP3s... (Score:1)
Re:I hope he's not really an "anarchist" (Score:1)
But mafias surely don't have healthcare or other social interests, except for their big bosses. Then again, neither does the American gov't, or am I wrong?
True anarchy can't exist in a populated area. Anarchy in the first meaning of the word, without power, needs very small human colonies: villages of fewer than 200 people, scattered far, far away from the nearest other village. It also needs a strong anarchic culture, with people wanting neither to command nor to be commanded. It has been historically proven that larger human populations need rules and laws to avoid barbarism. It's also proven that if nobody holds power, someone will want it and take it.
Computer Knowledge Evolution (Score:1)
The real question is "Will we still have to use closed standards?"
In this computer age we are seeing technologies shift from hardware-based to software-based. Where twenty years ago we would have used tape or LP players, we now decode bytes of information to listen to MP3 files. Where it was once very difficult for an individual to create such devices, we are now all able to duplicate them in a mouse click, or recreate them by reading their disassembled code.
Duplicating and reverse-engineering a program are both illegal. But watching movies and listening to music, mankind's principal entertainment activities, are now possible in software on any hardware. We are no longer tied to buying specific products designed for one task; we have computers that run any software: written instructions capable of anything.
The crackers' actions are to make sure we won't have to pay for something we don't have to. The CSS protection is only there to make sure DVDs are no longer video files playable in a standard way (MPEG2), but video files you have to pay for a way to play. We still have to buy some computer hardware: DVD-ROM drives for video and/or CD-ROM drives for audio. But where we were able to listen easily to our audio CDs, we now have to buy software to watch DVDs.
In the same way that (common) people are tied to Window$ these days, we are tied to paying to play videos. Maybe the way (some) people are still willing to pay for an OS explains the conduct of some corporations and industries.
In twenty years, after 10 to 15 more new Windows releases, ALL (remaining) people will be tired of updating or buying it with each new computer and will understand why open-source OSes and software ARE the only way. Linux and all the BSDs will gain the momentum and the market share they deserve, that is, 100%.
We will all remember the way all those DVD crackers fought to bring open-source DVD software and open standards to mankind, and we will all value them for what we will enjoy: free and open knowledge.
Re:What if I don't want to be rated by morons? (Score:1)
I don't like seeing the -1 posts. I go through them occasionally to see what they say but I don't want them to appear by default because I find them brainless and a waste of my browsing time. If I were to find that posts were modded down for expressing a contraversial point of view, or anything else I don't agree with, I would quit supporting the moderators.
To sum up-I support moderation because they don't abuse it. They save us time without stifling anybody's point of view.
Re:Rating, not censorship (Score:1)
Not that I know anything, or that that changes my position at all.
Sure..... (Score:1)
I hope he's not really an "anarchist" (Score:2)
What happens with "anarchy" is not that the laws really disappear, but that they become arbitrary, the playthings of the powerful, totally beyond the reach of majorities to consent to, modify or rescind. Civil disobedience and anarchism should always be distinguished.
Anarchism is the end-state of that other adolescent mental malady, Libertarianism--although neither political camp will admit or own up to the family relationship.
Thus I hope the "Anarchism" referred to as triumphant means the rescinding of the arbitrary and unjust laws that Moglen opposes, such as the Digital Millennium Copyright Act.
My thoughts (Score:2)
Okay, he might have a point but I think his position is biased to the point of no return. I mean, sure, some of the people that today we see vilified will become the "heroes" of tomorrow, but that's only to be expected - people who get caught up in changes through no real action or intent of their own often seem like "heroes" to those looking back at the situation.
But he has an opinion on the matter that makes what he says immediately suspect. As an "anarchist", he is obviously going to be anti-regulation, and so his stance that today's DeCSS authors are tomorrow's heroes is little short of ridiculous. They're not heroes; they're just in way over their heads after having done something before thinking of the consequences. And look at Matthew Skala. Yeah, sure, he broke Cyberpatrol's encrypted site list, but he's certainly no "hero" - he caved in as soon as Mattel threatened him with a lawsuit.
Anyway, I think that the whole premise is flawed - I don't think anyone would see these people as "heroes" in retrospect. Look at RMS as a prime example of a so-called "hero" - while /.ers seem to worship him, they seem to have a miraculous ability to ignore any flaws in what he says and his arrogant, elitist attitude. This is the man who throws a tantrum over the name Linux, insisting it be called "GNU/Linux" or, even worse, "Lignux". In ten years he won't be considered a "hero", just another piece of a larger whole.
Re:It was for Mickey Mouse (Score:2)
Reading the Constitution, it seems pretty vague about copyright length. It simply gives Congress the power to grant copyright for a limited amount of time, but it doesn't define "limited." That seems to be the problem. The corps have pushed their agenda and Congress caved in, screwing the rest of us.
What threat? (Score:2)
We're having this discussion because the industry THINKS they are threatened. I've seen no evidence that leads me to believe there is actually a real risk right now. But, it will probably depend on how they behave and on the outcome of some court cases. Either way, there is no immediate threat that I know of.
As sorehands said, they thought for sure that their industry was doomed when VCRs were ruled legal and people were allowed to tape things off the tv and had the ability to make copies of videos. Instead, they ended up making even more money than before. Now they are claiming that the sky is falling again.
Very critical point... (Score:2)
A society without educated citizens is a society that is manipulable by those that run the media.
This is somewhat OT, but I was just talking about this with someone the other day. I was wondering why the government doesn't do a lot more to improve public education in this country. Instead they leave it up to those who can afford the best schools and those few who make it on their own to lead the country. The rest of the people remain inadequately educated, many severely so. As long as education remains at its current shockingly poor level, the country will be full of people who lack the critical thinking skills and background knowledge to effectively evaluate the issues we're faced with every day. Instead, they will rely on the media to tell them what to think about things. This does not bode well for our future as a country.
It's not 2 years late (Score:2)
He isn't an anarchist- read before you rant (Score:2)
Anyway, go read it... it's a fascinating look into legal history and how in the past this type of "anarchy" has led to huge changes in our legal system. He feels that this will occur again, and has some pretty convincing arguments as to why.
Remember- someone is going to be in charge- no one is claiming that no one should be. The only question is who, and that is a stunningly important question in a way that it hasn't been in many, many years.
~luge
Re:death? (Score:2)
*L* Do you even remember me? Back in the good ol days of TPD and my very own SPLHC BBS? *LOL*.
A couple of very simple reasons actually:
1) I can
2) I too am learning perl, as the only thing i have available to me at work is Win32 I'm learning perl there ( no, telneting into my home PC is not an option until i get my DSL line
The only problems are...while wakeup HIDES at/cron it still uses them....I suppose I could have wake up call the task scheduler, but that would defeat the purpose *L* I have some ideas but *shrugs*
Back to the subject... in the wake of civil disobedience, many power mongers cannot control themselves and resort to irrational measures. It happened after the Boston Tea Party. It can happen anywhere. Irrationality is everywhere.
This is true, but I hope we would be able to stop it before it got THAT far...
Shit..there goes our Karma...off topic posts and all....
Sgt Pepper
Re:death? (Score:2)
PS: I'm working on porting Wakeup to Win32, if I actually get it done want me to send ya the code?
(hey it gives me something to do at work! *L*)
Sgt Pepper
Another Cleveland Linux [lug.net] Geek
Re:death? (Score:2)
Re:Just like MP3s... (Score:2)
Oh how we wish that were so. But the sad, sad fact is that stealing is almost always easier, and certainly cheaper, than buying. Therefore there will always be people trying to get something for nothing. I see a shift in the business world though, a shift from product-based selling to service-based. See, how much easier would it be (and more profitable?) for Time Warner, et al., to set up... oh... I don't know... a $15 a month service in which you get to download unlimited (or a set amount of Megs/bytes/minutes) music/movies/etc.? This, I believe, is the wave of the future. Some places might even do it for free, supported by advertising (kinda like commercial radio stations). Is this a bad thing? I'm not sure yet, we'll have to see how it plays out, but I
Re:This ain't no Boston Tea Party, guys (Score:2)
Actually, the net price of tea fell when the East India Company was given the monopoly and the tax was added -- there was no immediately felt pressure.
Steven E. Ehrbar
Re:Breaking the law (Score:2)
Let's not forget Ireland, 1916. Several hundred years of "liberals" bemoaning the "Irish problem" and working within the law - among them the Home Rule League, Daniel "The Liberator" O'Connell, the Land League, all the Irish Whig MP's in the British Parliament. A lot of trying to convince vested interests to change by the rules, a lot of failure. Then the Phoenix Club, the Irish Republican Brotherhood, the Citizen's Army and revolution, several years of guerrilla action initiated by several hundred law-breakers - the result? Freedom from the imperialist yoke. The reason that the North is still a problem is that the sectarian, nationalist elements of the IRA gained the upper hand over the socialists. There was then no chance for forging an alliance with the equally oppressed Protestant working class. In fact during the strikes in the shipyards in the 20's there were alliances between workers from both backgrounds which were quickly suppressed by the Unionist bosses using religion to drive a wedge.
So here people broke the law by not paying the tax, organized their neighbourhoods so that bailiffs were physically repulsed when they appeared, co-ordinated with each other in mass non-payment to swamp the courts. These things are all civil disobedience, _breaking_the_law_. People didn't use it as an excuse not to pay taxes; the point of not paying it was not paying it!
I think that both of your examples are strong evidence against what you argue for!
Re:Ahead of the curve (Score:2)
I'm interested in the Franklin day planners. What the hell were those?
B.t.w. as far as being a social miscreant goes Thomas Paine is right up there, a persona non grata in Britain _and_ revolutionary France
Re:Ahead of the curve (Score:2)
BTW, corporations haven't taken over our government (I'm assuming you are talking about the US). That's just paranoia.
But notice that many of the people who were for the American Revolution had the most to lose. That is, they were landowners and the wealthy.
Newspaper lockbox as counterexample (Score:2)
Re:Just like MP3s... (Score:2)
Who wants to preserve this threatened industry? I thought we had a free market, you don't set up industrial preserves in a free market. Serious question here, and I'd like to see reasons.
--
Re:Ahead of the curve (Score:2)
Yea, you ever hear of this site called
--
Re:No, criminal not "moral" (Score:2)
When I talked to my Congressman the other day, he didn't know what the DMCA was. He barely understood the term reverse-engineer. He didn't know you could patent software, or why that might *possibly* cause a problem. He does have a killer website though [house.gov]. My point: although our leaders are trying to do their jobs the best they can (...) often there is just too much to keep track of. The Internet brings this to light even quicker because of its incredibly liquid and fast-changing nature. It has become obvious, IMHO, that many of our IP laws were built for a different time, with a different set of rules. It's as if the laws of physics for media have changed, and now the laws of the land need to adapt. They have become so stale and brittle from years of selfish interference that the most logical way to change them is to break them. Repeatedly and casually.
--
Re:No, criminal not "moral" (Score:2)
I'm a moral relativist of a sort, but this is relativism of the stupidest sort. Imagine this conversation was being conducted in a repressive state like Indonesia. Would you still say criticising the rulers was arrogant? And identify the rulers views with people's views? Give me a break. You have got to be trolling.
And some opinions are more valid than others!! Scientifically-grounded ones, for one thing.
Breaking the law (Score:2)
Some examples (UK bias):
Re:This ain't no Boston Tea Party, guys (Score:2)
Mike Roberto (roberto@soul.apk.net [mailto]) - AOL IM: MicroBerto
This ain't no Boston Tea Party, guys (Score:2)
Re:My thoughts (Score:2)
I am looking forward to the day when intellectual property laws are struck down and we are free to copy bits around as we see fit. At that time I will look back on the people who made this possible and think of them as heroes. It is the people who are willing to risk jail time to break a law that is clearly unfair who will make this possible.
Just like MP3s... (Score:2)
With the studios fighting the new internet community, the natural reaction is to fight back--which is clearly demonstrated with the rise of Napster, Gnutella, and even DeCSS. Working together, I think that paying to buy movies online (maybe even before they're out in theatres) or buy songs will be the first step in preserving this threatened industry.
Threatened my ass! (Score:2)
If VCRs were not outlawed and blank tapes were not taxed, then they would go out of existence.
Re:My thoughts (Score:2)
So Martin Luther King was just childish and immature?
Change in the law itself needs to be enacted within the law, otherwise it is little more than throwing tantrums, something which
But if nobody realizes the law is corrupt, how will it get changed? The only way is for people to see ordinary, upstanding citizens like themselves who haven't done anything wrong being arrested. (or killed, or beaten by cops, or whatever)
Look at RMS as a prime example of a so-called "hero" - while
Funny, because I haven't seen anything positive about RMS on
Ahead of the curve (Score:3)...
Link to Canadian IP articles (Score:3)
legal documents [macleoddixon.com]
I found these quite useful.
The GNUML (Score:3)
I was hoping to create something I like to think of as the GNU Media License. The basic idea is to both protect the right of the original creator to profit from their work, and to encourage free and widespread distribution of creative works.
This is what I've got so far translated from a scribbled barroom napkin.
--
Protecting the right to profit. This is accomplished by explicitly prohibiting anyone from selling the artifact. Also, it must be set up (somewhat like an EULA) so that by using the artifact you are agreeing to the agreement under which it is released.

There must also be some provision to both enforce the spirit of the agreement (widespread free (beer/gratis) distribution) and lessen the value of the artifact in absolute (supply/demand) terms. This is accomplished by requiring each "mirror" of the artifact to include a link to another "working mirror" (a mirror being another Internet address where the file can be downloaded with less than 2 "clicks" (or redirects); a working mirror is any Internet address with 90%(?) uptime, aggregated per week(?)). Including a working mirror helps to increase total distribution as well as creating an open environment for exchange. This also makes it much more difficult to illegally control access to artifacts released under this license, as well as helping to ferret out abuses.

Also, each mirror would be required to produce "source" upon any request within 48 hours of receipt of the request. This "source" is of course the original source (i.e. Internet address, most likely a URL) of the artifact. A bit of a problem here, as abuses could develop if someone uses this license to benefit from the distribution and later removes the source. Perhaps some legalese is needed here; any lawmakers in the audience today?
I just put this together last night, but there will most likely be a few tracks released under it tomorrow (got a mini-disc and a mic today
I don't have much cash for lawyers, and I will be submitting for some help from the FSF, but I figured I'd post it for the phreaks here first, and I know they're pretty busy right now
Oh, and the original creator is allowed to profit by releasing the artifact under a different license (i.e. regular copyright) as the original author of the work: CDs, DVDs, Mini-Discs, Memory Sticks, whatever. Not to mention any other way they can profit from free global distribution of their artifacts.
Comments, flames, suggestions? (oh, and this is the "dealing with it" part..)
--
Re:Intellectual property isn't like Physical Prope (Score:3)
The basis for all "economic" laws is the prospect that the items the laws protect are scarce. With the advent (or possibly the evolution, depending on where one stands) of goods into a digital form, scarcity of those goods disappears because of the inherent properties of the digital world. That is, the ability to create enough perfect copies of the good to actually sate all possible desire for the good. The effect this has upon a market is simple: the market begins to devalue that good in terms of market value, not personal or emotional value (see all of the toys and other collectables... there are literally millions of Star Wars action figures out there, but I can go to my local gaming store and find that any given one is well above original market value), and eventually expects it to be there as a part of the larger overall market for free or next to free (see paper, pencils, ink pens, etc). Those people, therefore, who are fighting the incoming digital world are fighting to keep their goods scarce and, by that, their ability to demand a certain price for their goods.

However, it becomes easy to predict that this is a losing battle. History shows us a previous example of such a fight: look only at what happened to the value of home-made or cottage goods when mass industrial manufacturing became the dominant method of creating goods. While it may have taken close to a century for the home clothing industry to disappear, it was a very hard-fought battle, with both sides waging campaigns of what we now call FUD. One side would claim that their goods were superior, the other would shout that you could rely upon theirs much more. There was no right or wrong about it; theirs was simply a time of change. And both worlds survived. People still make their own clothes, tailors and seamstresses are still employed, and while such goods may have fallen by the marketable-good wayside, they are still valued by the people who own them.

We are truly at a crossroads here, one that will define how the world will treat the goods, and the people that produce the goods, of a digital market. With the ability to create as many goods as it will take to sate desire, we can for the first time in human history eliminate a form of want. Let us not waste it flippantly by claiming to be better than those who fight the future. In the end, both sides are needed for the future to occur.
Rating, not censorship (Score:3)
Initially, I also considered Slashdot's moderation system to be censorship, but I don't any more. There is a clear rule that no post is ever deleted, so all information is available to everyone. Censorship by definition involves removal, so the term does not apply here. The cases in the article do involve removal, and that is why people are fighting back. If someone put "Score: -1 Immoral" above a link to cphack or DeCSS, I don't think anyone would care.
The Slashdot moderation system is a rating system, governed by your fellow readers. What this produces is a rating which reflects the opinion of a (presumably) representative majority of the reader base. This can be useful for extracting related information from the responses (such as related links), but is generally not useful for opinion-related discussions, as it tends to filter out uncomfortable criticism.
While a rating system can be useful, I don't think thresholds are. I always keep my threshold at -1, and even though most -1 posts are just noise, some are very intelligent and funny. The Slashdot trolls are like South Park in a way, and just like South Park plays an important role in criticizing our culture (mostly American though), so do the trolls for this subculture.
Slavery/War in the intellectual property era. (Score:3)
Yes it is amazing how history repeats itself, but it gets better
After all, to them the industrial revolution was not about industry but about the cotton gin, and about using it to leverage, grow, and extend their slave plantations as never seen before. One would think that the cotton gin would have been used to minimize slavery, but being so greedy, they didn't use it that way. Today they think that the internet is about extending massive intellectual property mega-corporations like Time Warner to leverage control in every home. Of course, one would think that the internet would encourage them to share information more freely, but being greedy, they don't.

AND OH GOD HOW there were always those fools who thought that the slave states could peacefully get along with the free states, just as there are fools today who think that the GPL can peacefully exist in a world with intellectual property. They are wrong and will pay as dearly.

Well, back then, it was only a matter of time before things hit the fan and war broke out. But BEWARE - the US civil war was one of the bloodiest in the history of the world and cost America more lives than both WW1 and WW2 combined. This was directly because the industrial revolution was just bringing about new technologies like the machine gun and gas weapons, but society had not developed defenses for them yet. Yes, for those of you who believe in intellectual property, history has taught us that we should not try to compromise, holding no bars back in terms of simply "putting you out of business". Today we can know on faith that history, technology, and ethics are on our side. I just pray that you'll get it, before you get it, but if history is any indication you won't change until it's too late.
David
dmchr@netcom.com
Moglen knows dick about CyberPatrol! (Score:3)
It is true that CPHack may allow a person to shut off CyberPatrol on a system (if they go through enough hoops).
What Moglen fails to note is that most companies use the Border Manager and Proxy versions. Since those run on a server, nothing anyone does with CPHack on their own machine will matter.

Moglen also forgets that ways to bypass CyberPatrol have been around for quite a while.
Re:My thoughts (Score:4)
As an "anarchist", he is obviously going to be anti-regulations, and so his stance that today's DeCSS authors are tomorrow's heroes is little short of ridiculous.

So if a "vegetarian" were to enter a debate over whether or not to eat meat and praised people that had found a way to avoid eating meat as heroes, that would be ridiculous? You seem to misunderstand that the classification of anyone as a hero is a subjective thing, and he is merely expressing the opinion that these are people that he admires.
They're not heroes, they're just in way over their heads after having done something before thinking of the consequences.
A little patronizing that. How do you know that they haven't made a principled and thoughtful decision to do this?
And look at Matthew Skala. Yeah sure he broke Cyberpatrol's encrypted site list, but he's certainly no "hero" - he caved in as soon as Mattel threatened him with a lawsuit.
That would seem to indicate a rational, thoughtful desire on his part not to become a martyr. He's already done a lot to admire; he's raised the issue, made it public and shown us the threat.
Tell it to the people that founded this country on the basis of a tax-revolt, tell it to Gandhi, tell it to Martin Luther King, tell it to Nelson Mandela, tell it to the German underground resistance during WW2, tell it to anyone that's ever done anything to bring about change.
Your diatribe about Stallman descends into the merely personal. No one is really evaluating him as a hero or otherwise on the basis of his arrogance or the colour of his socks or anything other than the fact that he had an idea that involved change to the way things are and tried to implement it.
Intellectual property isn't like Physical Property (Score:4)
Originally, intellectual property law was modelled after the highly successful and largely settled physical property laws. But from the outset, it was recognized that intellectual property was different. That's why copyrights and patents are of limited duration.
The holders of IP, some of whom are large, powerful corporations, naturally want to protect and enhance their existing assets. This is to the detriment of the citizenry, because no new creation can retroactively occur. At the very least, new IP protections should apply _only_ to new works.
We need people like Eben who know the law to help ensure the IP corps don't run roughshod over the rights of the people.
https://slashdot.org/story/00/04/08/095218/fsf-general-counsel-eben-moglen-talks-on-upside
Introduction: HackerBox 0032: Locksport
This month, HackerBox Hackers are exploring physical locks and elements of security alarm systems. This Instructable contains information for working with HackerBox #0032, which you can pick up here while supplies last. Also, if you would like to receive a HackerBox like this right in your mailbox each month, please subscribe at HackerBoxes.com and join the revolution!
Topics and Learning Objectives for HackerBox 0032:
- Practice the tools and skills of modern Locksport
- Configure the Arduino UNO and Arduino IDE
- Explore NFC and RFID technology
- Develop a demonstration security alarm system
- Implement motion sensors for the alarm system
- Implement laser tripwires for the alarm system
- Implement proximity switches for the alarm system
- Code a state machine controller for the alarm system
- Understand the operation and limitations of Blue Boxes
HackerBoxes is the monthly subscription box service for DIY electronics and computer technology. We are hobbyists, makers, and experimenters. We are the dreamers of dreams. HACK THE PLANET!
Step 1: HackerBox 0032: Box Contents
-
Some other things that will be helpful:
- Soldering iron, solder, and basic soldering tools
- Computer for running software tools
- Solderless breadboard and jumper wires (optional)
- One 9V battery (optional)
Most importantly, you will need a sense of adventure, DIY spirit, and hacker curiosity. Hardcore DIY electronics is not a trivial pursuit, and HackerBoxes are not watered down.
There is a wealth of information for current and prospective members in the HackerBoxes FAQ. For a good introduction to Locksport, we suggest the MIT Guide to Lock Picking.
Checking the calendar on the TOOOL site shows that you will be able to meet folks from TOOOL this summer at both HOPE in New York and DEF CON in Las Vegas. Try to find TOOOL wherever you can in your travels, show them some love, and pick up some useful Locksport knowledge and encouragement.
Diving deeper, this video has some good pointers. Definitely look for the "Lockpicking Detail Overkill" PDF recommended in the video.

Arduino UNO R (datasheet)
-
When you first plug the Arduino UNO into a USB port of your computer, a red power light (LED) will turn on. Almost immediately after, a red user LED will start to blink quickly. This happens because the processor is pre-loaded with the BLINK program, which is now running on the board.
Plug the UNO into the MicroUSB cable, plug the other end of the cable into a USB port on the computer, and the board will once again quickly blink the red user LED. However, the BLINK code in the IDE blinks the LED a little more slowly, so after loading it onto the board, you will notice that the blinking of the LED has changed from fast to slow. Load the BLINK code into the UNO by clicking the UPLOAD button (the arrow icon) just above the code. Watch below the code for the status info: "compiling" and then "uploading". Eventually, the IDE should indicate "Uploading Complete" and your LED should be blinking slower.
Once you are able to load the original BLINK code and verify the change in the LED speed, take a close look at the code. Load the modified code into the UNO.

Security Alarm System Technology
The Arduino UNO can be used as the controller for an experimental demonstration of a security alarm system.
Sensors (such as motion sensors, magnetic door switches, or laser tripwires) can be used to trigger the security alarm system.
User inputs, such as keypads or RFID cards, can provide user control for the security alarm system.
Indicators (such as buzzers, LEDs, and serial monitors) can provide output and status to users from the security alarm system.
If you have a smartphone,.
Regarding the included NFC tag types, the white card and the blue key fob both contain Mifare S50 chips (datasheet).
Step 7: PN532 RFID Module
This NFC RFID module is based on the feature-rich NXP PN532 (datasheet). The module breaks out almost all of the IO pins of the NXP PN532 chip. The module design provides a detailed manual.
To use the module, we will solder in the four pin header.
The DIP switch is covered with Kapton tape, which should be peeled off. Then the switches may be set to I2C mode as shown.
Four wires are used to connect the header to pins of the Arduino UNO.
Two libraries must be installed into the Arduino IDE for the PN532 module.
Install the NDEF Library for Arduino
Install the PN532 Library for Arduino
Once the five folders are expanded into the Libraries folder, close and restart the Arduino IDE to "install" the libraries.
Load up this bit of Arduino code:
Files->Examples->NDEF->ReadTag
Set the Serial Monitor to 9600 baud and upload the sketch.
Scanning the two RFID tokens (the white card and the blue key fob) will output scan data to the serial monitor like so:
Not Formatted
NFC Tag - Mifare Classic
UID AA AA AA AA
The UID (unique identifier) can be used as an access control mechanism that requires that particular card for access - such as to unlock a door, open a gate, or disarm an alarm system.
Step 8: Passcode Keypad
A keypad can be used to enter a passcode for obtaining access - such as to unlock a door, open a gate, or disarm an alarm system.
After wiring the keypad to the Arduino as shown, download the Keypad Library from this page.
Load up the sketch:
File->Examples->Keypad->HelloKeypad
And then modify these lines of code:
const byte ROWS = 4;
const byte COLS = 4;
char keys[ROWS][COLS] = {
{'1','2','3','A'},
{'4','5','6','B'},
{'7','8','9','C'},
{'*','0','#','D'}
};
byte rowPins[ROWS] = {6, 7, 8, 9};
byte colPins[COLS] = {2, 3, 4, 5};
Use the serial monitor to observe which keys of the keypad are being pressed.
Step 9: Siren Using Piezo Buzzer
What alarm system doesn't need an alarm siren?
Wire up the Piezo Buzzer as shown. Note the "+" indicator on the buzzer.
Try out the attached code in the file siren.ino
Step 10: Shift Register RGB LED
The APA106 (datasheet) is three LEDs (red, green, and blue) packaged together with a shift register driver to support a single pin data input. The unused pin is a data output that would allow the APA106 units to be chained together if we were using more than one.
The APA106 timing is similar to the WS2812 or the class of devices broadly referred to as NeoPixels. To control the APA106, we will use the FastLED Library.
Try out the attached sketch onepixel.ino which uses FastLED to cycle the colors on an APA106 wired to pin 11 of the Arduino UNO.
Step 11: Magnetic Proximity Switch
A magnetic proximity switch (or contact switch) is often used in alarm systems to detect the open or closed state of windows or doors. A magnet on one side closes (or opens) a switch on the other side when they are in proximity. The circuit and code here show how easily these "prox switches" can be used.
Note that the included prox switch is "N.C." or Normally Closed. This means that when the magnet is not near the switch, the switch is closed (or conducting). When the magnet is near the switch, it opens up, or stops conducting.
Step 12: PIR Motion Sensors
The HC-SR501 (tutorial) is a motion detector based on a passive infrared (PIR) sensor. PIR sensors measure infrared (IR) radiation from objects in their field of view. All objects (at normal temperatures) emit heat energy in the form of radiation. This radiation is not visible to the human eye because it is mostly at infrared wavelengths. However, it can be detected by electronic devices such as PIR sensors.
Wire up the components as shown and load the example code to feast your eyes on a simple demonstration of motion activated LED illuminations. The activating motion causes the example code to toggle the coloring of the RGB LED.
Step 13: Laser Tripwire
A laser combined with a light sensor module makes a nice laser tripwire to detect intruders.
The light sensor module includes a potentiometer to set a trip threshold and a comparator to trigger a digital signal upon crossing the threshold. The result is a robust, turn-key solution.
Alternatively, you may wish to try rolling your own laser detector by arranging a bare LDR and a 10K resistor as a voltage divider feeding an analog (not digital) input. In this case, the thresholding is done inside the controller. Check out this example.
Step 14: A Security Alarm System State Machine
The demonstrated elements can be combined into a basic, experimental alarm system. One such example implements a simple state machine with four states:
STATE1 - ARMED
- Illuminate LED to YELLOW
- Read Sensors
- Sensors Tripped -> STATE2
- Correct Keypad Code Entered -> STATE3
- Correct RFID Read -> STATE3
STATE2 - ALARM
- Illuminate LED to RED
- Sound Siren on Buzzer
- Exit Button "D" Pressed -> STATE3
STATE3 - DISARMED
- Illuminate LED to GREEN
- Turn off Siren on Buzzer
- Arm Button "A" Pressed -> STATE1
- NewRFID Button "B" Pressed -> STATE4
STATE4 - NEWRFID
- Illuminate LED to BLUE
- Card Scanned (ADD IT) -> STATE3
- Exit Button "D" -> STATE3
Step 15: Blue Box Phreaking
The Blue Box was an electronic phone hacking (phreaking) device that replicates the tones that were used to switch long-distance telephone calls. They allowed routing your own calls and bypassing normal telephone switching and billing. Blue Boxes no longer work in most countries, but with an Arduino UNO, keypad, buzzer, and RGB LED, you can build a cool Blue Box Replica. Also check out this similar project.
There is a very interesting historical connection between Blue Boxes and Apple Computer.
Project MF has some cool information on a living, breathing simulation of analog SF/MF telephone signaling just as it was used in the telephone network of the 1950s through the 1980s. It lets you "blue box" telephone calls just like the phone phreaks of yesteryear.
Step 16: HACK THE PLANET
If you have enjoyed this Instructable and would like to have a cool box of hackable electronics and computer tech projects descend upon your mailbox each month, please join the revolution by surfing over to HackerBoxes.com and subscribing to the monthly surprise box.
Reach out and share your success in the comments below or on the HackerBoxes Facebook Page. Certainly let us know if you have any questions or need some help with anything. Thank you for being part of HackerBoxes!
21 Comments
2 years ago
Anyone yet combined all of the pieces into the "4 state alarm" as the one example suggests? By the way HB, any sample code for the "one example" mentioned in step 14? And what does the "work, home, bed" map have to do with coding an alarm? With all the devices wired up on separate pins as detailed, is it easy to use as one big alarm system? Would be a great escape room project too.
Reply 2 years ago
Yes, indeed! If you want to set up a demo "alarm system" you can combine the seven provided example programs together into whatever subset of sensors, user controls, and indicators that you wish. They all work together fine. The "work, home, bed" image is a graphical example of a simple "state machine" from everyday life having three states and four transition events. The alarm example below that has four states with eight transition events. Here is some general info on state machines:
Reply 2 years ago
Thanks, I've been reading up on state diagrams and state machines today.
Reply 2 years ago
Been working on the code for combining the programs, running into a few compile errors though, see below.
#include "FastLED.h"
#include <Keypad.h>
#define LED_PIN 11
#define MOTION_PIN A3
#define SWITCH_PIN 10
#define LASERTRIP_PIN A2
CRGB leds[1];
/* @file HelloKeypad.pde
|| @version 1.0
|| @author Alexander Brevig
|| @contact alexanderbrevig@gmail.com
||
|| @description
|| | Demonstrates the simplest use of the matrix Keypad library.
|| #
*/
const byte ROWS = 4;
const byte COLS = 4;
char keys[ROWS][COLS] = {
{'1','2','3','A'},
{'4','5','6','B'},
{'7','8','9','C'},
{'*','0','#','D'}
};
byte rowPins[ROWS] = {6, 7, 8, 9};//connect to the row pinouts of the keypad
byte colPins[COLS] = {2, 3, 4, 5};//connect to the column pinouts of the keypad
Keypad keypad = Keypad( makeKeymap(keys), rowPins, colPins, ROWS, COLS );
void setup(){
Serial.begin(9600);
}
FastLED.addLeds<WS2812B, LED_PIN, GRB>(leds, 1);
FastLED.setBrightness(30);
pinMode(SWITCH_PIN, INPUT_PULLUP);
pinMode(MOTION_PIN, INPUT);
pinMode(LASERTRIP_PIN, INPUT);
}
void loop()
{
int s = digitalRead(SWITCH_PIN);
if (s)
{
//Magnet in Proximity - Door Closed
//Open Switch - PULLUP wins - Input HIGH
//GREEN LED
leds[0] = CRGB(0, 150, 0);
}
else
{
//Magnet NOT in Proximity - Door Open
//Since "Normally Closed" - ground wins - input LOW
//RED LED
leds[0] = CRGB(150, 0, 0);
}
FastLED.show();
delay(200);
}
int t = digitalRead(MOTION_PIN);
if (t)
{
//Now Motion - GREEN LED
leds[0] = CRGB(0, 150, 0);
}
else
{
//Motion Detected - RED LED
leds[0] = CRGB(150, 0, 0);
}
FastLED.show();
//delay(50);
{
char key = keypad.getKey();
if (key){
Serial.println(key);
}
}
int buzz=12; // Buzzer Pin
void setup()
{
siren();
}
void siren()
{
for(int hz = 440; hz < 1000; hz+=25)
{
tone(buzz, hz, 50);
delay(5);
}
for(int hz = 1000; hz > 440; hz-=25)
{
tone(buzz, hz, 50);
delay(5);
}
}
Reply 2 years ago
We're you successful in combining all the code together? If so, can you share?
2 years ago
I don’t know if anybody else has said it in the comments or not... but if you’re using Linux and the IDE gives an error about sync when you try to upload the program... get a USB hub. It’s a long story... but just trust me... put a USB hub in line with the board and make sure to go back and check your port menu for a USB port option. Hope this spares someone else the two days of annoyance I experienced.
2 years ago
Concerned about the legitimacy of the "Arduino" controller in this box. HackerBoxes claims it to be an Arduino, but it does not appear to have the insignia or the quality. This controller came with missing headers on the analog ports (but filled with solder). No way to contact them to ask about it.
Reply 2 years ago
It works great for me. It's cheap and it works. What more can you ask for?
Reply 2 years ago
You can always contact support@hackerboxes.com for issues with anything in your box. Also, if it is easier, you can just reply to one of the many notification emails you received as part of the purchase and fulfillment process.
2 years ago
Wow! Love that we went retro with the Blue Box! I remember building and using one in the early eighties. Love this theme this month!!
Reply 2 years ago
I can't see much use for the blue box. My favorite phone encoder for phone systems and pay phones was my Sharp Pocket Auto Dialer EL-6260. Haven't had a use for it in years...
2 years ago
I'm not usually a picky person, but, the lock was easy to pick. LoL!
Great box! ;)
Reply 2 years ago
Mine flew apart from the plastic window after the first pick. Got one from another group recently and it is rock solid. Love the picking though. Went out to the garage and been picking other padlocks I had laying around...
Reply 2 years ago
I thought so too, then I grabbed a selection of pad locks from my garage. They weren't much harder than the transparent lock. It is a lot of fun.
Reply 2 years ago
Being able to see into the lock makes it much easier to pick. Have you tried it with your eyes closed?
Reply 2 years ago
I haven't tried it with my eyes closed. I was able to open a different padlock with a similar lock mechanism I have on a toolbox and got it picked open.
Reply 2 years ago
Nice! It sounds like you have "the touch" :)
2 years ago
I was pretty astounded at how easy it was to pick this lock and a few others lying around including a 'Master' padlock. Was able to feel the tumblers and rotate the cylinder without too much difficulty. Now to try to find some more challenging locks (of my own of course)!
Reply 2 years ago
I had a couple of locks that I had lost the keys for. The raking tool worked like a charm for both. Fun!
2 years ago on Step 7
To avoid confusion, this is what the switches should be set to for I2C mode
One day I stumbled upon this awesome site called “Exercism”. It mainly focuses on improving your skills via Deep Practice & Crowdsourced Mentorship, strengthening your problem-solving skills by guiding others through the process.
I got hooked onto it and then I thought why not share my learning with all. Being a Ruby programmer, my notes are Ruby based.
THINGS TO KEEP IN MIND
Before tackling any program/problem/challenge you should keep certain points in your mind. Some are general, some are more language specific.
Some questions you might want to ask yourself:
- Am I adhering to the Single Responsibility Principle?
- Is all my code on the same abstraction level?
- Can I combine conditional clauses?
- How does my API look to clients of this code (i.e. how do other classes interact with this class)?
- Do I have duplication?
- What requirements are likely to change? Would I be able to implement such changes painlessly? [Note that it is not that you should normally guess what future requirements are, but it can be helpful on Exercism]
Use private methods; that way you limit the public API of any class. It is considered good practice to expose as few methods as possible, so you don’t end up with a “fat interface” and have to maintain a lot of methods that probably nobody else uses (this is troublesome when you want to refactor your code but you aren’t sure if somebody is using method x or not).
If you are using built-in methods you should understand them first: how they work, what the pros & cons are, and in which use cases you should use them. Don’t assume, dig them up. Some examples:
- p & puts are not the same
p foo does puts foo.inspect , i.e., it prints the value of inspect instead of to_s, which is more suitable for debugging (for example, you can tell the difference between 1, “1” and “2\b1”, which you can’t when printing without inspect).
- to_a is super-expensive on big ranges.
- Use string interpolation only when necessary. Example where string interpolation is needed:
name = 'Anjali Jaiswal'
puts "Welcome #{name}"
# Example where string interpolation is not needed
puts 'Hello World!'
- a.map(&:to_i), i.e. Symbol#to_proc: you are passing a reference to a block (instead of a local variable) to a method. The Symbol class implements the to_proc method, which unwraps the short version into its longer variant. Example:
a = ["2", "3"]
a.map(&:to_i)  # same as a.map(&:to_i.to_proc)
#=> [2, 3]
SOME PROBLEMS FROM EXERCISM
I have described what I learned from individual problem. I am also putting names of the relevant problems so that you can look them up. My code is uploaded on Github. Also I have explained why I used certain in-built Ruby methods. I have highlighted some specific things that you should consider in general.
ACCUMULATE
Two ways to yield a block in ruby : yield & call
# yield statement
def speak
  yield
end

# call statement
def speak(&block)
  puts block.call
end
(Read “ZOMG WHY IS THIS CODE SO SLOW?”)
- Yield returns the last evaluated expression (from inside the block). So in other words, the value that yield returns is the value the block returns.
- If you want to return an array use map/select depending on situation. You don’t need to initialize an array & return it.
ANAGRAM
Anagrams are same if sorted.
select returns the elements for which the condition holds, so when filtering on a condition, prefer select over map / each.
BEER-SONGS
BINARY
Use map.with_index so that you do not have to initialize another variable & directly apply inject on the result.
BOB
Start simple. Try to make test pass in a very easy way.
ETL
Use of each_with_object :
input.each_with_object({}) do |(score, letters), result| letters.each do |letter| result[letter.downcase] = score endend
GRADE-SCHOOL
Initialize an empty hash and declare values as an array. To sort & return a hash use hash.sort.to_h .
GRAINS
Sometimes the solution can be a single expression. Check for patterns. For instance, here you don't have to iterate through 64 separate calculations in self.total. Think about the totals for the first few squares: 1, 3, 7, 15, 31. Now think about those totals in relation to powers of two: the total after square n is 2x - 1, where x = 2**(n - 1), which simplifies to 2**n - 1.
HAMMING
- Raising an error could be extracted into another method. Always look out for the single responsibility principle.
- select & count are perfect methods here.
- It is considered a good practice to expose as few methods as possible, so you don’t come up with a “fat interface” and maintain a lot of methods that probably nobody else will use. This is troublesome when you want to refactor your code but you aren’t sure if somebody is using that method or not. In this exercise, the only method being called is self.compute , so this means you could make self.raise_size_mismatch private and avoid exposing it.
HELLO-WORLD
You can pass default argument to a Ruby method :
def hello(name = 'world')
  # some code
end
LEAP
You can minimize the conditional from:

if year % 4 == 0 && year % 100 == 0
  year % 400 == 0 ? true : false
elsif year % 4 == 0
  true
else
  false
end

to:

year % 400 == 0 || year % 4 == 0 && year % 100 != 0
But I wouldn’t prefer it, because the former is more readable. Sometimes you have to weigh which matters more.
PHONE-NUMBER
Interesting use of sub:
'0987654321'.sub(/(\d{3})(\d{3})/, "(\\1) \\2-")
# => "(098) 765-4321"

# You can return a part of a string like:
'0987654321'[0..2]
# => "098"
RAINDROPS
Assign common values as constant. Avoid unnecessary computing. For example you only need to check if 3, 5 & 7 are factors.
ROBOT-NAME
The shuffle method of Array can be applied to 'a'..'z', but is futile for 0..999 as it takes lots of memory and computation. Use default assignment (||=) instead.
That’s it for now. Please do let me know how this blog helped you and whether I missed something that should be included. Sayonara for now!
to 3, then 1 to 5 and finally 1 to 7.
What is the number x?
x ≡ 2 (mod 3)
x ≡ 3 (mod 5)
x ≡ 2 (mod 7)
By doing this Han Xin counting-off algorithm, he was pretty confident the spies could not figure out the accurate number of total soldiers, unless a spy knew number theory or had a Python program 😝.
The solution to the Han Xin Counting Algorithm was written in a poem:
3 friends walk for 70 miles.
5 plum tree with 21 blossom flowers.
7 kids party in the full moon(15).
Mod 105 you would know!
三人同行七十稀, 五树梅花廿一枝, 七子团圆正半月, 除百零五便得知!
Solutions:
Direct Solution From the Poem:
# 1. Multiply each residue by the corresponding number from the poem:
2*70 + 3*21 + 2*15 = 233
# 2. Take the result mod the least common multiple of (3, 5, 7):
233 mod 105 = 23
Note: The solution is NOT unique any multiple of 105 can be added to the solution and still satisfy the condition, such as 128, 233, 338 ….
The solution from Python Code:
from functools import reduce

def mul_inv(a, b):
    # modular multiplicative inverse of a mod b (extended Euclid)
    b0 = b
    x0, x1 = 0, 1
    if b == 1:
        return 1
    while a > 1:
        q = a // b
        a, b = b, a % b
        x0, x1 = x1 - q * x0, x0
    if x1 < 0:
        x1 += b0
    return x1

def chinese_remainder(n, a):
    total = 0
    prod = reduce(lambda x, y: x * y, n)
    for n_i, a_i in zip(n, a):
        p = prod // n_i
        total += a_i * mul_inv(p, n_i) * p
    return total % prod

n = [3, 5, 7]
a = [2, 3, 2]
print(chinese_remainder(n, a))  # 23
General Solution:
How do we find a general solution for Chinese Remainder Theorem Problems?
Step 1. For each modulus n_i (the moduli n_1 ... n_m must be pairwise coprime), find a multiple of the other moduli that is congruent to 1 mod n_i.
Step 2. Multiply each residue by its corresponding number from Step 1.
Step 3. Add all the numbers from Step 2 and take the result mod N = n_1 * n_2 * ... * n_m.
n mod 3 = 2
n mod 5 = 3
n mod 7 = 2

Step 1.
# Find a multiple of 5*7 with remainder 1 mod 3
5*7 = 35 mod 3 = 2
5*7*2 = 70 mod 3 = 1 *
# Find a multiple of 3*7 with remainder 1 mod 5
3*7 = 21 mod 5 = 1 *
# Find a multiple of 3*5 with remainder 1 mod 7
3*5 = 15 mod 7 = 1 *

Step 2. Multiply each residue by its number from Step 1 and add:
2*70 + 3*21 + 2*15 = 233

Step 3. Find the remainder mod 3*5*7 = 105:
233 mod 105 = 23
Why does this algorithm work? Enter the famous Chinese Remainder Theorem.
Chinese Remainder Theorem:
In abstract algebra, the theorem is often restated as: if the n_i are pairwise coprime, the map x mod N ↦ (x mod n_1, ..., x mod n_k) defines a ring isomorphism[12] Z/NZ ≅ Z/n_1Z × ... × Z/n_kZ.
Summary:
The solution of a Chinese Remainder Theorem problem is relatively easy to find as long as the problem satisfies the pairwise coprime assumption. However, the idea and concept of ring isomorphism behind the theorem is complicated and took a long time for me to really understand.
Note: I really loved the Chinese Remainder Theorem when I was in school, not only because it's probably the only thing in the math book that's from China, but also because of the interesting story behind it. I still doubt that Han Xin really used the Chinese Remainder Theorem as a soldier counting system, because how would he know if he had 23 or 105 or 233 soldiers…
p.s. my friend used to joke it is Tibetan Remainder Theorem
Happy Studying!🐵
Reference :
Referential integrity is a relational database concept that states implied relationships among data should be enforced. Referential integrity ensures that the relationship between rows in two tables will remain synchronized during all updates and deletes.
Rails allows us to easily set up these implied relationships, but does nothing to help us enforce referential integrity. It’s very simple to accidentally or intentionally break referential integrity in most Rails applications.
An Example Scenario
Consider the following minimal set of models describing a blogging engine:
class User < ActiveRecord::Base
  has_many :posts

  validates :name, presence: true
end

class Post < ActiveRecord::Base
  belongs_to :user

  validates :user, presence: true
end
Our blogging platform has taken off, but we’ve received requests from some users to delete their accounts. We add an interface for administrators to delete users and everything works fine.
A few days later we receive a report that we’re getting 500s on our “Popular Posts” page. Looking into it, we find that we’re getting:
undefined method `name' for nil:NilClass
This is happening when we render the name of the user associated with each post.
Somehow we’ve got a post that has no associated user even though
Post has a
validation that requires a
user.
We quickly realize that we allowed administrators to delete users but never cleaned up the deleted users’ posts. We manually clean the data and make the following change to our model to prevent this in the future:
class User < ActiveRecord::Base
  has_many :posts, dependent: :destroy
end
The addition of dependent: :destroy means when a user is destroyed their posts will be as well. Administrators can now delete users without fear of orphaned posts causing problems.
Months pass and our now-venture-backed blogging engine has attracted millions of
users. Unfortunately, lots of those users are spammers. We’re told we’ll be
given a daily list of
user_ids corresponding to spammers and need to write a
job to delete them. We know this list could include thousands of ids on any
given day, so we write the following code to avoid instantiating those objects
and issuing thousands of queries to destroy them:
user_ids = CSV.read(csv_path).flatten
User.where(id: user_ids).delete_all
We soon receive a call telling us we’re getting 500s on the “Recent Posts” page.
You guessed it; we violated referential integrity once again and we’re seeing
the same
NoMethodError as before.
Why didn’t
dependent: :destroy save us here? Well,
delete_all doesn’t
instantiate the objects it is deleting and thus does not fire any
after_destroy callbacks. The
dependent options work via that callback.
Add Foreign Key Constraints
Fool me once, shame on me. Fool me thrice and I gotta find a new job. We can’t let this happen again. Rails can’t be trusted to maintain referential integrity, but you know what’s really good at doing that? Our relational database.
We can add foreign key constraints at the database level and ensure that the database will reject any operation that would violate referential integrity. Until Rails 4.2 ships with native support for foreign keys, we’ll need to add the Foreigner gem in order to do this. We add Foreigner and run the following migration:
def change
  add_foreign_key :posts, :users
end
This will run the following SQL if you're using MySQL and Foreigner:
ALTER TABLE `posts` ADD CONSTRAINT `posts_user_id_fk` FOREIGN KEY (`user_id`) REFERENCES `users`(id);
With the foreign key in place, any operation that causes a post to point to a non-existent user will fail. It's important to realize that a user_id of NULL is still allowed, so we need appropriate presence validations and NOT NULL constraints.
Now our nightly job is failing due to the foreign key constraint. The database is preventing us from deleting any users that still have associated posts. Does this mean we have to go back to the dreaded N+1 query scenario to destroy individual users?
Cascading Deletes
With a slight tweak to our foreign key, we can have the database, rather than ActiveRecord callbacks, handle the cascading deletes. Let’s change our foreign key just a bit:
def change
  remove_foreign_key :posts, :users
  add_foreign_key :posts, :users, dependent: :delete
  # or with the upcoming native support in Rails 4.2:
  # add_foreign_key :posts, :users, on_delete: :cascade
end
This will run the following SQL when creating the foreign key:
ALTER TABLE `posts` ADD CONSTRAINT `posts_user_id_fk` FOREIGN KEY (`user_id`) REFERENCES `users`(id) ON DELETE CASCADE;
With the dependent option functionality now moved to our foreign key, the database can handle cleaning up the associated records. We no longer need to rely on callbacks for this behavior, so let's remove the option.
class User
  # Old association:
  # has_many :posts, dependent: :destroy
  has_many :posts
end
Foreigner and the native support in Rails 4.2 both support options that cascade, nullify, and restrict changes. See the documentation for Foreigner and Rails 4.2.
Adding Foreign Key Constraints to an Existing Application
With immigrant, you can automatically generate a migration that will add any foreign key constraints your application is missing. With immigrant added to your Gemfile, run rails generate immigration add_foreign_keys to create the migration.
If you’re working with an application of any substantial size that has been running for some time, you are very likely to encounter errors applying this migration to your production data. Foreign key constraints cannot be applied if they are not valid for all current data.
I suggest downloading a copy of your production data and trying to run the migration on that data to surface any issues you will have at deployment time. Once the data is fixed and your migration applied in production the actions that were causing the invalid data will result in errors, which you can then target for fixes.
Caveats
Polymorphic associations are maintained by Rails; the database knows nothing about them. Foreign key constraints cannot help you here so you must keep logic in your Rails application to try to maintain referential integrity.
The Lesson
Foreign key constraints help us maintain valid data and are yet another way of helping us avoid unexpected nil values in our applications. It's unrealistic to expect application logic alone to provide the same level of protection.
Enforcing referential integrity is another job relational databases are better prepared to handle than Rails application code. Be a Juke Box Hero and check out Foreigner (or Rails 4.2) today.
Getting Started for Educators
How to create a course with PyCharm Edu
- A course is just a project of a special type. It consists of lessons.
- A lesson is a directory where the task files are stored. Each lesson can contain several tasks.
- A task is a directory where the following files are stored: the task description, which you type in the Task Description tool window; the file with the extension .py that contains the exercise code and can contain answer placeholders; and the test file tests.py that helps you make sure the students have fulfilled your task correctly. A task can also contain more files required for fulfilling it.
- An answer placeholder is a frame shown to the students that replaces and hides a part of your initial code. These placeholders should contain descriptions of actions to be taken by the students to complete the tasks. You have to create descriptions of these actions yourself.
- If the students are not sure of themselves, they can view hints. The hints are also created by you. In this guide you'll find a detailed explanation of each action.
Prerequisites
Make sure that the following prerequisites are met:
- You are working with the latest version of PyCharm Edu.
- You have a Python interpreter properly installed on your computer. Refer to the help section Configuring Python SDK.
Creating a course
You can create a project in two possible ways: either from the Welcome screen, or by choosing File | New Project on the main menu.
In the Select project type dialog, choose the project type Course creation. As you can see in the Project view, PyCharm Edu has created some infrastructure:
There is a single top-level node PyCharmTutorialCourse. If you expand it, the nested elements become visible:
- Under PyCharmTutorialCourse, you see all files and folders pertaining to your new project. As you can see, there is not so much so far.
- The folder lesson1 contains the tasks. The task toolbar allows you to check the task execution, move between tasks, refresh a task, view hints, and edit tasks. Note that these actions are also available to the students.
To rename a lesson or a task, select it and press Shift+F6:
Note that other refactorings work for the lessons and tasks too: Copy, Move, Safe Delete.
Next, let's add an image file that should be read. PyCharm Edu makes importing such a file quite easy — just drag it to the Project tool window, and then specify the target directory in the Move dialog:
However, this image file does not belong to the Read Images task — so, let's add it. To do so, right-click the image file, and choose Course Creator | Make Visible to Students:
Writing a task text
Now it's time to write a task description. Go to the Task Description tool window and click the edit button. You see that HTML markup appears:
Select the existing text fragment, and replace it with the following text:
<html> Write your task text here. <br> </html>
(Note that you can also use Markdown to write your description. The language is configured in the page Education under the node Tools of the Settings/Preferences dialog.)
Instead of the existing text, type the following:
Use the <code>imread</code> function to load the PyCharm logo and play with it a little bit.
Next, in the file read_images.py, add an answer placeholder.
- In the Add Answer Placeholder dialog box, specify the text that will replace the selected fragment in the student's project:
- Click OK when ready.
If you want to show a prompt to your students, or a theoretical help for the specific answer placeholder, just type the hint text. If you want to add more hints, click plus.
Previewing a task
You would probably like to see how your task will be viewed by your students. To do that,
right-click the task file in the Project tool window, and on the context menu choose Course
Creator | Show Preview. In the example we are working on, right-click the file
read_images.py:
PyCharm Edu immediately opens a separate window with the task preview, as if it were run by a student. The test function follows:

if task_file.image_name == "PyCharm.png":
    passed()
else:
    failed("PyCharm logo filename is incorrect")

if __name__ == '__main__':
    run_common_tests()
    test_answer_placeholders()
CHECK YOUR EXERCISE CODE AND TESTS
OK, let's try to execute code and tests for our example.
To run tests for our example:
2. Open the file write_image.py (F4) and enter the following code:
from skimage import io
from skimage import data

coffee = data.coffee()
filename = "coffee.png"
io.imsave(filename, coffee)
3. In the file write_image.py, create the answer placeholder. To do that, select the piece of code filename, coffee and choose Add Answer Placeholder from the context menu of the selection.
4. In the Add Answer Placeholder dialog, write the text that will be shown to the students, and the hint:
5.()
6. Add the image file coffee-answer.png to the root of the project.
7. Run the test. To do that, right-click the test file in the Project tool window:
1. In the Project tool window, select the lesson Advanced, and press Alt+Insert again to create a new task named Let's swirl PyCharm.
2. Rename the file task.py to swirl_pycharm.py (Shift+F6), open it for editing (F4), and type the following code:
from skimage import io
from skimage.transform import swirl

image_name = "PyCharm.png"
img = io.imread(image_name)
swirled2 = swirl(img, rotation=0, strength=20, radius=120, order=2)
io.imsave("swirled2.png", swirled2)
3. Then, add the answer placeholder to the code rotation=0, strength=20, radius=120, order=2 of the statement swirled2 = swirl(img, rotation=0, strength=20, radius=120, order=2), with the text “Choose your parameters”.
4. Add the image PyCharm.png to the root of the task Let's swirl PyCharm.
5.()
6. Write the task text in the Task Description tool window:
<html> <br/> Use <b>imread</b> function to load PyCharm logo and play with it a little bit. </html>
Then create the file answer.txt with the following text:
Hope you enjoyed our tutorial! Create your own courses and have fun!
- tests.py
Creating the course archive to share it with students
OK, your first course is ready. What's next?
1. Right-click anywhere in the Project tool window and choose Course Creator | Generate Course Archive:
2. Type the archive name and location (or accept defaults) in the dialog box that opens:
After clicking OK, PyCharm Edu notifies that the course zip archive has been created successfully:
You can view the archive file PyCharmTutorialCourse.zip with the actual course archive in the Project tool window.
Students can use this archive to go through your course!
To upload the course to Stepik, right-click anywhere in the Project tool window and choose Course Creator | Upload Course to Stepik:
PyCharm Edu shows the login dialog:
PyCharm Edu saves these credentials in the Educational page of the Settings/Preferences dialog.
Note that your course is private by default. To make it public, you have to clear the check box Private course (invite learners by private invitation link) of the course settings.
That's it, folks... Congrats! You've created your first educational course.
Support Engineer: Hi, welcome to Red Hat support, how can we help?
Caller: Our web-server stopped responding and we had to reboot to restore it back, we need to find the root cause.
Support Engineer: Sure, was anything changed recently on this server?
[...]
The above is an example of a typical root cause analysis (RCA) request Red Hat's support receives. RCA methodology has been used systematically for the past 30 or so years to help the IT industry find the origin of problems and, ultimately, how to fix them. In this blog post I argue - I think I'm not the first and won't be the last - that the current RCA process will not be suitable for the future of the IT industry and that a different approach is needed.
The origin
RCA can be traced all the way back to Newton's third law of motion: "For every action, there is an equal and opposite reaction." But in the modern age, it is linked primarily to Toyota's 5-whys process developed back in 1958, which requires asking a 5-level-deep "why?"; the answers to the whys should eventually reveal the root of all evil, AKA the root cause. The example provided on this Wikipedia page is straightforward.
The RCA process for the IT world wouldn’t be any different.
The web server is not responding. (the problem)
Why? - The process was killed. (First why)
Why? - The kernel killed it. (Second why)
Why? - The server was running out of memory. (Third why)
Why? - Too many child processes were spawned and consumed all memory and swap. (Fourth why)
Why? - “MaxRequestsPerChild” was set to zero, which stopped recycling of child processes. (Fifth why, a root cause)
The problem(s)
Let’s check some of the reasons why the current RCA process won’t be the best fit for modern IT systems.
Binary state of mind
RCA implies that there are two states for our systems: working and not working. While this might be true for a monolithic legacy system, like the web server example above, it is either serving our web pages properly or not, but a modern microservices system is far more complicated.
This complexity makes it more probable to operate, somehow, with a broken component, as Dr. Richard Cook describes in his paper “How Complex Systems Fail,” “complex systems run in degraded mode.”
And it comes down to all the talented DevOps engineers, self-healing, load-balancing and failover mechanisms we have built throughout the years to keep those systems up and running. Reaching a failed state in modern microservices systems require multiple failures to be aligned together.
For example, consider the following scenario. In a CI/CD OpenShift environment, poorly tested functionality in an application is pushed to production OpenShift pods. The application and the mentioned functionality receives a large volume of traffic due to the holiday season. Writing to a busy SAN storage array, the slow writes lead to increasing CPU load, which triggers autoscaling of pods. Finally, autoscaling hits the namespace’s underestimated resource-quota and the cluster cannot scale any more and the website is unresponsive to its visitors.
What would be the root cause here? The poorly written functionality, the busy SAN storage, the resource quotas, or all of them combined?
You might think this is an over complicated scenario but if you read some post-mortems like those of Monzo bank, Slack and Amazon you will see this is not science fiction at all.
What You Look For Is What You Find
Or WYLFIWYF, a principle known to resilience engineers, which means that an RCA usually finds what it looks for. The assumptions about the nature of the incident guide the analysis. This predetermination sometimes hinders the ability to address secondary factors contributing to the outage.
Systems are static
RCA is like a witch hunt for which “change” caused the system to fail. This might work with a monolithic legacy system, where changes were scarce and infrequent, but the whole microservices idea we are moving to is about rapidly changing, loosely coupled components.
The domino effect
In simple linear systems - think of your favorite 3-tier application - malfunctions and their causes were perceived in a "domino effect" type of thinking. The IT industry is moving to a non-linear - microservices - model, where failures will mostly be governed by the resonance and amplitude of failures instead. That is to say, the alignment of various components' failures and their magnitude will be the culprits behind major incidents.
What’s next
Hopefully, by now you’re convinced that the current RCA process is not ideal for complex microservices systems. Adapting to these new system models require changes on different levels. I will try to list here what I think might be helpful.
Learning
- Invest in training DevOps engineers to understand the microservices system inside out. During or after a major incident you don’t want to hear “we have no clue where to start looking.” Microservices systems are complex by nature and require deep understanding.
- Propagate the learning from an incident. Build an incident database accessible to the wider organization. Don’t rely on emails and newsletters.
Adapting
- Adopt a post-incident review process rather than an RCA process. A post-incident review aims to keep a record of an incident's timeline, its impact, and the actions taken, and to provide the context of contributing factors; most major incidents in microservices systems will have multiple contributing factors, none more important than the others.
- Avoid a finger-pointing culture; instead, encourage a blameless post-incident review process that encourages reporting errors, even human errors. You can't "fix" people, but you can fix systems and processes to better support people in making the right decisions.
- Introduce chaos engineering principles to your workflow. Tools such as Netflix’s chaosmonkey or kube-monkey for containers help you validate the development of failure-resilient services and build confidence in your microservices system.
- A shameless plug: acquiring TAM services helps with microservices adoption, knowledge transfer, and incident analysis, especially when multiple vendors are involved.
Monitoring
- Microservices systems have different, and more intensive, monitoring requirements than monolithic systems, requiring you to correlate data from different sources. Prometheus and Grafana provide a great combination of data aggregation and visualization tools.
- Use monitoring dashboards providing both business and system metrics.
- Sometimes the signal-to-noise ratio in microservices systems metrics makes it hard to find what to monitor. Here are some pointers that can help with anticipating or analysing a problem:
- Success/failure ratio for each service.
- Average response time for each service endpoint.
- Average execution time for the slowest 10% requests.
- Average execution time for the fastest 10% requests.
- Map monitoring functions to the organizational structure by reflecting the microservices' structure in the teams monitoring them. That means smaller, loosely coupled teams with autonomy, yet still focused on the organization's strategic objectives.
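To make the pointers above concrete, here is how some of them might look as Prometheus queries. The metric names (http_requests_total, http_request_duration_seconds) are illustrative assumptions, not from any specific setup:

```promql
# Success/failure ratio for each service (share of 5xx responses)
sum by (service) (rate(http_requests_total{code=~"5.."}[5m]))
/
sum by (service) (rate(http_requests_total[5m]))

# Average response time for each service endpoint
sum by (endpoint) (rate(http_request_duration_seconds_sum[5m]))
/
sum by (endpoint) (rate(http_request_duration_seconds_count[5m]))

# Approximate cutoff for the slowest 10% of requests (90th percentile)
histogram_quantile(0.9, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
```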
Responding
- Establish a regular cadence of incident review meetings to work through incident review reports, close out any ongoing discussions and comments, capture ideas, and finalize each report's state.
- Sometimes, political pressure pushes for an RCA as early as possible. In complex microservices systems, that may address only the symptoms rather than the real problems. Take adequate time to absorb, reflect, and act after an incident.
Final thoughts
That being said, I want to leave you with what led me to write this blog post, a wonderful research paper titled How Complex Systems Fail by Dr. Richard I. Cook, MD, and his presentation from Velocity Conference.
Header image provided courtesy of Kurt:S via Flickr under the Attribution 2.0 Generic (CC BY 2.0) Creative Commons license.
Source: https://www.redhat.com/de/blog/not-root-cause-youre-looking
Re: Hiding base class methods
- From: "DanielF" <DanielF@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Wed, 1 Jun 2005 08:35:12 -0700
OK, thanks - that does the trick!
"Tim Wilson" wrote:
> If you compile the following code into an assembly, and then reference this
> assembly in a project, then when you create an instance of the "Derived"
> class, the method named "Method" will not show up in the editor through
> intellisense. AFAIK, this is the only way to accomplish hiding inherited
> members from an end-developer. Of course, if the end-developer specified
> that the "Method" should be called against an instance of the "Derived"
> class, although they will not get intellisense to complete the code, it will
> compile since the method does exist. This is the reason why the
> "NotSupportedException" is pitched.
>
> using System;
> using System.ComponentModel;
>
> namespace MyNamespace
> {
> public class Base : System.Object
> {
> public void Method()
> {
> // Do something.
> }
> }
>
> public class Derived : Base
> {
> [EditorBrowsableAttribute(EditorBrowsableState.Never)]
> public new void Method()
> {
> throw (new NotSupportedException("This method is not supported."));
> }
> }
> }
>
> --
> Tim Wilson
> .NET Compact Framework MVP
>
> "DanielF" <DanielF@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
> news:78234655-2BEF-4A35-9881-CC21C1DAA541@xxxxxxxxxxxxxxxx
> > I would like to find a way to completely hide selected methods of a base
> > class when inheriting from that base class. I know how to override a
> method
> > and replace a base class method using the "new" modifier on the method,
> but
> > then the method of the derived class shows up. I would like to be able to
> > completely hide a method so that it doesn't show up at all in an
> application
> > that instantiates the derived class. Any help would be appreciated.
>
>
>
Source: http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.compactframework/2005-06/msg00057.html
ASP.NET MVC: Converting business objects to select list items
Some of our business classes are used to fill dropdown boxes or select lists, and often you have some base class for all your business classes. In this posting I will show you how to use a base business class to write an extension method that converts a collection of business objects to ASP.NET MVC select list items without writing a lot of code.
BusinessBase, BaseEntity and other base classes
I prefer to have some base class for all my business classes so I can easily use them regardless of their type in contexts I need.
NB! Some guys say it is a good idea to have a base class for all your business classes, and they also suggest having the mappings done the same way in the database. Others say it is good to have a base class, but you don't need one master table in the database containing the identities of all your business objects. It is up to you how and what you prefer to do, but whatever you do, think and analyze first, please. :)
To keep things maximally simple I will use very primitive base class in this example. This class has only Id property and that’s it.
public class BaseEntity
{
public virtual long Id { get; set; }
}
Now we have Id in the base class, and we have one more question to solve: how do we better visualize our business objects? To users an ID is not enough; they want something more informative. We can define some abstract property that all classes must implement, but there is also another option: overriding the ToString() method in our business classes.
public class Product : BaseEntity
{
public virtual string SKU { get; set; }
public virtual string Name { get; set; }
public override string ToString()
{
if (string.IsNullOrEmpty(Name))
return base.ToString();
return Name;
}
}
Although you can add more functionality and properties to your base class, we are at the point where we have what we need: identity and a human-readable presentation of business objects.
Writing list items converter
Now we can write method that creates list items for us.
public static class BaseEntityExtensions
{
public static IEnumerable<SelectListItem> ToSelectListItems<T>
(this IList<T> baseEntities) where T : BaseEntity
{
return ToSelectListItems((IEnumerator<BaseEntity>)
baseEntities.GetEnumerator());
}
public static IEnumerable<SelectListItem> ToSelectListItems
(this IEnumerator<BaseEntity> baseEntities)
{
var items = new HashSet<SelectListItem>();
while (baseEntities.MoveNext())
{
var item = new SelectListItem();
var entity = baseEntities.Current;
item.Value = entity.Id.ToString();
item.Text = entity.ToString();
items.Add(item);
}
return items;
}
}
You can see here two overloads of the same method. One works with IList<T> and the other with IEnumerator<BaseEntity>. Although my repositories mostly return IList<T> when querying data, there are always situations where I can use more abstract types and interfaces.
Using extension methods in code
In your code you can use ToSelectListItems() extension methods like shown on following code fragment.
...
var model = new MyFormModel();
model.Statuses = _myRepository.ListStatuses().ToSelectListItems();
...
You can call this method on all your business classes that extend your base entity. Want to have some fun with this code? Write an overload of the extension method that accepts a selected item ID.
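One possible answer to that exercise, as a sketch to add alongside the existing methods in BaseEntityExtensions (the parameter name selectedId is my own choice, not from the original posting):

```csharp
public static IEnumerable<SelectListItem> ToSelectListItems<T>(
    this IList<T> baseEntities, long selectedId) where T : BaseEntity
{
    var items = new HashSet<SelectListItem>();

    foreach (var entity in baseEntities)
    {
        items.Add(new SelectListItem
        {
            Value = entity.Id.ToString(),
            Text = entity.ToString(),
            // Mark the item whose ID matches the requested one as selected.
            Selected = entity.Id == selectedId
        });
    }

    return items;
}
```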
Source: http://weblogs.asp.net/gunnarpeipman/asp-net-mvc-converting-business-objects-to-select-list-items
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
Posted by request. It would be nice if this could be examined, to figure out what's up. Any takers?
-benjamin
---------- Forwarded message ----------
Date: Wed, 26 Sep 2001 12:25:12 +0200
From: Gerhard Wesp <gwesp@cosy.sbg.ac.at>
To: Benjamin Kosnik <bkoz@redhat.com>
Subject: Re: a note on std::complex performance
hi benj,
sorry, after an involuntary compiler upgrade (accidentally deleted the binaries), i'm currently unable to reproduce the discrepancy. i'll get back should it reappear sometimes. anyway, a short benchmark program with my inner loop is included.
greetings
-gerhard
#include <complex>

typedef std::complex< double > complex ;

template< typename T >
inline T sqr( T const& t ) { return t * t ; }

inline double abs_squared( complex const& z )
{ return sqr( z.real() ) + sqr( z.imag() ) ; }

int main()
{
    complex c( -.1 , -.1 ) ;
    complex z = c ;
    long iter = 0 ;
    while( ++iter < 100000000 && abs_squared( z ) < 4 )
        z = z * z + c ;
}
Source: https://gcc.gnu.org/legacy-ml/libstdc++/2001-09/msg00110.html
Hi,
Thanks for the quick response. I wrote a small test service to check whether I can get text in Chinese from the server. Here is the central piece of the code,
import java.io.*;
import org.apache.soap.util.xml.*;
import org.apache.soap.util.DOMUtils;
public class TextService {
public String getText ( ) throws Exception {
File ff = new File("news.txt"); // for testing only
FileReader rr = new FileReader (ff);
BufferedReader in = new BufferedReader(rr);
String out;
String ret ="<![CDATA[";
ret += "<?xml version='1.0' encoding='Big5' ?>";
ret += "<?xml:stylesheet type='text/xsl' href='quotes.xsl'?>";
try {
while ( (out=in.readLine()) != null ) {
System.out.println(out);
ret += out + "\n";
}
} catch ( Exception e ) { System.err.println("Something Wrong!"); }
ret +="]]>";
return ret;
}
}
I checked it out on the server screen and it indeed gives me the correct message, but when received on the client side it just displays ???????..., any suggestions and comments?
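For what it's worth, one common cause of the "???" symptom is that FileReader reads with the platform default encoding rather than Big5, so the characters may already be corrupted before SOAP ever serializes them. A small sketch of reading the file with an explicit charset (the class name here is mine, not from the original code):

```java
import java.io.*;

public class Big5FileReader {
    // Read a Big5-encoded text file with an explicit charset instead of
    // relying on FileReader's platform default encoding.
    public static String readAll(String path) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream(path), "Big5"));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            sb.append(line).append("\n");
        }
        in.close();
        return sb.toString();
    }
}
```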
Jun-Liang Chen
jlchen@intumit.com
----- Original Message -----
From: Guy Galil
To: soap-user@xml.apache.org
Sent: Tuesday, July 18, 2000 7:21 PM
Subject: Re: internationlization
I think in the XML using cdata for all string values can solve this problem.
This is also a better solution than escaping special characters as done in the current code.
Guy
----- Original Message -----
From: Sanjiva Weerawarana
To: soap-user@xml.apache.org
Cc: Apache SOAP
Sent: Tuesday, July 18, 2000 11:49 AM
Subject: Re: internationlization
I'm afraid internationalization is not my area of strength.
Can u please indicate what may be going wrong? If someone would
like to take on ensuring that the SOAP codebase is properly
internationalized that'll be dandy! Any takers?
Sanjiva.
----- Original Message -----
From: Jun-Liang Chen
To: soap-user@xml.apache.org
Sent: Tuesday, July 18, 2000 5:02 AM
Subject: internationlization
Dear All,
Trivial question: does SOAP currently support non-English characters in the message body? I did a test with a client requesting a message from a server sending a Chinese message in Big5, but got "?????????....." on the screen. It looks like the problem was in the serialization/deserialization process. If that is true, any fix or workaround? Thanks!
Jun-Liang Chen
jlchen@intumit.com
Source: http://mail-archives.apache.org/mod_mbox/ws-soap-user/200007.mbox/%3C006501bff0a2$c1357ee0$ec1c0d3d@xml%3E
OpenEBS
OpenEBS itself is deployed as just another container on your host and enables storage services that can be designated on a per pod, application, cluster or container level, including:
- Automate the management of storage attached to the Kubernetes worker nodes and allow the storage to be used for dynamically provisioning OpenEBS PVs or Local PVs.
- Management of tiering to and from S3 and other targets.
An added advantage of being a completely Kubernetes native solution is that administrators and developers can interact and manage OpenEBS using all the wonderful tooling that is available for Kubernetes like kubectl, Helm, Prometheus, Grafana, Weave Scope, etc.
Our vision is simple: let storage and storage services for persistent workloads be fully integrated into the environment so that each team and workload benefits from the granularity of control and Kubernetes native behaviour.
Start the OpenEBS Services using helm
helm repo update
helm install --namespace openebs --name openebs stable/openebs
You could also follow our QuickStart Guide.
OpenEBS can be deployed on any Kubernetes cluster - either in
OpenEBS is one of the most widely used and tested Kubernetes storage infrastructures in the industry. Enterprise customers have been using OpenEBS in production since 2018, and the project supports 2.5M+ docker pulls a week.
The status of various storage engines that power the OpenEBS Persistent Volumes are provided below.
For more details, please refer to OpenEBS Documentation.
Contributing
OpenEBS welcomes your feedback and contributions in any form possible.
- Join our community
- Already signed up? Head to our discussions at #openebs-users
- Want to raise an issue or help with fixes and features?
- See open issues
- See contributing guide
- Want to join our contributor community meetings? Check this out.
- Join our OpenEBS CNCF Mailing lists
- For OpenEBS project updates, subscribe to OpenEBS Announcements
- For interacting with other OpenEBS users, subscribe to OpenEBS Users
Show me the Code
This is a meta-repository for OpenEBS. Please start with the pinned repositories or with OpenEBS Architecture document.
License
OpenEBS is developed under Apache License 2.0 license at the project level. Some components of the project are derived from other open source projects and are distributed under their respective licenses.
OpenEBS is part of the CNCF Projects.
Commercial Offerings
This is a list of third-party companies and individuals who provide products or services related to OpenEBS. OpenEBS is a CNCF project which does not endorse any company. The list is provided in alphabetical order.
Source: https://go.ctolib.com/openebs-openebs.html
Last year (2009), I needed an 'owner draw' scroll bar for the UltiMate Grid I was using in my project. My first thought was to subclass CScrollBar and do my own painting; this turned out to be impossible. My next option was to search the internet for a custom scroll bar, but I soon found that no one had bothered to create one, at least not for free. None of the scroll bars I could find could be used to replace the standard windows scroll bars, because they did not behave the way a standard scroll bar does. That is why I created the scroll bar I present in this article.
My mission was simple: to create a custom scroll bar control that behaved exactly like a standard windows scroll bar while having a 'custom' look. I mean, how hard can it be? As it turns out, not very hard, but a lot of work, far more than I originally intended.
Many rarely used features of the standard Windows scroll bar are implemented in this scroll bar, including the context menu. I have taken great care to implement every little thing the standard scroll bar does. If you see something I've omitted or done incorrectly, please let me know. I've also implemented some features not found in the standard Windows scroll bar, e.g. mouse wheel support.
CXeScrollBar can be used in place of the standard windows scroll bar (CScrollBar) in any situation except for the scroll bars Windows creates internally when a window has the WS_HSCROLL or WS_VSCROLL style set; those are part of the window's non-client area and cannot be replaced by a control. Keep in mind though that CXeScrollBar is dependent on MFC.
Using this code in your own project is very simple. Just add XeScrollBar.h, XeScrollBar.cpp, XeScrollBarBase.h, XeScrollBarBase.cpp and XeScrollBar.rc to your project, and copy the eight bitmap files (XSB_?_???.bmp) to the 'res' folder of your project. In your code, create a CXeScrollBar object in the same way you would create a CScrollBar. The following code is an example of how the Create member is used.
// Create CXeScrollBar scroll bars.
m_sbH.Create( WS_CHILD | WS_VISIBLE | SBS_HORZ, CRect(11,265,400,282), this, IDC_SB_H );
m_sbV.Create( WS_CHILD | WS_VISIBLE | SBS_VERT, CRect(419,11,436,260), this, IDC_SB_V );
// Note - m_sbH and m_sbV are of CXeScrollBar type in header file.
Another common use of CXeScrollBar is on a form or in a dialog box, in that case, the scroll bar(s) already exist and need to be replaced. The following code shows how you can replace existing scroll bars with CXeScrollBar. Typically you would do that in OnInitialUpdate() for a form view or in OnInitDialog() for a dialog. It's best to use the CreateFromExisting member to do this because that function takes care of all the boring details involved when creating a window to replace another; Window style, Control ID, size, position, scroll range, scroll position and scroll page size are copied. Z-order (Tab order) is also preserved, that is very important if the existing scroll bar has a WS_TABSTOP style set because when a new child window is created, it is placed last in the Z-order of child windows (of parent window).
// Replace existing scroll bars on form or in dialog
// with CXeScrollBar scroll bars.
m_sbH.CreateFromExisting( this, IDC_XE_SB_H );
m_sbV.CreateFromExisting( this, IDC_XE_SB_V );
// Note - m_sbH and m_sbV are of CXeScrollBar type in header file.
Using CXeScrollbar in UltiMate Grid is also fairly simple. It's a simple matter of modifying the CUGHScroll and CUGVScroll classes a little bit: change base class from CScrollBar to CXeScrollBar. Please read 'Using CXeScrollBar in Ultimate grid.txt' I've included with the downloads above, for details on how to modify the source files and how to build and run the UltiMate Grid samples.
CXeScrollBar is implemented as two classes, CXeScrollBar and CXeScrollBarBase. CXeScrollBar does all the painting and CXeScrollBarBase implements all the 'business' logic needed. I decided early in the development to split the 'business' logic and the painting into two classes to make it easy to change the look of the scroll bar in the future.
Here is where the article title finally starts to make sense. If you are feeling artistic, you can create a scroll bar that looks just the way you want it to very simply. Either by creating the eight bitmaps needed for CXeScrollBar or deriving your own class from CXeScrollBarBase and do the painting needed any way you like.
CXeScrollBar uses four bitmaps for horizontal scroll bar and four bitmaps for vertical scroll bar. One for 'up' state, one for 'hot' state, one for 'down' state and one for 'disabled' state.
If you do decide to create bitmaps to replace the ones used by CXeScrollBar - read comments regarding how the code uses the bitmaps in XeScrollBar.cpp before you do. I've included the PhotoShop document I used to create the bitmaps in the download section above. Tip - create horizontal bitmaps first, then open those in PhotoShop and do rotate canvas 90 degrees and flip vertical to create the bitmaps for the vertical scroll bar.
The bitmaps are divided into logical sections: Left button, Left shaft, Thumb, Right shaft and Right button for horizontal scroll bars, Top button, Top shaft, Thumb, Bottom shaft and Bottom button for vertical scroll bars. The offsets in pixels into the bitmaps for each section are hard-coded in CXeScrollBar. For that reason, it is important be very careful when creating the bitmaps. Unless of course you are prepared to change the code in CXeScrollBar to suit your bitmaps.
The 'business' logic in CXeScrollBarBase provides the 'state' and size information of each scroll bar section for CXeScrollBar to use when painting. The following code shows what steps are needed when painting the scroll bar.
XSB_EDRAWELEM eState;
const CRect *prcElem = 0;
stXSB_AREA stArea;
// loop through all UI elements.
for( int nElem = eTLbutton; nElem <= eThumb; nElem++ )
{
stArea.eArea = (eXSB_AREA)nElem;
// Get bounding rect of UI element to draw (client coords.)
prcElem = GetUIelementDrawState( stArea.eArea, eState );
if( !prcElem || eState == eNotDrawn ) // Rect empty or area not drawn?
continue;
// stArea.eArea identifies UI element to draw:
// eTLbutton or eTLchannel or eThumb or eBRchannel or eBRbutton.
// eState identifes in what state the UI element is drawn:
// eDisabled or eNormal or eDown or eHot.
// m_bHorizontal is TRUE if 'this' is a horizontal scroll bar.
// Draw UI element to memory DC. (using prcElem rect).
// Note - use m_bDrawGripper to determine if 'gripper' is drawn on thumb box.
// This is used to implement 'blinking' to show scroll bar has input
// focus. (every 500mS).
}
The DrawScrollBar( CDC* pDC ) member in CXeScrollBar uses the eState enum to select the bitmap to use for painting each section of the scroll bar.
The constructor in CXeScrollBar loads all bitmaps into memory from resources when called for the first time. The resource name of each bitmap is hard-coded. A reference counter is used to keep track of how many objects have been created. The bitmaps remain in memory until the last object is destroyed.
I've taken great care to faithfully implement this scroll bar so it behaves exactly like the standard windows scroll bar does. There are a few minor (IMHO) areas I deviated from the standard.
The SBM_ENABLE_ARROWS message handler behaves a little bit differently that the standard scroll bar does. During testing, the standard scroll bar behaved strangely when ESB_DISABLE_LEFT, ESB_DISABLE_RIGHT, ESB_DISABLE_UP or ESB_DISABLE_DOWN flags are used, for that reason I've only implemented support for the ESB_ENABLE_BOTH and ESB_DISABLE_BOTH flags.
The SBM_GETSCROLLBARINFO message handler is also different from the standard. I've implemented better support for the STATE_SYSTEM_INVISIBLE and STATE_SYSTEM_PRESSED flags. I've implemented this as per Microsoft documentation, not as Microsoft implemented the standard scroll bar.
There are a few other message handlers that I think deserve special mention.
The WM_LBUTTONDOWN message handler is interesting, because here is where many of the other scroll bars I looked at before creating this one got it wrong. A scroll bar MUST NOT 'take' focus unless it has the WS_TABSTOP window style. This is also the only place in the code where we 'take' focus; the dialog manager will set focus to us in all other cases. This may seem trivial, but keep in mind that users will find it very annoying if input focus changes unexpectedly.
The WM_GETDLGCODE message handler is very interesting, because here is where we interact with the dialog manager. Tab order and input focus management are implemented here. The dialog manager will ask if we want input focus when the user presses the Tab key; we return DLGC_WANTTAB or DLGC_STATIC depending on the presence or absence of the WS_TABSTOP style. The dialog manager will also send a WM_GETDLGCODE message for every keystroke; we need to tell the dialog manager if we will process the keyboard message by returning DLGC_WANTMESSAGE. We do so only if 'this' window has the WS_TABSTOP style set. There is an exception to this: the standard windows scroll bar will process Up/Down keys for a vertical scroll bar and Left/Right keys for a horizontal scroll bar even if the window does not have the WS_TABSTOP style set. I've implemented this behaviour too, just to remain true to the standard.
The WM_MOUSEWHEEL message handler is also noteworthy because we will never get those messages unless we have input focus. This is normal windows behaviour. The parent window will usually provide mouse wheel message handling. E.g. UltiMate grid is implemented like that. I should also mention that the standard windows scroll bar does not implement support for mouse wheel messages.
The WM_CONTEXTMENU message handler is worth mentioning because the context menu is loaded from user32.dll if possible and because of that should be in the 'local' language. If the load from user32.dll failed, a menu in English is created instead. Note - I have not been able to test if this method works for other languages. I would be very interested to know.
WM_KEYDOWN and WM_KEYUP message handlers are interesting in that those messages are not sent to us unless we have input focus. Also when those messages are processed, it's important to return 0 to indicate the message was processed even if we ignored the key, otherwise the dialog manager will look for another child window to process the message and we lose input focus.
The WM_SETFOCUS and WM_KILLFOCUS message handlers implement the 'blinking' input focus state of the scroll bar. The class derived from CXeScrollBarBase needs to show somehow that the scroll bar has input focus; in CXeScrollBar I've implemented this by painting the 'gripper' section of the thumb box only when m_bDrawGripper is TRUE. The base class uses a timer to 'blink' every 500 ms. Most users will never see this, because it's rather unusual for a scroll bar to receive focus.
The demo project included with this article demonstrates how CXeScrollBar compares to the standard windows scroll bar.
I used this extensively when developing CXeScrollBar.
The 'Tab stops' check box sets/clears the WS_TABSTOP style on all scroll bars.
The scroll bar information dialog shows information from all the scroll bars on the main form view. The 'Lock scroll bars' check box locks/unlocks the horizontal and vertical scroll bars together. The 'Set scroll info' button will set scroll range, page size and position on all scroll bars. The 'Set range' button will set scroll range only. The 'Set POS' button will set scroll position only. The edit boxes show information we get from the scroll bars when we send SBM_GETSCROLLINFO and SBM_GETSCROLLBARINFO messages. The lower left corner shows the state of the STATE_SYSTEM_XXX flags for each section of the scroll bars.
The XeScrollBar.cpp file includes a list of research sources I used during the development of this scroll bar. Of particular interest is SkinControls 1.1 - A journey in automating the skinning of Windows controls article by .dan.g. here on CodeProject. If you read it, you will understand why subclassing a Windows scroll bar and do your own painting does not work.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
error C2065: 'SBM_GETSCROLLBARINFO' : undeclared identifier
error C2039: 'SetPoint' : is not a member of 'CPoint'
c:\archivos de programa\microsoft visual studio\vc98\mfc\include\afxwin.h(176) : see declaration of 'CPoint'
error C2065: 'GET_KEYSTATE_WPARAM' : undeclared identifier
error C2065: 'GET_WHEEL_DELTA_WPARAM' : undeclared identifier
//m_ptMenu.SetPoint(0,0);
m_ptMenu.x = 0;
m_ptMenu.y = 0;
//
#if !defined(AFX_STDAFX_H__5B36668A_AB6D_40BA_B681_2F38312436A3__INCLUDED_)
#define AFX_STDAFX_H__5B36668A_AB6D_40BA_B681_2F38312436A3__INCLUDED_
#define WINVER 0x500
#if _MSC_VER >
#if(_WIN32_WINNT >= 0x0501)
#define SBM_GETSCROLLBARINFO 0x00EB
#endif /* _WIN32_WINNT >= 0x0501 */
#define _WIN32_WINNT 0x0501
#define SBM_GETSCROLLBARINFO 0x00EB
#define GET_KEYSTATE_WPARAM(wParam) (LOWORD(wParam))
#define GET_WHEEL_DELTA_WPARAM(wParam) ((short)HIWORD(wParam))
Source: http://www.codeproject.com/Articles/120636/Roll-Your-Own-Scroll-Bar?msg=4391169
Hi
I was doing an exercise in the Real Python book: "... On average, how many times will I have to flip the coin total?"
So I created the code:
from __future__ import division
from random import randint

flips = 0
trials = 10000
for trial in range(0, trials):
    first_flip = randint(0, 1)
    while randint(0, 1) == first_flip:
        flips += 1
    flips += 1
print "Average = ", flips / trials
So this produces an average of 2, which I thought would be right considering a coin has 2 sides.
However the solution is different:
from __future__ import division
from random import randint

flips = 0
trials = 10000
for trial in range(0, trials):
    flips += 1  # first flip
    if randint(0, 1) == 0:  # flipped tails on first flip
        while randint(0, 1) == 0:  # keep flipping tails
            flips += 1
        flips += 1  # finally flipped heads
    else:  # otherwise, flipped heads on first flip
        while randint(0, 1) == 1:  # keep flipping heads
            flips += 1
        flips += 1  # finally flipped tails
print flips / trials
And this produces an answer of 3.
The only difference I can see is that in my code I don't care whether it's heads or tails; I just measure whether it's the same as the first flip.
Why are the averages different?
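A likely explanation (my own reading of the two snippets, not an official answer): the first version never counts the initial flip itself. first_flip is drawn, but flips is not incremented for it, while the book's version counts it with the flips += 1 at the top of the loop. A parameterized sketch that makes the difference explicit:

```python
from random import randint

def average_flips(trials=10000, count_first=True):
    """Flip until the coin shows a different face from the first flip."""
    flips = 0
    for _ in range(trials):
        if count_first:
            flips += 1          # count the initial flip itself
        first = randint(0, 1)
        while randint(0, 1) == first:
            flips += 1          # same face again, keep flipping
        flips += 1              # the flip that finally differed
    return flips / trials

# Counting the first flip gives an average near 3 (the book's answer);
# skipping it gives an average near 2 (the original code's answer).
```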
Source: https://www.daniweb.com/programming/software-development/threads/450982/coin-toss-simulation-real-python
The why and how of Kubernetes, instead of the what. A gentle introduction to Kubernetes.
When asked what Kubernetes is, most people will answer with "a container orchestration system".
This is a good factual statement of the what, but it misses the why and how, which are much more important for developing a good understanding of Kubernetes.
In this article, we'll explore why and how Kubernetes is (instead of the what), and why containers are just an implementation detail and not the main point of Kubernetes.
If "Kubernetes = Docker but highly complex" is you, read on. I'll try to correct some misunderstandings in this post. Otherwise, you may be interested in my next post on declaratively running Kubernetes without building up bad state (link at the end of this article).
So why (not what) is Kubernetes?
Think of Kubernetes like an operating system (OS).
In a traditional OS you have resources like CPU, memory, and storage. The job of the OS is to schedule which software gets what resources at any given time.
Kubernetes does the same thing. You have resources on each one of your nodes, and Kubernetes assigns those resources to the software running on your cluster.
For our purposes, there are two key differences between Kubernetes and a traditional OS.
The first difference is the way you define what software to run. In a traditional OS, you have an init system (like systemd) that starts your software. In Kubernetes, you define objects, and Kubernetes uses these objects to configure and run the software you specify. Objects are similar to those in programming languages: they are typed and contain data specific to their type.
The second difference is that Kubernetes works across multiple computers. In a traditional OS, you need to decide what software gets deployed to what machine. With Kubernetes, you deploy one piece of software to the entire cluster, and Kubernetes figures out what machine to run it on based on the rules you give it. For example, you might say you want to run IPython notebooks on any node with a GPU, and run your web apps on nodes without GPUs.
Notice how my definition of Kubernetes doesn't include the word "container". This is on purpose. The main idea behind Kubernetes is declarative infrastructure management, not running containers. Someone could also have implemented the ideas behind Kubernetes but with VMs or processes as the smallest compute units.
Credit: I heard this analogy first from ~fydai.
What kinds of Objects are there?
As mentioned in the previous section, the main idea behind Kubernetes is its object system. All objects have some basic fields (like their name, uid, and user-customizable labels), and many common objects additionally have a namespace that helps keep logical groups of objects separate.
Here are some common object types, and what you might use them for. Obviously, some of these are oversimplified (a Deployment actually makes a ReplicaSet, not a Pod), but we're ignoring details for now.
Pod - A small group of containers (but is usually just one container). Note: Containers follow the OCI standard, meaning any image built by any OCI-compatible software (like Docker, Podman, etc.) will work on any other OCI-compatible runner.
Deployment - A group of pods, with some basic fields for things like the number of replicas to deploy.
Service - A group of ports that point to pods, depending on some user-specified rules.
Ingress - Expose your service with some rules, such as the hostname to respond on. This requires an Ingress provider like nginx to be installed on your cluster (more on this in a later post).
PersistentVolumeClaim - A reference to a persistent filesystem that you can store your data in. This requires a pvc provider to be installed on your cluster- which range from local filesystem storage on one node to distributed storage with Ceph (more on this in a later post).
CustomResourceDefinition - It is also possible to make your own types, and write your own logic to determine what they do. This can allow you to create higher-level objects like PostgresInstance or MonitoringEndpoint. This is known as the "Operator Pattern" and is beyond the scope of this article.
These objects are your Kubernetes building blocks. To deploy software, you just need to combine them in various ways to serve your needs.
Let's look at an example.
Example: A Typical Web App
To make this less scary, let's do this without looking at the actual objects. Objects are typically written in YAML, but this was a bad design choice by the designers of Kubernetes. Most typical uses will require generating YAML in some way, which I'll talk more about in the next article. If you want to read the YAML I used for this, check out this gist.
Say we have an OCI image that contains a web app that listens on port 8080. To represent this in Kubernetes, we will use a Pod.
The Pod definition will contain some variables that we set, such as the image reference (k8s.gcr.io/echoserver:1.4), the pod name (echo), some resource requests (100MB RAM and half a CPU core), a label (app=echo), and the port that it listens on (8080).
When we add this object to the cluster, Kubernetes will run the container on an available node that has enough compute resources. However, a Pod is an ephemeral unit. If it crashes or stops, it won't restart. To fix this, we'll wrap our Pod object in a Deployment object.
The Deployment definition will have a name (echo), a label (app=echo), and a copy of the Pod object definition we wrote earlier.
Remember, a Deployment object creates Pod objects according to some specification. For now, we'll create just one.
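For the curious, a Deployment wrapping the Pod described above might look roughly like the following YAML. This is a sketch assembled from the fields mentioned in this section (name, label, image, port, resource requests), not the exact gist linked above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:                # the embedded Pod definition
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: k8s.gcr.io/echoserver:1.4
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "100Mi"
              cpu: "500m"
```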
We'll then add the Deployment object to the cluster by running kubectl apply -f my_definition_file.yaml, and pretty soon we can see both the Deployment and the Pod it made in kubectl get deployment or kubectl get pod.
Next we need a way to access the deployment. To do this, we'll use a Service and an Ingress.

The Service will point to port 8080 on all Pods that are labeled app=echo, and the Ingress will route all traffic on echo.example.com to the Service with the name echo.
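A rough YAML sketch of a Service and Ingress following that description (field names follow current Kubernetes APIs; the Service's external port of 80 is an illustrative assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo           # match Pods labeled app=echo
  ports:
    - port: 80          # port exposed by the Service (an assumption)
      targetPort: 8080  # port the Pods listen on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
spec:
  rules:
    - host: echo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80
```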
If we visit echo.example.com, we'll see our echo service! Hooray!
But wait, there's more. Remember when I said that using a Deployment to manage our Pod would give us an extra-cool feature? Let's see what happens when we set replicas to 2 in our Deployment.
The deployment makes two pods!
Now, even if one Pod crashes, the other one is still around to serve requests while the crashed Pod restarts! We've successfully deployed Highly Available HTTP Echo™!
Hopefully this gave you a good idea of how to use the various Kubernetes building blocks to define infrastructure.
In my next post, I'll talk about how you can declaratively set up and manage your Kubernetes cluster without building up bad state or having to manage too much YAML.
Here are the Kubernetes-related articles I'm working on...
https://www.tefter.io/bookmarks/560850/readable
|
In this chapter,
- We will understand the concepts behind Harris Corner Detection.
- We will see the functions: cv2.cornerHarris(), cv2.cornerSubPix()
Harris Corner Detection looks for the difference in intensity for a displacement (u, v) in all directions. This is expressed as:

E(u, v) = Σ_{x,y} w(x, y) [ I(x + u, y + v) − I(x, y) ]²

Maximizing this function (via a Taylor expansion) leads to a scoring equation:

R = det(M) − k (trace(M))²

where det(M) = λ1·λ2 and trace(M) = λ1 + λ2, with λ1 and λ2 the eigenvalues of the matrix M. With this score, a window can be checked to determine whether it contains a corner or not.

So the values of these eigenvalues decide whether a region is a corner, an edge, or flat:

- When |R| is small, which happens when λ1 and λ2 are small, the region is flat.
- When R < 0, which happens when λ1 >> λ2 or vice versa, the region is an edge.
- When R is large, which happens when λ1 and λ2 are large and λ1 ≈ λ2, the region is a corner.
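To see the scoring rule in action, here is a minimal NumPy-only sketch of the Harris response. Note this is not OpenCV's implementation: it uses simple finite-difference gradients and a flat 3x3 window, where cv2.cornerHarris uses Sobel derivatives and its own windowing.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Compute the Harris response R = det(M) - k * trace(M)**2 per pixel.

    A didactic sketch: finite-difference gradients and a uniform 3x3
    window, rather than OpenCV's Sobel derivatives.
    """
    img = img.astype(np.float64)
    Ix = np.gradient(img, axis=1)  # horizontal gradient
    Iy = np.gradient(img, axis=0)  # vertical gradient
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Sum each pixel's 3x3 neighborhood (edge-replicated borders)
        out = np.zeros_like(a)
        pad = np.pad(a, 1, mode='edge')
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += pad[1 + dy:1 + dy + a.shape[0],
                           1 + dx:1 + dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy      # lambda1 * lambda2
    trace = Sxx + Syy                # lambda1 + lambda2
    return det - k * trace ** 2

# Synthetic image: a bright square, so we know where the corners are
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0
R = harris_response(img)
print(R[5, 5] > 0, R[10, 4] < 0, R[0, 0] == 0)  # corner, edge, flat
```

The three checks at the end mirror the three cases above: R is large and positive at the square's corner, negative along its edge, and zero in the flat background.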
import cv2
import numpy as np

filename = 'chessboard.jpg'
img = cv2.imread(filename)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)

# result is dilated for marking the corners, not important
dst = cv2.dilate(dst, None)

# Threshold for an optimal value, it may vary depending on the image.
img[dst > 0.01 * dst.max()] = [0, 0, 255]

cv2.imshow('dst', img)
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()
import cv2
import numpy as np

filename = 'chessboard2.jpg'
img = cv2.imread(filename)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find Harris corners
gray = np.float32(gray)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
dst = cv2.dilate(dst, None)
ret, dst = cv2.threshold(dst, 0.01 * dst.max(), 255, 0)
dst = np.uint8(dst)

# find centroids
ret, labels, stats, centroids = cv2.connectedComponentsWithStats(dst)

# define the criteria to stop and refine the corners
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
corners = cv2.cornerSubPix(gray, np.float32(centroids), (5, 5), (-1, -1), criteria)

# Now draw them
res = np.hstack((centroids, corners))
res = np.int0(res)
img[res[:, 1], res[:, 0]] = [0, 0, 255]
img[res[:, 3], res[:, 2]] = [0, 255, 0]
cv2.imwrite('subpixel5.png', img)
Below is the result, where some important locations are shown in zoomed window to visualize:
http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_features_harris/py_features_harris.html
|
A Python SOCKS client module. See for more information.
Project description
PySocks lets you send traffic through SOCKS and HTTP proxy servers. It is a modern fork of SocksiPy, with bug fixes and extra features.
Features
- SOCKS proxy client for Python 2.7 and 3.4+
- TCP supported
- UDP mostly supported (issues may occur in some edge cases)
- HTTP proxy client included but not supported or recommended (you should use urllib2's or requests' own HTTP proxy interface)
- urllib2 handler included. pip install / setup.py install will automatically install the sockshandler module.
Installation
pip install PySocks
Or download the tarball /
git clone and...
python setup.py install
These will install both the socks and sockshandler modules.

Alternatively, include just socks.py in your project.
Warning: PySocks/SocksiPy only supports HTTP proxies that use CONNECT tunneling. Certain HTTP proxies may not work with this library. If you wish to use HTTP (not SOCKS) proxies, it is recommended that you rely on your HTTP client's native proxy support (the proxies dict for requests, or urllib2.ProxyHandler for urllib2) instead.
Usage
socks.socksocket
import socks

s = socks.socksocket()  # Same API as socket.socket in the standard lib

s.set_proxy(socks.SOCKS5, "localhost")  # SOCKS4 and SOCKS5 use port 1080 by default
# Or
s.set_proxy(socks.SOCKS4, "localhost", 4444)
# Or
s.set_proxy(socks.HTTP, "5.5.5.5", 8888)

# Can be treated identical to a regular socket object
s.connect(("www.somesite.com", 80))
s.sendall("GET / HTTP/1.1 ...")
print s.recv(4096)
Monkeypatching
To monkeypatch the entire standard library with a single default proxy:
import urllib2
import socket
import socks

socks.set_default_proxy(socks.SOCKS5, "localhost")
socket.socket = socks.socksocket
urllib2.urlopen("http://www.somesite.com/")  # All requests will pass through the SOCKS proxy
Note that monkeypatching may not work for all standard modules or for all third party modules, and generally isn't recommended. Monkeypatching is usually an anti-pattern in Python.
urllib2 Handler
Example use case with the sockshandler urllib2 handler. Note that you must import both socks and sockshandler, as the handler is its own module separate from PySocks. The module is included in the PyPI package.
import urllib2
import socks
from sockshandler import SocksiPyHandler

opener = urllib2.build_opener(SocksiPyHandler(socks.SOCKS5, "127.0.0.1", 9050))
print opener.open("http://www.somesite.com/")  # All requests made by the opener will pass through the SOCKS proxy
Original SocksiPy README attached below, amended to reflect API changes.
SocksiPy
A Python SOCKS module.
See LICENSE file for details.
WHAT IS A SOCKS PROXY?
A SOCKS proxy is a proxy server at the TCP level. In other words, it acts as a tunnel, relaying all traffic going through it without modifying it. SOCKS proxies can be used to relay traffic using any network protocol that uses TCP.
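To make the "tunnel" idea concrete, here is a stdlib-only sketch (not part of PySocks itself) of the request bytes a SOCKS5 client sends to ask the proxy to open a TCP connection to a target host, following RFC 1928. socksocket assembles and sends an equivalent message for you inside connect():

```python
import struct

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build the SOCKS5 CONNECT request bytes for a domain-name target.

    Layout per RFC 1928: VER=0x05, CMD=0x01 (CONNECT), RSV=0x00,
    ATYP=0x03 (domain name), a one-byte name length, the name itself,
    then the destination port as a big-endian 16-bit integer.
    """
    name = host.encode("idna")
    if len(name) > 255:
        raise ValueError("hostname too long for a SOCKS5 domain address")
    return (b"\x05\x01\x00\x03"
            + bytes([len(name)])
            + name
            + struct.pack(">H", port))

# 4-byte header, length 11, "example.com", then port 80 (0x0050)
print(socks5_connect_request("example.com", 80).hex())
```

Using remote DNS (the rdns option described below) corresponds exactly to this domain-name address type: the hostname is sent to the proxy as-is, and the proxy resolves it.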
WHAT IS SOCKSIPY?
This Python module allows you to create TCP connections through a SOCKS proxy without any special effort. It also supports relaying UDP packets with a SOCKS5 proxy.
PROXY COMPATIBILITY
SocksiPy is compatible with three different types of proxies:
- SOCKS Version 4 (SOCKS4), including the SOCKS4a extension.
- SOCKS Version 5 (SOCKS5).
- HTTP Proxies which support tunneling using the CONNECT method.
SYSTEM REQUIREMENTS
Being written in Python, SocksiPy can run on any platform that has a Python interpreter and TCP/IP support. This module has been tested with Python 2.3 and should work with greater versions just as well.
INSTALLATION
Simply copy the file "socks.py" to your Python's lib/site-packages directory, and you're ready to go. [Editor's note: it is better to use python setup.py install for PySocks]
USAGE
First load the socks module with the command:
>>> import socks
The socks module provides a class called socksocket, which is the base to all of the module's functionality.

The socksocket object has the same initialization parameters as the normal socket object to ensure maximal compatibility; however, it should be noted that socksocket will only function with its family being AF_INET and its type being either SOCK_STREAM or SOCK_DGRAM.

Generally, it is best to initialize the socksocket object with no parameters:
>>> s = socks.socksocket()
The socksocket object has an interface which is very similar to socket's (in fact the socksocket class is derived from socket), with a few extra methods.

To select the proxy server you would like to use, use the set_proxy method, whose syntax is:
set_proxy(proxy_type, addr[, port[, rdns[, username[, password]]]])
Explanation of the parameters:
proxy_type - The type of the proxy server. This can be one of three possible choices: PROXY_TYPE_SOCKS4, PROXY_TYPE_SOCKS5, and PROXY_TYPE_HTTP for SOCKS4, SOCKS5, and HTTP servers respectively. SOCKS4, SOCKS5, and HTTP are shorter aliases for these, respectively.
addr - The IP address or DNS name of the proxy server.
port - The port of the proxy server. Defaults to 1080 for socks and 8080 for http.
rdns - This is a boolean flag that modifies the behavior regarding DNS resolving. If it is set to True, DNS resolving will be performed remotely, on the server. If it is set to False, DNS resolving will be performed locally. Please note that setting this to True with SOCKS4 servers actually uses an extension to the protocol, called SOCKS4a, which may not be supported by all servers (SOCKS5 and HTTP servers always support DNS). The default is True.
username - For SOCKS5 servers, this allows simple username/password authentication with the server. For SOCKS4 servers, this parameter will be sent as the userid. This parameter is ignored if an HTTP server is being used. If it is not provided, authentication will not be used (servers may accept unauthenticated requests).
password - This parameter is valid only for SOCKS5 servers and specifies the respective password for the username provided.
Example of usage:
>>> s.set_proxy(socks.SOCKS5, "socks.example.com")  # uses default port 1080
>>> s.set_proxy(socks.SOCKS4, "socks.test.com", 1081)
After the set_proxy method has been called, simply call the connect method with the traditional parameters to establish a connection through the proxy:
>>> s.connect(("www.somesite.com", 80))
Connection will take a bit longer to allow negotiation with the proxy server.
Please note that calling connect without calling set_proxy earlier will connect without a proxy (just like a regular socket).
Errors: Any errors in the connection process will trigger exceptions. The exception may either be generated by the underlying socket layer or may be custom module exceptions, whose details follow:
class ProxyError - This is a base exception class. It is not raised directly, but rather all other exception classes raised by this module are derived from it. This allows an easy way to catch all proxy-related errors. It descends from IOError.
All ProxyError exceptions have an attribute socket_err, which will contain either a caught socket.error exception, or None if there wasn't any.
class GeneralProxyError - When thrown, it indicates a problem which does not fall into another category.
Sent invalid data- This error means that unexpected data has been received from the server. The most common reason is that the server specified as the proxy is not really a SOCKS4/SOCKS5/HTTP proxy, or maybe the proxy type specified is wrong.
Connection closed unexpectedly- The proxy server unexpectedly closed the connection. This may indicate that the proxy server is experiencing network or software problems.
Bad proxy type - This will be raised if the type of the proxy supplied to the set_proxy function was not one of SOCKS4/SOCKS5/HTTP.

Bad input - This will be raised if the connect() method is called with bad input parameters.
class SOCKS5AuthError - This indicates that the connection through a SOCKS5 server failed due to an authentication problem.
Authentication is required- This will happen if you use a SOCKS5 server which requires authentication without providing a username / password at all.
All offered authentication methods were rejected- This will happen if the proxy requires a special authentication method which is not supported by this module.
Unknown username or invalid password- Self descriptive.
class SOCKS5Error - This will be raised for SOCKS5 errors which are not related to authentication. The parameter is a tuple containing a code, as given by the server, and a description of the error. The possible errors, according to the RFC, are:
0x01- General SOCKS server failure - If for any reason the proxy server is unable to fulfill your request (internal server error).
0x02- connection not allowed by ruleset - If the address you're trying to connect to is blacklisted on the server or requires authentication.
0x03- Network unreachable - The target could not be contacted. A router on the network had replied with a destination net unreachable error.
0x04- Host unreachable - The target could not be contacted. A router on the network had replied with a destination host unreachable error.
0x05- Connection refused - The target server has actively refused the connection (the requested port is closed).
0x06- TTL expired - The TTL value of the SYN packet from the proxy to the target server has expired. This usually means that there are network problems causing the packet to be caught in a router-to-router "ping-pong".
0x07- Command not supported - For instance if the server does not support UDP.
0x08- Address type not supported - The client has provided an invalid address type. When using this module, this error should not occur.
class SOCKS4Error - This will be raised for SOCKS4 errors. The parameter is a tuple containing a code and a description of the error, as given by the server. The possible errors, according to the specification, are:
0x5B - Request rejected or failed - Will be raised in the event of a failure for any reason other than the two mentioned next.
0x5C- request rejected because SOCKS server cannot connect to identd on the client - The Socks server had tried an ident lookup on your computer and has failed. In this case you should run an identd server and/or configure your firewall to allow incoming connections to local port 113 from the remote server.
0x5D- request rejected because the client program and identd report different user-ids - The Socks server had performed an ident lookup on your computer and has received a different userid than the one you have provided. Change your userid (through the username parameter of the set_proxy method) to match and try again.
class HTTPError - This will be raised for HTTP errors. The message will contain the HTTP status code and the provided error message.
After establishing the connection, the object behaves like a standard socket. Methods like makefile() and settimeout() should behave just like regular sockets. Call the close() method to close the connection.
In addition to the socksocket class, an additional function worth mentioning is the set_default_proxy function. The parameters are the same as those of the set_proxy method. This function will set default proxy settings for newly created socksocket objects in which the proxy settings haven't been changed via the set_proxy method. This is quite useful if you wish to force 3rd party modules to use a SOCKS proxy, by overriding the socket object.
For example:
>>> socks.set_default_proxy(socks.SOCKS5, "socks.example.com")
>>> socket.socket = socks.socksocket
>>> urllib.urlopen("http://www.somesite.com/")
PROBLEMS
Please open a GitHub issue at
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/PySocks/
|
This chapter provides a quick reference for ActionScript's built-in statements and operators.
Table 10-1. ActionScript
statements
Statement
Usage
Description
break
Aborts a loop or switch
statement. See Chapter 2,
Conditionals and Loops.
case
caseexpression:
substatements
expression
substatements
Identifies a statement to be executed conditionally in a
switch statement. See Chapter 2,
Conditionals and Loops.
switch
continue
continue;
Skips the remaining statements in the current loop and begins
the next iteration at the top of the loop. See Adobe
documentation.
default
default:substatements
Identifies the statement(s) to execute in a switch statement when the test expression does
not match any case clauses. See
Chapter 2,
Conditionals and Loops.
do-while
do {substatements
} while (expression);
A variation of a while loop that
ensures at least one iteration of the loop is performed. See
Chapter 2,
Conditionals and Loops.
while
for
for (init; test; update) {
statements
}
init
test
update
statements
Executes a statement block repetitively (a for loop). It is synonymous with a while loop but places the loop initialization and
update statements together with the test expression at the top of
the loop. See Chapter 2,
Conditionals and Loops.
for-in
for (variable in object) {
statements
}
variable
object
Enumerates the names of the dynamic instance variables of an
object or an array's elements. See Chapter 15,
Dynamic ActionScript.
for-each-in
for each
(variableOrElementValue in
object) {
statements
}
variableOrElementValue
Enumerates the values of an object's dynamic instance variables
or an array's elements. See Chapter 15,
Dynamic ActionScript.
if-else if-else
if (expression) {
substatements
} else if (expression) {
substatements
} else {
substatements
}
Executes one or more statements, based on a condition or a
series of conditions. See Chapter 2,
Conditionals and Loops.
label
label: statement
label: statements
Associates a statement with an identifier. Used with
break or continue. See Adobe documentation.
return
return;
returnexpression;
Exits and optionally returns a value from a function. See
Chapter 5,
Functions.
super
super(arg1, arg2, ...argn)
super.method(arg1, arg2, ...argn)
arg1
arg2
argn
Invokes a superclass's constructor method or overridden instance
method. See Chapter 6,
Inheritance.
switch (expression) {
substatements
}
Executes specified code, based on a condition or a series of
conditions (alternative to if-else
if-else). See Chapter 2,
Conditionals and Loops.
throw
throwexpression
Issues a runtime exception (error). See Chapter 13,
Exceptions and Error Handling.
try/catch/finally
try {
// Code that might
// generate an exception
} catch (error:ErrorType1) {
// Error-handling code
// for ErrorType1.
} catch (error:ErrorTypeN) {
// Error-handling code
// for ErrorTypeN.
} finally {
// Code that always executes
}
ErrorType1
N
ErrorTypeN
Wraps a block of code to respond to potential runtime
exceptions. See Chapter 13,
Exceptions and Error Handling.
while (expression) {
substatements
}
Executes a statement block repetitively (a while loop). See Chapter 2,
Conditionals and Loops.
with
with (object) {
substatements
}
Executes a statement block in the scope of a given object. See
Chapter 16,
Scope.

Operator precedence governs the order in which operations are performed in an expression. For example, the multiplication operator (*) has higher precedence than the addition operator (+), so in the expression:

4 + 5 * 6

the subexpression 5 * 6 is evaluated first, and its result is then added to 4.

Precedence can produce surprising results when different operators are mixed. Consider the following expression, which combines the string-concatenation operator + with the less-than operator <:

"result: " + "a" < "b"

Because + has higher precedence than <, the strings "result: " and "a" are first concatenated, yielding "result: a". That value is then compared with "b", and the entire expression yields false.

Associativity determines the order in which operators of equal precedence are applied. Most operators are left-associative; for example, the expression b * c / d is evaluated as (b * c) / d.

In contrast, the = (assignment) operator is right-associative, so the expression:

a = b = c = d

says, "assign d to c, then assign c to b, then assign b to a."

Finally, some operators convert their operands to an expected datatype. For example, the division operator converts its operands to the Number datatype, as in:

"50" / 10

In standard mode, the preceding code does not cause a compile-time error. Instead, at runtime, ActionScript converts the String value "50" to the Number datatype, yielding 50, and the entire expression has the value 5. (In strict mode, the compiler reports an error because the left operand is a String.)
Table 10-2. ActionScript
operators
Operator
Precedence
Example
.
15
Multiple uses:
Accesses a variable or method
Separates package names from class names and other package
names
Accesses children of an XML or
XMLList object (E4X)
XML
XMLList
// Access a variable
product.price
// Reference a class
flash.display.Sprite
// Access an XML child element
novel.TITLE
[]
Initializes an array
Accesses an array element
Accesses a variable or method using any expression that yields a
string
Accesses children or attributes of an XML or XMLList
object (E4X)
// Initialize an array
["apple", "orange", "pear"]
// Access fourth element of an array
list[3]
// Access a variable
product["price"]
// Access an XML child element
novel["TITLE"]
( )
Specifies a custom order of operations (precedence)
Invokes a function or method
Contains an E4X filtering predicate
// Force addition before multiplication
(5 + 4) * 2
// Invoke a function
trace( )
// Filter an XMLList
staff.*.(SALARY <= 35000)
@
Accesses XML attributes
// Retrieve all attributes of novel
novel.@*
::
Separates a qualifier namespace from a name
// Qualify orange with namespace fruit
fruit::orange
..
Accesses XML descendants
// Retrieve all descendant elements
// of loan named DIRECTOR
loan..DIRECTOR
{x:y}
Creates a new object and initializes its dynamic variables
// Create an object with dynamic variables,
// width and height
{width:30, height:5}
new
Creates an instance of a class
// Create TextField instance
new TextField( )
<tag> <tag/>
Defines an XML element
// Create an XML element named BOOK
<BOOK>Essential ActionScript 3.0</BOOK>
x++
x
14
Adds one to x and
returns x's former value
(postfix increment)
// Increase i by 1, and return i
i++
x--
Subtracts one from x
and returns x's former
value (postfix decrement)
// Decrease i by 1, and return i
i--
++ x
++
Adds one to x and
returns x's new value
(prefix increment)
// Increase i by 1, and return the result
++i
-- x
--
Subtracts one from x
and returns x's new value
(prefix decrement)
// Decrease i by 1, and return the result
--i
-
Switches the operand's sign (positive becomes negative, and
negative becomes positive)
var a:int = 10;
// Assign -10 to b
var b:int = -a;
~
Performs a bitwise NOT
// Clear bit 2 of options
options &= ~4;
!
Returns the Boolean opposite of its single operand
// If under18's value is not true,
// execute conditional body
if (!under18) {
trace("You can apply for a credit card")
}
delete
Removes the value of an array element
Removes an object's dynamic instance variable
Removes an XML element or attribute
// Create an array
var genders:Array = ["male","female"]
// Remove the first element's value
delete genders[0];
// Create an object
var o:Object = new Object( );
o.a = 10;
// Remove dynamic instance variable a
delete o.a;
// Remove the <TITLE> element from the XML
// object referenced by novel
delete novel.TITLE;
typeof
Returns a simple string description of various types of objects.
Used for backwards compatibility with ActionScript 1.0 and
ActionScript 2.0 only.
// Retrieve string description of 35's type
typeof 35
void
Returns the value undefined
var o:Object = new Object( );
o.a = 10;
// Compare undefined to the value of o.a
if (o.a == void) {
trace("o.a does not exist, or has no value");
}
13
Multiplies two numbers
// Calculate four times six
4 * 6
Divides left operand by right operand
// Calculate 30 divided by 5
30 / 5
%
Returns the remainder (i.e., modulus) that results when the left
operand is divided by the right operand
// Calculate remainder of 14 divided by 4
14 % 4
12
Adds two numbers
Combines (concatenates) two strings

Combines (concatenates) two XML or XMLList objects
// Calculate 25 plus 10
25 + 10
// Combine "He" and "llo" to form "Hello"
"He" + "llo"
// Combine two XML objects
<JOB>Programmer</JOB> + <AGE>52</AGE>
Subtracts right operand from left operand
// Subtract 2 from 12
12 - 2
<<
11
Performs a bitwise left shift
// Shift 9 four bits to the left
9 << 4
>>
Performs a bitwise signed right shift
// Shift 8 one bit to the right
8 >> 1
>>>
Performs a bitwise unsigned right shift
// Shift 8 one bit to the right, filling
// vacated bits with zeros
8 >>> 1
10
Checks if the left operand is less than the right operand.
Depending upon the evaluation of the operands, returns true or false.
true
// Check if 5 is less than 6
5 < 6
// Check if "a" has a lower character code point
// than "z"
"a" < "z"
<=
Checks if the left operand is less than or equal to the right
operand. Depending on the evaluation of the operands, returns
true or false.
// Check if 10 is less than or equal to 5
10 <= 5
// Check if "C" has a lower character code point
// than "D", or the same code point as "D"
"C" <= "D"
>
Checks if the left operand is greater than the right operand.
Depending upon the evaluation of the operands, returns true or false.
// Check if 5 is greater than 6
5 > 6
// Check if "a" has a higher character code point
// than "z"
"a" > "z"
>=
Checks if the left operand is greater than or equal to the right
operand. Depending on the evaluation of the operands, returns
true or false.
// Check if 10 is greater than or equal to 5
10 >= 5
// Check if "C" has a higher character code point
// than "D", or the same code point as "D"
"C" >= "D"
as
Checks if the left operand belongs to the datatype specified by
the right operand. If yes, returns the object; otherwise returns
null
var d:Date = new Date( )
// Check if d's value belongs to
// the Date datatype
d as Date
is
Checks if the left operand belongs to the datatype specified by the right operand. If yes, returns true; otherwise returns false.
var a:Array = new Array( )
// Check if a's value belongs to
// the Array datatype
a is Array
in
Checks if an object has a specified public instance variable or
public instance method. Depending on the evaluation of the
operands, returns true or false.
var d:Date = new Date( )
// Check if d's value has a public variable or
// public method named getMonth
"getMonth" in d
instanceof
Checks if the left operand's prototype chain includes the right
operand. Depending on the evaluation of the operands, returns
true or false.
var s:Sprite = new Sprite( )
// Check if s's value's prototype chain
// includes DisplayObject
s instanceof DisplayObject
==
9
Checks whether two expressions are considered equal (equality).
Depending on the evaluation of the operands, returns true or false.
// Check whether the expression "hi" is equal to
// the expression "hello"
"hi" == "hello"
!=
Checks whether two expressions are considered not equal (inequality). Depending upon the evaluation of the operands, returns true or false.
// Check whether the expression 3 is not equal to
// the expression 3
3 != 3
===
Checks whether two expressions are considered equal without
datatype conversion for primitive types (strict equality).
Depending on the evaluation of the operands, returns true or false.
// Check whether the expression "3" is equal to
// the expression 3. This code compiles in
// standard mode only.
"3" === 3
!==
Checks whether two expressions are considered not equal without datatype conversion for primitive types (strict inequality). Depending on the evaluation of the operands, returns true or false.

// Check whether the expression "3" is not equal to
// the expression 3. This code compiles in
// standard mode only.
"3" !== 3
&
8
Performs a bitwise AND
// Combine bits of 15 and 4 using bitwise AND
15 & 4
^
7
Performs a bitwise XOR
// Combine bits of 15 and 4 using bitwise XOR
15 ^ 4
|
6
Performs a bitwise OR
// Combine bits of 15 and 4 using bitwise OR
15 | 4
&&
5
Compares two expressions using a logical AND operation. If the left operand is
false or converts to false, && returns the left operand;
otherwise && returns the right operand.
var validUser:Boolean = true;
var validPassword:Boolean = false;
// Check if both validUser and validPassword
// are true
if (validUser && validPassword) {
// Do login...
}
||
Compares two expressions using a logical OR operation. If the left operand is
true or converts to true, || returns the left operand; otherwise ||
returns the right operand.
var promotionalDay:Boolean = false;
var registeredUser:Boolean = false;
// Check if either promotionalDay or registeredUser
// is true
if (promotionalDay || registeredUser) {
// Show premium content...
}
?:
3
Performs a simple conditional. If the first operand is
true or converts to true, the value of the second operand is evaluated
and returned. Otherwise, the value of the third operand is
evaluated and returned.
// Invoke one of two methods based on
// whether soundMuted is true
soundMuted ? displayVisualAlarm() : playAudioAlarm( )
=
2
Assigns a value to a variable or array element
// Assign 36 to variable age
var age:int = 36;
// Assign a new array to variable seasons
var seasons:Array = new Array( );
// Assign "winter" to first element of seasons
seasons[0] = "winter";
+=
Adds (or concatenates) and reassigns
// Add 10 to n's value
n += 10; // same as n = n + 10;
// Add an exclamation mark to the end of msg
msg += "!"
// Add an <AUTHOR> tag after the first <AUTHOR>
// tag child of novel
novel.AUTHOR[0] += <AUTHOR>Dave Luxton</AUTHOR>;
-=
Subtracts and reassigns
// Subtract 10 from n's value
n -= 10; // same as n = n - 10;
*=
Multiplies and reassigns
// Multiply n's value by 10
n *= 10; // same as n = n * 10;
/=
Divides and reassigns
// Divide n's value by 10
n /= 10; // same as n = n / 10;
%=
Performs modulo division and reassigns
// Assign n%4 to n
n %= 4; // same as n = n % 4;
<<=
Shifts bits left and reassigns
// Shift n's bits two places to the left
n <<= 2; // same as n = n << 2;
>>=
Shifts bits right and reassigns
// Shift n's bits two places to the right
n >>= 2; // same as n = n >> 2;
>>>=
Shifts bits right (unsigned) and reassigns
// Shift n's bits two places to the right, filling
// vacated bits with zeros
n >>>= 2; // same as n = n >>> 2;
&=
Performs bitwise AND and reassigns
// Combine n's bits with 4 using bitwise AND
n &= 4 // same as n = n & 4;
^=
Performs bitwise XOR and reassigns
// Combine n's bits with 4 using bitwise XOR
n ^= 4 // same as n = n ^ 4;
|=
Performs bitwise OR and reassigns
// Combine n's bits with 4 using bitwise OR
n |= 4 // same as n = n | 4;
,
1
Evaluates left operand, then right operand
// Initialize and increment two loop counters
for (var i:int = 0, j:int = 10; i < 5; i++, j++) {
// i counts from 0 through 4
// j counts from 10 through 14
}
This chapter covered some of ActionScript's basic built-in
programming tools. In the next chapter, we'll study another
essential ActionScript tool: arrays. Arrays are used to manage
lists of information.
© 2014, O’Reilly Media, Inc.
http://archive.oreilly.com/pub/a/actionscript/excerpts/essential-actionscript/chapter-10.html
31 August 2012 08:10 [Source: ICIS news]
MELBOURNE (ICIS)--Korea Alcohol Industrial’s domestic ethyl acetate (etac) pricing for September will be at a rollover from August, a company official said on Friday.
The South Korean producer’s September etac price will be at won (W)1,180/kg ($1.04/kg) ex-works (EXW), unchanged from August, the official said.
The company on 1 August announced a W80/kg reduction for its August pricing to W1,180/kg EXW, from W1,260/kg EXW in July.
Several South Korean importers said at the time that the company decreased its price to boost its competitiveness over imports.
Korea Alcohol’s July pricing was itself a W80/kg reduction from June.
Korea Alcohol is
South Korean etac demand in 2011 was estimated by market sources at 90,000-100,000 tonnes. Most of the 69,000 tonnes of etac imported in 2011 came from
($1 = W1
http://www.icis.com/Articles/2012/08/31/9591508/korea-alcohol-to-roll-over-august-domestic-etac-price-into.html
Hello,
I have been using the Python Challenge to teach myself Python as my first programming/scripting language outside of HTML. As I go along, I find that while I'm understanding the concepts rather well, I am not being as efficient as I could be. While the code I whip up gets things done, the suggestions others have shared with me make me think that Python can be almost as simple as speaking in English.
Case in point:
from string import ascii_letters
import re, urllib2, webbrowser as wb
wb.open('' % ''.join(chr for chr in re.findall('<!--(.*?)-->', urllib2.urlopen('').read(), re.S)[1] if chr in ascii_letters))
This snippet was suggested to me for the 2nd puzzle. If I read it (although it's organized differently), it comes back like this:
Importing ascii_letters from the string module, importing the re and urllib2 modules, and importing webbrowser as wb, here is what we want to do:
*open the url
*read the source and find all ASCII letters between <!-- and -->
*join those characters and substitute the result for the % in the following URL
Is this correct? My problem is that when I come up with an idea for a program, while I can get it done, there is usually a way to accomplish the same goal in much less code. How do you learn what to use when?
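To see the compression concretely, here is a runnable side-by-side sketch (offline, using a made-up HTML string instead of the elided URL): the beginner-style loop and the one-liner do exactly the same work.

```python
import re
from string import ascii_letters

# Verbose, beginner-style version: explicit loop and accumulator.
def keep_letters_verbose(html):
    comments = re.findall('<!--(.*?)-->', html, re.S)
    result = ''
    for ch in comments[1]:          # second HTML comment, as in the puzzle
        if ch in ascii_letters:
            result += ch
    return result

# Compact version: the same logic as one generator expression fed to join().
def keep_letters_compact(html):
    return ''.join(ch for ch in re.findall('<!--(.*?)-->', html, re.S)[1]
                   if ch in ascii_letters)

page = "<!--first comment-->\n<!--a1b2c3!-->"
print(keep_letters_compact(page))   # abc
```

Learning which form to use mostly comes from reading other people's code: once you recognize the accumulate-in-a-loop pattern, you can usually replace it with join(), a comprehension, or a generator expression.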
https://www.daniweb.com/programming/software-development/threads/171694/help-me-undestand
I wanted to have a scrolling about box in my project that could display text and images, and automatically scroll them as a credits dialog. This is an additional feature I'd like to add to some of my finished applications. After spending many hours searching the Internet, I found a few articles and example programs; on this site in particular, I was impressed by the following articles:
Both of the above are done well, but they were not what I wanted. These programs were developed in Visual C++ 6.0 or Visual C++ 7.1 (.NET) using MFC, so I couldn't build a C# control based on their modules, and I had to change them to fit my work.
As said earlier, these example programs use MFC for coding convenience. Functions such as CreateCompatibleDC, CreateCompatibleBitmap and SelectObject are common ways of doing image processing and working with the device context of the drawing window in MFC. But I now work in the .NET Framework, using either VB.NET or C#, so those programs couldn't help me any longer.
I began searching on the Internet and looked in MSDN, and came to know that there was a workaround in C# to use the old Win32 API graphics functions for smooth drawing. The idea is to let C# know that you will be using a few functions from an unmanaged DLL, using the DllImport attribute. The detailed documentation of DllImport can be found in .NET documentation. I am a newbie in C#, so it's too difficult for me to try solving the problem in that way.
Fortunately, we can achieve the same goal with the System.Drawing namespace in .NET. Image rendering in .NET is much simpler and more efficient than in MFC. There are two functions in the Graphics class to render an Image object on the screen: DrawImage and DrawImageUnscaled. So we do our off-screen drawing in an Image object, and can then use these functions to render that object directly to the device context and get smooth animated effects.
As in my code, I'll try to draw all the display objects within the OnPaint() event of the control:
private void ctlScrollAbout_Paint(object sender,
System.Windows.Forms.PaintEventArgs e)
{
using (System.Drawing.Graphics objGraphics = e.Graphics)
{
// draw off-screen m_TempDrawing bitmap on control screen
objGraphics.DrawImage(m_TempDrawing, m_XMargin, m_YMargin,
m_TempDrawing.Width, m_TempDrawing.Height);
}
}
It is not surprising if you are wondering when the off-screen image is drawn. Usually, it is drawn just before we call the DrawImage function. In this case, our control is an auto-scrolling about box, so we need a timer to control the speed of scrolling. It's fine to rebuild the off-screen image in the timer's tick event and trigger the OnPaint() handler by invalidating the control. Look at this:
// timer tick event
private void tmeScrolling_Tick(object sender, System.EventArgs e)
{
BuildScrollingBitMap(); // build offline screen
this.Invalidate(); // call to OnPaint() event
}
The next section describes how the scrolling AboutBox manages its display objects. There are two kinds of display objects for this control: a text object and an image object. A common approach is to hold all these objects in a list. .NET supplies collection base classes such as ArrayList, HashTable, Queue, Stack, SortedList etc., which are easy to use. But we want our own list for these specific display objects, so we inherit a new class from an available one. We have clsDisplayList to manage the display objects as shown below:
// inherits a new custom list object from CollectionBase class
public class clsDisplayList : System.Collections.CollectionBase
{
// add new display object to list
public int Add(clsDisplayObject value)
{
return List.Add(value);
}
// provide array-index access to display object list
public clsDisplayObject this[int index]
{
get
{
return (clsDisplayObject)List[index];
}
set
{
List[index] = value;
}
}
}
The display object is implemented as a class that holds all the necessary member variables and corresponding properties. The scrolling AboutBox uses these properties to draw each display object correctly on the screen:
// an instance of the display object
public class clsDisplayObject
{
// for text object
private string _displayText = "";
// for text object
private System.Drawing.Font _objFont;
// for text object
private System.Drawing.FontStyle _fontStyle =
System.Drawing.FontStyle.Bold;
// for text object
private string _fontName = "Tahoma";
// for text object
private int _fontSize = 10;
// for text object
private System.Drawing.Brush _textColor =
System.Drawing.Brushes.White;
// determine if current display object is bitmap or text
private bool _isBitMap = false;
// the height of display object
private int _displayHeight = 0;
// offset in vertical of display object
private int _displayYOffset = 0;
// bitmap display object
private System.Drawing.Bitmap _bitmap;
private int _bitmapHeight = 100;
private int _bitmapWidth = 100;
.....
}
In the control's class, we declare an instance of clsDisplayList() and provide some methods to add a new display object:
public class ctlScrollAbout : System.Windows.Forms.UserControl
{
....
// display object list
private clsDisplayList m_DisplayObject = new clsDisplayList();
...
/// <summary>
/// Add a text display object.
/// </summary>
public void AddDisplayText(string text, string fontName,
int fontSize, System.Drawing.FontStyle fontStyle,
System.Drawing.Brush textColor)
{
m_DisplayObject.Add(new clsDisplayObject(text, fontName,
fontSize, fontStyle, textColor));
BuildScrollingBitMap();
}
/// <summary>
/// Add a bitmap display object.
/// </summary>
public void AddDisplayBitmap(string fileName,
int bmpHeight, int bmpWidth)
{
m_DisplayObject.Add(new clsDisplayObject(fileName,
bmpHeight, bmpWidth));
BuildScrollingBitMap();
}
}
Now that everything is clear, you can put these pieces together to make your own control. For a step-by-step guide to creating a C# control, you can refer to the article: Simple introduction to writing your first .NET control.
Using the above control in your application is very simple. Download the control source code and build it to produce a C# control. The result is a DLL file in the compile output directory; in my case, it was .\bin\Debug\ScrollerAbout.dll.
After having the DLL file, you create a new Windows Application project. In the design window, please open the Toolbox tab and right click and choose Add/Remove Items. Then browse to our DLL file, select it and press OK. There is a scrolling aboutbox control in your Toolbox.
You can now drag the about control to any form in your .NET application and start using it. The following code shows how to initialize and pass parameters to the about control when a form loads:
' VB.Net example code
Private Sub Form1_Load(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles MyBase.Load
ctrAbout.LineSpacing = 20
ctrAbout.XMargin = 20
ctrAbout.YMargin = 20
ctrAbout.ScrollingStep = 1
ctrAbout.ScrollingTime = 100
ctrAbout.AddDisplayBitmap("...", 100, 100) ' original file name lost in extraction
ctrAbout.StartScrolling()
End Sub
// C# example code
private void Form1_Load(object sender, System.EventArgs e)
{
ctrAbout.LineSpacing = 20;
ctrAbout.XMargin = 20;
ctrAbout.YMargin = 20;
ctrAbout.ScrollingStep = 1;
ctrAbout.ScrollingTime = 500;
ctrAbout.AddDisplayBitmap("...", 320, 240); // original file name lost in extraction
ctrAbout.StartScrolling();
}
We now know how to create a custom user control for our specific purpose in .NET Framework. We also saw that making a new inherited collection class that is used to manage our own objects is very simple. This scrolling AboutBox has some advanced features that I would like to improve such as text wrapping, hyperlink object, animation image etc. But due to lack of time, I just created this article, and hope that with this open source code someone can help me make a new useful scrolling AboutBox version.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
public ctlScrollAbout()
{
// This call is required by the Windows.Forms Form Designer.
InitializeComponent();
// TODO: Add any initialization after the InitComponent call
this.SetStyle(System.Windows.Forms.ControlStyles.AllPaintingInWmPaint
    | System.Windows.Forms.ControlStyles.DoubleBuffer
    | System.Windows.Forms.ControlStyles.UserPaint, true);
this.UpdateStyles();
}
private void tmeScrolling_Tick(object sender, System.EventArgs e)
{
BuildScrollingBitMap(); // build offline screen
//this.Invalidate();
Draw_ScrollingBitMap();
}
private void Draw_ScrollingBitMap()
{
using (System.Drawing.Graphics objGraphics = this.CreateGraphics())
{
objGraphics.DrawImage(m_TempDrawing, m_XMargin, m_YMargin, m_TempDrawing.Width, m_TempDrawing.Height);
}
}
http://www.codeproject.com/Articles/8737/A-simple-C-Scrolling-AboutBox-Control
Red Hat Bugzilla – Bug 538087
Review Request: dgc - A system for the creation of digital circuits
Last modified: 2009-12-31 01:55:21 EST
Spec URL:
SRPM URL:
Description: Digital Gate Compiler is a tool for the creation of digital netlists. DGC does an optimization and technology mapping for an abstract description of boolean functions and state machines. Output formats are EDIF, XNF and VHDL. Input formats are KISS, PLA and others.
Successful Koji builds for F-10, F-11, F-12 and EPEL:
#001: use the sourceforge url as URL
#002: the build failed
checking for ANSI C header files... (cached) yes
checking for cos in -lm... yes
checking for pkg-config... /usr/bin/pkg-config
checking for xml2-config... /usr/bin/xml2-config
configure: creating ./config.status
config.status: creating Makefile
config.status: creating replace/Makefile
config.status: creating util/Makefile
config.status: creating cube/Makefile
config.status: creating encoding/Makefile
config.status: creating syl/Makefile
config.status: creating data/Makefile
config.status: creating gnet/Makefile
config.status: creating app/Makefile
config.status: creating config.h
config.status: executing depfiles commands
./config.status: line 1337: replace/Makefile: No such file or directory
sed: can't read util/Makefile: No such file or directory
sed: can't read cube/Makefile: No such file or directory
sed: can't read encoding/Makefile: No such file or directory
sed: can't read syl/Makefile: No such file or directory
sed: can't read data/Makefile: No such file or directory
sed: can't read gnet/Makefile: No such file or directory
sed: can't read app/Makefile: No such file or directory
config.status: executing libtool commands
./config.status: line 1435: libtoolT: No such file or directory
./config.status: line 1848: libtoolT: No such file or directory
./config.status: line 1853: libtoolT: No such file or directory
./config.status: line 2025: libtoolT: No such file or directory
./config.status: line 2050: libtoolT: No such file or directory
mv: cannot stat `libtoolT': No such file or directory
cp: cannot stat `libtoolT': No such file or directory
chmod: cannot access `libtool': No such file or directory
./configure: line 14272: config.log: No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.rHxLnd (%build)
Forget #002, sorry, I mis-copy-pasted.
By just launching "dgc" from the console, it crashes (reproducible every time):
Searchpath:
. /home/chitlesh/.dgc
*** glibc detected *** dgc: munmap_chunk(): invalid pointer: 0x08c025a0 ***
======= Backtrace: =========
/lib/libc.so.6(+0x6e261)[0x260261]
dgc[0x80491f4]
/lib/libc.so.6(__libc_start_main+0xe6)[0x208bb6]
dgc[0x8048991]
======= Memory map: ========
00110000-0018a000 r-xp 00000000 fd:00 125904 /usr/lib/libdgccube.so.0.0.0
0018a000-0018c000 rw-p 00079000 fd:00 125904 /usr/lib/libdgccube.so.0.0.0
0018c000-001c6000 rw-p 00000000 00:00 0
001c6000-001e1000 r-xp 00000000 fd:00 125912 /usr/lib/libdgcutil.so.0.0.0
001e1000-001e2000 rw-p 0001a000 fd:00 125912 /usr/lib/libdgcutil.so.0.0.0
001e2000-001f2000 rw-p 00000000 00:00 0
001f2000-00360000 r-xp 00000000 fd:00 57627 /lib/libc-2.11.so
00360000-00361000 ---p 0016e000 fd:00 57627 /lib/libc-2.11.so
00361000-00363000 r--p 0016e000 fd:00 57627 /lib/libc-2.11.so
00363000-00364000 rw-p 00170000 fd:00 57627 /lib/libc-2.11.so
00364000-00367000 rw-p 00000000 00:00 0
00367000-0036a000 r-xp 00000000 fd:00 57632 /lib/libdl-2.11.so
0036a000-0036b000 r--p 00002000 fd:00 57632 /lib/libdl-2.11.so
0036b000-0036c000 rw-p 00003000 fd:00 57632 /lib/libdl-2.11.so
00457000-00469000 r-xp 00000000 fd:00 59907 /lib/libz.so.1.2.3
00469000-0046a000 rw-p 00011000 fd:00 59907 /lib/libz.so.1.2.3
004f6000-00505000 r-xp 00000000 fd:00 125910 /usr/lib/libdgcsyl.so.0.0.0
00505000-00508000 rw-p 0000f000 fd:00 125910 /usr/lib/libdgcsyl.so.0.0.0
00508000-00516000 rw-p 00000000 00:00 0
006ca000-006d6000 r-xp 00000000 fd:00 125906 /usr/lib/libdgcencode.so.0.0.0
006d6000-006d7000 rw-p 0000c000 fd:00 125906 /usr/lib/libdgcencode.so.0.0.0
0071a000-0071b000 r-xp 00000000 00:00 0 [vdso]
00bd6000-00d1a000 r-xp 00000000 fd:00 59958 /usr/lib/libxml2.so.2.7.6
00d1a000-00d1f000 rw-p 00143000 fd:00 59958 /usr/lib/libxml2.so.2.7.6
00d1f000-00d20000 rw-p 00000000 00:00 0
00d25000-00d4d000 r-xp 00000000 fd:00 57634 /lib/libm-2.11.so
00d4d000-00d4e000 r--p 00027000 fd:00 57634 /lib/libm-2.11.so
00d4e000-00d4f000 rw-p 00028000 fd:00 57634 /lib/libm-2.11.so
00dc0000-00ddd000 r-xp 00000000 fd:00 8183 /lib/libgcc_s-4.4.2-20091027.so.1
00ddd000-00dde000 rw-p 0001c000 fd:00 8183 /lib/libgcc_s-4.4.2-20091027.so.1
00e81000-00ed3000 r-xp 00000000 fd:00 125908 /usr/lib/libdgcgnet.so.0.0.0
00ed3000-00ed6000 rw-p 00052000 fd:00 125908 /usr/lib/libdgcgnet.so.0.0.0
00fdb000-00ff9000 r-xp 00000000 fd:00 5264 /lib/ld-2.11.so
00ff9000-00ffa000 r--p 0001d000 fd:00 5264 /lib/ld-2.11.so
00ffa000-00ffb000 rw-p 0001e000 fd:00 5264 /lib/ld-2.11.so
08048000-0804b000 r-xp 00000000 fd:00 125897 /usr/bin/dgc
0804b000-0804f000 rw-p 00002000 fd:00 125897 /usr/bin/dgc
0804f000-0805f000 rw-p 00000000 00:00 0
08c02000-08c23000 rw-p 00000000 00:00 0 [heap]
b779e000-b77a1000 rw-p 00000000 00:00 0
b77c3000-b77c6000 rw-p 00000000 00:00 0
bfd8e000-bfda4000 rw-p 00000000 00:00 0 [stack]
Aborted (core dumped)
#Comment 3: A bug :)
In the main() function in app/dgc.c, there is:
319: char *s;
320: s = b_get_ff_searchpath();
324: free (s);
But b_get_ff_searchpath() is defined in util/b_ff.c, where memory is allocated for 's' and the pointer 's' is returned. So trying to free() that memory in the calling function dumps core :)
But giving a proper lib file and examples as input doesn't enter this code path, and hence doesn't hit this error. You can copy /usr/share/doc/dgc-0.98/tests/*.sh to any folder, chmod +x *.sh, and running them should run the test cases.
I will update this, and make an updated release by EOD.
--- (In reply to comment #4)
| So, trying to free() the memory in
| the calling function, dumps :)
\--
To be more precise, util/b_ff.c includes "mwc.h", which redefines free(), and that is the instance of free() that crashes. When you use the free() available from glibc, freeing the memory works.
I will disable the b_get_ff_searchpath() in those dgc tools that use it, as it simply prints the search path, which is not required or used since we are packaging it.
* Updated URL to use sourceforge.net project website.
* dgc-0.98 will not print searchpath, as it is not required.
Spec URL:
SRPM URL:
#001: Summary
I think the summary should be "Digital Gate Compiler" which reflects the letters "dgc".
#002: Incorrect Path
In /usr/share/doc/dgc-0.98/tests/d_latch.dgd
import "../data/
should be
import "/usr/share/dgc/
#003: %doc
ChangeLog and TODO is missing in the %doc. Actually, you don't need to create an extra doc directory for -devel. In that case, you will end up having 2 directory for docs and users are confused to where to search for docs.
The package looks fine and for final review.
#001:
Summary updated
#002:
Fixed the path.
#003:
Added ChangeLog and moved TODO to base package. Ignoring rpmlint warning for -devel package on no documentation.
Latest package, now from Fedora 12:
SPEC:
SRPM:
- MUST: The package is named according to the Package Naming Guidelines.
- MUST: The spec file name matches the base package %{name}
- MUST: The package meets the Packaging Guidelines.
- MUST: The package is licensed (GPL).
- MUST: All build dependencies are listed in BuildRequires.
- MUST: The spec file handles locales properly.
- MUST: If the package does not contain shared library files located in the
dynamic linker's default paths
- MUST: the package is not designed to be relocatable
- MUST: the package owns all directories that it creates.
- MUST: the package does not contain any duplicate files in the %files listing.
- MUST: Permissions on files are set properly.
- MUST: The package has a %clean section, which contains rm -rf %{buildroot} (or
$RPM_BUILD_ROOT).
- MUST: The package consistently uses macros, as described in the macros section
of Packaging Guidelines.
- MUST: The package contains code, or permissable content.
- MUST: Packages containing GUI applications include a %{name}.desktop file, and
that file is properly installed with desktop-file-install in the %install
section.
- MUST: Package does not own files or directories already owned by other packages.
SHOULD Items:
- SHOULD: The source package does include license text(s)
- SHOULD: mock builds successfully in i386.
- SHOULD: The reviewer tested that the package functions as described. A
package should not segfault instead of running, for example.
- SHOULD: No scriptlets were used (if scriptlets are used, they must be sane).
- SHOULD: No subpackages present.
APPROVED
New Package CVS Request
=======================
Package Name: dgc
Short Description: Digital Gate Compiler
Owners: shakthimaan chitlesh
Branches: F-11 F-12 EL-5
cvs done.
dgc-0.98-3.fc12 has been submitted as an update for Fedora 12.
dgc-0.98-3.fc11 has been submitted as an update for Fedora 11.
dgc-0.98-3.el5 has been submitted as an update for Fedora EPEL 5.
dgc-0.98-3.fc12 has been pushed to the Fedora 12 stable repository. If problems still persist, please make note of it in this bug report.
dgc-0.98-3.fc11 has been pushed to the Fedora 11 stable repository. If problems still persist, please make note of it in this bug report.
dgc-0.98-3.el5 has been pushed to the Fedora EPEL 5 stable repository. If problems still persist, please make note of it in this bug report.
https://bugzilla.redhat.com/show_bug.cgi?id=538087
Return to mozilla-dev-extensions
Hi, I want to change the XUL user interface at run time and save the modified user interface into a XUL file so it can be reused. How can I do this?
The only way of "saving" XUL is to serialize the modified DOM using XMLSerializer. Here are some links you can check out about the XMLSerializer:
I developed an extension which works fine for Firefox 1.5. I just switched to Firefox 2.0. My extension does not work. Firefox 2.0 says it is not compatible. How to fix the compatibility problem?
You may only need to update the max version in the install.rdf file. Here's a link for more information:
Updating extensions for Firefox 2
When my sidebar opens, there's a default tab acting as an "About" box, sized to the height of its text content. Then my JavaScript dynamically creates new XUL tabs (see tabbox/tabs/tabpanels). That works great; however, the new tabs all have the same height as the first one (the About tab). Also, even if I do tabpanels.selectedIndex=0 it doesn't refresh the UI correctly. Any hints?
All the child elements of the <tabpanels> are forced to the same width and height. However you can use align and pack attributes on boxes as appropriate.
Also, I think you're missing a flex="1" on the <tabpanels> element itself.
How can I dynamically change the content type/namespace of a document?
How are you filling the new document with the original nodes? Don't use things like appendChild() or insertBefore(), because nodes shouldn't be shared between documents. You should use cloneNode():
I have a chrome function that I want to execute when a page has finished loading. And it works really well. The only problem is that some webpages have some onLoad JavaScript too, and I would like to run *after* they complete. Is there any way to be notified that a page has completely loaded and the onload JavaScript functions finished too?
It's possible that setting a zero-second timeout in your init() function to call another function would work, i.e.:
function init() {
    window.setTimeout(init2, 0);
}
function init2() {
    // Put your actual init logic here.
}
If this doesn't work, then I don't know what would. Changing your event listener to not capture the "load" event (i.e. passing "false" as the third parameter to window.addEventListener) might work, but then the page would probably be able to stop propagating the event, preventing your listener from ever getting called.
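The scheduling behaviour behind this trick can be seen in plain JavaScript outside Firefox (the function names below are invented for illustration): a 0 ms timeout callback never runs until all handlers executing in the current tick have returned.

```javascript
// Simulate an extension's load listener running before the page's own
// onload handler; the zero-delay timeout still fires after both.
const order = [];

function pageOnload() {
  order.push("page onload");
}

function extensionInit() {
  order.push("extension init");
  // Deferred work: queued behind everything running in the current tick.
  setTimeout(function () {
    order.push("extension init2");
  }, 0);
}

extensionInit();  // extension's "load" listener happens to fire first
pageOnload();     // page's onload JavaScript runs next, in the same tick

setTimeout(function () {
  console.log(order.join(" -> "));
  // extension init -> page onload -> extension init2
}, 0);
```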
I was just wondering if there is a way to get the HWND of the main document in Firefox/Mozilla?
This has been discussed before on this mailing list. Why not just use the windows API ::FindWindow() function to get Firefox's HWND?
For multi-window applications, my add-on occasionally needs to iterate over each of the application's windows and do something to my add-on's UI in each one. I have cobbled something together that works fine, but it seems that the application must know of its open windows. Is there any way for my add-on to get access to this information?
Use the window mediator to iterate over open windows. For example:
var mediator = Components.classes["@mozilla.org/appshell/window-mediator;1"]
                         .getService(Components.interfaces.nsIWindowMediator);
// Grab browser windows; for all windows, pass null as the parameter.
var windows = mediator.getEnumerator("navigator:browser");
while (windows.hasMoreElements()) {
    var win = windows.getNext();
    // Do something with the window...
}
It would be great if tabs would go to multiple rows after having five or six tabs up and set colors for different tabs or rows just to help sort things out. Is that possible?
There are a few possibilities for colours:
ColorfulTabs, which gives each tab a different colour based on domain or a random colour. It has options to differentiate the background tabs by highlighting the selected tab:
Chromatabs which colours tabs based on host:
HashColouredTabs, which I have been working on recently, which gives sites, without a site icon, a coloured icon based on the host:
[My version includes the icon in the location bar and 'list all tabs' menu]
I need to remove certain cookies from time to time, so I decided to add a small submenu to the Tools menu to do this. While testing, I don't install it the regular way; I just copy it to the extensions folder. Now the menu is added to the Tools menu, and I see no errors in the Error Console. But when I choose the menu item, no command runs. I even changed the command to a simple alert, but it just doesn't work. The JavaScript part is not the problem, because it works fine when I type it into the Error Console's "evaluate" field; it removes the cookie correctly. I just don't know why the menu doesn't work. Do you have any idea?
Try using
<commandset id="mainCommandSet">
so that your <command>s are merged under existing <commandset> element.
https://developer.mozilla.org/en-US/docs/Extensions/Questions_and_answers_from_the_newsgroups_2006_12_01
Pandas is not a database, right?
Is there a way to integrate the analysis power of pandas into a flat HDF5 file database? I know that unfortunately HDF5 is not designed to deal natively with concurrency.
I've also been looking for inspiration in parallel HDF5, flat-file database managers and multiprocessing, but I still lack an idea. Help?
HDF5 works fine for concurrent read only access.
For concurrent write access you either have to use parallel HDF5 or have a worker process that takes care of writing to an HDF5 store.
I recommend to use a hybrid approach and expose it via a RESTful API. You can store meta-information in a SQL/NoSQL database and keep the raw data (time series data) in one or multiple HDF5 files.
There is one public REST API to access the data and the user doesn't have to care what happens behind the curtains.
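The worker-process idea from the answer can be sketched with nothing but the standard library. In this toy version (all names invented) a dict stands in for the HDF5 store, and a single writer thread serializes all writes, which is exactly the discipline a non-concurrent HDF5 file needs; producers only ever touch the queue.

```python
import queue
import threading

def writer_worker(q, store):
    """The only code path that ever writes to the store."""
    while True:
        item = q.get()
        if item is None:          # shutdown sentinel
            break
        key, value = item
        store[key] = value        # single writer: no concurrent writes

store = {}                        # stand-in for the HDF5 file
q = queue.Queue()
t = threading.Thread(target=writer_worker, args=(q, store))
t.start()

# Any number of producers can enqueue writes safely.
for i in range(3):
    q.put(("series/%d" % i, [i, i * 2]))

q.put(None)                       # tell the writer to finish
t.join()
print(sorted(store))              # ['series/0', 'series/1', 'series/2']
```

With a real store you would replace the dict with a pandas HDFStore opened only inside the worker, and put a REST layer or a multiprocessing queue in front of it so readers and writers never open the file for writing at the same time.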
https://www.edureka.co/community/31814/how-to-use-pandas-hdf5-as-a-database-in-python?show=31816
WL#6899: Reduce lock_sys_t::mutex contention when converting implicit lock to an explicit lock
Status: Complete — Priority: Medium
Converting a transaction's implicit lock to an explicit lock is costly. It impacts both the lock_sys_t::mutex and the trx_sys_t::mutex. The cost comes from the check that determines whether the transaction that inserted a new record is still active. If it is determined to be active, then an explicit record lock needs to be created for the active transaction.

The current design does the following:

  Acquire the lock_sys_t::mutex
    Acquire the trx_sys_t::mutex
    Scan the trx_sys_t::rw_trx_list for trx_id_t (only RW transactions can insert)
    Release the trx_sys_t::mutex
    Return handle if transaction found
    if handle found then
      do an implicit to explicit record conversion
    endif
  Release the lock_sys_t::mutex

The above pseudo code should make it clear that as the trx_sys_t::rw_trx_list grows it has a proportional cost on the lock_sys_t::mutex, and that causes a sharp drop in TPS at higher concurrency, e.g., 1K RW threads in Sysbench.

The solution is:

  Acquire the trx_sys_t::mutex
  Scan the trx_sys_t::rw_trx_list for trx_id_t (only RW transactions can insert)
  if handle found then
    Acquire the trx_t::mutex
    Increment trx_t::n_ref_count
    Release the trx_t::mutex
  endif
  Release the trx_sys_t::mutex
  Return handle if transaction found
  if handle found then
    Acquire the lock_sys_t::mutex
    do an implicit to explicit record conversion
    Release the lock_sys_t::mutex
    Acquire the trx_t::mutex
    Decrement trx_t::n_ref_count
    Release the trx_t::mutex
  endif

During commit we do the following check:

  Acquire the trx_t::mutex
  if trx_t::n_ref_count > 0
    while (trx_t::n_ref_count > 0)
      Release the trx_t::mutex
      sleep/delay
      Acquire the trx_t::mutex
    end while
  endif
When converting an implicit lock to an explicit lock we first acquire the lock_sys_t::mutex and then traverse the trx_sys_t::rw_trx_list to check whether the transaction is active. We also acquire the trx_sys_t::mutex before we traverse the list, to maintain correctness. As the list grows this puts a lot of pressure on the lock_sys_t::mutex. The lock_sys_t::mutex is needed to ensure that if an active transaction instance is found, it is not committed or rolled back while we are holding a pointer to it. During commit/rollback we will acquire the lock_sys_t::mutex before we release any locks.

The fix is to use a reference count on the trx_t instance. When we find a transaction in the trx_sys_t::rw_trx_list we will acquire the trx_t::mutex and increment the reference count. In lock_rec_convert_impl_to_expl() we will first check and increment the reference count of the transaction, if found. Only if the transaction is found do we need to acquire the lock_sys_t::mutex. Later, after converting the implicit lock to an explicit lock, we decrement the reference count. In lock_trx_release_locks(), after changing the trx_t state to TRX_STATE_COMMITTED_IN_MEMORY, we will wait for this reference count to become zero before we release any locks.

Use the same technique in row_vers_impl_x_locked_low(); modify it so that it returns the trx instance and not the trx_id. This should save excessive scanning of the rw_trx_list.
Add a reference count field to trx_t:

    ulint n_ref_count; /*!< Count of references, protected by trx_t::mutex.
                       We can't release the locks nor commit the transaction
                       until this reference count is 0. We can change the
                       state to COMMITTED_IN_MEMORY to signify that it is no
                       longer "active". */

Add 3 new functions that use this field:

    /** Increase the reference count. If the transaction is in state
    TRX_STATE_COMMITTED_IN_MEMORY then the transaction is considered
    committed and the reference count is not incremented.
    @param trx Transaction that is being referenced
    @param do_ref_count Increment the reference iff this is true
    @return transaction instance if it is not committed */
    UNIV_INLINE
    trx_t*
    trx_reference(
        trx_t*  trx,
        bool    do_ref_count);

    /** Release the transaction. Decrease the reference count.
    @param trx Transaction that is being released */
    UNIV_INLINE
    void
    trx_release_reference(
        trx_t*  trx);

    /** Check if the transaction is being referenced. */
    #define trx_is_referenced(t) ((t)->n_ref_count > 0)

When a running transaction needs to do an implicit-to-explicit record lock conversion for another transaction, it has to first check whether the other transaction that owns the implicit record lock is still active. This check requires traversal of the trx_sys_t::rw_trx_list while holding the trx_sys_t::mutex. A transaction cannot commit while this mutex is being held. During the traversal, if an active transaction instance is found for the corresponding trx_id_t that owns the implicit record lock, we acquire the trx_t::mutex and then increment the trx_t::n_ref_count. This guarantees that the transaction cannot be committed until the trx_t::n_ref_count drops to zero. The decrement of the trx_t::n_ref_count is done while holding the trx_t::mutex too. Before we release a read-write transaction's locks, we check the reference count and do a busy wait for the trx_t::n_ref_count to drop to 0.
The assumption behind the busy wait is that the implicit to explicit conversion is a short operation and that the busy wait should be sufficient.
Copyright (c) 2000, 2016, Oracle Corporation and/or its affiliates. All rights reserved.
https://dev.mysql.com/worklog/task/?id=6899
import "github.com/apache/beam/sdks/go/pkg/beam"
Package beam is an implementation of the Apache Beam programming model in Go. Beam provides a simple, powerful model for building both batch and streaming parallel data processing pipelines.
For more on the Beam model see:
For design choices this implementation makes see:
Code:
// In order to start creating the pipeline for execution, a Pipeline object is needed.
p := beam.NewPipeline()
s := p.Root()
// The pipeline object encapsulates all the data and steps in your processing task.
// It is the basis for creating the pipeline's data sets as PCollections and its
// operations as transforms.
// Transformations are applied in a scoped fashion to the pipeline. The scope
// can be obtained from the pipeline object.

// Start by reading text from input files, and receiving a PCollection.
lines := textio.Read(s, "protocol://path/file*.txt")
// Transforms are added to the pipeline so they are part of the work to be
// executed. Since this transform has no PCollection as an input, it is
// considered a 'root transform'.
// A pipeline can have multiple root transforms.
moreLines := textio.Read(s, "protocol://other/path/file*.txt")
// Further transforms can be applied, creating an arbitrary, acyclic graph.
// Subsequent transforms (and the intermediate PCollections they produce) are
// attached to the same pipeline.
all := beam.Flatten(s, lines, moreLines)
wordRegexp := regexp.MustCompile(`[a-zA-Z]+('[a-z])?`)
words := beam.ParDo(s, func(line string, emit func(string)) {
	for _, word := range wordRegexp.FindAllString(line, -1) {
		emit(word)
	}
}, all)
formatted := beam.ParDo(s, strings.ToUpper, words)
textio.Write(s, "protocol://output/path", formatted)
// Applying a transform adds it to the pipeline, rather than executing it
// immediately. Once the whole pipeline of transforms is constructed, the
// pipeline can be executed by a PipelineRunner. The direct runner executes the
// transforms directly, sequentially, in this one process, which is useful for
// unit tests and simple experiments:
if err := direct.Execute(context.Background(), p); err != nil {
	fmt.Printf("Pipeline failed: %v", err)
}
Code:
// Metrics can be declared outside DoFns, and used inside.
outside := beam.NewCounter("example.namespace", "count")
extractWordsDofn := func(ctx context.Context, line string, emit func(string)) {
	// They can be defined at time of use within a DoFn, if necessary.
	inside := beam.NewDistribution("example.namespace", "characters")
	for _, word := range wordRE.FindAllString(line, -1) {
		emit(word)
		outside.Inc(ctx, 1)
		inside.Update(ctx, int64(len(word)))
	}
}
ctx := ctxWithPtransformID("example")
extractWordsDofn(ctx, "this has six words in it", func(string) {})
extractWordsDofn(ctx, "this has seven words in it, see?", func(string) {})
dumpAndClearMetrics()
Output:
Bundle: "exampleBundle" - PTransformID: "example"
	example.namespace.characters - count: 13 sum: 43 min: 2 max: 5
	example.namespace.count - value: 13
Code:
// Metrics can be used in multiple DoFns
c := beam.NewCounter("example.reusable", "count")
extractWordsDofn := func(ctx context.Context, line string, emit func(string)) {
	for _, word := range wordRE.FindAllString(line, -1) {
		emit(word)
		c.Inc(ctx, 1)
	}
}
extractRunesDofn := func(ctx context.Context, line string, emit func(rune)) {
	for _, r := range line {
		emit(r)
		c.Inc(ctx, 1)
	}
}
extractWordsDofn(ctxWithPtransformID("extract1"), "this has six words in it", func(string) {})
extractRunesDofn(ctxWithPtransformID("extract2"), "seven thousand", func(rune) {})
dumpAndClearMetrics()
Output:
Bundle: "exampleBundle" - PTransformID: "extract1"
	example.reusable.count - value: 6
Bundle: "exampleBundle" - PTransformID: "extract2"
	example.reusable.count - value: 14
beam.shims.go coder.go combine.go create.go doc.go encoding.go external.go flatten.go forward.go gbk.go impulse.go metrics.go option.go pardo.go partition.go pcollection.go pipeline.go runner.go util.go validate.go windowing.go
var (
	TType = typex.TType
	UType = typex.UType
	VType = typex.VType
	WType = typex.WType
	XType = typex.XType
	YType = typex.YType
	ZType = typex.ZType
)
These are the reflect.Type instances of the universal types, which are used when binding actual types to "generic" DoFns that use Universal Types.
var EventTimeType = typex.EventTimeType
EventTimeType is the reflect.Type of EventTime.
var PipelineOptions = runtime.GlobalOptions
PipelineOptions are global options for the active pipeline. Options can be defined any time before execution and are re-created by the harness on remote execution workers. Global options should be used sparingly.
Init is the hook that all user code must call after flags processing and other static initialization, for now.
Initialized exposes the initialization status for runners.
NewPipelineWithRoot creates a new empty pipeline and its root scope.
func ParDo0(s Scope, dofn interface{}, col PCollection, opts ...Option)
ParDo0 inserts a ParDo with zero output transform into the pipeline.
func ParDo2(s Scope, dofn interface{}, col PCollection, opts ...Option) (PCollection, PCollection)
ParDo2 inserts a ParDo with 2 outputs into the pipeline.
func ParDo3(s Scope, dofn interface{}, col PCollection, opts ...Option) (PCollection, PCollection, PCollection)
ParDo3 inserts a ParDo with 3 outputs into the pipeline.
func ParDo4(s Scope, dofn interface{}, col PCollection, opts ...Option) (PCollection, PCollection, PCollection, PCollection)
ParDo4 inserts a ParDo with 4 outputs into the pipeline.
func ParDo5(s Scope, dofn interface{}, col PCollection, opts ...Option) (PCollection, PCollection, PCollection, PCollection, PCollection)
ParDo5 inserts a ParDo with 5 outputs into the pipeline.
func ParDo6(s Scope, dofn interface{}, col PCollection, opts ...Option) (PCollection, PCollection, PCollection, PCollection, PCollection, PCollection)
ParDo6 inserts a ParDo with 6 outputs into the pipeline.
func ParDo7(s Scope, dofn interface{}, col PCollection, opts ...Option) (PCollection, PCollection, PCollection, PCollection, PCollection, PCollection, PCollection)
ParDo7 inserts a ParDo with 7 outputs into the pipeline.
RegisterCoder registers a user defined coder for a given type, and will be used if there is no existing beam coder for that type. Must be called prior to beam.Init(), preferably in an init() function.
The coder used for a given type follows this ordering:
1. Coders for known Beam types.
2. Coders registered for specific types.
3. Coders registered for interface types.
4. Default coder (JSON).
Coders for interface types are iterated over to check if a type satisfies them, and the most recent one registered will be used.
Repeated registrations of the same type overrides prior ones.
RegisterCoder additionally registers the type, and coder functions as per RegisterType and RegisterFunction to avoid redundant calls.
Supported Encoder Signatures
func(T) []byte
func(reflect.Type, T) []byte
func(T) ([]byte, error)
func(reflect.Type, T) ([]byte, error)
Supported Decoder Signatures
func([]byte) T
func(reflect.Type, []byte) T
func([]byte) (T, error)
func(reflect.Type, []byte) (T, error)
where T is the matching user type.
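As an illustration, a hypothetical user type Point could get an encoder/decoder pair matching the func(T) ([]byte, error) and func([]byte) (T, error) signatures above. The JSON encoding here is just one possible choice, and the names are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Point is a hypothetical user type we want a custom coder for.
type Point struct {
	X, Y int
}

// encPoint matches the supported encoder signature func(T) ([]byte, error).
func encPoint(p Point) ([]byte, error) {
	return json.Marshal(p)
}

// decPoint matches the supported decoder signature func([]byte) (T, error).
func decPoint(b []byte) (Point, error) {
	var p Point
	err := json.Unmarshal(b, &p)
	return p, err
}

func main() {
	b, _ := encPoint(Point{X: 1, Y: 2})
	p, _ := decPoint(b) // round-trips through the encoded bytes
	fmt.Println(p.X, p.Y)
}
```

In a real pipeline such a pair would typically be handed to RegisterCoder, along with the reflect.Type of Point, from an init() function.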
RegisterFunction allows function registration. It is beneficial for performance and is needed for functions -- such as custom coders -- serialized during unit tests, where the underlying symbol table is not available. It should be called in init() only. Returns the external key for the function.
RegisterInit registers an Init hook. Hooks are expected to be able to figure out whether they apply on their own, notably if invoked in a remote execution environment. They are all executed regardless of the runner.
RegisterRunner associates the name with the supplied runner, making it available to execute a pipeline via Run.
RegisterType inserts "external" types into a global type registry to bypass serialization and preserve full method information. It should be called in init() only. TODO(wcn): the canonical definition of "external" is in v1.proto. We need user facing copy for this important concept.
Run executes the pipeline using the selected registered runner. It is customary to define a "runner" flag with no default to let users control runner selection.
ValidateKVType panics if the type of the PCollection is not KV<A,B>. It returns (A,B).
func ValidateNonCompositeType(col PCollection) typex.FullType
ValidateNonCompositeType panics if the type of the PCollection is not a composite type. It returns the type.
Coder defines how to encode and decode values of type 'A' into byte streams. Coders are attached to PCollections of the same type. For PCollections consumed by GBK, the attached coders are required to be deterministic.
DecodeCoder decodes a coder. Any custom coder function symbol must be resolvable via the runtime.GlobalSymbolResolver. The types must be encodable.
NewCoder infers a Coder for any bound full type.
IsValid returns true iff the Coder is valid. Any use of an invalid Coder will result in a panic.
Type returns the full type 'A' of elements the coder can encode and decode. 'A' must be a concrete full type, such as int or KV<int,string>.
Counter is a metric that can be incremented and decremented, and is aggregated by the sum.
NewCounter returns the Counter with the given namespace and name.
Dec decrements the counter within by the given amount.
Inc increments the counter within by the given amount.
type Distribution struct { *metrics.Distribution }
Distribution is a metric that records various statistics about the distribution of reported values.
func NewDistribution(namespace, name string) Distribution
NewDistribution returns the Distribution with the given namespace and name.
func (c Distribution) Update(ctx context.Context, v int64)
Update adds an observation to this distribution.
type ElementDecoder = coder.ElementDecoder
ElementDecoder encapsulates being able to decode an element from a reader.
func NewElementDecoder(t reflect.Type) ElementDecoder
NewElementDecoder returns an ElementDecoder for the given type.
type ElementEncoder = coder.ElementEncoder
ElementEncoder encapsulates being able to encode an element into a writer.
func NewElementEncoder(t reflect.Type) ElementEncoder
NewElementEncoder returns a new encoding function for the given type.
EncodedCoder is a serialization wrapper around a coder for convenience.
func (w EncodedCoder) MarshalJSON() ([]byte, error)
MarshalJSON returns the JSON encoding this value.
func (w *EncodedCoder) UnmarshalJSON(buf []byte) error
UnmarshalJSON sets the state of this instance from the passed in JSON.
type EncodedFunc struct {
	// Fn is the function to preserve across serialization.
	Fn reflectx.Func
}
EncodedFunc is a serialization wrapper around a function for convenience.
func (w EncodedFunc) MarshalJSON() ([]byte, error)
MarshalJSON returns the JSON encoding this value.
func (w *EncodedFunc) UnmarshalJSON(buf []byte) error
UnmarshalJSON sets the state of this instance from the passed in JSON.
EncodedType is a serialization wrapper around a type for convenience.
func (w EncodedType) MarshalJSON() ([]byte, error)
MarshalJSON returns the JSON encoding this value.
func (w *EncodedType) UnmarshalJSON(buf []byte) error
UnmarshalJSON sets the state of this instance from the passed in JSON.
EventTime represents the time of the event that generated an element. This is distinct from the time when an element is processed.
FullType represents the tree structure of data types processed by the graph. It allows representation of composite types, such as KV<int, string> or CoGBK<int, int>, as well as "generic" such types, KV<int,T> or CoGBK<X,Y>, where the free "type variables" are the fixed universal types: T, X, etc.
Gauge is a metric that can have its new value set, and is aggregated by taking the last reported value.
NewGauge returns the Gauge with the given namespace and name.
Set sets the current value for this gauge.
Option is an optional value or context to a transformation, used at pipeline construction time. The primary use case is providing side inputs.
PCollection is an immutable collection of values of type 'A', which must be a concrete type, such as int or KV<int,string>. A PCollection can contain either a bounded or unbounded number of elements. Bounded and unbounded PCollections are produced as the output of PTransforms (including root PTransforms like textio.Read), and can be passed as the inputs of other PTransforms. Some root transforms produce bounded PCollections and others produce unbounded ones.
Each element in a PCollection has an associated timestamp. Sources assign timestamps to elements when they create PCollections, and other PTransforms propagate these timestamps from their input to their output implicitly or explicitly.
Additionally, each element is assigned to a set of windows. By default, all elements are assigned into a single default window, GlobalWindow.
func AddFixedKey(s Scope, col PCollection) PCollection
AddFixedKey adds a fixed key (0) to every element.
func CoGroupByKey(s Scope, cols ...PCollection) PCollection
CoGroupByKey inserts a CoGBK transform into the pipeline.
func Combine(s Scope, combinefn interface{}, col PCollection, opts ...Option) PCollection
Combine inserts a global Combine transform into the pipeline. It expects a PCollection<T> as input where T is a concrete type. Combine supports TypeDefinition options for binding generic types in combinefn.
func CombinePerKey(s Scope, combinefn interface{}, col PCollection, opts ...Option) PCollection
CombinePerKey inserts a GBK and per-key Combine transform into the pipeline. It expects a PCollection<KV<K,T>>. The CombineFn may optionally take a key parameter. CombinePerKey supports TypeDefinition options for binding generic types in combinefn.
func Create(s Scope, values ...interface{}) PCollection
Create inserts a fixed set of values into the pipeline. The values must be of the same type 'A' and the returned PCollection is of type A.
The returned PCollections can be used as any other PCollections. The values are JSON-coded. Each runner may place limits on the sizes of the values and Create should generally only be used for small collections.
func CreateList(s Scope, list interface{}) PCollection
CreateList inserts a fixed set of values into the pipeline from a slice or array. It is a convenience wrapper over Create.
func DropKey(s Scope, col PCollection) PCollection
DropKey drops the key for an input PCollection<KV<A,B>>. It returns a PCollection<B>.
func DropValue(s Scope, col PCollection) PCollection
DropValue drops the value for an input PCollection<KV<A,B>>. It returns a PCollection<A>.
func Explode(s Scope, col PCollection) PCollection
Explode is a PTransform that takes a single PCollection<[]A> and returns a PCollection<A> containing all the elements for each incoming slice.
func External(s Scope, spec string, payload []byte, in []PCollection, out []FullType, bounded bool) []PCollection
External defines a Beam external transform. The interpretation of this primitive is runner specific. The runner is responsible for parsing the payload based on the spec provided to implement the behavior of the operation. Transform libraries should expose an API that captures the user's intent and serialize the payload as a byte slice that the runner will deserialize.
func Flatten(s Scope, cols ...PCollection) PCollection
Flatten is a PTransform that takes either multiple PCollections of type 'A' and returns a single PCollection of type 'A' containing all the elements in all the input PCollections. The name "Flatten" suggests taking a list of lists and flattening them into a single list.
By default, the Coder of the output PCollection is the same as the Coder of the first PCollection.
func GroupByKey(s Scope, a PCollection) PCollection
GroupByKey is a PTransform that takes a PCollection of type KV<A,B>, groups the values by key and windows, and returns a PCollection of type GBK<A,B> representing a map from each distinct key and window of the input PCollection to an iterable over all the values associated with that key in the input per window. Keys are compared for equality by first encoding each key using the Coder of the keys of the input PCollection, and then comparing the encoded bytes. This admits efficient parallel evaluation. Note that this requires that the Coder of the keys be deterministic.
By default, input and output PCollections share a key Coder, and iterable values in the input and output PCollection share an element Coder.
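The byte-wise key comparison can be illustrated with a toy, in-memory grouping. This is a sketch of the semantics only, not the Beam runtime; encode stands in for a deterministic key coder:

```go
package main

import "fmt"

type kv struct {
	K string
	V int
}

// encode stands in for a deterministic key coder: GBK compares keys by
// their encoded bytes, so any two keys with equal encodings group together.
func encode(k string) string {
	return fmt.Sprintf("%d:%s", len(k), k) // length-prefixed, deterministic
}

// groupByEncodedKey is a toy, in-memory GBK over a single global window.
func groupByEncodedKey(in []kv) map[string][]int {
	out := make(map[string][]int)
	for _, p := range in {
		key := encode(p.K)
		out[key] = append(out[key], p.V)
	}
	return out
}

func main() {
	g := groupByEncodedKey([]kv{{"a", 1}, {"b", 2}, {"a", 3}})
	fmt.Println(len(g["1:a"]), len(g["1:b"])) // prints "2 1"
}
```

A non-deterministic encoding (e.g. one that includes a timestamp) would scatter equal keys into different groups, which is why GBK requires deterministic key coders.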
Code:
type Doc struct{}
var urlDocPairs beam.PCollection // PCollection<KV<string, Doc>>
urlToDocs := beam.GroupByKey(s, urlDocPairs) // PCollection<CoGBK<string, Doc>>
// CoGBK parameters receive an iterator function with all values associated
// with the same key.
beam.ParDo0(s, func(key string, values func(*Doc) bool) {
	var cur Doc
	for values(&cur) {
		// ... process all docs having that url ...
	}
}, urlToDocs)
func Impulse(s Scope) PCollection
Impulse emits a single empty []byte into the global window. The resulting PCollection is a singleton of type []byte.
The purpose of Impulse is to trigger another transform, such as ones that take all information as side inputs.
func ImpulseValue(s Scope, value []byte) PCollection
ImpulseValue emits the supplied byte slice into the global window. The resulting PCollection is a singleton of type []byte.
func Must(a PCollection, err error) PCollection
Must returns the input, but panics if err != nil.
func MustN(list []PCollection, err error) []PCollection
MustN returns the input, but panics if err != nil.
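Must and MustN follow a common Go convenience pattern: turn an error return into a panic at pipeline-construction time, so construction code can stay linear. A minimal sketch of the same pattern for a plain int value (names are illustrative, not the Beam API):

```go
package main

import (
	"errors"
	"fmt"
)

// must returns v unchanged, but panics if err != nil — the same shape
// as beam.Must, which wraps Try* construction calls.
func must(v int, err error) int {
	if err != nil {
		panic(err)
	}
	return v
}

// tryParse is a stand-in for a Try* style constructor that can fail.
func tryParse(ok bool) (int, error) {
	if !ok {
		return 0, errors.New("construction failed")
	}
	return 42, nil
}

func main() {
	// must consumes the (value, error) pair directly, panicking on error
	// instead of forcing an if-err check at every construction step.
	n := must(tryParse(true))
	fmt.Println(n)
}
```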
func ParDo(s Scope, dofn interface{}, col PCollection, opts ...Option) PCollection
ParDo is the core element-wise PTransform in Apache Beam, invoking a user-specified function on each of the elements of the input PCollection to produce zero or more output elements, all of which are collected into the output PCollection. Use one of the ParDo variants for a different number of output PCollections. The PCollections do not need to have the same types.
Elements are processed independently, and possibly in parallel across distributed cloud resources. The ParDo processing style is similar to what happens inside the "Mapper" or "Reducer" class of a MapReduce-style algorithm.
The function to use to process each element is specified by a DoFn, either as single function or as a struct with methods, notably ProcessElement. The struct may also define Setup, StartBundle, FinishBundle and Teardown methods. The struct is JSON-serialized and may contain construction-time values.
Conceptually, when a ParDo transform is executed, the elements of the input PCollection are first divided up into some number of "bundles". These are farmed off to distributed worker machines (or run locally, if using the direct runner). For each bundle of input elements processing proceeds as follows:
* If a struct, a fresh instance of the argument DoFn is created on a worker from json serialization, and the Setup method is called on this instance, if present. A runner may reuse DoFn instances for multiple bundles. A DoFn that has terminated abnormally (by returning an error) will never be reused.
* The DoFn's StartBundle method, if provided, is called to initialize it.
* The DoFn's ProcessElement method is called on each of the input elements in the bundle.
* The DoFn's FinishBundle method, if provided, is called to complete its work. After FinishBundle is called, the framework will not again invoke ProcessElement or FinishBundle until a new call to StartBundle has occurred.
* If any of the Setup, StartBundle, ProcessElement or FinishBundle methods return an error, the Teardown method, if provided, will be called on the DoFn instance.
* If a runner will no longer use a DoFn, the Teardown method, if provided, will be called on the discarded instance.
Each of the calls to any of the DoFn's processing methods can produce zero or more output elements. All of the output elements from all of the DoFn instances are included in an output PCollection.
For example:
words := beam.ParDo(s, &Foo{...}, ...)
lengths := beam.ParDo(s, func(word string) int { return len(word) }, words)
Each output element has the same timestamp and is in the same windows as its corresponding input element. The timestamp can be accessed and/or emitted by including an EventTime-typed parameter. The name of the function or struct is used as the DoFn name. Function literals do not have stable names and should thus not be used in production code.
While a ParDo processes elements from a single "main input" PCollection, it can take additional "side input" PCollections. These side inputs are views of PCollections computed by earlier pipeline operations, passed in to the ParDo transform using SideInput options, and their contents are accessible to each of the DoFn operations. For example:
words := ...
cutoff := ... // Singleton PCollection<int>
smallWords := beam.ParDo(s, func(word string, cutoff int, emit func(string)) {
	if len(word) < cutoff {
		emit(word)
	}
}, words, beam.SideInput{Input: cutoff})
Optionally, a ParDo transform can produce zero or multiple output PCollections. Note the use of ParDo2 to specify 2 outputs. For example:
words := ...
cutoff := ... // Singleton PCollection<int>
small, big := beam.ParDo2(s, func(word string, cutoff int, small, big func(string)) {
	if len(word) < cutoff {
		small(word)
	} else {
		big(word)
	}
}, words, beam.SideInput{Input: cutoff})
By default, the Coders for the elements of each output PCollections is inferred from the concrete type.
There are three main ways to initialize the state of a DoFn instance processing a bundle:
* Define public instance variable state. This state will be automatically JSON serialized and then deserialized in the DoFn instances created for bundles. This method is good for state known when the original DoFn is created in the main program, if it's not overly large. This is not suitable for any state which must only be used for a single bundle, as DoFns may be used to process multiple bundles.
* Compute the state as a singleton PCollection and pass it in as a side input to the DoFn.
* Perform the initialization in a StartBundle method. This is good if the initialization doesn't depend on any information known only by the main program or computed by earlier pipeline operations, and is the same for all instances of this DoFn.

Programs should not access mutable global variable state in their DoFn without understanding that the Go processes for the main program and the workers each have their own independent copy of such state. Note also that a DoFn instance might process a bundle partially, then crash for some reason, then be rerun (often as a new process).
See the web documentation for ParDo for more detail.
Optionally, a ParDo transform can produce zero or multiple output PCollections. Note the use of ParDo2 to specify 2 outputs.
Code:
var words beam.PCollection  // PCollection<string>
var cutoff beam.PCollection // Singleton PCollection<int>
small, big := beam.ParDo2(s, func(word string, cutoff int, small, big func(string)) {
	if len(word) < cutoff {
		small(word)
	} else {
		big(word)
	}
}, words, beam.SideInput{Input: cutoff})
_, _ = small, big
func ParDoN(s Scope, dofn interface{}, col PCollection, opts ...Option) []PCollection
ParDoN inserts a ParDo with any number of outputs into the pipeline.
func Partition(s Scope, n int, fn interface{}, col PCollection) []PCollection
Partition takes a PCollection<T> and a PartitionFn, uses the PartitionFn to split the elements of the input PCollection into N partitions, and returns a []PCollection<T> that bundles N PCollection<T>s containing the split elements.
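The semantics can be sketched with an in-memory version. This is illustrative only; the essential contract is that the partition function returns an index in [0, n) for every element, as a real PartitionFn must:

```go
package main

import "fmt"

// partition splits elems into n buckets using fn, which must return an
// index in [0, n) — the same contract as Beam's PartitionFn.
func partition(n int, fn func(int) int, elems []int) [][]int {
	out := make([][]int, n)
	for _, e := range elems {
		i := fn(e)
		out[i] = append(out[i], e)
	}
	return out
}

func main() {
	evenOdd := func(e int) int { return e % 2 } // bucket 0 = even, 1 = odd
	parts := partition(2, evenOdd, []int{1, 2, 3, 4, 5})
	fmt.Println(parts[0], parts[1]) // prints "[2 4] [1 3 5]"
}
```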
func Seq(s Scope, col PCollection, dofns ...interface{}) PCollection
Seq is a convenience helper to chain single-input/single-output ParDos together in a sequence.
func SwapKV(s Scope, col PCollection) PCollection
SwapKV swaps the key and value for an input PCollection<KV<A,B>>. It returns a PCollection<KV<B,A>>.
func TryCoGroupByKey(s Scope, cols ...PCollection) (PCollection, error)
TryCoGroupByKey inserts a CoGBK transform into the pipeline. Returns an error on failure.
func TryCombine(s Scope, combinefn interface{}, col PCollection, opts ...Option) (PCollection, error)
TryCombine attempts to insert a global Combine transform into the pipeline. It may fail for multiple reasons, notably that the combinefn is not valid or cannot be bound -- due to type mismatch, say -- to the incoming PCollections.
func TryCombinePerKey(s Scope, combinefn interface{}, col PCollection, opts ...Option) (PCollection, error)
TryCombinePerKey attempts to insert a per-key Combine transform into the pipeline. It may fail for multiple reasons, notably that the combinefn is not valid or cannot be bound -- due to type mismatch, say -- to the incoming PCollection.
func TryCreate(s Scope, values ...interface{}) (PCollection, error)
TryCreate inserts a fixed set of values into the pipeline. The values must be of the same type.
func TryExternal(s Scope, spec string, payload []byte, in []PCollection, out []FullType, bounded bool) ([]PCollection, error)
TryExternal attempts to perform the work of External, returning an error indicating why the operation failed.
func TryFlatten(s Scope, cols ...PCollection) (PCollection, error)
TryFlatten merges incoming PCollections of type 'A' to a single PCollection of type 'A'. Returns an error indicating the set of PCollections that could not be flattened.
func TryGroupByKey(s Scope, a PCollection) (PCollection, error)
TryGroupByKey inserts a GBK transform into the pipeline. Returns an error on failure.
func TryParDo(s Scope, dofn interface{}, col PCollection, opts ...Option) ([]PCollection, error)
TryParDo attempts to insert a ParDo transform into the pipeline. It may fail for multiple reasons, notably that the dofn is not valid or cannot be bound -- due to type mismatch, say -- to the incoming PCollections.
func TryWindowInto(s Scope, ws *window.Fn, col PCollection) (PCollection, error)
TryWindowInto attempts to insert a WindowInto transform.
func WindowInto(s Scope, ws *window.Fn, col PCollection) PCollection
WindowInto applies the windowing strategy to each element.
func (p PCollection) Coder() Coder
Coder returns the coder for the collection. The Coder is of type 'A'.
func (p PCollection) IsValid() bool
IsValid returns true iff the PCollection is valid and part of a Pipeline. Any use of an invalid PCollection will result in a panic.
func (p PCollection) SetCoder(c Coder) error
SetCoder sets the coder for the collection. The Coder must be of type 'A'.
func (p PCollection) String() string
func (p PCollection) Type() FullType
Type returns the full type 'A' of the elements. 'A' must be a concrete type, such as int or KV<int,string>.
Pipeline manages a directed acyclic graph of primitive PTransforms, and the PCollections that the PTransforms consume and produce. Each Pipeline is self-contained and isolated from any other Pipeline. The Pipeline owns the PCollections and PTransforms, and they can be used by that Pipeline only. Pipelines can safely be executed concurrently.
NewPipeline creates a new empty pipeline.
Build validates the Pipeline and returns a lower-level representation for execution. It is called by runners only.
Root returns the root scope of the pipeline.
Scope is a hierarchical grouping for composite transforms. Scopes can be enclosed in other scopes and form a tree structure. For pipeline updates, the scope chain forms a unique name. The scope chain can also be used for monitoring and visualization purposes.
IsValid returns true iff the Scope is valid. Any use of an invalid Scope will result in a panic.
Scope returns a sub-scope with the given name. The name provided may be augmented to ensure uniqueness.
type SideInput struct { Input PCollection }
SideInput provides a view of the given PCollection to the transformation.
Code:
// words and sample are PCollection<string>
var words, sample beam.PCollection
// analyzeFn emits values from the primary based on the singleton side input.
analyzeFn := func(primary string, side string, emit func(string)) {}
// Use beam.SideInput to declare that the sample PCollection is the side input.
beam.ParDo(s, analyzeFn, words, beam.SideInput{Input: sample})
T is a Universal Type used to represent "generic" types in DoFn and PCollection signatures. Each universal type is distinct from all others.
type TypeDefinition struct {
	// Var is the universal type defined.
	Var reflect.Type
	// T is the type it is bound to.
	T reflect.Type
}
TypeDefinition provides construction-time type information that the platform cannot infer, such as structured storage sources. These types are universal types that appear as output only. Types that are inferrable should not be conveyed via this mechanism.
U is a Universal Type used to represent "generic" types in DoFn and PCollection signatures. Each universal type is distinct from all others.
V is a Universal Type used to represent "generic" types in DoFn and PCollection signatures. Each universal type is distinct from all others.
W is a Universal Type used to represent "generic" types in DoFn and PCollection signatures. Each universal type is distinct from all others.
X is a Universal Type used to represent "generic" types in DoFn and PCollection signatures. Each universal type is distinct from all others.
Y is a Universal Type used to represent "generic" types in DoFn and PCollection signatures. Each universal type is distinct from all others.
Z is a Universal Type used to represent "generic" types in DoFn and PCollection signatures. Each universal type is distinct from all others.
Package beam imports 20 packages (graph) and is imported by 49 packages. Updated 2019-08-11.
https://godoc.org/github.com/apache/beam/sdks/go/pkg/beam
Quick Note: Implementing Running for Character in Godot
Forewords
Before beginning, I'd like to announce there's a new top-down action RPG tutorial here by HeartBeast. It's the most up-to-date tutorial for Godot on YouTube right now, targeting version 3.2. As of this writing, he's already 11 episodes in. Go and show your love.
By the way, I'm trying out C# in Godot, but the tutorial series above uses GDScript, so mine is going to be a bit different. I also need to state that I've never coded in C# before. What I had in mind before starting was that C# is kind of like Java, which helped me understand it a lot. But if you see any quirky code, like violations of style guides or common practices in C#, let me know or simply ignore them.
This post also follows the logic of HeartBeast's tutorial, so I strongly recommend you check it out, at least 4 episodes in.
Movement 101
Firstly, we need to understand that there are three parts for a smooth movement:
- Acceleration is how fast a player starts moving.
- Friction is how fast a player stops moving.
- Of course, when a player accelerates, you need to define a peak value, which limits how fast it can get. This is called maximum speed.
When you imagine it in a graph, it starts increasing as soon as the player receives an input and it starts decreasing back to zero when it stops receiving the input. See below:
(I know this is not the most beautiful graph you've ever seen but bear with me please.)
Knowing this, we can assume a player's code (assuming the player is a
KinematicBody2D) is going to be as such:
using Godot;
using System;

public class Player : KinematicBody2D
{
    private const int ACCELERATION = 500;
    private const int MAX_SPEED = 60;
    private const int FRICTION = 500;

    private Vector2 velocity;

    public override void _PhysicsProcess(float delta) // compute next frame
    {
        // let's create a vector for user input, defining which coordinates the vector is targeting
        var input_vector = Vector2.Zero; // equivalent to `new Vector2()`, I use it for shorthand sometimes
        input_vector.x = Input.GetActionStrength("ui_right") - Input.GetActionStrength("ui_left"); // assign *if it is left or right* to input vector
        input_vector.y = Input.GetActionStrength("ui_down") - Input.GetActionStrength("ui_up"); // assign *if it is up or down* to input vector
        input_vector = input_vector.Normalized(); // see heartbeast for this, he explains better

        if (input_vector == Vector2.Zero) // if we do not receive any input
        {
            velocity = velocity.MoveToward(Vector2.Zero, FRICTION * delta);
        }
        else // if we receive any input
        {
            velocity = velocity.MoveToward(input_vector * MAX_SPEED, ACCELERATION * delta);
        }

        velocity = MoveAndSlide(velocity); // move, then assign the resulting velocity back
    }
}
This is a basic movement which has a constant speed. You can play with
ACCELERATION,
MAX_SPEED and
FRICTION constants to change the feel of your movement.
How to Run
Let's assume we'd like to bump our speed when we, say, press
SHIFT key. In order to do that, we first need to define an input. You can do that by clicking "Project > Project Settings" menu...
...and going into "Input Map" tab.
As you can see, the ones such as
ui_up and
ui_down in the code are defined here along with the keys they target. So, as in the image, we need to add a new action and map it to a key by first defining a name for our new action and clicking the "Add" button.
Then we need to add a key for our new
accelerate action...
...and define a key for it.
When you do that, you can now check if
accelerate action is received by writing a code like below in C#:
bool is_accelerating = Input.IsActionPressed("accelerate");
Now we need to refactor our code a little bit. I wanted to do this with as little change to the code as possible. Our target is
ACCELERATION,
MAX_SPEED and
FRICTION constants.
The problem is that these are constants, which are not meant to be mutated at runtime. However, we'd like to change their values depending on whether we receive
accelerate action or not. So, we change these to static.
private static int ACCELERATION = 500;
private static int MAX_SPEED = 60;
private static int FRICTION = 500;
There is also a very nice feature in C#. Just like Kotlin (to give an example from JVM-based languages), we can write getters (and also setters) on the property itself. So, my code changes to:
private static int ACCELERATION
{
    get
    {
        var value = 500;
        if (Input.IsActionPressed("accelerate"))
        {
            value = 700;
        }
        return value;
    }
}

private static int MAX_SPEED
{
    get
    {
        var value = 60;
        if (Input.IsActionPressed("accelerate"))
        {
            value = 90;
        }
        return value;
    }
}

private static int FRICTION
{
    get
    {
        var value = 500;
        if (Input.IsActionPressed("accelerate"))
        {
            value = 700;
        }
        return value;
    }
}
We define default values with
value variable and change it if we receive
accelerate action. The code above results in the values below:

| Constant | Default | While accelerating |
| --- | --- | --- |
| ACCELERATION | 500 | 700 |
| MAX_SPEED | 60 | 90 |
| FRICTION | 500 | 700 |
You can, again, change the values to suit your needs. Some values feel like you are controlling a rock, while others feel like the ground is ice.
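As a side note, the getters above can be written more compactly with expression-bodied properties and the conditional operator. This is a purely stylistic alternative (same values as before), not something the original tutorial uses:

```csharp
// Equivalent shorthand: each property re-evaluates the input on every read.
private static int ACCELERATION => Input.IsActionPressed("accelerate") ? 700 : 500;
private static int MAX_SPEED => Input.IsActionPressed("accelerate") ? 90 : 60;
private static int FRICTION => Input.IsActionPressed("accelerate") ? 700 : 500;
```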
Final Words
As I've said in the beginning, be sure to check HeartBeast's Godot RPG series. It's the hottest thing in Godot tutorials right now. See you in the next post.
|
https://erayerdin.hashnode.dev/quick-note-implementing-running-for-character-in-godot-ck8mb466e01bb74s13k4zp40c?guid=none&deviceId=e170e959-44ed-46b6-92fa-bbd04d2e0432
|
CC-MAIN-2021-17
|
refinedweb
| 889
| 62.17
|
nfsservctl — syscall interface to kernel nfs daemon
Synopsis
#include <linux/nfsd/syscall.h>

long nfsservctl(int cmd, struct nfsctl_arg *argp,
                union nfsctl_res *resp);
Description

Note: Since Linux 3.1, this system call no longer exists.
Versions
This system call was removed from the Linux kernel in version 3.1. Library support was removed from glibc in version 2.28.
Conforming to
This call is Linux-specific.
Colophon
This page is part of release 5.04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
capabilities(7), syscalls(2).
2019-03-06 Linux Programmer's Manual
|
https://dashdash.io/2/nfsservctl
|
CC-MAIN-2020-29
|
refinedweb
| 103
| 62.44
|
/>
Everyone understands averages, both their meaning and how to calculate them. However, there are situations, particularly when dealing with real-time data, when a conventional average is of little use because it includes old values which are no longer relevant and merely give a misleading impression of the current situation.
The solution to this problem is to use moving averages, ie. the average of the most recent values rather than all values, which is the subject of this post.
To illustrate the problem I will show part of the output of the program I'll write in this post. It shows the last rows of a set of server response times.
Program Output - tail end of 1000 rows
------------------
| codedrome.com |
| MovingAverages |
------------------

-------------------------------------------------
|          value|overall average| moving average|
-------------------------------------------------
.
.
.
|          46.00|          29.77|          31.00|
|          20.00|          29.76|          32.75|
|          38.00|          29.76|          37.25|
|          44.00|          29.78|          37.00|
|          49.00|          29.80|          37.75|
|          36.00|          29.80|          41.75|
|          11.00|          29.79|          35.00|
|          20.00|          29.78|          29.00|
|          40.00|          29.79|          26.75|
|          10.00|          29.77|          20.25|
|          47.00|          29.78|          29.25|
|          13.00|          29.77|          27.50|
|          24.00|          29.76|          23.50|
|          38.00|          29.77|          30.50|
|          31.00|          29.77|          26.50|
|          50.00|          29.79|          35.75|
|          32.00|          29.79|          37.75|
|          21.00|          29.78|          33.50|
|          42.00|          29.80|          36.25|
|         165.00|          29.93|          65.00|
|         256.00|          30.16|         121.00|
|         419.00|          30.55|         220.50|
|         329.00|          30.85|         292.25|
|         128.00|          30.94|         283.00|
-------------------------------------------------
Most times in the left hand column are between 10ms and 50ms and can be considered normal but the last few shoot up considerably. The second column shows overall averages which we might use to monitor the server for any problems. However, the large number of normal times included in these averages mean that although the server has slowed down considerably for the last few requests the averages have hardly risen at all and we wouldn't realise anything was wrong. The last column shows 4-point moving averages, or the averages of only the last four values. These of course do increase considerably and so alarm bells should start to ring.
Having explained both the problem and its solution, let's write some code. This project consists of the following files which can be downloaded as a zip or you can clone/download the Github repository if you prefer.
- movingaverageslist.py
- movingaverages_test.py
Source Code Links
The movingaverageslist.py file implements a class which maintains a list of numerical values, and each time a new value is added the overall average and moving average up to that point are also calculated.
movingaverageslist.py
class MovingAveragesList(object):

    """
    This class implements a list to which numeric values can be appended.
    Doing so actually appends a dictionary containing three values:
    "value" - the value added
    "average" - the arithmetic mean of all values up to and including
    the current one
    "movingaverage" - the arithmetic mean of the specified number of
    previous values
    The underlying list can be accessed using objectname.data.
    """

    def __init__(self, points):
        """
        The points argument specifies how many previous values
        should be used to calculate each moving average.
        """
        self.data = []
        self.points = points

    def append(self, n):
        """
        Adds a dictionary of value, overall average and moving
        average to the list.
        """
        average = self.__calculate_overall_average(n)
        moving_average = self.__calculate_moving_average(n)

        self.data.append({"value": n,
                          "average": average,
                          "movingaverage": moving_average})

    def __calculate_overall_average(self, n):
        length = len(self.data)

        if length == 0:
            average = n
        else:
            average = (((self.data[length - 1]["average"]) * length) + n) / (length + 1)

        return average

    def __calculate_moving_average(self, n):
        length = len(self.data)

        if length == 0:
            moving_average = n
        elif length <= self.points - 1:
            moving_average = (((self.data[length - 1]["average"]) * length) + n) / (length + 1)
        else:
            moving_average = ((self.data[length - 1]["movingaverage"] * self.points)
                              - (self.data[length - self.points]["value"]) + n) / self.points

        return moving_average

    def __str__(self):
        """
        Create a grid from the data in the list.
        """
        items = []

        items.append("-" * 49 + "\n")
        items.append("|          value|overall average| moving average|\n")
        items.append("-" * 49 + "\n")

        for item in self.data:
            items.append("|{:15.2f}|{:15.2f}|{:15.2f}|\n"
                         .format(item["value"], item["average"], item["movingaverage"]))

        items.append("-" * 49)

        return "".join(items)
In __init__ we simply create an empty list, and set the points attribute, ie. the number of values used to calculate the average.
In the append method, the overall and moving averages are calculated using separate functions which I'll come to in a minute. Then a dictionary containing the new value and the two averages is appended to the list.
In __calculate_overall_average we don't need to add up all the values each time, we can just multiply the previous average by the count and then add the new value. This is then divided by the length + 1, ie. the length the list will be when the new value is added.
The __calculate_moving_average function uses a similar technique but is more complex as it has to allow for the list not yet having reached the length of the number of points. In this situation it just calculates the mean of whatever data the list has.
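To see why the sliding update in __calculate_moving_average works once the window is full, here is a small standalone check with made-up numbers (not the server data from the output above):

```python
# Incremental moving average: drop the oldest value, add the newest,
# and divide by the number of points -- no need to re-sum the window.
points = 4
window = [20, 30, 40, 50]            # the last four values
prev_ma = sum(window) / points       # 35.0

new_value = 10
new_ma = (prev_ma * points - window[0] + new_value) / points
print(new_ma)                        # 32.5 == mean of [30, 40, 50, 10]
```

This is why each append is O(1) regardless of how many values the list holds.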
Lastly we implement __str__ which returns the data in a table format suitable for outputting to the console.
The MovingAveragesList class is now complete so let's put together a simple demo.
movingaverages_test.py
import random

import movingaverageslist


def main():

    print("------------------")
    print("| codedrome.com |")
    print("| MovingAverages |")
    print("------------------\n")

    response_times_ms = populate_response_times()

    print(response_times_ms)

    # Quick demo of accessing the list directly.
    print(response_times_ms.data[-1])


def populate_response_times():
    """
    Create a MovingAveragesList object and populate it
    with random response times.
    """
    response_times_ms = movingaverageslist.MovingAveragesList(4)

    # Add a large number of normal times
    for t in range(1, 996):
        response_times_ms.append(random.randint(10, 50))

    # Add a few excessively long times
    for t in range(1, 6):
        response_times_ms.append(random.randint(100, 500))

    return response_times_ms


main()
In main we call populate_response_times to get a MovingAveragesList object with 1000 items, and then print the object. As we implemented __str__ in the class this will be called and therefore we'll see the table described above.
I have also added a line which prints the last item in the list just to show how to access the most recent value and averages. A possible enhancement would be to wrap this in a method to avoid rummaging around in the inner workings of the class.
The populate_response_times function creates a MovingAveragesList object with a points value of 4. This is probably too low for practical purposes but it does make manual testing easier!
It then adds a large number of "normal" values to it; remember that each time a value is added new overall and moving averages are also added. Then a few large numbers are added to simulate a server problem before we return the object.
Now we can run the program like this...
Running the program
python3.7 movingaverages_test.py
I won't repeat the output but you'll see 1000 rows of data whizzing up your console.
Possible Improvements
The MovingAveragesList class has been tailored to demonstrating the problem it solves and how it solves it. In a production environment some of this is unnecessary, and there are a few improvements which could make the class more efficient and useful.
- We could drop the overall averages
- Only the latest moving average could be kept
- We could delete the oldest value each time a new one is added, just keeping a restricted number of the latest values
- We could forget the list concept entirely and just keep a single moving average, updated from any new values added
- We could include a threshold and function to be called if the threshold is exceeded, for example sending out emails if the server response time slows to an unacceptable level
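Several of these ideas can be combined. For instance, here is a minimal sketch (not part of the original code) that keeps only the current window using collections.deque, so no history accumulates:

```python
from collections import deque

class SimpleMovingAverage:
    """Keeps only the last `points` values; no full history is stored."""

    def __init__(self, points):
        # maxlen makes the deque discard the oldest value automatically
        self.window = deque(maxlen=points)

    def append(self, n):
        self.window.append(n)

    @property
    def current(self):
        return sum(self.window) / len(self.window)

sma = SimpleMovingAverage(4)
for v in (10, 20, 30, 40, 50):
    sma.append(v)
print(sma.current)  # 35.0, the mean of the last four values [20, 30, 40, 50]
```

A threshold check and an alert callback could be added to `append` to cover the last improvement on the list.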
|
https://www.codedrome.com/moving-averages-in-python/
|
CC-MAIN-2021-31
|
refinedweb
| 1,309
| 53.61
|
m2wsgi 0.5.1
a mongrel2 => wsgi gateway and helper tools
This module provides a WSGI gateway handler for the Mongrel2 webserver, allowing easy deployment of python apps on Mongrel2. It provides full support for chunked response encoding, streaming reads of large file uploads, and pluggable backends for evented I/O a la eventlet.
You might also find its supporting classes useful for developing non-WSGI handlers in python.
Command-line usage
The simplest way to use this package is as a command-line launcher:
m2wsgi dotted.app.name tcp://127.0.0.1:9999
This will connect to Mongrel2 on the specified request port and start handling requests by passing them through the specified WSGI app. By default you will get a single worker thread handling all requests, which is probably not what you want; increase the number of threads like so:
m2wsgi --num-threads=5 dotted.app.name tcp://127.0.0.1:9999
If threads aren't your thing, you can just start several instances of the handler pointing at the same zmq socket and get the same effect. Better yet, you can use eventlet to shuffle the bits around like so:
m2wsgi --io=eventlet dotted.app.name tcp://127.0.0.1:9999
You can also use --io=gevent if that's how you roll. Contributions for other async backends are most welcome.
Programmatic Usage
If you have more complicated needs, you can use m2wsgi from within your application. The main class is 'WSGIHandler' which provides a simple server interface. The equivalent of the above command-line usage is:
from m2wsgi.io.standard import WSGIHandler

handler = WSGIHandler(my_wsgi_app, "tcp://127.0.0.1:9999")
handler.serve()
There are a lot of "sensible defaults" being filled in here. It assumes that the Mongrel2 recv socket is on the next port down from the send socket, and that it's OK to connect the socket without a persistent identity.
For finer control over the connection between your handler and Mongrel2, create your own Connection object. Here we use 127.0.0.1:9999 as the send socket with identity AA-BB-CC, and use 127.0.0.1:9992 as the recv socket:
from m2wsgi.io.standard import WSGIHandler, Connection

conn = Connection(send_sock="tcp://AA-BB-CC@127.0.0.1:9999",
                  recv_sock="tcp://127.0.0.1:9992")
handler = WSGIHandler(my_wsgi_app, conn)
handler.serve()
If you're creating non-WSGI handlers for Mongrel2 you might find the following classes useful:
Middleware
If you need to add fancy features to the server, you can specify additional WSGI middleware that should be applied around the application. For example, m2wsgi provides a gzip-encoding middleware that can be used to compress response data:
m2wsgi --middleware=GZipMiddleware dotted.app.name tcp://127.0.0.1:9999
If you want additional compression at the expense of WSGI compliance, you can also do some in-server buffering before the gzipping is applied:
m2wsgi --middleware=GZipMiddleware --middleware=BufferMiddleware dotted.app.name tcp://127.0.0.1:9999
The default module for loading middleware is m2wsgi.middleware; specify a full dotted name to load a middleware class from another module.
Devices
This module also provides a number of pre-built "devices" - stand-alone executables designed to perform a specific common task. Currently available devices are:
Don't we already have one of these?
Yes, there are several existing WSGI gateways for Mongrel2:
None of them fully met my needs. In particular, this package has transparent support for:
- chunked response encoding
- streaming reads of large "async upload" requests
- pluggable IO backends (e.g. eventlet, gevent)
It's also designed from the ground up specifically for Mongrel2. This means it gets a lot of functionality for free, and the code is simpler and lighter as a result.
For example, there is no explicit management of a threadpool and request queue as you might find in e.g. the CherryPy server. Instead, you just start up as many threads as you need, have them all connect to the same handler socket, and mongrel2 (via zmq) will automatically load-balance the requests to them.
Similarly, there's no fancy arrangement of master/worker processes to support clean reloading of the handler; you just kill the old handler process and start up a new one. Send m2wsgi a SIGHUP and it will automatically shutdown and reincarnate itself for a clean restart.
Current bugs, limitations and things to do
It's not all perfect just yet, although it does seem to mostly work:
- Needs tests something fierce! I just have to find the patience to write the necessary setup and teardown cruft.
- It would be great to grab connection details straight from the mongrel2 config database. Perhaps a Connection.from_config method with keywords to select the connection by handler id, host, route etc.
- support for expect-100-continue; this may have to live in mongrel2
- Author: Ryan Kelly
- Keywords: wsgi mongrel2
- License: MIT
- Categories
- Package Index Owner: rfk
- DOAP record: m2wsgi-0.5.1.xml
|
http://pypi.python.org/pypi/m2wsgi/0.5.1
|
crawl-003
|
refinedweb
| 834
| 54.93
|
Question:
How can I add a line break to text when it is being set as an attribute i.e.:
<TextBlock Text="Stuff on line1 \n Stuff on line2" />
Breaking it out into the exploded format isn't an option for my particular situation. What I need is some way to emulate the following:
<TextBlock>
    <TextBlock.Text>
        Stuff on line1
        <LineBreak/>
        Stuff on line2
    </TextBlock.Text>
</TextBlock>
Solution:1
<TextBlock Text="Stuff on line1&#x0a;Stuff on line 2"/>
You can use any hexadecimally encoded value to represent a literal. In this case, I used the line feed (char 10). If you want to do "classic"
vbCrLf, then you can use &#x0d;&#x0a;
By the way, note the syntax: It's the ampersand, a pound, the letter x, then the hex value of the character you want, and then finally a semi-colon.
ALSO: For completeness, you can bind to a text that already has the line feeds embedded in it like a constant in your code behind, or a variable constructed at runtime.
Solution:2
May be you can use the attribute xml:space="preserve" for preserving whitespace in the source XAML
<TextBlock xml:space="preserve">
Stuff on line 1
Stuff on line 2
</TextBlock>
Solution:3
When you need to do it in a string (eg: in your resources) you need to use
xml:space="preserve" and the ampersand character codes:
<System:String x:Key="..." xml:space="preserve">First line&#10;Second line</System:String>
Or literal newlines in the text:
<System:String x:Key="..." xml:space="preserve">First line
Second line</System:String>
Warning: if you write code like the second example, you have inserted either a newline, or a carriage return and newline, depending on the line endings your operating system and/or text editor use. For instance, if you write that and commit it to git from a linux systems, everything may seem fine -- but if someone clones it to Windows, git will convert your line endings to
\r\n and depending on what your string is for ... you might break the world.
Just be aware of that when you're preserving whitespace. If you write something like this:
<System:String x:Key="..." xml:space="preserve">
    First line
    Second line
</System:String>
You've actually added four line breaks, possibly four carriage-returns, and potentially trailing white space that's invisible...
Solution:4
You need just removing
<TextBlock.Text> and simply adding your content as following:
<Grid Margin="20">
    <TextBlock TextWrapping="Wrap" TextAlignment="Justify" FontSize="17">
        <Bold FontFamily="Segoe UI Light" FontSize="70">I.R. Iran</Bold><LineBreak/>
        <Span FontSize="35">I</Span>ran or Persia, officially the
        <Italic>Islamic Republic of Iran</Italic>, is a country in Western Asia.
        The country is bordered on the north by Armenia, Azerbaijan and Turkmenistan,
        with Kazakhstan and Russia to the north across the Caspian Sea.<LineBreak/>
        <Span FontSize="10">For more information about Iran see
            <Hyperlink NavigateUri="">WikiPedia</Hyperlink></Span>
        <LineBreak/>
        <LineBreak/>
        <Span FontSize="12">
            <Span>Is this page helpful?</Span>
            <Button Content="No"/>
            <Button Content="Yes"/>
        </Span>
    </TextBlock>
</Grid>
Solution:5
Note that to do this you need to do it in the Text attribute you cannot use the content like
<TextBlock>Stuff on line1&#x0a;Stuff on line 2</TextBlock>
Solution:6
Maybe someone prefers
<TextBlock Text="{Binding StringFormat='Stuff on line1{0}Stuff on line2{0}Stuff on line3', Source={x:Static s:Environment.NewLine}}" />
with
xmlns:s="clr-namespace:System;assembly=mscorlib".
Solution:7
I have found this helpful, but ran into some errors when adding it to a "Content=..." tag in XAML.
I had multiple lines in the content, and later found out that the content kept white spaces even though I didn't specify that. So to get around that and have it "ignore" the whitespace, I implemented it like this.
<ToolTip Width="200" Style="{StaticResource ToolTip}"
         Content="'Text Line 1'&#x0a;&#x0a;'Text Line 2'&#x0a;&#x0a;'Text Line 3'"/>
hope this helps someone else.
(The output has the three text lines with an empty line in between each.)
Solution:8
For those that have tried every answer to this question and are still scratching their heads as to why none of them work for you, you might have ran into a form of the issue I ran into.
My
TextBlock.Text property was inside of a
ToolTipService.ToolTip element and it was databound to a property of an object whose data was being pulled from a SQL stored procedure. Now the data from this particular property within the stored procedure was being pulled from a SQL function.
Since nothing had worked for me, I gave up my search and created the converter class below:
public class NewLineConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        var s = string.Empty;

        if (value.IsNotNull())
        {
            s = value.ToString();

            if (s.Contains("\\r\\n"))
                s = s.Replace("\\r\\n", Environment.NewLine);

            if (s.Contains("\\n"))
                s = s.Replace("\\n", Environment.NewLine);

            if (s.Contains("<br />"))
                s = s.Replace("<br />", Environment.NewLine);

            if (s.Contains("<LineBreak />"))
                s = s.Replace("<LineBreak />", Environment.NewLine);
        }

        return s;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
I ended up having to use the
Enivornment.NewLine method from @dparker's answer. I instructed the converter to look for any possible textual representation of a newline and replace it with
Environment.NewLine.
This worked!
However, I was still perplexed as to why none of the other methods worked with databound properties.
I left a comment on @BobKing's accepted answer:
@BobKing - This doesn't seem to work in the ToolTipService.ToolTip when binding to a field that has the line feeds embedded from a SQL sproc.
He replied with:
@CodeMaverick If you're binding to text with the new lines embedded, they should probably be real char 10 values (or 13's) and not the XML sentinels. This is only if you want to write literal new lines in XAML files.
A light bulb went off!
I went into my SQL function, replaced my textual representations of newlines with ...
CHAR( 13 ) + CHAR( 10 )
... removed the converter from my
TextBlock.Text binding, and just like that ... it worked!
Solution:9
Also doesn't work with
<TextBlock><TextBlock.Text>NO USING ABOVE TECHNIQUE HERE</TextBlock.Text></TextBlock>
No big deal, just needed to use
<TextBlock Text="Cool&#x0a;Newline trick" />
instead.
Solution:10
I realize this is an older question, but just wanted to add that
Environment.NewLine
also works if doing this through code.
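For example, a one-line sketch from code-behind (the TextBlock name `myTextBlock` is an assumption):

```csharp
// Environment.NewLine expands to "\r\n" on Windows.
myTextBlock.Text = "Stuff on line1" + Environment.NewLine + "Stuff on line2";
```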
Solution:11
<TextBox Name="myTextBox" TextWrapping="Wrap" AcceptsReturn="True" VerticalScrollBarVisibility="Visible" />
Solution:12
<TextBlock>
    Stuff on line1
    <LineBreak/>
    Stuff on line2
</TextBlock>
not that it's important to know but what you specify between the TextBlock tags is called inline content and goes into the TextBlock.Inlines property which is a InlineCollection and contains items of type Inline. Subclasses of Inline are Run and LineBreak, among others. see TextBlock.Inlines
|
http://www.toontricks.com/2018/11/tutorial-newline-in-string-attribute.html
|
CC-MAIN-2019-04
|
refinedweb
| 1,154
| 54.93
|
Running the LAME MP3 encoder under the control of .NET isn't difficult and it provides a really good example of controlling an external .EXE.
The idea of using runtime components predates COM, ActiveX and managed code. The concept of filters, standard executables that can be linked together by data pipes is an old one. It never really worked well because the degree of interaction and control one filter could exert over another was generally minimal.
However the idea of using some sort of computational engine and a front end is still attractive when the task in hand is fairly simple. If the computational engine is a .EXE application then it is easy to produce and can be easily tested using the command line.
The problems start when you try to automate the running of such an engine. You have to mimic a user pressing keys via facilities such as Shellm, SendKeys and AppActivate and so on.
These generally don’t work very well due to lack of control over how the application is run and the difficulty of dealing with any unexpected behaviour of the application that is being controlled.
For example, if the application crashes it doesn’t inform the front end that launched it. If you have tried to use any of these facilities you will know what the pitfalls are.
What is really needed is some way to take a .EXE application and run it in a much more controlled way – to put it in a box. From looking at the documentation you might conclude that .NET had little to offer that is new in this area.
It does have all of the commands familiar to VB programmers within the Visual Basic namespace - they just don’t do any better than their VB 6 equivalents. However the good news is that .NET does have the facilities to let you run and keep control of any application you care to work with – but it is in the Diagnostics namespace so you might well miss it.
As an example of how to "box" a standard .EXE or application engine, the LAME MP3 encoder is ideal. It is an open source MP3 encoder in the form of a .EXE file that accepts command line parameters and will convert wav files to mp3. The only problem with it is its name - why do so many open source projects have embarrassingly silly names?
The main LAME web site is:
but it only supplies source code that has to be compiled. If you want a pre-compiled version then look at the links page at the LAME website or try:
All you need is a copy of Lame.exe in the project directory but you will also find some limited documentation included with the distribution.
Once you have the LAME encoder installed, start a new C# Windows project (you can achieve the same results using VB .NET or C++) and add three buttons - Browse, Encode and Exit - a textbox and a multi-line textbox. Enable scroll bars on the multi-line textbox.
You can create a more interesting user interface later.
The first command button, Browse, simply allows the user to browse to a suitable wav file to encode. This is straightforward C# and Framework code:
private void button1_Click(object sender, EventArgs e)
{
    OpenFileDialog Dialog1 = new OpenFileDialog();
    Dialog1.InitialDirectory = "c:\\";
    Dialog1.Filter = "Wav files |*.wav|All files (*.*)|*.*";
    Dialog1.FilterIndex = 1;
    Dialog1.RestoreDirectory = false;

    if (Dialog1.ShowDialog() == DialogResult.OK)
    {
        textBox1.Text = Dialog1.FileName;
    };
}
Hopefully at this point the user has selected a suitable audio file that needs to be MP3 encoded. The full file name, including the path, is stored in the textbox.
The user can change the selected file as many times as they want but sooner or later they are going to click the Encode Now button and this starts the conversion to MP3.
There are a lot of options that can be set on the LAME command line but the simplest is to use one of the “presets”:
--preset medium
--preset standard
--preset extreme
For simplicity we will create a “medium” compression MP3 for which the command line is:
Lame --preset medium inputfile
If you build a program using these arguments you will discover that interpreting the output is a little complicated.
The reason is that LAME uses control characters to create a histogram display in the command console. Once you have seen how output from LAME is collected you can modify the program to display the histogram, or a high resolution version if you want to. For now the simplest solution is to turn off the histogram by adding the –nohist option.
So, the command line we want to use is:
Lame --preset medium --nohist inputfile
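The Process and ProcessStartInfo classes in the System.Diagnostics namespace are the facilities alluded to earlier for running the encoder under full program control. A minimal sketch of launching LAME with the command line above (the button handler and textbox names are assumptions, and error handling is omitted):

```csharp
using System.Diagnostics;

private void button2_Click(object sender, EventArgs e)
{
    ProcessStartInfo psi = new ProcessStartInfo();
    psi.FileName = "Lame.exe";
    psi.Arguments = "--preset medium --nohist \"" + textBox1.Text + "\"";
    psi.UseShellExecute = false;        // required to redirect output streams
    psi.RedirectStandardError = true;   // LAME writes its progress to stderr
    psi.CreateNoWindow = true;

    Process encoder = Process.Start(psi);
    textBox2.Text = encoder.StandardError.ReadToEnd();  // collect LAME's output
    encoder.WaitForExit();
}
```

Because the Process object is retained, the front end can check ExitCode afterwards or kill the encoder if it hangs, which is exactly the control the older Shell-style facilities lack.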
|
https://www.i-programmer.info/projects/38/2033-net-mp3.html
|
CC-MAIN-2019-13
|
refinedweb
| 787
| 62.98
|
DataReaders and Schema Information
Schema information is information about the structure of your data. It includes everything from column data types to table relations.
Schema information becomes extremely important when dealing with the ADO.NET DataSet, as you'll learn in the following chapters. However, even if you aren't using the DataSet, you may want to retrieve some sort of schema information from a data source. With ADO.NET, you have two choices: you can use the DataReader.GetSchemaTable() method to retrieve schema information about a specific query, or you can explicitly request a schema table from the data source.
Retrieving Schema Information for a Query
As long as a DataReader is open, you can invoke its GetSchemaTable() method to return a DataTable object with the schema information for the result set. This DataTable will contain one row for each column in the result set. Each row will contain a series of fields with column information, including the data type, column name, and so on.
Example 5-6 shows code to retrieve schema information for a simple query.
// GetSchema.cs - Retrieves a schema table for a query
using System;
using System.Data;
using System.Data.SqlClient;

public class GetSchema
{
    public static void Main()
    {
        string connectionString = "Data Source=localhost;" +
            "Initial Catalog=Northwind;Integrated Security=SSPI";
        string SQL = "SELECT * FROM CUSTOMERS";

        // Create ADO.NET objects.
        SqlConnection ...
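The listing is cut short by the excerpt, but the underlying idea — ask an open reader or cursor for per-column metadata — exists in most data APIs. A hedged analogue using Python's built-in sqlite3 module, whose DB-API cursor exposes column metadata via cursor.description (this is my illustration, not the book's code):

```python
import sqlite3

# In-memory database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur = conn.execute("SELECT * FROM customers")

# cursor.description holds one 7-tuple per result column;
# the first element of each tuple is the column name (DB-API 2.0).
columns = [d[0] for d in cur.description]
print(columns)  # → ['id', 'name']
```

Like GetSchemaTable( ), the metadata describes the specific query's result set, not the whole table.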
|
https://www.oreilly.com/library/view/adonet-in-a/0596003617/ch05s04.html
|
Use Clojure to write OpenWhisk actions, Part 3
Learn how by developing an inventory control system
In the previous two tutorials, you learned how to write a basic OpenWhisk
application using Clojure—a functional programming language based
on Lisp—to create actions for OpenWhisk applications. In this
tutorial, I’ll wrap up the series by showing you how to improve any such
application. First, you’ll learn how to support arguments that include
double quotes. Then I’ll show you how to use a permanent database
(Cloudant) instead of a variable to store the information.
What you’ll need to build your
application
This tutorial builds upon information from the first two tutorials in the
“Use Clojure to write OpenWhisk actions” series, Part 1, Write clear, concise code for OpenWhisk using this Lisp
dialect and Part 2, Connect your Clojure OpenWhisk actions into useful
sequences, so I recommend reading those first. In addition, you will need:
- A basic knowledge of OpenWhisk and JavaScript (Clojure is optional;
the tutorial explains what you need when you need it)
- A free Bluemix account (Sign up
here)
“Functional programming encourages you to isolate side
effects and separate them from the business logic. This results in
applications that are more modular, easier to test, and easier to
debug.”
Parameters that include quotes
In Part 1, I introduced the main.js JavaScript wrapper that passes the parameters through to the Clojure code.
That script uses a very simplistic solution to getting the parameters to
Clojure:
var paramsString = JSON.stringify(params);
paramsString = paramsString.replace(/"/g, '\\"');
…
'(js* "' + paramsString + '")))';
This solution can produce a string such as {"num": 5, "str": "whatever"}. The double backslash (\\) is translated into a single backslash (which is the escape character). The Clojure code that results is (js* "{\"num\": 5, \"str\": \"whatever\"}"). Because js* evaluates the string parameter it gets as JavaScript, this brings us back to the original parameters, {"num": 5, "str": "whatever"}. The problem is that if one of the string parameters already contains a double quote ("), it will be treated exactly the same way as the delimiting quotes, leading to an expression such as {"num": 5, "str": "what"ever"} and syntax errors. In theory, you could solve this problem by using the js/<var name> syntax to access a variable with the parameters, but for some reason this doesn't work in OpenWhisk.
Here in Part 3, I introduce the fixHash function, which iterates over the parameters (including any nested data structures) and looks for strings. In a string, it replaces all double quotes with \\x22. The first backslash escapes the second backslash, so the real value is \x22. This value eventually gets translated into ASCII character 0x22 (decimal 34, which is a double quote), but that happens at a later stage, so the replace method does not modify these characters.
// Fix a hash table so it won't have internal double quotes
var fixHash = function(hash) {
    if (typeof hash === "object") {
        if (hash === null)
            return null;
        if (hash instanceof Array) {
            for (var i = 0; i < hash.length; i++)
                hash[i] = fixHash(hash[i]);
            return hash;
        }
        var keys = Object.keys(hash);
        for (var i = 0; i < keys.length; i++)
            hash[keys[i]] = fixHash(hash[keys[i]]);
        return hash;
    }
    if (typeof hash === "string") {
        return hash.replace(/"/g, '\\x22');
    }
    return hash;
};
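The same walk can be sketched in Python: dicts and lists stand in for JavaScript objects and arrays, and the literal \x22 replacement mirrors the article's escaping trick (this is my illustration, not part of the tutorial's code):

```python
def fix_hash(value):
    """Recursively replace double quotes in strings with the
    escape sequence \\x22, leaving other values untouched."""
    if isinstance(value, dict):
        return {k: fix_hash(v) for k, v in value.items()}
    if isinstance(value, list):
        return [fix_hash(v) for v in value]
    if isinstance(value, str):
        # r'\x22' is the four characters backslash, x, 2, 2
        return value.replace('"', r'\x22')
    return value

params = {"num": 5, "str": 'what"ever', "tags": ['a"b']}
print(fix_hash(params))
```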
Save to a database
An inventory management system that resets to the initial value when not in
use for a few minutes isn’t very useful. Therefore, your next step is to
set up an object storage instance to store the database values (the
dbase variable in the
inventory_dbase
action):
- In the Bluemix console, click on the Menu icon and go to
Services > Data & Analytics.
- Click Create Data & Analytics service and select
Cloudant NoSQL DB.
- Name the service “OpenWhisk-Inventory-Storage” and click
Create.
- When the service is created, open it and click Service
credentials > New credential.
- Name the new credential “Inventory-App” and click
Add.
- Click View credentials and copy the credential into a
text file.
- Select Manage > Launch.
- Click on the database icon and Create Database.
- Name the database “openwhisk_inventory.”
- Click the plus icon in the All Documents row and
select New Doc.
- Copy the contents of dbase.json into the text area and click Create
Document.
There are two ways to combine the Cloudant database with the application.
You can either add Cloudant actions to the sequences, or modify the
inventory_dbase action. I chose the second solution, because
it lets me change a single action (because all the database work is
centralized there).
- Add the npm library
for Cloudant to the dependencies in package.json, and update
the inventory_dbase action’s action.cljs file:
(ns action.core)

(def cloudant-fun (js/require "cloudant"))
(def cloudant (cloudant-fun "url goes here"))
(def mydb (.use (aget cloudant "db") "openwhisk_inventory"))

; Process an action with its parameters and the existing database;
; return the result of the action
(defn processDB [action dbase data]
  (case action
    "getAll" {"data" dbase}
    "getAvailable" {"data" (into {} (filter #(> (nth % 1) 0) dbase))}
    "processCorrection" (do
                          (def dbaseNew (into dbase data))
                          {"data" dbaseNew})
    "processPurchase" (do
                        (def dbaseNew (merge-with #(- %1 %2) dbase data))
                        {"data" dbaseNew})
    "processReorder" (do
                       (def dbaseNew (merge-with #(+ (- %1 0) (- %2 0)) dbase data))
                       {"data" dbaseNew})
    {"error" "Unknown action"}
  ) ; end of case
) ; end of processDB

(defn cljsMain [params]
  (let [cljParams (js->clj params)
        action (get cljParams "action")
        data (get cljParams "data")
        updateNeeded (or (= action "processReorder")
                         (= action "processPurchase")
                         (= action "processCorrection"))]
    ; Because promise-resolve is here, it can reference action
    (defn promise-resolve [resolve param]
      (let [dbaseJS (aget param "dbase")
            dbaseOld (js->clj dbaseJS)
            result (processDB action dbaseOld data)
            rev (aget param "_rev")]
        (if updateNeeded
          (.insert mydb
                   (clj->js {"dbase" (get result "data"), "_id" "dbase", "_rev" rev})
                   #(do (prn result)
                        (prn (get result "data"))
                        (resolve (clj->js result))))
          (resolve (clj->js result)))
      ) ; end of let
    ) ; end of defn promise-resolve

    (defn promise-func [resolve reject]
      (.get mydb "dbase" #(promise-resolve resolve %2)))

    (js/Promise. promise-func)
  ) ; end of let
) ; end of cljsMain
Let’s look at some of the more interesting parts of action.cljs.
You need to get the database. In JavaScript, you might code it this way:
var cloudant_fun = require("cloudant");
var cloudant = cloudant_fun(<<<URL>>>);
var mydb = cloudant.db.use("openwhisk_inventory");
The Clojure version of the same code is similar, with a few differences.
First,
require is a JavaScript function. To access it, you
need to qualify it with the
js namespace (line 3 in the
listing in step 12 above):
(def cloudant-fun (js/require "cloudant"))
The next line (line 5) is pretty standard. The URL is a URL parameter from
the credentials for the database:
(def cloudant (cloudant-fun <<URL GOES HERE>>))
To get a member from a JavaScript object, use aget. To use a method of an object, you can use (.<method> <object> <other parameters>). See line 7:
(def mydb (.use (aget cloudant "db") "openwhisk_inventory"))
Reading from the database and writing to it are both asynchronous actions. This means that you cannot simply run them and return the result to the caller (the OpenWhisk system). Instead, you need to return a Promise object. The Promise constructor accepts a single parameter: a function to be called to start the process whose result is needed. The syntax to call the constructor for a JavaScript object from Clojure is (js/<object name>. <parameters>). See line 75:
(js/Promise. promise-func)
The function that is provided to the Promise object’s constructor is
promise-func. It receives two arguments. One is the function
to call in case of success (one argument, the result of the action). The
other is the function to call in the case of failure (also one argument,
the error object). In this case, the function gets the
dbase
document and then calls
promise-resolve with the success
function and the document. The first parameter to the anonymous function
(
#(promise-resolve resolve %2)) is the error, if any. Because
this is a demonstration program, we ignore errors for the sake of
simplicity. See lines 71-73:
(defn promise-func [resolve reject]
  (.get mydb "dbase" #(promise-resolve resolve %2)))
Both promise-func and promise-resolve are defined inside cljsMain. The reason is that promise-resolve needs the value of the action parameter. By defining these functions inside cljsMain, you can use just that local variable without having to drag it along for the entire calling chain. See lines 52-55:
(defn promise-resolve [resolve param]
  (let [dbaseJS (aget param "dbase")
        dbaseOld (js->clj dbaseJS)
The function that either gets the data or modifies it is
processDB. This function encapsulates the functionality
explained in Part 1. See line 56:
result (processDB action dbaseOld data)
The revision (
_rev) is necessary to update the database
because of the algorithm that Cloudant uses to ensure state consistency.
When you read a document, you get the current revision
(
_rev). When you write the updated version, you have to
provide Cloudant with the revision you are updating. In the meantime, if another process
has updated the document, the versions won’t match and
the update will fail. See line 57:
rev (aget param "_rev")
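The _rev handshake is optimistic concurrency control in miniature: quote the revision you read, and the write is rejected if someone got there first. A toy sketch in Python (an in-memory class standing in for Cloudant; the class and method names are mine, not the Cloudant client API):

```python
class ConflictError(Exception):
    pass

class ToyStore:
    """Minimal MVCC-style store: every update must quote the
    revision it read, or the write is rejected."""
    def __init__(self, doc):
        self._doc = dict(doc)
        self._rev = 1

    def get(self):
        return dict(self._doc), self._rev

    def insert(self, doc, rev):
        if rev != self._rev:  # someone else updated in the meantime
            raise ConflictError("revision mismatch")
        self._doc = dict(doc)
        self._rev += 1
        return self._rev

store = ToyStore({"widgets": 5})
doc, rev = store.get()
doc["widgets"] -= 2
store.insert(doc, rev)         # succeeds: rev matches
try:
    store.insert(doc, rev)     # stale rev: rejected
except ConflictError as e:
    print("update failed:", e)
```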
If the data has been modified, update Cloudant. See line 59:
(if updateNeeded
Provide the database with the new data, the document name, and the revision
you are updating. See lines 60-62:
(.insert mydb (clj->js {"dbase" (get result "data"), "_id" "dbase", "_rev" rev})
When the update is done (we just assume it is successful; this is an educational sample, not production code), run the function below. Notice the mechanism for adding debugging printouts: use do to evaluate several expressions, have any number of prn function calls to print the information you need, and put the expression whose value you actually want last. In this case, you call the resolve function that you got through the Promise object. Because this function is in JavaScript, it needs to receive a JavaScript object, not a Clojure one, so you use clj->js. See lines 63-64:
#(do (prn result)
     (prn (get result "data"))
     (resolve (clj->js result))) )
If there is no need to update Cloudant, just run the
resolve
function. See lines 65-66:
(resolve (clj->js result)) )
Compile the Clojure code
Currently, we send the Clojure code to OpenWhisk and compile it there whenever the Node.js application restarts. Another option is to compile the Clojure to JavaScript locally once and send the already compiled version. If you are interested, you can see how it works here. However, this method does not improve performance significantly.
As you can see in the screen capture below, even with compiled
ClojureScript code, the first invocation of the action still takes a lot
more time than subsequent ones.
The reason for the slowness is that the actual compilation is not very
resource intensive. The creation of the Clojure environment is the
resource-intensive part of using Clojure, and that is
required even if the Clojure code itself is compiled into JavaScript. The
JavaScript that’s produced by the compiler uses some heavy libraries.
Conclusion
This concludes the series on OpenWhisk actions in Clojure. I hope I have
shown you some of the advantages to using functional programming for
functions as a service (FaaS). At the end of the day, nearly every application
needs to use side effects at some point, but functional programming
encourages you to isolate those side effects and separate them from the
business logic. This results in applications that are more modular, easier
to test, and easier to debug. By separating the application logic into
actions and sequences, it is easy to make the large-scale structure of the
application clearer, and write unit tests to run as REST calls.
|
https://nikolanews.com/improve-your-openwhisk-clojure-applications/
|
Java Programming
C Programming
Conversion of upper case char in a string to lower case and vice versa?
Related Questions
Asked in .NET Programming, C Programming
Write a C program to convert an upper case word to lower case and vice-versa.?
You would use the ToUpper and ToLower functions:

string name = "XVengeance";
string upperName = name.ToUpper();
string lowerName = name.ToLower();
Console.WriteLine(upperName);
Console.WriteLine(lowerName);
Console.ReadLine();

I don't think I'm supposed to do your homework for you, but this code should get you started on making a more dynamic program.
Asked in C Programming
Write a program convert a string from uppercase to lower case and vice-versa without using string tolower and string toupper commands in TCL?
Asked in C Programming
C programming to convert upper case string into lower case string and vice versa without using builtin function?
#include <stdio.h>

/* Toggle the case of a single character without library functions */
char foo(char a)
{
    if (a >= 'a' && a <= 'z')    /* lower -> upper */
        return a - 32;
    if (a >= 'A' && a <= 'Z')    /* upper -> lower */
        return a + 32;
    return a;                    /* leave everything else alone */
}

int main(void)
{
    char a[80], b[80];
    int i = 0;

    printf("Please enter a string: ");
    if (fgets(a, sizeof a, stdin) == NULL)
        return 1;

    while (a[i] != '\0' && a[i] != '\n') {
        b[i] = foo(a[i]);
        i++;
    }
    b[i] = '\0';

    printf("The case-toggled string is: %s\n", b);
    return 0;
}
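The same 32-place ASCII gap trick, sketched in Python without the built-in case functions (my illustration, not one of the original answers):

```python
def toggle_case(s: str) -> str:
    """Swap upper/lower case using ASCII arithmetic only:
    the two cases are exactly 32 code points apart."""
    out = []
    for ch in s:
        if 'a' <= ch <= 'z':
            out.append(chr(ord(ch) - 32))
        elif 'A' <= ch <= 'Z':
            out.append(chr(ord(ch) + 32))
        else:
            out.append(ch)
    return ''.join(out)

print(toggle_case("Hello, World!"))  # → hELLO, wORLD!
```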
Asked in Musical Instruments
How can you lower the pitch of a note on a quitar by altering then tension of the string?
The pitch is determined by by the frequency in which the string is swinging, which, in turn, is determined by the speed with which a wave can travel through the string. The higher the tension in the string is, the easier it is for a wave to travel through it, and if the speed of the wave increase, so will the frequency, and by default the pitch of the note. And vice versa. If I remember my physics correctly :)
Asked in Tennis
How do you test string tension of a tennis racket?
For ease, you can purchase a string meter or any other string tension meter that measures it for you. All you need to do is clip it between the strings and it will read the number in the display. If not, you can always use your hand and lightly tap the string bed to get an rough est. The harder it is the higher the tension and vice versa.
Asked in String Instruments, Violin
Is a viola or violin easier to play?
The violin and the viola are very similar in playing. The only difference is that the violin has a high E string and are easier to hear while the viola has a low C string and are lower in sound than the violin. It also depends on preferences; which instrument that you prefer better. If you are going to be switching from a violin to a viola, or vise versa, the playing is pretty much the same, other than getting used to either the low C string or the high E string. The other difference is that violins and violas have different notes and fingerings, as well as being in different 'Clefs'.
Asked in Human Anatomy and Physiology
What is the importance of Wilson curve?
The Curve of Wilson is characterized by the buccal cusps of the lower posterior teeth being higher than the lingual cusps and vice versa for the upper arch poster teeth. This delineates a curved medio-lateral shape which is designed to enhance the chewing cycle. As the tongue bring the food bolus into the oral cavity proper, the increased height of the lower buccal cusps and the complementary form of the upper posterior teeth cusps allow the food to remain in the chewing area and not spill out laterally.
Asked in Java Programming, C Programming, The Difference Between
Whats the difference between character and string if only one character is there?
Nothing Most computer languages have a string-termination character that is "invisible" to people. So, while a character variable may contain 'A', and a string variable appears to be simply "A", the string variable will actually have two characters. This difference is often masked by compilers and languages, but it exists nonetheless, and it is sloppy programming practice to compare a string to a character (or vice versa) without doing a type cast.
Asked in Cable Internet, Computer Networking, Internet
Explain the role of modem in internet?
A modem stands for modulator-demodulator. A modem basically converts analog signals to digital signals or vice versa. This conversion is required because the subscriber line through which you are getting your internet carries analog signals, but a computer understands only digital signals. So the conversion to digital signals is required. It also has to be changed back to analog signals because analog signals can go longer distances through your subscriber line. abhishek jaiswal IEM MCA
Asked in Headlights Tail and Brake Lights, Jeep Grand Cherokee Limited, Ford Explorer XLS
Why would your lower brake lights not work when the upper brake light does work?
By the nature of the question I assume that you are referring to the tail lights and brake lights. My Explorer has four bulbs. Tail, brake turn and backup. They are all in separate compartments of the light assembly. So it could be that you have just burned out one bulb. If you are referring to the upper brake light in the rear window or at the top of the tailgate, same thing. One or more bulbs are burned out. I see people driving all the time with both lower lights burned out and the top working or visa versa.
|
https://www.answers.com/Q/Convertion_of_upper_case_char_in_a_string_to_lower_case_and_vice_versa
|
Hello,
I know that <algorithm> has a shuffle function. Someone told me when I was making my program in a chat room, lol. But I wanted to make it anyway to see if I could do it. Please leave feedback and tell me what you think, and improvements and stuff. Thanks. Bye bye
Code:
#include <iostream>
#include <string>     // std::string
#include <cstdlib>    // rand, srand, system
#include <time.h>
using namespace std;

void Scramble(string &s){
    int scram_num = s.size() * s.size();
    srand(time(NULL));
    for(int a = 0; a < scram_num; a++){
        int i = rand() % s.size();
        char c = s[i];
        s.erase(i, 1);
        // size()+1 so the character can also be re-inserted at the end
        int ii = rand() % (s.size() + 1);
        s.insert(ii, 1, c);
    }
}

int main(){
    string s = "Country";
    Scramble(s);
    for(unsigned int i = 0; i < s.size(); i++){
        cout << s[i];
    }
    system("PAUSE");
    return 0;
}
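For comparison, the standard unbiased way to scramble a sequence is the Fisher-Yates shuffle, which is essentially what the <algorithm> shuffle functions implement. A short Python sketch:

```python
import random

def fisher_yates(s: str) -> str:
    """Shuffle a string uniformly: walk from the end, swapping each
    position with a random position at or before it."""
    chars = list(s)
    for i in range(len(chars) - 1, 0, -1):
        j = random.randint(0, i)
        chars[i], chars[j] = chars[j], chars[i]
    return ''.join(chars)

print(fisher_yates("Country"))
```

Unlike repeated erase/insert, this touches each element once and gives every permutation equal probability.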
|
http://cboard.cprogramming.com/cplusplus-programming/62266-string-scrambler.html
|
Instead of saying

% set a apple
apple

we could say

% tcl9var a apple
apple

and instead of saying

% puts "my $a is red"
my apple is red

we could say

% puts "my [a] is red"
my apple is red

and instead of changing the variable a by saying

% set a ${a}tree
appletree

we could say

% a becomes [a]tree
appletree

Larry Smith: The Tinker programming language, which is part of my ctools package, does exactly this. This sort of behavior is characteristic of all TRAC-based languages (Tint and others); only Tcl has been different. I no longer have a website to host ctools, but the Tinker source code in C is less than 700 lines; it could be posted here if anyone is interested in experimenting.

Here is a toy implementation of such behaviour:
set persistentCounter 0
array set persistent {}

proc tcl9var {name value} {
    variable persistentCounter
    variable persistent
    set persistent([incr persistentCounter]) $value
    uplevel 1 [subst {
        proc $name {{cmd get} args}\
            {eval ::handleVar $persistentCounter \$cmd \$args}
    }]
    set persistent($persistentCounter)
}

proc handleVar {count cmd args} {
    variable persistent
    switch -exact -- $cmd {
        get { set persistent($count) }
        set - = - <- - becomes { eval set persistent($count) $args }
        append - lappend { eval $cmd persistent($count) $args }
        + - - - * - / - % - ** - < - <= - > - >= - == - != {
            expr "$persistent($count) $cmd $args"
        }
        default { eval $cmd $persistent($count) $args }
    }
}

Possibilities -- see source of proc handleVar above. Variables behave like objects.

Consequences -- closures must handle namespaces of procedures instead of variables. This differs from the current behaviour, where procedures defined globally and procedures defined in the current namespace are visible.

Incompatibilities with set -- variables and procedures use the same namespace, so it is not possible to give a var the name of an existing procedure. that's a pretty HUGE incompatibility!

Summary -- just a day-dream. (What's wrong with day-dreams? Who said, man will never fly?)
LES By calling it tcl9var, the summary rather becomes scaring the bejesus out of Tcl users who hope this will never replace set in new versions of Tcl.
LV The above is rather anti-tcl, in my mind. The reason why is this- you have three different ways of addressing the object a
% tcl9var a apple
% puts "my [a] is red"
% a becomes [a]tree

Why is this all necessary? What is it that is actually being attempted? Is the idea to make variables objects with methods? If so, then let's call the initial command createvar or varobject or something indicative of the action being taken. Then there is the [a] vs $a usage. The latter is more common to people than the first. If there are going to be more methods than becomes, then I suppose that it is something that people would painfully become adjusted to. But please, give more details on the benefits you envision. Because right now, I see no benefits, but do see detriments - I don't care to end up typing a lot more characters for no benefit...

wdb Your Line 1 describes creation. Line 2 describes usage. Line 3 describes change of an existing value. That is why it is necessary. Usage of [a] vs. $a -- if the dollar sign can be replaced by brackets (truly undispensable), then we have one less cryptic special letter. You say, $ is more common -- if you mean, more used, alright. But more common -- no, I prefer fewer rules and less syntax. Your name suggestions: I do not mind. Create better names than my ones. But I had no object orientation in mind. When appropriate, I use OO, but I am not at all an OO fanatic. When regarding the truly different opinions on this page, I think that my proposal appears too ... say, radical or dramatic. I am not willed to rule the world. If you think it's a doubtful proposal, just say no to it. That's ok. But without such a proposal, the world would be quite boring, wouldn't it?
NEM Added a -exact to the switch, to avoid treating * etc as glob characters. Nicely done -- this is exactly the sort of thing I'd like to see. The main problem, as you state in your "consequences" section, is that procs have local vars, but not local commands. I think Lars H may have suggested proc-local commands at one point, and to my shame I think I thought they were a bad idea!- RS From 8.5 we can have local procs, sort of:
proc f y {
    set square {x {expr {$x*$x}}}
    apply $square $y
}

The lambda named square is cleaned up on return. But this way of using a variable to hold a lambda runs of course in the opposite direction of using procs to replace variables :^)
GN here is a small XOTcl implementation:
Class tcl9var
tcl9var instproc init {{value ""}} {my set _ $value}
tcl9var instproc defaultmethod {} {my set _}
tcl9var instproc unknown {args} {my set _ $args}

tcl9var a 123
puts [a]
puts [a 4711]
puts my-[a]-value
puts my-[a 1 2 3]-value

Depending on the type of the variable, different methods can be provided (e.g. for lists, numbers, whatever).
MJ: This is similar to the slot concept in Self. The Self extension uses slots for both methods and values.
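The variables-as-commands idea maps naturally onto callable objects in other languages. A toy Python sketch (the class, its commands, and their names are mine, chosen to echo the tcl9var dispatch above):

```python
class Var:
    """A 'variable' you call instead of dereference:
    v() reads it, v('becomes', x) writes it."""
    def __init__(self, value=None):
        self._value = value

    def __call__(self, cmd="get", *args):
        if cmd == "get":
            return self._value
        if cmd in ("set", "=", "becomes"):
            self._value = args[0]
            return self._value
        if cmd == "append":
            self._value += args[0]
            return self._value
        raise ValueError(f"unknown command: {cmd}")

a = Var("apple")
print(f"my {a()} is red")       # → my apple is red
a("becomes", a() + "tree")
print(a())                      # → appletree
```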
|
http://wiki.tcl.tk/16929
|
WEB SITE
WEB SITE Can anyone please give me some suggestions that I can implement in my site (some latest technology),
like theme selection in orkut
like forgot password feature..
or any more features
Sorry but its
struts hi
Before asking question, i would like to thank you for clarifying my doubts.this site help me a lot to learn more & more technologies like servlets, jsp,and struts.
i am doing one struts application where i
web site creation
web site creation Hi All ,
i want to make a web site , but i am using Oracle data base for my application .
any web hosting site giving space for that with minimum cost .
please let me know which site offering all
Can i insert image into struts text field
Can i insert image into struts text field please tell me can i insert image into text.
HTML FAQ site
HTML FAQ site For a school project need simple website. It is an FAQ site which uses natural language processing to get the answers. Natural... of answers or the actual answer should be generated. As close as possible.
I want detail information about switchaction? - Struts
I want detail information about switch action? What is switch action in Java? I want detail information about SwitchAction
hello there i need help
i am a beginner, and aside from that i am really eager to learn java please help...hello there i need help : i need to do a program like... OPtions:
once i have chosen an option then i should proceed here
if i choose b
I/O Java
System.out.println(" Error in Concat:"+e);
}
}
}
I am not really sure why...I/O Java import java.io.File;
import java.io.FileNotFoundException... writer = new PrintWriter(outFile);
for(int i=0;i<inputFiles.length;i
file uploads to my web site
file uploads to my web site How can I allow file uploads to my web site internationalisation - Struts
struts internationalisation hi friends
i am doing struts iinternationalistaion in the site
i followed the procedure as mentioned but i am not getting
What you Really Need to know about Fashion
What you Really Need to know about Fashion
You might think... know about fashion, even if you are not really
interested in following... to the
masses. Personally, I have to see fashion as the way people decide
Struts
Struts Hi i am new to struts. I don't know how to create struts please in eclipse help me
Struts Books
-Controller (MVC) design
paradigm. Want to learn Struts and want get started really quickly? Get Jakarta Struts Live for free, written by Rick Hightower... of design are deeply rooted. Struts uses the Model-View-Controller design pattern in struts I want two struts.xml files. Where u can specify that xml files location and which tag u specified Hi,
i am writing a struts application
first of all i am takeing one registration form enter the data into fileld that
will be saved perfectly.
but now i can apply some validations to that entry details
but with out
MVC - Struts
MVC Can any one help me in good design of an struts MVC....tell me any e-book so that i can download from site
i/o
i/o
Write a Java program to do the following:
a. Write into file the following information regarding the marks for 10 students in a test
i. Matric no
ii. Marks for question 1
iii. Marks for question 2
iv. Marks
How do I compile the registration form?
How do I compile the registration form? How do I compile the registration form as stated at the bottom of the following page (URL is below). Do I need ANT? If so, please give instructions. I am a student.
http
Struts - Framework
/struts/". Its a very good site to learn struts.
You dont need to be expert...Struts Good day to you Sir/madam,
How can i start struts application ?
Before that what kind of things necessary
how do i grab the url in php?
how do i grab the url in php? I want to grab the 'entire' url, including any special characters like & and #, from a site. I then want...
# = %23
I tried URL encoding, but it only grabs the part until the #.
should i write all the beans in the tag of struts-config.xml
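The asker's problem with # is real: the fragment after # is never sent to the server, so server-side code (PHP here) cannot see it. The usual workaround is to percent-encode the whole URL before transmitting it as a parameter value. A small sketch of the idea (in Python, since this is concept illustration, not the asker's PHP):

```python
from urllib.parse import quote, unquote

url = "http://example.com/page?x=1&y=2#section"

# The browser never sends '#section' to the server; to transmit it,
# percent-encode the whole URL as a parameter value first.
encoded = quote(url, safe="")
print(encoded)                     # '#' becomes %23, '&' becomes %26
print(unquote(encoded) == url)     # round-trips losslessly
```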
Parameter month I Report - Java Beginners
Parameter month I Report hy,
I want to ask about how to make parameter in I Report, parameter is month from date. How to view data from i report... like Java/JSP/Servlet/JSF/Struts etc ...
Thanks
How can i use Facebook connect button
How can i use Facebook connect button Please to meet you all guys
I wonder how can i use this Connect to facebook for me to post in a particular...
How can i apply this kind of comment with "Connect
Struts
Struts I want to create tiles programme using struts1.3.8 but i got jasper exception help me
Struts ui tags example What is UI Tags in Strus? I am looking for a struts ui tags example. Thanks
i got an error while compile this program manually.
mapping.findForward("errors.jsp");
}
}
i set both servlet,struts jar files and i got an error in saveErrors()
error
Heading
cannot find...i got an error while compile this program manually. import
struts <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>...
}/...://
I hope that, this link will help you
struts - Struts
struts I want to know clear steps explanation of struts flow..please explain the flow clearly
Struts - Struts
Struts Hello Experts,
How can i comapare
in jsp scriptlet in if conditions like
struts
struts how to make one jsp page with two actions..ie i need to provide two buttons in one jsp page with two different actions without redirecting to any other
Java Bean tags in struts 2 i need the reference of bean tags in struts 2. Thanks! Hello,Here is example of bean tags in struts 2: Struts 2 UI
Struts Hi,
I m getting Error when runing struts application.
i have already define path in web.xml
i m sending --
ActionServlet...
/WEB-INF/struts-config.xml
1
Not sure what I am missing ? Any ideas?
Not sure what I am missing ? Any ideas? import java.util.*;
public...)
{
if(str.length()==0)
return false;
for(int i=0;i<str.length();i++)
if (c==str.charAt(i));
return true
struts - Struts
struts My struts application runs differently in Firefox and IE...
What's the problem?
I initially viewed it in Firefox.It was OK...
But in IE the o/p was different
struts Hi,
I am new to Struts. Please send me the sample code for login... - Struts
Struts Hello
I would like to make a registration form in Struts in which....
Struts1/Struts2
For more information on Struts, visit:
http://www.roseindia.net/tutorialhelp/comment/3413
Try 3 layers of fiberglass drywall mesh tape. It works great, but it's not very attractive unless painted or backed over the tape with linen.
First, yeah, safety. If/when the bow fails, it's not good to have pieces flung about. Oak shrapnel is no fun.
Second is to reduce string follow. Oak is a good, solid wood which helps resist compression in the belly (string side) of the bow. However it's missing a natural backing. A good, elastic backing is needed to make sure the bow returns to its full upright position. Over time the belly will compress and you will see more and more arc in the bow when unstrung. This is string follow. A good, elastic backing reduces this.
Third, the right backing could also give a little more speed to the bow. Again, we go back to the elastic nature of a good backing. The best modern backing is fiberglass. Put a couple of coats of fiberglass on the back and you'll have a very snappy bow.
The string is 2 inches shorter than the nock to nock length. Yes you can use twisting to adjust the length.
I just bought a cheap 67" string off the net.
Hours of finish sanding and rubbing down with steel wool, and I'm at the point where I'm ready to apply the finish. I believe I'm going to use tung oil and a rub down of raw bees wax melted into cheesecloth.
(Melt the wax in a double boiler (a pot with water in it, and a smaller pot that fits inside the first one; you can use a saucepan and a steel mixing bowl). Fold cheesecloth into a nice palm-sized square, 4 maybe 5 layers thick, or a half inch of folds, then soak the cheesecloth in the wax and saturate it as much as possible; then let it cool off and solidify. Now you've got a nice block of reinforced bees wax you can use to rub down your bow. It's great because it's nontoxic (so if you need to sand again, you're not putting toxic wax dust into the air) and it comes off easily with some paint thinner or wood cleaner.)
This information is not quite correct. Applied properly, a rawhide backing will also improve a bow's performance (this is how many Asiatic bows were given their reflex profiles), as will a backing made up of any fiber cordage that has a decent amount of stretch and return (such as silk). It would be accurate to say, however, that sinew is probably the appropriate backing for this type of bow in most situations and is the hardest one to screw up.
http://www.instructables.com/id/Red-Oak-Pyramid-Bow/step10/Addendum/
Better CSS with JavaScript
Learn how to modernize your CSS build using css-js-loader.
Many thanks to Izaak Schroeder for Webpack wizardry and to Mike Gazdag and the rest of the team at MetaLab for their knowledge and feedback. Also, all credit to Pete Hunt for his initial work on this problem.
At MetaLab, we are continuously striving to improve our FE development process, of which the CSS pipeline is an integral part. While tools like Sass and Stylus offer definite advantages over vanilla CSS, we have found that these tools can cause unwanted friction in a modern component-based project. We want the features of a preprocessor: variables, mixins, and functions, but we dislike how preprocessors give CSS its own context, isolated from the rest of the build process. A separate CSS build that is isolated from a project’s main JavaScript build is not an option when every component is an independent module that encapsulates its own styles. We want the code that defines our styles to share the tools and configuration with the rest of our project. We need our CSS build to run in a unified context with the JavaScript build.
Our solution is to define all of our styles in .css.js (CSS JS) modules by leveraging a variety of community tools composed through Webpack.
🚨 If you just want to see code, here’s a complete example project on github.
Background
The idea of “writing CSS with JavaScript” is far from a new concept. There is an excellent 2014 talk by Vjeux that serves as a fantastic introduction.
Fast forward a few years and there are numerous options for “CSS in JS” build systems. With no shortage of fresh ideas in this space, there are a lot of differences between the available solutions. Some solutions target inline element styles, some create <style> tags or inject inline style attributes at run-time, and some generate static .css files at build-time.
One similarity among all existing approaches is a lack of composability. Each is available as a “complete” solution that takes ownership of the CSS build process. This is in contrast to our approach, which produces a build process by composing existing tools and libraries. As a result, it offers an excellent balance of simplicity, flexibility, compatibility and performance.
Here’s what it offers:
☑️ Unified JavaScript and CSS codebase and build.
☑️ Shared imports between JavaScript and CSS.
☑️ Feature-parity with CSS preprocessors.
☑️ Works with “pure CSS” tooling (PostCSS, css-split-webpack-plugin).
☑️ Support for full CSS syntax (including @keyframes and @media).
☑️ Locally scoped identifiers (with CSS Modules).
☑️ Target static .css files (with extract-text-webpack-plugin).
☑️ Hot style reloading (with react-hot-loader).
☑️ Compatibility with different projects (not bound to React components).
Getting Started
Our “CSS in JS” solution isn’t a single package that you can add to your project. It’s essentially just a set of config patterns that we’ll refer to as .css.js.
The basic concept:
- CSS styles are defined as plain objects exported from .css.js files.
- At build time, .css.js file exports are converted to CSS markup.
- Converted CSS markup is processed by Webpack css-loader.
To anyone experienced with loading CSS through Webpack, the majority of the required setup will be immediately familiar since it is composed from standard tools. Only the addition of css-js-loader to a standard CSS build is needed in your Webpack config to add support for .css.js files.
Here’s a very basic config example:
// Base Webpack config.
module.exports = {
entry: {
index: 'src/index.js',
},
module: {
loaders: [{
test: /\.css$/,
loaders: ['style-loader', 'css-loader'],
}],
},
};
// Updated CSS JS config.
module.exports = {
entry: {
index: 'src/index.js',
},
module: {
loaders: [{
test: /\.css(\.js)?$/,
loaders: ['style-loader', 'css-loader'],
}, {
test: /\.css\.js$/,
loader: 'css-js-loader',
}],
},
};
Just add css-js-loader below any CSS loaders (Webpack executes loaders in order from bottom to top) and update the test expression of css-loader to optionally match .css.js files. You could of course define entirely separate loader entries for .css and .css.js, but this “hybrid” approach, with single loaders supporting different file types, helps keep your config as minimal and as consistent as possible.
To process .css.js files written as ES6, ensure that any JS loaders are included below the entry for css-js-loader so that they are run first.
module.exports = {
entry: {
index: 'src/index.js',
},
module: {
loaders: [{
test: /\.css(\.js)?$/,
loaders: ['style-loader', 'css-loader'],
}, {
test: /\.css\.js$/,
loader: 'css-js-loader',
}, {
test: /\.js$/,
loader: 'babel-loader',
}],
},
};
Now any file with a .js extension is handled by babel-loader, including .css.js files. Any CSS is handled by css-loader and style-loader regardless of whether it came from a .css or a .css.js file.
Webpack loaders are a composed “series of tubes.” Each file type travels a different path depending on its extension with individual loaders shared among different file types.
The compositional nature of a css-js-loader build allows us to include only the features we need, and allows for an incredible amount of extensibility through community-provided tools.
Defining Styles
As stated above, the basic concept here is that CSS styles are defined as plain objects exported from .css.js files. The default export of each file can contain any number of CSS properties and blocks. The exported object is, in effect, a very basic AST that can represent any CSS language structure.
A simple .css.js file:
export default {
'.blueText': {
color: 'blue',
fontSize: 12,
},
};
Yields:
.blueText {
color: blue;
font-size: 12px;
}
Any string or number property on the exported object represents a CSS property and any nested object value represents a nested CSS block. At-rules such as media queries, animation keyframes, and font-face definitions are supported as nested objects.
Nested CSS structures:
export default {
'.hiddenSmall': {
display: 'none',
},
'@media (min-width: 480px)': {
'.hiddenSmall': {
display: 'block',
},
},
'.spinner > div': {
animation: '3s infinite spin',
},
'@keyframes spin': {
'0%': {
transform: 'rotate(0)',
},
'100%': {
transform: 'rotate(360deg)',
},
},
'@font-face': { ... },
};
A large single object full of string keys isn’t the nicest thing to work with. To provide a cleaner interface for defining static classes using ES6, css-js-loader offers an additional syntax. Named exports alongside the default export are individually parsed as class definitions. This provides a consistent visual continuity between .css.js files and other .js files.
Essentially this:
export const blueText = { ... };
export const greenText = { ... };
export default {
'.redText': { ... },
};
Is an alias for this:
export default {
'.blueText': { ... },
'.greenText': { ... },
'.redText': { ... },
};
CSS Modules
Since css-js-loader returns CSS documents, it is fully compatible with css-loader and CSS Modules. The combination of CSS Modules with .css.js really makes this solution a pleasure to use when styling components.
The “named class export” pattern mentioned above offers a uniquely consistent experience when working with CSS Modules. On importing a .css.js file loaded as a CSS Module into a component, the destructured contents map directly to the named exports of the source .css.js file. Although the process behind the scenes is more complicated, there is a direct continuity between the style and component files.
Style file:
export const base = { ... };
export const inner = { ... };
Component file:
import React from 'react';
import {base, inner} from './component.css.js';
export default () =>
<div className={base}>
<div className={inner}>
...
</div>
</div>;
Furthermore, tools like eslint-plugin-import no longer have to be configured to ignore style imports, and are capable of warning you of misspelled or missing CSS classes.
Writing Maintainable Styles
The main motivation for using css-js-loader over a CSS preprocessor is for access to the JS context.
Here’s a more involved example to illustrate the advantage of a shared context. This hypothetical Button component is defined as its own encapsulated module.
src
├─┬ component
│ └─┬ button
│ ├── button.config.js
│ ├── button.css.js
│ └── button.js
└─┬ util
└── css.util.js
The button.config.js file is accessible to both the button.css.js file and button.js, allowing the component to be highly configurable and maintainable.
Any “mixins” and other shared style utilities are plain JS and are equally useful within components that apply inline styles at runtime. Sharing logic between a CSS and JS build has not been possible before this.
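As a concrete sketch of that idea: a shared style helper is just an ordinary function, usable both in a build-time `.css.js` definition and for runtime inline styles. All names below (`ellipsis`, `base`, `inlineStyle`) are hypothetical, not part of css-js-loader's API.

```javascript
// css.util.js (hypothetical) — a "mixin" is just a function returning style properties.
const ellipsis = () => ({
  overflow: 'hidden',
  textOverflow: 'ellipsis',
  whiteSpace: 'nowrap',
});

// button.css.js equivalent: spread the mixin into a build-time class definition.
const base = {
  padding: 8,
  ...ellipsis(),
};

// The identical helper also works for a runtime inline style in a component.
const inlineStyle = { color: 'blue', ...ellipsis() };

module.exports = { ellipsis, base, inlineStyle };
```

The point is that there is nothing CSS-specific to learn: the same spread operator and module system serve both sides of the build.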
Since css-js-loader is just a Webpack loader that returns CSS, it can be chained with other CSS loaders. And while the loader itself is not capable of transforming non-standard CSS syntax into something understandable by a browser, we can achieve the desired result through loader composition. By adding additional tools like postcss-loader to our build, we can extend the basic functionality that css-js-loader provides.
Adding postcss-loader to the existing loader chain:
module.exports = {
entry: {
index: 'src/index.js',
},
module: {
loaders: [{
test: /\.css(\.js)?$/,
loaders: ['style-loader', 'css-loader'],
}, {
test: /\.css(\.js)?$/,
loader: 'postcss-loader',
options: {
plugins: () => [require('postcss-nesting')],
},
}, {
test: /\.css\.js$/,
loader: 'css-js-loader',
}, {
test: /\.js$/,
exclude: /node_modules/,
loader: 'babel-loader',
}],
},
};
The postcss-nesting plugin allows us to use nested selectors:
export default {
'.container': {
display: 'flex',
'& .button': {
cursor: 'pointer',
'&:hover': {
color: 'blue',
},
},
},
};
Which generates the expected:
.container {
display: flex;
}
.container .button {
cursor: pointer;
}
.container .button:hover {
color: blue;
}
The other advertised features, including hot reloading and static .css file creation, are also provided by additional loaders and plugins. Their usage is unchanged when combined with css-js-loader, so I won’t go into depth here on how to set them up.
Conclusiony Type Thing
A lot of the examples, particularly the Webpack config samples, are simplified beyond real-world usability. There is a lot of nuance that must go into any production-ready build config that is far beyond the scope of this article. Success with these tools will come most easily to those who are familiar with setting up Webpack, and in particular, with CSS loaders. Just about any problem encountered in a css-js-loader Webpack config would be common to a regular css-loader config. Tutorials, examples and StackOverflow questions not directly pertaining to css-js-loader are still a very relevant resource.
I’ve set up a slightly more complete example that makes use of Webpack config partials to target both production and development builds.
Additional Resources
- Learn how to create static .css files with extract-text-webpack-plugin.
- Set up hot module reloading with React Hot Loader.
- Polished (not yet released) is a set of utilities for writing styles in JavaScript.
- webpack-config-css for a curated config partial that offers an OOB css-js-loader setup.
https://medium.com/making-internets/better-css-with-javascript-88463deecf3
Ruby Language for Beginners, Part 2: Ruby Methods
We continue our series with a look at Ruby methods and variables and how developers work with these facets of the language.
Hello!
This is a post from the Ruby Language for Beginners in 8 Parts!
Part 1 - Ruby Characteristics and first Ruby code
Part 2 - Ruby Methods and Variables
Part 3 - Ruby Strings
Part 4 - Ruby Classes, Objects, and Instances
Part 5 - Ruby Conditionals
-
-
-
In today's post, we're going to look at methods and variables in Ruby.
Ruby Methods
Let's create the following mathematical calculation:
puts 1 + 2
puts 4 + 2
puts is a Ruby method. Actually, it's a built-in Ruby method and we can create our own Ruby methods.
We could create a calculator to add numbers by creating a method called sum, receiving 2 parameters:
def sum(a, b)
  return a + b
end

puts sum(1, 2)
puts sum(4, 2)
Notice the pattern to create a Ruby method:
def your_method_name
  # your awesome logic here!
end
Ruby is really smart and you don't need to use the return keyword:
def sum(a, b)
  a + b
end
Ruby evaluates the last expression and returns that!
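This implicit return even works when the last expression is a conditional; whichever branch runs supplies the return value. A small illustration (the `sign` method name is made up for this example):

```ruby
# The last evaluated expression is the return value, even inside a branch.
def sign(n)
  if n >= 0
    "non-negative"
  else
    "negative"
  end
end

puts sign(3)    # non-negative
puts sign(-1)   # negative
```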
Just remember, we don't need to use parentheses when calling a method:
puts sum 1, 2
puts sum 4, 2
Let's jump into Ruby Variables!
Ruby Variables
We need to keep values somewhere, right?
Let's create a method to sum up 10 to a given number and then return the result:
def sum_by_ten number
  ten = 10
  sum = number + ten
  return sum
end

puts sum_by_ten 5
As you can see, we created the variable ten to hold the value 10.
Can we change the variable in the middle of the program? Yes, we can!
def sum_by_five number
  five = 5
  sum = number + five
  # changing in the middle of the game!
  five = 10
  puts five
  return sum
end

puts sum_by_five 10
The output will be:
10
15
Changing the Type of the Variable
Ruby has another important characteristic: you can change the type of the variable, even after you've initiated it.
In the previous example, we had the following Ruby code:
ten = 10
puts ten
ten = "My value 10"
puts ten
This is totally fine in Ruby!
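To watch the type actually change, you can inspect the object with Ruby's built-in `.class` method:

```ruby
ten = 10
puts ten.class    # Integer
ten = "My value 10"
puts ten.class    # String
```

The variable name stays the same; only the object it refers to changes.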
That's it! In the next post: Part 3 - Ruby Strings we're going to look at Ruby Strings!
I hope that this will be useful to you!
Thanks!
Published at DZone with permission of Alexandre Gama . See the original article here.
Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/ruby-language-for-beginners-part-2-ruby-methods-1
Installing OpenCV
Options: you can use Homebrew, pip, or pip3, or download the source and build it.
Building from source is not the easiest option, but you might be required to do it under certain circumstances (for some library support).
Python 2: pip install opencv-python
Python 3: pip3 install opencv-python
(The PyPI package is named opencv-python; it provides the cv2 module.)
You might also need to install packages such as numpy and matplotlib:
pip3 install numpy
pip3 install matplotlib
An Example
import cv2

# Use a raw string for Windows paths; the 0 flag loads the image as grayscale.
img = cv2.imread(r'C:\Users\USER\Desktop\image.jpg', 0)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Reference: the instructions at the links below will be of good help.
Install OpenCV 3 on MacOS
Some Other Image Processing Libraries
Octave's Image package; for Scilab there are SIP and SIVP.
Both Octave and Scilab are very Matlab-like, but I'm not sure how similar the image toolkits are.
Also look into OpenCV if you are comfortable with C/C++ or Python.
Reference:
ImageJ (also read:)
Use AForge.NET if you use C#. AForge.NET supports image processing and AI.
OpenCV source on GitHub:
--
Install OpenCV using Homebrew
Install OpenCV 3 on macOS with Homebrew (the easy way)
How to build:
HOWTO: Install, Build and Use openCV (MacOSX 10.10)
http://sitestree.com/python-and-image-processing-packages-like-opencv/
A while ago I wrote a post explaining how require and module.exports work. Today I wanted to show a practical example of how we can use that knowledge to organize routes in an express.js project.
To recap the original post, module.exports is how we make something within our file publicly accessible to anything outside our file. We can export anything: a class, a hash, a function or a variable. When you require something you get whatever it exported. That means that the return value from require can be a class, a hash, a function or a variable. The end result is that how the value returned from require is used can vary quite a bit:
# the user file exported a class
User = require('./models/user')
leto = new User()

# the initialize file exported a function
initialize = require('./store/initialize')
initialize(config)

# the models file exported a hash (which has a User key which was an object)
Models = require('./models')
user = new Models.User()
With that in mind, I happen to be building a website that'll have two distinct sections: an API and a user-facing portal. I'd like to organize my routes/controllers as cleanly as possible. The structure that I'd like is:
/routes
/routes/api/
/routes/api/stats.coffee
/routes/api/scores.coffee
/routes/portal/
/routes/portal/users.coffee
/routes/portal/stats.coffee
This is very Rails-esque with controller namespaces and resources. Ideally, I'd like controllers under API (stats and scores) to have access to common API-based methods. How can we cleanly achieve this?
Before we can dive any further we need to understand how express loads routes. You basically attach the route to the express object. The simplest example might look something like:
express = require('express')
app = express.createServer()
app.configure ->
  app.use(app.router)
app.get '/', (req, res) ->
  res.send('hello world')
On line 4 we tell express to enable its routing, and then use the get and post methods (to name a few) to define our routes.
Taking a baby step, if we wanted to extract our route into its own file, we could do:
# routes.coffee
module.exports = (app) ->
  app.get '/', (req, res) ->
    res.send('hello world')

# and use it like so:
express = require('express')
app = express.createServer()
app.configure ->
  app.use(app.router)
require('./routes')(app)
The syntax might be strange, but there's nothing new here. First, our exports isn't doing anything other than exporting a function that takes a single parameter. Secondly, our require is loading that function and executing it. We could have rewritten the relevant parts more verbosely as:
# routes.coffee
routes = (app) -> ....
module.exports = routes

# and use it like so:
routes = require('./routes')
routes(app)
So how do we take this to the next level? Well, we don't want to have to import a bunch of different routes at the base level. We want to use require('./routes')(app) and let that load all the necessary routes. So, the first thing we'll do is put an index file in a routes folder:
# routes/index.coffee
module.exports = (app) ->
  require('./api')(app)
  require('./portal')(app)
When you tell node.js to require a folder, it automatically loads its index file. With the above, our initial route loading remains untouched: require('./routes')(app). Now, this loading code doesn't just include routes/index.coffee, it actually executes it (it's calling the function and passing in the app as an argument). All our routes function does is load and execute more functions.
For both our API and portal, we can do the same thing we did for routes:
# routes/api/index.coffee
module.exports = (app) ->
  require('./stats')(app)
  require('./scores')(app)
Stats and scores can contain actual routes:
# routes/api/scores.coffee
module.exports = (app) ->
  app.post '/api/scores', (req, res) ->
    # save a score
  app.get '/api/scores', (req, res) ->
    # get scores

# routes/api/stats.coffee
module.exports = (app) ->
  app.post '/api/stats', (req, res) ->
    # save a stat
This is a good solution to help us organize our code, but what about sharing behavior? For example, many API functions will want to make sure requests are authenticated (say, using an SHA1 hash of the parameters and a signature).
Our first attempt will be to add some utility methods to the api/index.coffee file:
# routes/api/index.coffee
routes = (app) ->
  require('./stats')(app)
  require('./scores')(app)

signed = (req, res, next) ->
  if is_signed(req)
    next()
  else
    res.send('invalid signature', 400)

is_signed = (req) ->
  # how to sign a request isn't the point of this post

module.exports =
  routes: routes
  signed: signed
Since we are now exporting a hash rather than a function, our routes/index.coffee file also needs to change:
# OLD routes/index.coffee
module.exports = (app) ->
  require('./api')(app)
  require('./portal')(app)

# NEW routes/index.coffee
module.exports = (app) ->
  require('./api').routes(app)
  require('./portal').routes(app) # assuming we do the same to portal
Now, we can use our signed method from our scores route:
# routes/api/scores.coffee
api = require('./index')
module.exports = (app) ->
  app.post '/api/scores', api.signed, (req, res) ->
    # save a score
For those who don't know, when an express route is given 3 parameters, the 2nd one is treated as something of a before-filter (you can pass in an array of them and they'll be processed in order).
This solution is pretty good, but we can take it a step further and create some inheritance. The main benefit is not having to specify api. all over the place. So, we change our api/index.coffee one last time:
# routes/api/index.coffee
routes = (app) ->
  require('./stats')(app)
  require('./scores')(app)

class Api
  @signed: (req, res, next) =>
    if @is_signed(req)
      next()
    else
      res.send('invalid signature', 400)

  @is_signed: (req) ->
    # how to sign a request isn't the point of this post

module.exports =
  routes: routes
  Api: Api
Since the routes are still exposed the same way, we don't have to change the main routes/index.coffee file. However, we can change our scores file like so:
class Scores extends require('./index').Api
  @routes: (app) =>
    app.post '/api/scores', @signed, (req, res) ->
      # save a score

module.exports = Scores.routes
And that is that. There's a lot going on in all of this, but the key takeaway is that you can export anything, and that flexibility can lead to some fairly well organized code. You can also leverage the fact that the index file is automatically loaded when its folder is required to keep things nice and clean.
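For readers not using CoffeeScript, the whole pattern can be sketched in plain JavaScript. This is a self-contained simulation: the fake app object and inline route modules below are illustrative stand-ins, not express itself.

```javascript
// A stand-in for the express app: it just records registered routes.
const registered = [];
const app = {
  get:  (path, ...handlers) => registered.push(['GET', path]),
  post: (path, ...handlers) => registered.push(['POST', path]),
};

// routes/api/scores.js equivalent: export a function that takes the app.
const scores = (app) => {
  app.post('/api/scores', (req, res) => { /* save a score */ });
  app.get('/api/scores', (req, res) => { /* get scores */ });
};

// routes/api/index.js equivalent: load and execute each route module.
const api = (app) => {
  scores(app);
};

// routes/index.js equivalent: the single call the main file makes.
api(app);

console.log(registered); // two routes registered: POST and GET /api/scores
```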
I know this jumped all over the place, but hopefully you'll find it helpful!
http://openmymind.net/NodeJS-Module-Exports-And-Organizing-Express-Routes/
Java applet programming is more than just running an applet: you can do much more with Java applets, such as parameter passing and running in the browser. Learn more in this tutorial.
In the last article we told you to do some practice and observation with the applet window code we provided; we hope you have already practiced that. In this article we are going to tell you more about applets, and we will also create a small animation using a Java applet and threads.
If you have not read the introduction article please read it here, The Java Applet - first step to web application programming.
Let us show you the working of the applet we created in the last post. The images below show what happens at each stage.
When you first run the applet using the applet viewer, it opens the applet viewer window and executes the init and start methods, printing the messages on the console as the image shows.
Then we minimized the applet window to see what happens; you can see that the stop method is called.
When we maximize/restore the applet window, it moves back to the start stage.
And finally, when we closed the window, the stop method was called, followed by the destroy method. We hope you now properly understand the working of an applet.
Now it's time to show you how an applet looks in the web browser. For this purpose we are going to make a few small but significant changes to our previous applet. First, let us change the original applet file. Because we are going to run this applet in the browser, we no longer need the applet tag in the Java file, so we remove it; instead we create an HTML file which will invoke the applet.
<html>
<body>
<applet code="FirstApplet" height="400" width="400"></applet>
</body>
</html>
And the Java file looks like this, with the addition of a method paint(Graphics g), which takes a Graphics argument. The Graphics class is found in the java.awt package, so we also need to import this package or the class we are using. We need Graphics because we want to draw the string on the applet, since the browser has no console of its own to print to.
import java.applet.Applet;
import java.awt.*;

public class FirstApplet extends Applet {
    public void init() { }
    public void start() { }
    public void stop() { }
    public void destroy() { }

    public void paint(Graphics g) {
        g.drawString("Hello World from Applet!", 20, 100);
    }
}
When you open your HTML file with the applet tag in it, you will be asked for permission to run the applet. As we told you already, the applet needs permission to be executed, as otherwise it could cause a security breach. You might be wondering what the paint() method is all about.
We need the paint method to paint something on the applet; in this case we are painting a simple string message using the method drawString(String message, int x, int y), which draws the message at the specified position, where x is the distance from the left and y is the distance from the top.
The image below shows the IcedTea web plug-in asking permission to run the applet, followed by the applet itself in the browser.
Note: IcedTea is the Java plug-in for the browser on Linux; for Windows the JVM provides the plug-in, and all you need to do is allow and activate it. Every time you try to run an applet it will ask for permission.
This tutorial was to show you how you can work with the browser. From now on we will demonstrate in the applet viewer for easier understanding and debugging.
When you are working with applets it is useful to pass parameters to the applet; you can accomplish this by using the <param> tag from HTML, which is used to pass parameters to the applet. In the following example we will show you how to pass values using parameters and how to retrieve them in your program. We will also show you how to display the applet status or a custom message in the status bar of the applet.
import java.applet.Applet;
import java.awt.*;

public class FirstApplet extends Applet {
    public void init() { }
    public void start() { }
    public void stop() { }
    public void destroy() { }

    public void paint(Graphics g) {
        String msg = getParameter("examsmyantra");
        String tag = getParameter("tagline");
        g.drawString(msg, 20, 100);
        showStatus(tag);
    }
}
/*
<applet code="FirstApplet" width="500" height="500">
<param name="examsmyantra" value="" >
<param name="tagline" value="Study with Awesomeness" >
</applet>
*/
This program on running results following applet:
If you look, there is nothing special in this program: only two extra methods and two extra tags in the comment. The <param> tag is used to pass a parameter; the format is:
<PARAM name="name-of-parameter" value="value-of-parameter" >
When you want to access these parameters in your program, use getParameter(String name-of-parameter), which will get you the value corresponding to the parameter name.
In the program above we have done just that: we passed two parameters and fetched them in the program to use as messages to display. One is drawn with drawString and one is shown in the status bar using the showStatus() method.
If you have come through all the content above, you will not want to miss the next part, which will teach you how to create animations with applets and threads: The Applet and Animations.
http://www.examsmyantra.com/article/54/java/java-applet-programming-playing-with-various-utilities
The HTTP methods map to a resource's read, create, modify and removal operations respectively.
Any action performed by a client over HTTP, contains an URL and a HTTP method. The URL represents the resource and the HTTP method represents the action which needs to be performed over the resource.
Being a broad architectural style, REST always has different interpretations. The ambiguity is exacerbated by the fact that there aren’t nearly enough HTTP methods to support common operations. One of the most common examples is the lack of a ‘search’ method. Search is one of the most extensively used features across different applications, yet there has been no standard for implementing it; because of this, different people tend to design search in different ways. Given that REST aims to unify service architecture, any ambiguity must be seen as weakening the argument for REST.
In the rest of this document, we discuss how search over REST can be simplified. We are not aiming to develop standards for RESTful search, but rather to show how this problem can be approached.
Search Requirements
Search, being the most widely used feature across web applications, has broadly similar requirements everywhere. Below is a list of common constituents of a search feature:
- Search based on one or more criteria at a time
- Search red colored cars of type hatchback
- color=red && type=hatchback
- Relational and conditional operator support
- Search red or black cars with mileage greater than 10
- color=red|black && mileage > 10
- Wild card search
- Search car manufactured from company name starting with M
- company=M*
- Pagination
- List all cars but fetch 100 results at a time
- upperLimit=200 && lowerLimit=101
- Range searches
- Get me all the cars launched between 2000 and 2010
- launch year between (2000, 2010)
When a search feature must support all of this, the search interface design itself becomes complex, and implementing it in a REST framework while still conforming to REST is challenging.
Coming back to basic REST principles, we are left with the following questions:
- Which HTTP method should be used for search?
- How should an effective resource URL for search be created: query parameters or embedded URLs?
- How should filter criteria be modelled?
HTTP Method Selection
Effectively, REST categorizes operations by their nature and associates well-defined semantics with these categories. The idempotent operations are GET, PUT and DELETE (GET for read-only, PUT for update, DELETE for removal), while the POST method is used for non-idempotent procedures such as create.
By definition, search is a read-only operation used to request a collection of resources filtered by some criteria, so the GET HTTP method is the obvious choice. However, with GET we are constrained by URL size limits if the criteria become complex.
URL Representation
Let's discuss this using an example: a user wishes to search for four-doored sedan cars of blue color. What should the resource URL for this request look like? The two URLs below are syntactically different but semantically the same:
- /cars/?color=blue&type=sedan&doors=4
- /cars/color:blue/type:sedan/doors:4
Both of the above URLs conform to the RESTful way of representing a resource query, but they are represented differently. The first uses URL query criteria to add filtering details, while the latter takes an embedded-URL approach.
The embedded-URL approach is more readable and can take advantage of the native caching mechanisms that exist on the web server for HTTP traffic. But it forces the user to provide parameters in a specific order; wrong parameter positions will cause an error or unwanted behaviour. The two URLs below look the same but may not give the same results:
- /cars/color:red/type:sedan
- /cars/type:sedan/color:red
Also, since there is no standard for embedding criteria, people tend to devise their own representations.
So we prefer the query-criteria approach over the embedded-URL approach, even though its representation is more complex and less readable.
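To make the query-criteria approach concrete, here is a small self-contained Java sketch (not from the original article; the class and method names are illustrative) that parses a filter query string such as color=blue&type=sedan&doors=4 into an ordered criteria map:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class SearchQuery {

    // Parses "color=blue&type=sedan&doors=4" into an ordered criteria map.
    public static Map<String, String> parse(String query) {
        Map<String, String> criteria = new LinkedHashMap<>();
        if (query == null || query.isEmpty()) {
            return criteria;
        }
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) {
                continue; // skip malformed pairs with no '='
            }
            criteria.put(decode(pair.substring(0, eq)),
                         decode(pair.substring(eq + 1)));
        }
        return criteria;
    }

    private static String decode(String s) {
        try {
            return URLDecoder.decode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }
}
```

A server-side search endpoint would hand each entry of the resulting map to its filtering layer.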
Modeling Filter Criteria: A search-results page is fundamentally RESTful even though its URL identifies a query. The URL should be able to incorporate SQL-like elements. While SQL is meant to filter data fetched from relational tables, the new modelling language should be able to filter data from a hierarchical set of resources. Such a language helps in devising a mechanism to communicate complex search requirements over URLs. Further in this section, two such styles are discussed in detail.
- Feed Item Query Language (FIQL): The Feed Item Query Language (FIQL, pronounced "fickle") is a simple but flexible, URI-friendly syntax for expressing filters across the entries in a syndicated feed. These filter expressions can be mapped onto any RESTful service and can model complex filters. For example, the FIQL expression name==ami*;level=gt=10 corresponds to the SQL condition WHERE name LIKE 'ami%' AND level > 10.
- Resource Query Language (RQL): The Resource Query Language (RQL) defines a syntactically simple query language for querying and retrieving resources. RQL is designed to be URI friendly, particularly as the query component of a URI, and highly extensible. RQL is a superset of HTML's URL encoding of form values and a superset of the Feed Item Query Language (FIQL). RQL basically consists of a set of nestable named operators, each of which has a set of arguments and operates on a collection of resources.
Case study: Apache CXF advanced search features
To support advanced search capabilities, Apache CXF introduced FIQL support in its JAX-RS implementation with the 2.3.0 release. With this feature, users can express complex search expressions in a URI. Below is a note on how to use this feature.
To work with FIQL queries, a SearchContext needs to be injected into application code and used to retrieve a SearchCondition representing the current FIQL query. This SearchCondition can then be used in a number of ways to find the matching data.
@Path("books")
public class Books {
    private Map<Long, Book> books;

    @Context
    private SearchContext searchContext;

    @GET
    public List<Book> getBooks() {
        // SearchCondition represents the current FIQL query
        SearchCondition<Book> sc = searchContext.getCondition(Book.class);
        // findAll iterates over all the values in the books map and
        // returns a collection of matching beans
        List<Book> found = sc.findAll(books.values());
        return found;
    }
}
SearchCondition can also be used to access all the search requirements (originally expressed in FIQL) and do manual comparison against local data. For example, SearchCondition provides a utility toSQL(String tableName, String... columnNames) method which introspects all the search expressions constituting the current query and converts them into an SQL expression:
// find all conditions with names starting with 'ami'
// and levels greater than 10:
// ?_s="name==ami*;level=gt=10"
SearchCondition<Book> sc = searchContext.getCondition(Book.class);
assertEquals("SELECT * FROM table WHERE name LIKE 'ami%' AND level > '10'",
             sc.toSQL("table"));
Conclusion
Data querying is a critical component of most applications. With the advance of rich client-driven Ajax applications and document-oriented databases, new querying techniques are needed; these must be simple but extensible, designed to work within URIs, and able to query collections of resources. The NoSQL movement is opening the way for a more modular approach to databases, separating modelling, validation and querying concerns from storage concerns, but we need new querying approaches to match this more modern architectural design.
memoized-property 1.0.2
A simple python decorator for defining properties that only run their fget function once
A simple python decorator for defining properties that only run their fget function once.
- Free software: BSD license
What?
A Python property that only calls its fget function one time. How many times have you written this code (or similar)?
class C(object):
    @property
    def name(self):
        if not hasattr(self, '_name'):
            self._name = some_expensive_load()
        return self._name
I've written it just enough times to be annoyed enough to capture it in this module. The result is this:
from memoized_property import memoized_property

class C(object):
    @memoized_property
    def name(self):
        # Boilerplate guard conditional avoided, but this is still only called once
        return some_expensive_load()
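The decorator itself fits in a few lines. The following is an illustrative sketch (not the package's actual source) that caches the computed value on the instance under an underscore-prefixed attribute name:

```python
import functools

def memoized_property(fget):
    """A property that calls fget once and caches the result on the instance."""
    attr_name = '_' + fget.__name__

    @functools.wraps(fget)
    def fget_memoized(self):
        # compute and store the value on first access only
        if not hasattr(self, attr_name):
            setattr(self, attr_name, fget(self))
        return getattr(self, attr_name)

    return property(fget_memoized)

class Example(object):
    calls = 0  # counts how many times the fget body actually runs

    @memoized_property
    def name(self):
        Example.calls += 1
        return 'expensive-result'
```

Accessing `Example().name` repeatedly runs the body only once per instance.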
Why?
I couldn't find a pre-existing version of this on PyPI. I found one other on GitHub, but it was not published to PyPI.
History
1.0.2 (2014-05-02)
- Remove dependency on six
1.0.1 (2014-01-01)
- Added Python 3.2 compatibility
1.0.0 (2013-12-26)
- First release on PyPI.
- Author: Steven Cummings
- Keywords: memoized property decorator
- License: BSD
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- License :: OSI Approved :: BSD License
- Natural Language :: English
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.2
- Programming Language :: Python :: 3.3
- Package Index Owner: estebistec
- DOAP record: memoized-property-1.0.2.xml
Introduction:
Here I will explain how to delete all files in folders and subfolders in asp.net using C# and VB.NET.
Description:
In previous articles I explained how to create or delete a directory/folder in C#, delete files from an uploaded folder in asp.net, a jQuery dropdown menu example, a start and stop timer example in JavaScript, asp.net interview questions, and many articles relating to GridView, SQL, jQuery, asp.net, C# and VB.NET. Now I will explain how to delete files in folders and subfolders in asp.net using C# and VB.NET.
To delete files in folders and subfolders, we first need to add the namespace using System.IO.
Once we add the namespace, we need to write code like that shown below.
C# Code
VB.NET Code
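The original code listings did not survive extraction; the following is an illustrative C# sketch of the approach described above, using Directory.GetFiles with SearchOption.AllDirectories to remove every file under a folder (the folder path is a placeholder):

```csharp
using System.IO;

public static class FolderCleaner
{
    // Deletes every file in the given folder and all of its subfolders.
    public static void DeleteAllFiles(string folderPath)
    {
        // "*.*" matches all files; AllDirectories recurses into subfolders
        foreach (string file in Directory.GetFiles(folderPath, "*.*", SearchOption.AllDirectories))
        {
            File.Delete(file);
        }
    }
}
```

The VB.NET version follows the same pattern with the same System.IO calls.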
If you want to see the complete example, check the URL below.
FRUITY LOOPS SOUNDS AND FRUITY LOOPS SAMPLES
All of our Wav Samples are available to import
into Fruity Loops Studio.
Now you can get Hot New GN Sounds
for Fruity Loops!
Get All New Fruity Loops Sounds today!
Get Image Line Fruity Loops Sounds HERE!
Image Line Fruity Samples Specials!
Dirty South Sounds
East Coast Sounds
West Coast Sounds
Midwest Sounds
Reggaeton Sounds
Latin Sounds
Neo Soul Sounds
R&B Sounds
UK Hip Hop Sounds
Hip Hop Instruments
Hip Hop Drum Kits
Motif Classic Drum Kits
Free Fruity Loops Sounds!
Get Image Line Fruity Loop Samples and
import them into your Fruity Loops Studio today!
Order Image Line FL Studio
Virtual Studio
Get $200 worth of Free Samples from Gotchanoddin.com when ordering!
Instructions: After completing your purchase, Email the sound kits
you would like FREE. Download links will be sent from
1-24 hours after purchase. Make sure to email the order
number of purchase, desired format and collection,
and the email address of where to send the link to.
IMAGE LINE FL STUDIO 7 XXL VIRTUAL STUDIO
FL Studio 7 XXL is a fully featured, open-architecture music creation and production environment for PC. The virtual studio music production software includes an industry-leading 64 stereo track mixer. Each track can include up to 8 effects and can also be routed to any of the other 64 tracks or to one of 4 dedicated 'send' tracks.
Image Line FL Studio 7 XXL Virtual Studio Features:
- FL now supports track-based sequencing. Pattern Clips present sequence data in the method as Audio and Automation Clips.
- Edison replaces the Wave Editor – Edison is a fully integrated audio editing and recording tool complete with spectral analysis, convolution reverb + more
- Undo
Generators/Softsynths:
- 3xOsc, Sampler, BooBass, Beepmap, Plucked, Fruit Kick
- Chrome ,FPC, WaveTraveller, Dashboard, KB Controller
- Fruity Vibrator, Midi Out, FL Slayer, FL Keys, Granuliser
- SimSynth Live
- DX-10
- VideoPlayer
- Sytrus
- Soundfont player
- DrumSynth Live
- DirectWave Sampler
- Formats
Image Line FL Studio 7 XXL Virtual Studio Specifications:
Save BIG when you buy today!
Get Hip Hop, Dance, Reggaeton, and
R&B Fruity Loops Samples!
Download Free Fruity Loop Samples!
01-12-2011 04:02 PM
When you type a message in the bottom field (setStatus()?) of a text message on OS 5, the rounded field adjusts its height to accommodate your text. How on earth is this accomplished? I can't find anything close to it in the code samples, which is where I usually start from, being a novice. Can anyone offer a way to do it or even provide a code sample please?
regards
a very desperate person!
Solved! Go to Solution.
01-12-2011 05:00 PM
Don't know if I made myself clear in my last message: what I need is a rounded rectangle that is just white with no border, with the containing manager black, and all of this added as the status (setStatus()).
The filled rounded rectangle would be the edit field, and its height would expand when the text goes onto another line. This is the bit I am stuck on.
Any help is appreciated.
01-12-2011 05:01 PM
01-12-2011 05:23 PM
Thanks for your post RexDoug, I have read through the thread but it just went over my head - I thought I was getting clever at BB coding but obviously not.
Just so you know I'm not being lazy, I will go through my thoughts on this - it might even help me!
The edit field wraps on its own - I had it working, but in a scrollable way, with a rounded rectangle being drawn in the paint method. It didn't look as good as the RIM text box for SMS, where it expands height-wise.
So I'm thinking for what I want I need fillRoundRect(x, x, x, x, x, x) in the paint method of the overriding EditField class. Then to control the height of the edit field I think I need to pass a changing variable to setExtent(x, here).
If anyone can see whether I'm on the right or wrong track please let me know - this is very much needed by myself, and I suspect others in the future, and I will post my full code once complete to help others.
Regards
Alex
01-12-2011 06:54 PM
What OS levels are you targeting? With anything OS 4.6 or later, you could always use a Border.
01-12-2011 07:11 PM
Hi Peter,
I am targeting 4.7+, but please correct me if I'm wrong: I am thinking to just fill a rounded rectangle, as there doesn't seem to be a border on the text box in RIM's new SMS UI (OS 5+).
I have come up with this
public class CustomEditField extends EditField {
    protected void paint(Graphics g) {
        g.setColor(0x000000);
        super.paint(g);
        g.setColor(0xffffff);
        g.fillRoundRect(0, 5, Display.getWidth() - 10, 50, 10, 10);
    }

    public int getPreferredWidth() {
        return Display.getWidth() - 10;
    }

    protected void layout(int width, int height) {
        super.layout(getPreferredWidth(), height);
        super.setExtent(getPreferredWidth(), getHeight());
    }
}
and..
HorizontalFieldManager hfm = new HorizontalFieldManager(HorizontalFieldManager.VERTICAL_SCROLL) {
    protected void paint(Graphics graphics) {
        // black background
        graphics.setBackgroundColor(0x000000);
        graphics.clear();
        super.paint(graphics);
    }
};
CustomEditField tb = new CustomEditField();
hfm.add(tb);
setStatus(hfm);
This gets me as close as I have been, in that the edit field does get painted as a white rounded field over a black background, and it expands upwards as each line of text is created. But the text is all over the place and you can only see it while you're typing (it's not in the white rounded rectangle). I hope what I want is clear from what I have put down.
Regards
Alex
01-12-2011 08:38 PM
Please, if anyone can help me achieve an edit field which stretches its height when a newline occurs, I would be in your debt forever.
01-12-2011 11:06 PM
Some clues:
Here is how I would set the preferred height of the field, based on the number of text rows and the font (assuming m_rows is the current number of rows):
public int getPreferredHeight()
{
int fontHeight = getFont().getHeight();
return (fontHeight * m_rows);
}
You'll also have to "fix" the width, either as a property set in the constructor, or just calculate a percentage of the screen width.
Then, your layout() override has to look something like this:
protected void layout(int width,int height)
{
width = getPreferredWidth();
height = getPreferredHeight();
super.layout(width,height);
}
01-13-2011 09:32 AM
I think the following should be relatively easy to implement:
1) Decide how much padding you need around the text in order for your rounded rectangle to be clearly separated. Use setPadding(int top, int right, int bottom, int left) (or setPadding(XYEdges padding)) on your text field to do that;
2) In order to limit the width of your TextField (BasicEditField, etc.) override its layout to do something like that:
protected void layout(int width, int height) {
    super.layout(Math.min(width, myMaxWidth), height);
}
3) Override your text field's paint method like this:
protected void paint(Graphics g) {
    g.clear();
    int color = g.getColor();
    g.setColor(myRectColor);
    g.fillRoundRect(0, 0, getWidth(), getHeight(), diameter, diameter);
    g.setColor(color);
    super.paint(g);
}
Experiment with your color, padding and arc diameter values to get the view you are most pleased with.
This way, you delegate all the heavy lifting of word-wrapping to the framework, using the results of its hard work (getWidth(), getHeight()) in your fine tuning. setPadding method was not documented in earlier API versions, but it has been working the same way since API 4.0.0 and was finally documented in the most recent 6.0 version, so you can safely use it.
Moral of the story - if the framework can do something for you, let it do that. On the other hand, BB GUI takes a lot of time to get used to, so don't blame yourself if you miss something that seems "obvious" in hindsight.
Good luck!
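Putting the suggestions in this thread together, a consolidated field might look like the sketch below. This is untested and assumes the RIM 4.7+ API discussed above; myMaxWidth, the colors and the arc diameter are placeholders to tune:

```java
public class RoundedEditField extends EditField {
    private final int myMaxWidth = Display.getWidth() - 10; // placeholder width cap
    private final int diameter = 10;                        // corner arc diameter

    public RoundedEditField() {
        // padding keeps the text clear of the rounded corners
        setPadding(5, 8, 5, 8);
    }

    protected void layout(int width, int height) {
        // let the framework word-wrap the text; only cap the width
        super.layout(Math.min(width, myMaxWidth), height);
    }

    protected void paint(Graphics g) {
        g.clear();
        int color = g.getColor();
        g.setColor(0xFFFFFF); // white rounded background
        g.fillRoundRect(0, 0, getWidth(), getHeight(), diameter, diameter);
        g.setColor(color);
        super.paint(g);       // framework draws the wrapped text on top
    }
}
```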
01-13-2011 12:11 PM
Hi guys, thanks for your messages. I plan to solve this in a couple of hours when I get home. Arkaydyz, if I just pass height to super.layout(Math.min(width, myMaxWidth), height), will height contain the measurement that changes each time a new line is created, or do I have to do what the other post said and work out the font size to get the height I need for my field and then pass that computed height?
Thanks again. I have posted before trying because I am getting different approaches: do I work out the height of the field via font size * rows, or does height already contain the desired height?
Hope I make sense
Alex
User-Agent: Mozilla/5.0 (X11; U; Linux i686; de; rv:1.9.2.16) Gecko/20110323 Ubuntu/10.10 (maverick) Firefox/3.6.16
Build Identifier: Mozilla/5.0 (Windows NT 5.1; rv:2.0) Gecko/20100101 Firefox/4.0
In-line SVG allows HTML entities in plain-text tags to be auto-decoded. This enables XSS by injecting entities via a URL. Check the example link for a PoC.
PoC:;img%20src=x%20onerror=alert%281%29%26ampgt;%3Cp%3E
<!doctype html><svg><style><img src=x onerror=alert(1)><p>
// also works with crippled named entities (!!)
<!doctype html><svg><style><img src=x onerror=alert(1)><p>
The bug also triggers on innerHTML/outerHTML access (see example link). No other tested browser was affected by this quirk (IE9, GC10-12, O11).
Other browsers avoid this problem by excluding a certain range of characters from being auto-decoded. FF4-6 is the only browser auto-decoding < and > (among others).
Tested on FF4 - 6.0a1
Reproducible: Always
Steps to Reproduce:
1. Click on the link
2. Shock
3. Awe
This seems like just a problem in the innerHTML getter, right?
@bz: Yes and no.
Compare:
<!doctype html><xxx><style><img src=x onerror=alert(1)><p> // no effect
<!doctype html><svg><style><img src=x onerror=alert(1)><p> // XSS
The problem is not the innerHTML getter as far as I can see, but the fact that > and < get auto-decoded and enable XSS. Other tested UAs decode harmless entities (as they should) but exclude XSS-critical chars.
I can confirm the problem with the innerHTML testcase link, but not when loading the above content in a file by itself. Since we're only seeing one alert(1) I agree w/bz that it's the innerHTML getter.
In the testcase "canvas" DOM looks like [svg[svg-style[text]]][html-para]. Since the entities aren't re-encoded by the innerHTML getter you get what's shown in the log, which when parsed by the second innerHTML of course leads to the XSS. "canvas2" DOM looks like [svg[svg-style]][html-img][html-para] or
<svg><style></style></svg><img src="x" onerror="alert(1)"><p></p>
I guessed "sg:moderate" because it seems unlikely many sites are using embedded <svg> let alone putting user-generated content in it and then reading it back out and reparsing it. But it could happen, and probably will as more browsers support in-line svg.
We should fix this by making the HTML serializer escape the contents of text descendants of style and script even though the elements with the same local names in the HTML namespace shouldn't get their content escaped in serialization.
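The intent of the fix can be illustrated with a small sketch (not Mozilla's actual serializer code): text inside style/script is left raw only when the parent is in the HTML namespace, and escaped in any other namespace such as SVG, so that serialized markup round-trips through the parser:

```python
HTML_NS = "http://www.w3.org/1999/xhtml"
RAW_TEXT_TAGS = {"style", "script", "noframes", "noscript"}

def escape(text):
    # minimal markup escaping for text content
    return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

def serialize_text(parent_local_name, parent_namespace, text):
    """Serialize a text node. Escaping is skipped only for HTML raw-text
    elements (style, script, ...); SVG style/script text must still be
    escaped, or re-parsing the output would decode injected markup."""
    if parent_namespace == HTML_NS and parent_local_name in RAW_TEXT_TAGS:
        return text  # the HTML parser treats this as raw text; no escaping needed
    return escape(text)
```

With the namespace check in place, an SVG `<style>` containing `<img ...>` serializes as entities and no longer turns into a live element on re-parse.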
Meant to take this, too.
Created attachment 526199 [details] [diff] [review]
Make the HTML serializer pay attention to namespaces
While reviewing the code, could you please also check that the commit message is OK for a security patch. AFAICT, it doesn't reveal anything that's not obvious from the code change.
Created attachment 526200 [details] [diff] [review]
Mochitest
Note: This hasn't been through the tryserver, since pushing this to try would make the patch public.
I tested the fix against a list of SVG elements, HTML elements, comments, proc-insts etc - looks valid to me.
The range of elements (<style>, <script>, <noframes>, <noscript>) seems sufficient. I was thinking <noembed> could be an issue as well - but wasn't. Good job!
Comment on attachment 526199 [details] [diff] [review]
Make the HTML serializer pay attention to namespaces
r=me, though I really wish there were more IsHTML(tagname) stuff going on in this code....
Comment on attachment 526200 [details] [diff] [review]
Mochitest
r=me
Which repos should I push this to and when? Should I push the test at the same time with the fix?
You need to get approvals for fx4 and fx5, I think. Need to push to Fx4 and aurora, and presumably m-c?
Not sure about the test....
Comment on attachment 526199 [details] [diff] [review]
Make the HTML serializer pay attention to namespaces
(In reply to comment #14)
> You need to get approvals for fx4 and fx5, I think.
Requesting those approvals.
> Need to push to Fx4 and aurora, and presumably m-c?
Should I refrain from pushing to m-c until I have approvals to push to other repos at the same time?
I would have said no, but I'm not sure what the new mozilla-shadow setup is... That never got clearly explained. :(
dveditz, what should the landing plan be here?
Land on trunk and everywhere you have approvals to land.
Comment on attachment 526199 [details] [diff] [review]
Make the HTML serializer pay attention to namespaces
Don't plan any further "mozilla2.0" releases, clearing approval request for that branch.
let's not wait for the shadow-repo to be setup for this one. we'll get it going soon, though.
let's get this onto m-c so we can see it working. if it looks good there we'll approve for aurora.
(In reply to comment #18)
> Land on trunk and everywhere you have approvals to land.
Should I defer the test case landing?
Fix without test landed on trunk:
(In reply to comment #19)
> Don't plan any further "mozilla2.0" releases, clearing approval request for
> that branch.
We (SeaMonkey) just (2 hours before patch landed on m-c) started spinning our 2.1rc1 from mozilla2.0; assuming we want this, would we just make a relbranch?
[Also is there a way to find other bugs that might affect us?]
Comment on attachment 526199 [details] [diff] [review]
Make the HTML serializer pay attention to namespaces
approved for mozilla-aurora
Thanks. Pushed to Aurora:
Created attachment 534254 [details] [diff] [review]
Patch for mozilla-2.0
Per security group discussion, requesting landing on mozilla-2.0. (I can't set the request-approval flag as it is disabled.) This is the same patch, modified for that branch.
Comment on attachment 534254 [details] [diff] [review]
Patch for mozilla-2.0
Approved for the mozilla2.0 repository, a=dveditz for release-drivers
When landed please add a link to the changeset and flip the status2.0 field to ".x-fixed"
Does the test case (attachment 526200 [details] [diff] [review]) need to be kept confidential still? Would it be OK to land the test case on m-c now? (I was silly enough to qcommit a backup of the test case to my mercurial queue, and now I'm wondering if I should try to strip it from my queue history in order to be able to push a backup of my queue to a public repo.)
We're releasing the fix next week. Sure, should be OK to check in the test now given the moderate severity.
FWIW, the security advisory for this bug isn't quite right. The bug wasn't in decoding character references. The bug was in encoding stuff in innerHTML *getter* in a way that didn't round-trip.
Landed the test finally:
It is also essential to equip the board with a library that communicates with the TCP/IP manager and makes it easier to program the Arduino and to let it communicate with other computers via the Internet.
The WiFi shield
Despite the proliferation of hardware to connect Arduino to the web, and especially despite the YUN, we considered it useful to design and propose a new WiFi shield for Arduino, replacing the one already presented.
This time we added an interface processor that manages the TCP/IP protocol, a task definitely too demanding for Arduino alone. A network interface card becomes really useful only if it is driven by software that simplifies its use as much as possible, especially in a context of reduced hardware resources such as Arduino's. For this reason we supplied the card with an Arduino library that is simpler than the one for more complex hardware such as the YUN's.
The hardware is essentially based on a radio element (MRF24WB0MA) and a MCW1001A processor.
The board is connected to the host (Arduino) through a two-wire serial interface in TTL logic (RX, TX). This forced us to use a software-simulated serial port with a customized version of the SoftwareSerial library. The customization was necessary because of the two-stop-bits mode, but also to adjust the optimum speed and make the use of the library transparent; this version of SoftwareSerial is included in the library. This mode of connection, however, precludes using the standard SoftwareSerial library for other purposes in the same sketch.
The simulated serial port can be connected to different Arduino pins. With two jumpers you can choose whether to use D2 or D11 for RX (default D2) and D3 or D10 for TX (default D3). If you change the default values, you must edit the file MWIFI.h and change the following lines:
#define RXPIN 2
#define TXPIN 3
Hence the chosen pins, along with the D7 pin used to reset the WiFi card, cannot be used for other purposes. The MCW1001A processor, however, provides additional digital pins. The first four are connected to as many LEDs, while the other three are connected to as many connectors along with power (5 V or 3 V) and GND. These three pins (GPIO5, GPIO6, GPIO7) can also manage 5 V levels with a maximum current of about 25 mA (in/out). The GPIO pins can be activated by the library functions, but the first two LEDs are reserved by the library to indicate booting and network connection.
As mentioned, the MCW1001A processor manages the TCP/IP stack, running the basic protocol by using its internal RAM for transmission and reception buffers, as well as for storing various parameters. Anyway the processor, not having a non-volatile memory available, must be reconfigured each time you reboot it.
In any case, the low-level management is handled by the library that provides a set of functions to open network sockets or manage the HTTP protocol.
The library covers both server and client functionality. It can be used when we want to give Arduino the role of a server, so that it reacts and responds to messages from another computer. Or it can be used to give Arduino the ability to connect on its own initiative to a server computer to send, for example, sensor measurements. The TCP/IP connection is established via entities named network sockets, which correspond to the idea of a socket into which a communication cable is plugged. So there is a network socket for each of the two computers that want to talk. A network socket is created on a given port number. Port numbers are application sub-addresses within an IP address; numbers below 1024 are reserved for publicly recognized services such as mail, FTP, the web (port 80) and so on.
The MWiFi library
Download the Library for WiFi shield
The MWiFi library should be placed (after decompression) in the "libraries" folder of the Arduino IDE, like other libraries. It is used by including the file <MWiFi.h> in your sketches. At this point it must be instantiated as an object and can then be used. The first function to call is begin(), which initializes the board:
#include <MWiFi.h>
MWiFi WIFI;
void setup()
{
WIFI.begin();
}
Starting up the board lights the first LED. Now try to connect the Arduino to the network using an access point, which could be that of a home WiFi router, for example with name (SSID):
WIFI.ConnSetOpen("your-SSID");
if the network is unprotected, or:
WIFI.ConnSetWPA("your-SSID", "your-pwd");
if the network has WPA protection with password "your-pwd".
The previous functions prepare the card for connection; the actual connection is made by calling:
WIFI.Connect();
If the connection is established, the second LED lights. Be careful though: with a protected network, connecting can take more than half a minute, as the controller of the card must derive the encryption key from the password. The current version of the library automatically resets the Arduino if the connection is lost. Errors detected by the controller also cause an automatic reset. This avoids lock-ups and lets the system run unattended.
For convenience, direct connection functions have also been added: these prepare and perform the connection in a single step. There is also the ability to generate the numeric key from the password once, and then use the key instead of the password: access with the password can take up to a minute, because the key must be computed every time, while key access is very fast.
The router assigns a dynamic IP address to the Arduino, as this is the board's default behavior, but you can impose a fixed address if you want. The assigned address can be retrieved with a library function.
At this point, let Arduino act as a server. First call openServerTCP(), which creates the listener on a given port (for example 5000), then poll for a possible connection request in the loop:
int ssocket=WIFI.openServerTCP(5000);
void loop ()
{
int csocket=WIFI.pollingAccept(ssocket);
The integer variables ssocket and csocket are references (handles) to the server socket and to the connection socket of any computer that makes the link, respectively. The function pollingAccept() returns a number less than 255 if a connection has been requested, or 255 if there is no incoming connection request.
If the connection is established, you can send and receive messages through csocket. For example, to receive a record, that is, a string ending with a line feed, use:
char*line=WIFI.readDataLn(csocket);
A “null-terminated string” will be returned but without the line-feed. In this case, the library uses a default buffer of 81 characters (but its length can be modified in its define). So you do not need to provide it. However, there are other possibilities. If you want to respond you can use the function:
WIFI.writeDataLn(csocket,answer);
The variable named "answer" is a char buffer that must contain a null-terminated string. Note: with a null-terminated string you do not have to supply the number of useful characters in the array, because the function computes it from the terminating null character. Most of the library functions consume and produce this kind of string, which is not to be confused with the String object found in the Arduino language reference.
In this simple way you have established and used a Wi-Fi connection with a remote computer. Among the examples shipped with the library there is a server called CommandServer that lets you control the Arduino through a telnet program on the remote computer; to simplify testing, a Java program that works as a telnet client has been added.
If you want the Arduino to act as a client that connects to a server, the situation is even simpler, because you only have to open a connection socket:
int csocket=WIFI.openSockTCP("192.168.1.2",5000);
If csocket is valid (less than 255), the connection to the computer at address 192.168.1.2 on port 5000 has been established, and you can use the read and write functions seen above. Among the examples there is one (SendData) that connects to a remote server to send sensor readings at regular intervals. A server program on the remote computer is obviously needed; to ease testing, a Java program is provided alongside the library that receives the data and writes it to a file, adding a time stamp (to make up for the lack of a real-time clock on the Arduino).
In addition to these basic features, the library has all the functions needed to set various parameters: the network mask (default 255.255.0.0), a gateway address, reading the board's MAC address, and so on. In particular, there are functions to detect the access points visible in the environment. For example, to detect all the networks present:
int nn=WIFI.scanNets();
The integer variable nn will contain the number of detected networks, while the function:
char *net=WIFI.getNetScanned(i);
will return the characteristics (as a record) of the i-th detected network. Finally, for convenience, a function has been provided that returns the name of the strongest (by signal strength) unsecured network.
The library documentation describes all the available features; the library also contains a help file and is documented in the code files (in particular the .h files).
But the library is not only about connection and socket management. It includes a derived (and therefore specialized) class that handles the HTTP protocol: the Web protocol.
HTTP is a request-response protocol: both requests and responses travel over the network as packets composed of headers plus the actual data (HTML pages, images, video, or plain text).
To free the user from this whole issue, the HTTP library creates these packets using PROGMEM, i.e. the ability to place constants and texts in the flash area. Note: HTTP is a text protocol, it uses only characters, and keeping those texts in flash memory saves the Arduino's small RAM.
The HTTP library
Being a subclass of MWiFi, the HTTP library inherits all the features of MWiFi, but to use the new functions you must include the file HTTPlib.h (instead of MWiFi.h) and instantiate an HTTP object:
#include<HTTPlib.h>
HTTP WIFI;
Connecting to an access point and managing sockets works exactly as described before (this time we will probably choose port 80). Once again you must decide whether the Arduino acts as a server (this time a Web server) or as a client accessing a Web application server (such as Tomcat, GlassFish, JBoss, PHP, etc.).
Suppose you want to create a Web server that can be queried from any browser. To make the Arduino operate as a Web server you must prepare the resources it can serve, that is, the HTML response pages. These pages are stored in PROGMEM areas for the reasons given above. For example:
prog_char pageIndex[] PROGMEM =
"<html> <head>"
"<title> Index </title>"
Now you must link these buffers in memory with the names of the resources to be invoked via the browser. Resource names are the local path part of the URL (or URI), i.e. the file-name portion of a complete URL. In this minimal context, the resources to be invoked are identified only by a name, with no extension. So it comes down to associating each stored page with its corresponding Internet name (e.g. "/index").
This page actually needs to be sent, so you must connect the resource name with a function that sends it. To make the mechanism as automatic as possible, a structure, or better a typedef named WEBRES, has been prepared. This structure is the union of two fields: the resource name and the function name (which in C corresponds to an address). You therefore build a set of name-function pairs and pass them to getRequest(), which launches the matching call-back function, or sends a standard "Not Found" response if the name matches none of the predefined ones.
The following code shows the construction of an array of eight WEBRES structures and its use in the getRequest() call:
WEBRES rs[8]=
{
{"/index",pindex},
{"/Analog",panalog},
{"/RDigital",rdigital},
{"/Wdigital",wdigital},
{"/wdig",wdig},
{"/Pwm",pwmpage},
{"/PwmSet",pwmset},
{"/End",sessend}
};
void loop()
{
WIFI.getRequest(csocket,8,rs);
:
The second field of each structure is the call-back function that getRequest() launches. The call-back function sends the buffer corresponding to the selected page; its prototype is expected to be of void type (i.e. it returns nothing) and to take a single argument, a pointer to a null-terminated string supplied by the caller (see below):
void pindex(char *query)
{
WIFI.sendResponse(csocket,pageIndex);
}
Summarizing: placed in the loop, getRequest() takes care of the entire management of the request. It detects the method used, GET or POST, behaves accordingly (the data are carried differently), and launches the function corresponding to the request (or sends a "Not Found" message).
The call-back function pindex() described above, however, does nothing but send a static HTML page in response, one defined once and for all. An Arduino Web server limited to that is not very useful, since the whole point is to read values from sensors or to switch outputs. To achieve this, the HTML response page must be built on the fly so that it contains the values you want to report: a dynamic page. Building the entire page inside the call-back functions would be too expensive, so the library lets you define the page once as a static page while placing tags (label placeholders) at the positions to be completed at run time. For this purpose an alternative to sendResponse() is provided, the sendDynResponse() function, which finds and replaces the tags and then sends the page. The replacement is done sequentially, walking an array of strings prepared at the moment: the first tag encountered is replaced with the first string of the array, and so on. The tag is the '@' character; use a single one regardless of the length of the string that will replace it.
In the example, three tags will be replaced by three strings that represent the values of three digital inputs.
prog_char pagerdigital[] PROGMEM=
:
"<tr>"
"<td><div align='center'>@</div></td>"
"<td><div align='center'>@</div></td>"
"<td><div align='center'>@</div></td></tr>"
:
void rdigital(char *query)
{
  char *val[3];
  if(digitalRead(4)) val[0]=ON; else val[0]=OFF;
  if(digitalRead(5)) val[1]=ON; else val[1]=OFF;
  if(digitalRead(12)) val[2]=ON; else val[2]=OFF;
  WIFI.sendDynResponse(csocket,pagerdigital,3,val);
}
You must pass sendDynResponse() the array of strings and its size. To make the Arduino obey commands issued from the browser (for example via form buttons), you must read the data sent with the request along with the resource name. The situation differs depending on whether the request arrives as a GET or as a POST (the two cornerstone methods of the HTTP protocol). In either case you work on the argument that getRequest() passes to the call-back function.
In the first case, the data are name-value pairs, each identifying a parameter. The parameters are appended to the resource name in a format, called the query string, that encodes spaces and special characters. The query string is always supplied to the call-back function (even if zero-length; it is part of its prototype). The getParameter() function then retrieves the value (always a string) of the parameter with a given name.
void pwmset(char *query)
{
  char *pwmval;
  pwmval=WIFI.getParameter(query,strlen(query),"PWM10");
  if (pwmval!=NULL)
  {
    int pv;
    sscanf(pwmval,"%d",&pv);
    analogWrite(10,pv);
    d10=pv;
  }
  pwmval=WIFI.getParameter(query,strlen(query),"PWM11");
  if (pwmval!=NULL)
  {
    int pv;
    sscanf(pwmval,"%d",&pv);
    analogWrite(11,pv);
    d11=pv;
  }
  pwmpage(query);
}
In the second case, instead, the data may arrive in query-string format (as forms generally send them) or in any other format. Bear in mind, however, that the buffer holding the query string is provided by the library and is 64 characters long (you can redefine it via a define in HTTPlib.h); any excess data is lost.
Among the examples there is a complete Web server that reads analog and digital values, enables and disables digital outputs, and adjusts two PWM outputs. The sketch is very compact (half of it consists of HTML pages in PROGMEM) thanks to the automation provided by getRequest() and sendDynResponse().
If, instead, you want to use the Arduino as a client of a Web application server (or of a simpler CGI), you will use the functions sendRequest() and getResponse(). sendRequest() actually comes as two separate functions, depending on the method you want to use: GET or POST.
With sendRequestGET() you provide the resource name and the parameters in a single query string. With sendRequestPOST() you provide the resource name and the data, placed in a null-terminated string buffer, separately, in this way:
WIFI.addParameter(query,128,"/TestClient",NULL);
WIFI.addParameter(query,128,"A1",sa1);
:
WIFI.sendRequestGET(csocket,query);
Or this:
sprintf(rec,"%d %d %d %d",an1,an2,d1,d2);
WIFI.sendRequestPOST(csocket,"/TestClient",rec);
When using sendRequestGET(), the query string is built with the help of addParameter(): the first call initializes the query string with the resource name (and a NULL value), and subsequent calls append the name-value pairs for the individual parameters.
The getResponse() function retrieves the response from the server. The response may consist of data in various formats: an HTML page, XML, JSON, or CSV (comma-separated values).
If the data cannot be contained in a single buffer, you can call the function getNextResponseBuffer() in a loop until it returns 0.
Here is an example of an elementary connection that receives and dispatches records:
#include <MwiFi.h>

MwiFi WIFI;
int server;
int socket = 255;
boolean OpenCom = false;
char *record;

void setup()
{
  WIFI.begin();
  WIFI.ConnectWPAwithPSW("MioAcp", "pippo");
  server = WIFI.openServerTCP(5000);
}

void loop()
{
  if (!OpenCom) {
    socket = WIFI.pollingAccept(server);
    if (socket < 255) OpenCom = true;
  }
  if (OpenCom) {
    record = WIFI.readDataLn(socket);
    WIFI.writeDataLn(socket, "........");
  }
}
Sketch example
BOM
R1: 10 kohm (0805)
R2: 4,7 kohm (0805)
R3: 100 kohm (0805)
R4: 10 kohm (0805)
R5: 1 Mohm (0805)
R6: –
R7: –
R8: 1 kohm (0805)
R9: 1,5 kohm (0805)
R10: 4,7 kohm (0805)
R11: 10 kohm (0805)
R12: 330 ohm (0805)
R13: 330 ohm (0805)
R14: 330 ohm (0805)
R15: 330 ohm (0805)
C1: 100 nF (0805)
C2: 220 µF 6,3 VL (D)
C3: 22 pF (0805)
C4: 22 pF (0805)
C5: 100 nF (0805)
C6: 100 nF (0805)
C7: 10 µF 35 VL (B)
C8: 10 µF 35 VL (B)
C9: 100 nF (0805)
C10: 220 µF 6,3 VL (D)
U1: MRF24WB0MA/RM
U2: MCW1001A
U3: TC1262-3.3 (SOT-223)
T1: BC817
Q1: 8 MHz (HCX-7SB)
RST: switch
LD1: LED (0805)
LD2: LED (0805)
LD3: LED (0805)
LD4: LED (0805)
The Store
This WiFi shield for Arduino is available in our store
There’s no better way to end the first chapter than to dive in and create a simple Hello World feature from the ground up. This exercise will step you through the fundamental aspects of creating, deploying, and testing a feature. To make things more interesting, at the end of this exercise, we will also add event handlers that will fire whenever the feature is activated and deactivated. The code inside these event handlers will use the WSS object model to change some characteristics of the target site.
In this walk-through, we are going to use Microsoft Visual Studio 2005 to create a new development project for the feature. Visual Studio 2005 provides color coding of XML and ASP.NET tags, as well as the convenience of IntelliSense when working with the XML files required to define a feature.
Let’s start off by creating a new Class Library DLL project named HelloWorld. We are going to create a C# project in our example, but you can create a Visual Basic .NET project instead if you prefer. Eventually, we will add code that will be compiled into the output DLL for the feature’s event handlers. However, first we will get started by creating the feature.xml file.
Before creating the feature.xml file, consider that the files for this feature must be deployed in their own special directory inside the WSS system directory named FEATURES. The FEATURES directory is located inside another WSS system directory named TEMPLATE.
c:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES
Given the requirements of feature deployment, it makes sense to create a parallel hierarchy of folders within a Visual Studio project used to develop a WSS feature. This will make it easier to copy the feature files to the correct location and test them as you do your development work. Start by adding a folder named TEMPLATE to the root directory of the current project. Once you have created the TEMPLATE directory, create another directory inside that named FEATURES. Finally, create another directory inside the FEATURES directory using the same name as the name of the feature project. In this case the name of this directory is HelloWorld, as shown in Figure 1-12.
Figure 1-12: A Visual Studio project for developing a WSS feature
Next, create an XML file named feature.xml inside the HelloWorld directory. This is where you will add the XML-based information that defines the high-level attributes of the feature itself. Add the following XML, which defines a top-level Feature element along with the attributes that describe the feature:
<Feature Id=""
         Title="Hello World Feature"
         Description="This is my very first custom feature"
         Version="1.0.0.0"
         Scope="Web"
         Hidden="FALSE"
         ImageUrl="menuprofile.gif"
         xmlns="">
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
  </ElementManifests>
</Feature>
You can see that a feature is defined using a Feature element containing attributes such as Id, Title, Description, Version, Scope, Hidden, and ImageUrl. You must create a new GUID for the Id attribute so that your feature can be uniquely identified. The Title and Description attributes hold user-friendly text; they are shown directly to users on the WSS administrative pages used to activate and deactivate features.
The Scope attribute defines the context in which the feature can be activated and deactivated. The feature we are creating has a Scope of Web, which means it is activated and deactivated within the context of a single site. If you assign a Scope of Site, the feature is instead activated and deactivated within the scope of a site collection. The two other possible scopes are WebApplication and Farm.
As you can see, the Hidden attribute has a value of FALSE. This means that, once installed within the farm, our feature can be seen by users who might want to activate it. You can also create a feature where the Hidden attribute has a value of TRUE. This has the effect of hiding the feature in the list of available features shown to users. Hidden features must be activated from the command line, through custom code, or through an activation dependency with another feature. Activation dependencies will be discussed in more detail later in Chapter 9.
You will also notice that the ImageUrl attribute has a value that points to one of the graphic images that is part of the basic WSS installation. This image will be shown next to the feature in the user interface.
The last part of the feature.xml file shown previously is the ElementManifests element. This element contains inner ElementManifest elements that reference other XML files in which you define the elements that make up the feature. In our case, there is a single ElementManifest element whose Location attribute points to a file named elements.xml.
Inside the TEMPLATE directory there is a directory named XML that contains several XML schemas, including one named wss.xsd. If you associate this schema file with feature files such as feature.xml and elements.xml, Visual Studio will provide IntelliSense, which makes it much easier to author a custom feature. You may also copy these XSD files into C:\Program Files\Microsoft Visual Studio 8\Xml\Schemas\.
Now it's time to create the elements.xml file and define a single CustomAction element that will be used to add a simple menu command to the Site Actions menu. Add the following XML, which defines a CustomAction element, to elements.xml:
<Elements xmlns="">
  <CustomAction GroupId=""
                Title=""
                Description="">
    <UrlAction Url=""/>
  </CustomAction>
</Elements>
This CustomAction element has been designed to add a menu command to the Site Actions menu. It provides a user-friendly Title and Description as well as a URL to which the user is redirected when the menu command is selected. While this example of a feature with a single element does not go very far into what can be done with features, it provides a simple starting point for going through the steps of installing and testing a feature.
Now that we have created the feature.xml file and the elements.xml file to define the HelloWorld feature, there are three steps involved in installing it for testing purposes. First, you must copy the HelloWorld feature directory to the WSS system FEATURES directory. Second, you must run a STSADM.EXE operation to install the feature with WSS. Finally, you must activate the feature inside the context of a WSS site. You can automate the first two steps by creating a batch file named install.bat at the root directory of the HelloWorld project and adding the following command line instructions.
REM - Adjust the paths below if WSS is installed elsewhere
@SET TEMPLATEDIR="c:\program files\common files\microsoft shared\web server extensions\12\Template"
@SET STSADM="c:\program files\common files\microsoft shared\web server extensions\12\bin\stsadm"
Echo Copying files
xcopy /e /y TEMPLATE\* %TEMPLATEDIR%
Echo Installing feature
%STSADM% -o InstallFeature -filename HelloWorld\feature.xml -force
Echo Restart IIS Worker Process
IISRESET
Actually, you can also automate the final step of activating the feature within a specific site by running the ActivateFeature operation with the STSADM utility. However, we have avoided this in our example because we want you to go through the process of explicitly activating the feature as users will do through the WSS user interface.
Once you have added the install.bat file, you can configure Visual Studio to run it each time you rebuild the HelloWorld project by going to the Build Events tab within the Project Properties and adding the following post-build event command line instructions.
cd $(ProjectDir)
Install.bat
The first line with cd $(ProjectDir) is required to change the current directory to that of the project directory. The second line runs the batch file to copy the feature files to the correct location and install the feature with the InstallFeature operation of the command-line STSADM.EXE utility.
Once the feature has been properly installed, you should be able to activate it within the context of a site. Within the top-level site of the site collection you created earlier in this chapter, navigate to the Site Settings page. In the Site Administration section, click the link titled Site Features. This should take you to a page like the one shown in Figure 1-13. Note that if you are working within a farm that has MOSS installed, you will see many more features than in a farm with just WSS.
Figure 1-13: Once a feature is installed, it can be activated and deactivated by users.
You should be able to locate the HelloWorld feature on the Site Features page. You can then go through the act of clicking the button to activate the feature. Once you have done this, you should be able to drop down the Site Actions menu and see the custom menu item, as shown in Figure 1-14. If you select this custom menu item, you will be redirected to the URL that was defined by the Url attribute of the UrlAction element within the elements.xml file.
Figure 1-14: A CustomAction element can be used to add custom menu commands to the site actions menu.
After you have successfully activated the feature and tested the custom menu command, you should also experiment by returning to the Site Features page and deactivating the feature. Once you have deactivated the HelloWorld feature, you should be able to verify that the custom menu has been removed from the Site Actions menu.
You have now witnessed the fundamental principle behind features. Developers create various types of site elements that can be added or removed from a site through the process of activation and deactivation.
Now it's time to take the HelloWorld feature a little further by adding event handlers and programming against the WSS object model. First, add a project reference to Microsoft.SharePoint.dll. Then locate the source file named Class1.cs, rename it to FeatureReceiver.cs, and add the following code.
using System;
using Microsoft.SharePoint;

namespace HelloWorld {

    public class FeatureReceiver : SPFeatureReceiver {

        public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }

        public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }

        public override void FeatureActivated(SPFeatureReceiverProperties properties) {
            SPWeb site = (SPWeb)properties.Feature.Parent;
            // track original site Title using SPWeb property bag
            site.Properties["OriginalTitle"] = site.Title;
            site.Properties.Update();
            // update site title
            site.Title = "Hello World";
            site.Update();
        }

        public override void FeatureDeactivating(SPFeatureReceiverProperties properties) {
            // reset site Title back to its original value
            SPWeb site = (SPWeb)properties.Feature.Parent;
            site.Title = site.Properties["OriginalTitle"];
            site.Update();
        }
    }
}
The first thing you should notice is how you create an event handler that fires when a feature is activated or deactivated. You do this by creating a class that inherits from the SPFeatureReceiver class. As you can see, you handle events by overriding virtual methods in the base class such as FeatureActivated and FeatureDeactivating. There are also two other event handlers that fire when a feature is installed or uninstalled, but we are not going to use them in this introductory example.
The FeatureActivated method has been written to update the title of the current site using the WSS object model. Note the technique used to obtain a reference to the current site: the properties parameter is used to acquire a reference to an SPWeb object. The properties parameter is based on the SPFeatureReceiverProperties class, which exposes a Feature property that, in turn, exposes a Parent property holding a reference to the current site. The site title is changed by assigning a new value to the Title property of the SPWeb object and then calling the Update method.
Also note that this feature has been designed to store the original value of the site Title so that it can be restored whenever the feature is deactivated. This is accomplished by using a persistent property bag, scoped to the site, that is accessible through an SPWeb object's Properties collection. Many of the objects in the WSS object model have a similar Properties property, which can be used to track name-value pairs in a persistent property bag; WSS handles persisting these name-value pairs to the content database and retrieving them on demand.
Now that we have written the code for the feature’s two event handlers, it’s time to think about what’s required to deploy the HelloWorld.dll assembly. The first thing to consider is that this assembly DLL must be deployed in the Global Assembly Cache (GAC), which means you must add a key file to the project in order to sign the resulting output DLL during compilation with a strong name.
Once you have added the key file and configured the HelloWorld project to build HelloWorld.dll with a strong name, you can also add another instruction line to the post-event build command line to install (or overwrite) the assembly in the GAC each time you build the current project. The command line instructions for the post-event build should now look like this:
"%programfiles%\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe" -if $(TargetPath)
cd $(ProjectDir)
Install.bat
The next step is to update the feature.xml file with two new attributes so that WSS knows that there are event handlers that should be fired whenever the feature is activated or deactivated. This can be accomplished by adding the ReceiverAssembly attribute and the ReceiverClass attribute, as shown here.
<Feature Id=""
         Title="Hello World Feature"
         Description="This is my very first custom feature"
         Version="1.0.0.0"
         Scope="Web"
         Hidden="FALSE"
         ImageUrl="menuprofile.gif"
         ReceiverAssembly="HelloWorld, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b59ad8f489c4a334"
         ReceiverClass="HelloWorld.FeatureReceiver">
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
  </ElementManifests>
</Feature>
The ReceiverAssembly attribute should contain the four-part name of an assembly that has already been installed in the GAC. The ReceiverClass attribute should contain the namespace-qualified name of a public class within the receiver assembly that inherits SPFeatureReceiver.
Once you have made these changes to the feature.xml file, you should be able to test your work. When you rebuild the HelloWorld project, Visual Studio should run the install.bat file to copy the updated version of the feature.xml file to the WSS FEATURES directory and to install the updated version of feature.xml with WSS. The build process should also compile HelloWorld.dll with a strong name and install it in the GAC. Note that you will likely be required to run an IISRESET command to restart the IIS worker process. This is due to the fact that features and assemblies loaded from the GAC are cached by WSS within the IIS worker process.
At this point, you should be able to test your work by activating and deactivating the feature within the context of a WSS site. When you activate the feature, it should change the Title of the site to "Hello World." When you deactivate the feature, it should restore the Title of the site to its original value.
If you have successfully completed these steps, you are well on your way to becoming an accomplished WSS developer. That’s because creating features and programming against the WSS object model are the two most basic skills you need to acquire.
Dear Rosetta users!
I am trying to build the latest version of Rosetta, but I am getting errors while compiling the source code with ./scons.py.
The error file is attached herewith; kindly help me out.
Many thanks in advance!
malkeet
The error log is:
genesis:/home/gnss/singhma> python -V
Python 2.6.6
genesis:/home/gnss/singhma> p27
[singhma@genesis ~]$ python -V
Python 2.7.13
[singhma@genesis ~]$ cd rosetta/software
[singhma@genesis software]$ ll
total 3020192
-rw-r--r-- 1 singhma gnss 252221850 Jun 5 16:32 rosetta_3.8_user_guide.tgz
drwxr-xr-x 6 singhma gnss 4096 Feb 26 02:55 rosetta_src_2017.08.59291_bundle
-rw-r--r-- 1 singhma gnss 2840439965 Jun 5 16:31 rosetta_src_3.8_bundle.tgz
[singhma@genesis software]$ cd rosetta_src_2017.08.59291_bundle/
[singhma@genesis rosetta_src_2017.08.59291_bundle]$ cd ma i n/source
bash: cd: ma: No such file or directory
[singhma@genesis rosetta_src_2017.08.59291_bundle]$ ./scons.py -j12 mode=release bin
bash: ./scons.py: No such file or directory
[singhma@genesis rosetta_src_2017.08.59291_bundle]$ cd main/source
[singhma@genesis source]$ ./scons.py -j12 mode=release bin
scons: Reading SConscript files ...
Running versioning script ... fatal: Not a git repository (or any of the parent directories): .git
Done. (0.0 seconds)
Number of option files updated: 0
Total 3922 options.
Finished updating ResidueProperty code -- no changes needed
Finished updating VariantType code -- no changes needed
scons: done reading SConscript files.
scons: Building targets ...
g++ -o build/src/release/linux/2.6/64/x86/gcc/4.4/default/apps/public/AbinitioRelax.o -c -std=c++0x /4.4
In file included from src/utility/options/Option.hh:25,
from src/utility/options/OptionCollection.hh:23,
from src/basic/options/option.hh:17,
...
src/utility/signals/BufferedSignalHub.hh:124: instantiated from 'void utility::signals::BufferedSignalHub<ResultType, Signal>::unblock() [with ReturnType = void, Signal = core::pose::signals::DestructionEvent]'
src/utility/options/ScalarOption_T_.hh:155: instantiated from here
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/bits/stl_iterator.h:708: error: cannot increment a pointer to incomplete type 'core::pose::signals::DestructionEvent'
scons: *** [build/src/release/linux/2.6/64/x86/gcc/4.4/default/protocols/mpi_refinement/MPI_Refinement.os] Error 1
scons: building terminated because of errors.
[singhma@genesis source]$
[singhma@genesis source]$
Your comment was long enough that it was breaking the forum software. I edited out some of the error stream.
The problem is that GCC 4.4 is too old to compile Rosetta 3.8.
Thank you, I will try with an updated version of GCC.
malkeet
Hello!
I tried compiling Rosetta 3.8 with GCC 6.2, but it still ends with errors (file attached herewith).
Many thanks in advance!
Malkeet
Interesting. These are the same errors you had with GCC 4.4. I'd suspect something is going wrong in pathing and g++ still points to 4.4, perhaps? What does "which g++" report? What does "g++ --version" (or whatever its version flag is) report?
Hello!
the version command returns:
and the which command: /usr/bin/g++
Thanks!
Malkeet
There is a compiler compatibility script at the top of the documentation page I linked earlier - what does it return?
Have you made modifications to the settings files in tools/build? Modifications to tools/build/user.settings are common to resolve pathing issues - I'm wondering if it's overriding your path in such a way that you still get g++ 4 from scons even though you have a more recent one installed in your normal environment. The prepended "scl enable ..." makes me especially suspicious of that type of problem.
Hello !
Thanks for your response. I tried the compiler command and it passed all tests (file attached)
I didn't make any change to tools/build/user.settings file. Could you please tell what to do in this file ?
Thanks in advance!
malkeet
If `g++ --version` is giving you the correct version at the commandline, but scons doesn't seem to be picking things up, I'd first recommend copying main/source/tools/build/site.settings.killdevil to main/source/tools/build/site.settings
By default Scons will bypass your shell's PATH settings. The site.settings.killdevil has facilities for enabling your path settings in the scons build process.
Hello!
Is it the site.settings file? Because there is no such file; there are only files named 'site.settings.xx', where xx is wiggins, vaxmpi, etc.
Based on our earlier discussion, I copied the content of site.settings.killdevil to user.settings and tried to compile the source.
[singhma@genesis source]$ ./scons.py -j8 mode=release bin
scons: Reading SConscript files ...
Traceback (most recent call last):
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/SConstruct", line 150, in main
build = SConscript("tools/build/setup.py")
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/external/scons-local/scons-local-2.0.1/SCons/Script/SConscript.py", line 614, in __call__
return method(*args, **kw)
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/external/scons-local/scons-local-2.0.1/SCons/Script/SConscript.py", line 551, in SConscript
return _SConscript(self.fs, *files, **subst_kw)
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/external/scons-local/scons-local-2.0.1/SCons/Script/SConscript.py", line 260, in _SConscript
exec _file_ in call_stack[-1].globals
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/tools/build/setup.py", line 429, in <module>
build = setup()
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/tools/build/setup.py", line 420, in setup
build.settings = setup_build_settings(build.options)
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/tools/build/setup.py", line 216, in setup_build_settings
user = Settings.load("user.settings", "settings")
File "/home/gnss/singhma/rosetta/software/rosetta_src_2017.08.59291_bundle/main/source/tools/build/settings.py", line 130, in load
execfile(file, settings)
File "user.settings", line 29, in <module>
NameError: name 'os' is not defined
scons: done reading SConscript files.
scons: Building targets ...
scons: `bin' is up to date.
scons: done building targets.
This seems like it completed, but I suspect there is some problem:
- it took some seconds to complete.
- 'bin' folder is empty.
Thanks!
malkeet
Right, the "site.settings" file doesn't exist, so you copy the "site.settings.killdevil" file to that name, creating it.
You can also copy it to the "user.settings" file, though if you do that you may need to change the line which reads `"site" : {` to read `"user" : {` instead.
I'm a little surprised at the error message you're getting, as the 'os' should be present due to the "import os" line in the site.settings.killdevil file. Did you copy site.settings.killdevil exactly? (e.g. with something like `cp site.settings.killdevil user.settings`?) The only way I think you'd get that error is if something was messed up with the copy.
I'd recommend deleting your current `user.settings` file, and then doing a `cp site.settings.killdevil site.settings` in the Rosetta/main/source/tools/build/ directory. That should fix things.
Dear rmoretti,
I am also trying, in vain, to install Rosetta on macOS High Sierra 10.13.3, but I seem to be completely lost in space on this one, like a comet lander. I am getting a somewhat similar error output to malkeet's, and I was therefore hoping that the killdevil-copying solution would solve the problem. However, I end up with output that ends like the following, with a raised KeyError. Do you have any ideas on what I need to do to solve this? Any help would be most appreciated! Thanks in advance!
..... tools/build/setup.py", line 210, in setup_build_settings
site = Settings.load("site.settings", "settings")
File "/Users/karlmarkusroupe/rosetta_bin_mac_2017.08.59291_bundle/main/source/tools/build/settings.py", line 130, in load
execfile(file, settings)
File "site.settings", line 28, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'LD_LIBRARY_PATH'
scons: done reading SConscript files.
scons: Building targets ...
scons: `bin' is up to date.
scons: done building targets.
If your system doesn't have an LD_LIBRARY_PATH set, you can simply comment out the line that refers to it in your new site.settings file.
Dear rmoretti,
Sorry for the late reply and for having to bother you again. I had set up an email alert to go off when you answered but something must have gone amiss.
Anyway, commenting out the LD_LIBRARY_PATH line did the trick, but I now end up with 4 new "Error 1" errors. All of them seem to involve the reference to CDRSetOptionsParser. I am copy-pasting the end part of it below. Do you have any good advice on how to get past this new setback? I hope this is as easy a fix, and thanks for your time!
src/protocols/antibody/database/CDRSetOptionsParser.cc:104:31: error: reference
to 'core' is ambiguous
for ( core::Size i = 1; i <= core::Size(CDRNameEnum_proto_total)...
^
src/core/types.hh:28:11: note: candidate found by name lookup is 'core'
namespace core {
^
/usr/local/include/boost/core/demangle.hpp:47:11: note: candidate found by name
lookup is 'boost::core'
namespace core
^
4 errors generated.
scons: *** [build/src/release/macos/17.4/64/x86/clang/9.0/default/protocols/antibody/database/CDRSetOptionsParser.os] Error 1
scons: building terminated because of errors.
The earlier site.settings changes make SCons MORE aware of what is on your system - it looks in more places to find the bits and pieces it needs.
The error is because it is finding too many things - it is finding your system-installed boost and using that in place of the boost that comes with Rosetta. That boost's namespace core (not present in our boost, apparently) conflicts with ours.
The strongest solution is to tell SCons to ignore that boost install, but I don't know how to do it.
You might be able to re-order the headers in that .cc file so that the boost headers - or, if the headers it includes themselves include boost, THOSE headers - are at the BOTTOM of the list. The fact that the compiler says ambiguous makes me think it won't help. Removing a boost header would work, but if the header is otherwise necessary it will stop compiling...
This is actually due to a bug in the build system that already has a fix pending. The issue is that the compiler is finding a system boost library, when Rosetta is expecting it to use the included Boost library -- this results in the confusion. The fix is to fix the settings for compilation such that the included boost library is found before the system one.
In the Rosetta/main/source/tools/build/basic.settings file, go to line 1750 or so (In the "clang" block) and add four lines and modify two lines, such that it looks like the following:
That should fix it on Macs, assuming you're using the default Clang compiler (if you have things set up for GCC, you'll need to do similar changes in the GCC block).
Hello!
Bingo!! Rosetta is compiled and working fine now!!!!
Thank you dear for your time and suggestions!
Malkeet
Hello!
How to use multiple cores for rosetta calculations?
and second is it possible to use GPUs for ddG calcultions in rosetta?
Thanks in advance!
Malkeet
GPUs: nope. We have (almost) no public support for GPU. They optimize calculation in basically the opposite way Rosetta optimizes things, so it requires a lot of rewrite to use them efficiently.
multiple cores: We don't support multithreading either. We do support MPI, which will let you run multiple Rosetta calls together, sharing their output space (so that one run produces _0001, the next produces _0002, etc.). DDG does not support this, by the way - it runs multiple cycles internally and does not share the job distributor with most of Rosetta.
Hello!
I have compiled the Rosetta suite from source, and downloaded the Linux binary version too.
If I use the binary version for the protein preparation, it provides relax.static.linuxgccrelease, but in the compiled version the file is "relax.default.linuxgccrelease".
Moreover, the relax protocol runs very well in the binary version; however, in the compiled version it gives errors (/home/gnss/singhma/rosetta/software/ros-c/main/source/bin/relax.default.linuxgccrelease: error while loading shared libraries: libcppdb.so: cannot open shared object file: No such file or directory).
what could be the reason for this?
Malkeet
"error while loading shared libraries: libcppdb.so: cannot open shared object file: No such file or directory"
If you want background - Google around to learn about static versus dynamic linking. The binaries we provide are statically linked, which means they are bigger but rely on no outside shared code. The compiled binaries are dynamically linked, which means they're much smaller but expect system libraries to be in place to rely on.
The system libraries aren't there because either A) the binary has been moved between the machines, B) they've been uninstalled, or most likely C) something has changed in your environment, so the paths are wrong. If this is the thread I think it is (I can't tell from the message compose page) - you had to do some "setup" at commandline to make the compiler available in your path? Probably Rosetta needs the same setup too to make sure the dynamic libraries are there on your path for runtime.
Regarding the setup for Rosetta to find the dynamic libraries, it's specifically setting the LD_LIBRARY_PATH environment variable. (This isn't a Rosetta-specific setting, but a system one.) You should be able to put it in your shell setup file (e.g. ~/.bashrc) and have it be present for all runs.
In this article, we’ll be understanding the working and the need for the Stack Data Structure.
When programming, we often deal with a huge amount of uneven and raw data. This calls for the need of data structures to store the data and enable the user to operate on the data efficiently.
Table of Contents
Getting started with the Stack Data Structure
A stack is a linear data structure that follows a particular order for the insertion and manipulation of data. A stack uses the Last-In-First-Out (LIFO) method for the input and output of data.
A stack stores elements of a particular data type in a pre-defined order, i.e. LIFO.
As seen in the above pictorial representation of stack, the last element is the first to be popped out from the stack.
The insertion and deletion of data items take place from the same end of the stack.
Let’s try to relate stacks with real-life scenarios.
Consider a pile of books. If you observe, the last book placed would be the first one to be accessed by us. Moreover, a new book can be placed only at the top of the pile, thus depicting the working of a stack.
Operations Performed on a Stack
The Stack data structure primarily deals with the following operations on the data:
push(): This function adds data elements to the stack.
pop(): It removes the top element from the stack.
top(): Returns the topmost element i.e. element at the top position of the stack.
isEmpty(): Checks whether the stack is empty or not i.e. Underflow condition.
isFull(): Checks whether the stack is full or not i.e. Overflow condition.
Working of the Stack Data Structure
Stack data structure follows the LIFO pattern for the insertion and manipulation of elements into it.
Note: We have set a pointer element called ‘top‘ to keep the account of the topmost element in the stack. This would help the insertion and deletion operation to be performed in an efficient manner.
PUSH (insertion) operation:
- Initially, it checks whether the stack is full or not i.e. checks for the Overflow condition.
- In case the stack is found to be full, it will exit with an Overflow message.
- If the stack is found to have space for occupying elements, then the top counter is incremented by 1 and then the data item is added to the stack.
POP (deletion) operation:
- Initially, it checks whether the stack is empty or not, i.e. checks for the Underflow condition.
- In case the stack is found to be empty, it will exit with an Underflow message.
- If the stack is not empty, display the element pointed by the top pointer element and then decrement the top pointer by 1.
Implementation of the Stack Data Structure
Stack can be implemented using either of the following ways:
- Linked List
- Array
We’ve already written a comprehensive article on Stack in C++. For the demonstration, I’ll create a simple implementation of a stack using an Array in C++ language.
Example:
#include <iostream>
using namespace std;

class stack_data {
public:
    int top;
    int data[5];

    stack_data() { top = -1; }

    void push_element(int a);
    int pop_element();
    void isEmpty();
    void isFull();
    void display();
};

void stack_data::push_element(int a) {
    if (top >= 4) {                  // data[] holds 5 elements, indices 0..4
        cout << "Overflow\n";
    } else {
        data[++top] = a;
    }
}

int stack_data::pop_element() {
    if (top < 0) {
        cout << "Underflow\n";
        return 0;
    } else {
        return data[top--];
    }
}

void stack_data::isEmpty() {
    if (top < 0) {
        cout << "Stack Underflow\n";
    } else {
        cout << "Stack can occupy elements.\n";
    }
}

void stack_data::isFull() {
    if (top >= 4) {
        cout << "Overflow\n";
    } else {
        cout << "Stack is not full.\n";
    }
}

void stack_data::display() {
    for (int i = 0; i <= top; i++) { // print only the elements actually pushed
        cout << data[i] << endl;
    }
}

int main() {
    stack_data obj;
    obj.isFull();
    obj.push_element(40);
    obj.push_element(99);
    obj.push_element(66);
    obj.push_element(40);
    obj.push_element(11);
    cout << "Stack after insertion of elements:\n";
    obj.display();
    cout << "Element popped from the stack:\n";
    cout << obj.pop_element() << endl;
    return 0;
}
Output:
Stack is not full.
Stack after insertion of elements:
40
99
66
40
11
Element popped from the stack:
11
Features of Stack
- Stack follows the Last-In-First-Out fashion to deal with the insertion and deletion of elements.
- The insertion and deletion of data items happen only from a single end.
- Stack is considered to be dynamic in nature, i.e. it is a dynamic data structure.
- Stacks do not occupy a fixed size of data items.
- The size of the stack keeps changing with every push() and pop() operation.
Applications of Stack
- In programming, Stack can be used to reverse the elements of an array or string.
- A stack can be useful in situations where Backtracking is necessary such as N-queens problem, etc.
- In editors, the undo and redo functions can be performed efficiently by Stack.
- Infix/prefix/postfix conversion of expressions.
Time Complexity of Stack Operations
- Push operation i.e. push(): O(1)
- Pop operation i.e. pop(): O(1)
The time complexities of the push() and pop() operations stay O(1) because the insertion and deletion of data items happen only at one end of the stack, making each a single-step operation.
Conclusion
Thus, in this article, we have understood the need for and implementation of a Stack data structure in various programming applications.
I have narrowed to these files:
main.cpp
#include <iostream>
#define HELLO
#include "Header.h"

int main()
{
    return 0;
}
Header.h
#pragma once

#if defined( HELLO )
#define MYINT int
#else
#define MYINT float
#endif
Here, everything works as expected: since I have defined HELLO in main.cpp, MYINT is defined as int in Header.h.
However, as soon as I add a .cpp file for Header.h:
Header.cpp
#include <Header.h>

void function()
{
}
then Header.h no longer recognizes the HELLO defined in main.cpp, and defines MYINT as float.
I thought it might be because the compiler compiles Header.cpp before main.cpp. However, I have checked the order in which it compiles them, and it compiles main.cpp first and then Header.cpp.
So I really have no clue why else could it happen such strange behavior.
Any ideas?
Thanks!
List const value bidirectional iterator.
This iterator traverses the elements of the list in the order they are stored in the list and returns a reference to the user-defined const value when dereferenced. If one wants to traverse elements in the order of their ID numbers instead, just use a "for" loop to iterate from zero to the number of items in the list and make use of the constant-time lookup-by-ID feature.
Iterators are stable across insertion and erasure. In other words, an iterator is guaranteed to not become invalid when other elements are added to or removed from the container. Added elements will become part of any existing iterator traversals when they are inserted between that iterator's current and ending position.
Definition at line 277 of file IndexedList.h.
#include <IndexedList.h>
NAME
SSL_do_handshake - perform a TLS/SSL handshake
SYNOPSIS
#include <openssl/ssl.h>

int SSL_do_handshake(SSL *ssl);
DESCRIPTION
SSL_do_handshake() will wait for an SSL/TLS handshake to take place. If the connection is in client mode, the handshake will be started. The handshake routines may have to be explicitly set in advance using either SSL_set_connect_state(3) or SSL_set_accept_state(3).
NOTES
The behaviour of SSL_do_handshake() depends on the underlying BIO.
If the underlying BIO is blocking, SSL_do_handshake() will only return once the handshake has been finished or an error occurred.
SEE ALSO
SSL_connect(3), SSL_accept(3), ssl(7), bio(7), SSL_set_connect_state(3)
Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at.
The wscanf() function is defined in <cwchar> header file.
wscanf() prototype
int wscanf( const wchar_t* format, ... );
The wscanf() function reads data from stdin and stores the values into the respective variables. A conversion specification in the format string may include:
- An optional assignment-suppressing character *; if present, wscanf() reads the input but does not assign the result to any receiving argument.
- An optional positive integer number that specifies the maximum field width, i.e. the maximum number of characters that wscanf() is allowed to consume for that conversion.
wscanf() Return value
- The wscanf() function returns the number of receiving arguments successfully assigned.
- If failure occurs before the first receiving argument was assigned, EOF is returned.
Example: How wscanf() function works?
#include <cwchar>
#include <clocale>
#include <cwctype>
#include <iostream>
using namespace std;

int main() {
    wchar_t hebrew_str[] = L"\u05D0 \u05D1 \u05E1 \u05D3 \u05EA";
    wchar_t ch;

    setlocale(LC_ALL, "en_US.UTF-8");

    wprintf(L"Enter a wide character: ");
    wscanf(L"%lc", &ch);

    if (iswalnum(ch))
        wcout << ch << L" is alphanumeric." << endl;
    else
        wcout << ch << L" is not alphanumeric." << endl;

    return 0;
}
When you run the program, a possible output will be:
Enter a wide character: ∭
∭ is not alphanumeric.
Compare two berval structures.
#include "slapi-plugin.h"

int slapi_berval_cmp(const struct berval *L, const struct berval *R);
This function takes the following parameters:
L - one of the berval structures
R - the other berval structure
This function checks whether two berval structures are equivalent.
This function returns 0 if the two berval structures are equivalent. It returns a negative value if L is shorter than R, and a positive value if L is longer than R. If L and R are of the same size but their content differs, this function returns a negative value if L is less than R, or a positive value if L is greater than R, where L and R are compared as arrays of bytes.
I personally find Google's protocol buffers library (protobuf) extremely convenient for efficient serialization and de-serialization of structured data from multiple programming languages. protobufs are perfect for TCP/IP links in general and socket-based IPC in particular.
Framing (the method of dividing a long stream of bytes into discrete messages) isn't immediately obvious with protobuf. What you get from a protobuf serialization is a binary buffer of data. You almost certainly want to send more than one such buffer over time, so how does your peer know when one message ends and another starts?
I've seen opinions online that failing to specify this is a shortcoming of protobuf. I disagree. The official protobuf documentation clearly mentions this issue, saying:
[...]. [...]
This technique is called length-prefix framing. It's efficient in both space and time, and is trivial to understand and implement as I hope to show in this article.
Let's start with a diagram that demonstrates how a message goes from being created to being sent into a TCP/IP socket:
We have a protobuf message filled in with data [1]. The steps are:
- Serialization: protobuf handles this for us, converting the message into binary data (essentially a string of byte values).
- Framing: the length of the serialized string of data is known. This length is packed into a fixed encoding and is prepended to the serialized data.
- Sending: the combined length + data are sent into the socket.
This is neither a protobuf nor socket tutorial, so I'll just focus on step 2 here. What does "length is packed into a fixed encoding" mean?
The length is just an integer of a finite size. Suppose for the sake of discussion we won't be sending messages larger than 4 GiB in size [2]. Then all message sizes fit into 4 bytes. We still have to decide which byte gets sent first. Let's use the high byte first (also known as big-endian), to be true to the network byte order.
What about the receiver? How does one receive full messages with the scheme described above. Very simply - just follow the steps in reverse [3]:
- First receive the length. Since it's fixed size we know how many bytes we need to take off the wire. Using the example encoding described above, we receive 4 bytes and assuming they represent a 32-bit integer in big-endian order, decode them to get the length.
- Receive exactly length bytes - this is the serialized data.
- Use protobuf's de-serialization services to convert the serialized data into a message.
That's about it - we have a fully specified protocol. Given an initial state in which no data has yet been exchanged, we can send and receive arbitrary amounts of messages between peers, safely and conveniently.
To make this even clearer, I will now present some Python code that implements this protocol. Here's how we send a message:
def send_message(sock, message):
    """ Send a serialized message (protobuf Message interface)
        to a socket, prepended by its length packed in 4 bytes
        (big endian).
    """
    s = message.SerializeToString()
    packed_len = struct.pack('>L', len(s))
    sock.sendall(packed_len + s)
The three lines that constitute this function are exactly the three protocol steps outlined in the diagram: serialize, pack, send. There really isn't more to it.
Receiving is just a tad more complicated:
def get_message(sock, msgtype):
    """ Read a message from a socket. msgtype is a subclass of
        protobuf Message.
    """
    len_buf = socket_read_n(sock, 4)
    msg_len = struct.unpack('>L', len_buf)[0]
    msg_buf = socket_read_n(sock, msg_len)
    msg = msgtype()
    msg.ParseFromString(msg_buf)
    return msg
Since only the user of get_message knows the actual type of the protobuf message, it (the class) is passed as the msgtype argument [4]. We also use a utility function for reading an exact amount of data from a socket. Here it is:
def socket_read_n(sock, n):
    """ Read exactly n bytes from the socket.
        Raise RuntimeError if the connection closed before
        n bytes were read.
    """
    buf = ''
    while n > 0:
        data = sock.recv(n)
        if data == '':
            raise RuntimeError('unexpected connection close')
        buf += data
        n -= len(data)
    return buf
Sure, Python has its cute way of making everything look short and simple, but I've also implemented similar code in C++ and Java, and it's not much longer or more complicated there. While I ignored efficiency here (freely copying buffers, which may be large), the protobuf API actually provides all the means necessary to write copy-free code, if you're concerned about runtime. In this article my optimization was for simplicity and clarity.
Hi all,
I have successfully called peaks on my bam files using earlier versions of MACS, however, in my first attempt at using MACS2 (using both the normal Galaxy instance and the Cistrome instance), I am getting fatal error messages when trying to use MACS2 callpeaks on my datasets. I have read in posts with errors similar to mine, that this could be due to the files not being aligned correctly. However, I have bam.bai outputs for both of my datasets, and have even successfully generated new ones using the IdxStats tool. Is is possible I need to merge these files in some way before running them through MACS2, or is there a Galaxy server error that could be causing this?
Any help would be much appreciated!
Please post the entire error message.
Fatal error: Exit code 1 ()
Traceback (most recent call last): File "/galaxy/main/deps/macs2/2.1.0.20151222/iuc/package_macs2_2_1_0_20151222/e1370f7d5e2f/bin/macs2", line 5, in <module> pkg_resources.run_script('MACS2==2.1.0.20151222', 'macs2') File "/galaxy-repl/main/venv/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_script self.require(requires)[0].run_script(script_name, ns) File "/galaxy-repl/main/venv/lib/python2.7/site-packages/pkg_resources.py", line 1462, in run_script exec_(script_code, namespace, namespace) File "/galaxy-repl/main/venv/lib/python2.7/site-packages/pkg_resources.py", line 41, in exec_ exec("""exec code in globs, locs""") File "<string>", line 1, 614, 56, in main
File "build/bdist.linux-x86_64/egg/MACS2/callpeak_cmd.py", line 261, in run File "MACS2/PeakDetect.pyx", line 105, in MACS2.PeakDetect.PeakDetect.call_peaks (MACS2/PeakDetect.c:1632) File "MACS2/PeakDetect.pyx", line 159, in MACS2.PeakDetect.PeakDetect.__call_peaks_w_control (MACS2/PeakDetect.c:1983) ZeroDivisionError: float division
Did your control sample have any alignments? What does it print out in the "#2 Use XXXX as fragment length", "#2 d: XXXX", or "#2 Since --fix-bimodal is set, MACS will use XXXX as fragment length" lines (only one of these will be present)?
Yes it does. It's saying to use 0 as fragment length, however I don't get this problem when I run the files as single-end. The data itself was processed using paired-end seq, but the files I have been given are in BAM format, instead of BAMPE. Do you think this could be causing the issue?
There have been bugs at various points with paired-end reads, this is presumably one of them. The "Use 0 as fragment length" is what's causing this crash that you're seeing, but you're not doing anything that's causing this to happen.
I suspect that you can get around this by specify "Do not build the shifting model" under "Build Model" and setting the "Arbitrary extension size in bp" to whatever the median fragment size is (you can use "bamPEFragmentSize" to get this). The results should be pretty close to what you'd get if you weren't running into this bug.
Thank you so much! This worked perfectly.
Some of the hard-coded places can be seen here:.
I don't know much real-world code in GHC on Windows, so this may be a moot point, but please consider compatibility with existing libraries before going for the numbered CRTs. See for possible issues.
Wouldn't this help? I'm sorry if it's not relevant, but it's one of the changes I did when I was testing GHC with MSYS2 mingw-w64 toolchains.
diff --git a/rts/package.conf.in b/rts/package.conf.in index 5c6d240..db1d281 100644 --- a/rts/package.conf.in +++ b/rts/package.conf.in @@ -57,9 +57,7 @@ unresolved symbols. */ ,"bfd", "iberty" /* for debugging */ #endif #ifdef HAVE_LIBMINGWEX -# ifndef INSTALLING /* Bundled Mingw is behind */ ,"mingwex" -# endif #endif #ifdef USE_LIBDW , "elf"
Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 3.0.0
- Component/s: java client
- Labels: None
- Hadoop Flags: Reviewed
Description
Issue Links
- is blocked by
ZOOKEEPER-79 Document jacob's leader election on the wiki recipes page
- Closed
Activity
-1 overall. Here are the results of testing the latest attachment
against trunk revision 768067.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 42 new or modified tests.
-1 patch. The patch command could not apply the patch.
Console output:
This message is automatically generated.
Committed revision 768067.
Thanks Mahadev!
this patch should the fix the zk_log.h issue.
thanks for pointing it out pat.
I'd like to commit this asap - just waiting on the change to name the log file "zookeeper_log.h" when it's moved into the public include directory.
see my comment:
+1, it looks good to me.
-1 for release audit is expected because some files are checked in with non apache headers (but are compatible with apache).
-1 overall. Here are the results of testing the latest attachment
against trunk revision 766160.
111 release audit warnings (more than the trunk's current 105 warnings).
+1 core tests. The patch passed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
Release audit warnings:
Findbugs warnings:
Console output:
This message is automatically generated.
running through hudson
this patch addresses pat's concern.
also
- modularized the c code methods a little more so that its more readable
- changed the return codes and relevant doxygen comments to be in line with c libraries (like 0 for success and not zero for failure).
- the zk_log.h needs to be moved to src/c/include as I had mentioned earlier.
zk_log.h is moved, however Makefile.am has not been updated.
be sure to include it in pkginclude_HEADERS
I'd rather that the include name be "zookeeper_log.h" to be consistent with with the existing files.
(granted some of the files are mis-named, like recordio.h, but we should fix that in 4.0)
this patch should fix most of the issues raised by pat and ben.
you will have to do the following before you can apply the patch –
svn move src/c/src/zk_log.h src/c/include/zk_log.h
the include file is necessary to use LOG_* macros.
ill just list out what I have fixed in this patch..
- improved the docs for recipes and recipes/lock. both of them have a README.txt. the recipes one lists out how to create new recipes and conventions to follow.
- changed the c library method names to follow the convention
- renamed the jar file to be zookeeper-version-recipes-name.jar
- for interop I will open another jira
- also we should commit the configure generated after running autoconf. I didn't attach it to the patch since it makes it huge.
- cleaned up imports
- modularized the java code
- i have left the sleep 30 in the c tests. I will open a jira to fix it later.
- i have added logging to the c code.
- improved the javadocs and c doxygen comments to make the return values clearer
- changed the version in configure.ac
- the doxygen docs should work now
- added synchronized for the lock methods both in c and java
- fixed it to sleep only on a retry
I think i might have addressed most of the review issues, but please take a look and see if anything still remains. comments are welcome.
thanks for looking at it ben and pat... i am fixing most of the stuff you guys mentioned except for the following –
- the zookeeper c client methods -
pat's idea does make sense for namespace conventions. But zoo_recipes_lock_methodname seems too long. I recommend we use zoo_recipename_methodname().. which is a little more convenient than the older one.
there are others I will prefer opening a jira later to fix.. (like interoperability of c and java code).. i would like this patch to get committed as soon as possible. maintaining such a huge patch is a lot of work...
wow nice work. it's nice to have someone that knows automake again!
some comments:
in unlock, we shouldn't call the callback if the unlock had an error should we?
in zope.execute() how would id be null in the 2nd if block?
Collections.sort() would simplify your sorting wouldn't it?
if exists() fails when trying to grab the lock, you need to get a new child list and retry.
for both C and java, shouldn't the lock and unlock be synchronized?
in zoo_mutex_lock you always nano_sleep. wouldn't it be better to only nano_sleep if you have to retry?
i think the logic for checking for a previous create with getChildren and create should be implemented the same in C and java.
with regard to that last comment: currently we do a getChildren() to see if a previous create succeeded and then do a create(). one problem with this is that a previously issued create may still be in flight. i think it would be better and simpler to keep doing create until you succeed; do a getChildren(); delete any duplicates.
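The dedupe step described above can be sketched in plain Java. This is illustrative only: the class and method names are invented, and a real implementation would of course issue ZooKeeper delete() calls for the returned names.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical helper for the "create until you succeed, then prune
// duplicates" approach: given the result of getChildren(), keep the
// lowest-sequenced node carrying our identifier and report the rest
// as duplicates to delete.
public class DuplicateCleanup {

    // Children are assumed to be named "x-<identifier>-<sequence>"
    // with zero-padded sequence numbers, so lexicographic order
    // matches numeric order.
    public static List<String> duplicatesToDelete(List<String> children,
                                                  String identifier) {
        String prefix = "x-" + identifier + "-";
        List<String> mine = new ArrayList<>();
        for (String child : children) {
            if (child.startsWith(prefix)) {
                mine.add(child);
            }
        }
        Collections.sort(mine);
        // Everything after the first (lowest) entry is a duplicate
        // left behind by a retried create.
        return mine.isEmpty() ? mine : mine.subList(1, mine.size());
    }
}
```

The lowest-sequence survivor is the node whose position in the lock queue the client already earned; deleting the later duplicates keeps other waiters from watching dead entries.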
running doxygen-doc doesn't result in any c-docs being generated (html).
Also the version is wrong in .ac file - should be 3.2.0 not 2.2.0
which reminds me - yet another place we need to update during a release... we should
really try to figure out how to centralize this (perhaps Giri knows?) otw it's an increasing
pain/error point during releases. (not this jira responsible though)
the api methods are named with prefix zoo_mutex_*
seems to me it would be better to name them zoo_recipe_lock_* as prefix
this specifies that it's a recipe, in particular that it's implementing the "lock" recipe
We would expect (and should document in src/recipe/readme) that c api methods
should be named in this fashion, ie zoo_recipe_<recipename>_*
otw the c global namespace is going to cause problems down the line
also easier for users to identify the recipe to which a particular method call pertains
it's good to commit the configure scripts (see hadoop c code for which files)
this was lesson learned for cbinding & zkfuse, makes it easier for users to build
ensure binary flag set in tar package for executable scripts
would be nice to have a readme to orient the new user in src/c
include how to build (easier if configure script is here)
out of the box the patch fails to run-check
Running Zookeeper_locktest::testlocksh: ./tests/zkServer.sh: Permission denied
sh: ./tests/zkServer.sh: Permission denied
: assertion : assertion
be sure to set the exec flag when committing zkServer to svn.
in tests might want to use localhost rather than 127.0.0.1 (if ipv6 only host? or does ipv6 handle that?)
sleep(30) in test is unfortunate - it artificially inflates the test time. having short test time is a win
any chance you can actively poll rather than just sleeping? something reusable by other recipes would be grt
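One way to do the active polling suggested here, purely illustrative — the helper name and shape are made up, not part of the patch:

```java
import java.util.function.BooleanSupplier;

// Illustrative polling helper: instead of a fixed sleep(30) in tests,
// re-check a condition at a short interval until it holds or a
// deadline passes. Passing tests then finish as soon as the condition
// becomes true instead of always paying the full sleep.
public class WaitFor {
    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out waiting for the condition
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt(); // don't eat the interrupt
                return false;
            }
        }
        return true;
    }
}
```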
I don't see any logging in zoo_lock.c, I think we should log at least the error conditions, also things like
loop retry at debug level. also operation is pretty complex, some logging in there would help.
also some similar comments as java - for example getting client_id in operation, this may be invalid
if the session is not established?
the api says:
- \return the return code.
for many of the methods. what does this mean?
These comments are relative to the java code:
cleanup imports
writelocktest indentation problem line 87
do we want to talk about leaderelection in locks, or just implement another le in recipes (small wrapper)?
znodename.java - log failure in exception handlers
protocolsupport retrydelay eats interrupt, ie lock cannot be interrupted
shouldn't writelock.lock throw connectionloss if closed?
the javadoc is not clear - what happens in the "later" case? does this block? what happens if expired or discon? etc...
line 148 of writelock
long sessionId = zookeeper.getSessionId();
what ensures that sessionid you get back is valid? (ie non-zero)
This is pretty large anony class - why not have a specific static class definition?
Plus all in one method rather than broken out (even indentation is deep)
Would be easier to maintain if broken into several methods
writelock line 208
try {
    ...
} catch (Exception e) {
    LOG.warn("Failed to acquire lock: " + e, e);
}
what happens if lock returns false (isclosed for example)?
Some of these classes look reusable (prot/name/op) - should they be in this recipe or elsewhere?
If we want to enable additional recipes to be easily added based on the work you've done (blueprint)
we should try to make these easily reusable.
i attached header files to package.html
makefile.am
configure.ac
and the others have different headers which are not apache but can be included in the codebase.
This first set of comments is relative to the structure/build/packaging (ie non-code) issues:
I think the jar name should include "recipes"
ie for locks: zookeeper-3.2.0-recipes-lock.jar
how about a "superset" recipes/zookeeper-3.2.0-recipes.jar with all recipes included?
users can choose which to use
There seems to be an issue with recipes in release build - actually same issue with contrib:
you cannot run "ant test" on lock recipe in the released tar file build using "ant tar"
seems build.xml and build-recipes.xml are missing from tar file
docs - I think it's ok not to have specific forrest docs inside locks, but need:
recipes/README.txt
what is this code? what are requirements for authors? (like interop btw c/java for particular impl)
also req that recipe be documented in recipes documentation in forrest
describe and point to forrest docs in docs/... (should put link to live h.a.o doc as well)
is there some high level bit of information that you want to convey to authors re how components
should be designed (in particular session mgmt, error handling, logging, etc...) this is a good place to put
recipes/lock/README.txt
describe and point to forrest docs in docs/... (should put link to live h.a.o doc as well)
pointer to javadoc (ie live h.a.o site)?
are the forrest recipe docs up to date relative to impl for locks? which lock in particular from docs.
any deviation(s), any specific impl detail that's important, configurability?
might be good to update the forrest recipe docs to point to this jar/code for implementation
ie let users know that they can get this from the release in recipes/locks (they don't need to impl)
how do we verify interop?
great job adding tests! but they seem to be tests for c, tests for java, but not interop.
this seems pretty important. We want to document in recipes/readme that authors in particular
should code for c/java to interop for any one particular recipe instance. also that recipes can
co-exists in user space (ie placement of nodes, etc...)
Might be good enough to add a new JIRA for this, but need to impl this asap.
You've done a great job, but I'm holding this jira to a higher standard since ppl who implement further
recipes will use your lock implementation as a "blueprint".
-1 overall. Here are the results of testing the latest attachment
against trunk revision 762602.
116 release audit warnings (more than the trunk's current 105 warnings).
+1 core tests. The patch passed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
Release audit warnings:
Findbugs warnings:
Console output:
This message is automatically generated.
this patch takes care of all the error cases (please review in case i missed some) both in java and c.
- cleaned up the code and added some more comments to the code.
- made the java api be lock and unlock compatible with java's lock api
I still need to work on doing the right handling of session expiration and connectionloss everywhere in the code (both c and java)
thanks for your comments chris. You are right, the c version needs to do a better job on handling error cases. I still need to clean it up for that. Will update the patch. I think I probably spent most of the time fighting/learning auto*/libtool rather than coding the c api
....
char retbuf[len+20];
snprintf(buf, len, "%s/%s", path, prefix);
ret = zoo_create(zh, buf, NULL, 0, mutex->acl,
                 ZOO_EPHEMERAL|ZOO_SEQUENCE, retbuf, (len+20));
mutex->id = getName(retbuf);
There are some calls to zoo_get_children(), etc., which I think might be usefully checked for failure return codes as well. Sorry not to provide a patch yet.
make sure that you download the patch and then view it. I tried opening up the patch in the browser and it fails. It thinks that the file is xml (not sure why..) and tries opening it as an xml file...
reattaching the patch. had some problems with the earlier version.
this patch has the c library in it as well.
Now I think of it, I probably should have done it in a separate jira with subtasks for the java and c libraries.
- added the c library with auto* files to create the library
- added cppunit testing for the c library
- similar to the java interface, the c interface also allows a callback method to be called when the lock is acquired and released.
- i will be cleaning up the patch (with some more docs and removing unnecessary printf's and unused code).
I agree with Mahadev – we want to encourage ppl to provide (interoperable) implementations for both c/java.
However the JIRA should be updated to have both java and c client component listed.
Should we add a new "recipe" component?
don't know.. the jira just talks about the recipe, so is adequate for both java and c
... I would like to have both in one jira so that everyone who wants to contribute to such zookeeper recipes is encouraged to have both the java and c implementation, but I am open to creating a new jira for c.
> i am implementing the recipes in c.
Should the C version go in a separate Jira issue and patch?
a new patch. I have changed the directory structure in this
the code now is in
root/src/recipes/lock/src/java/org/apache/zookeeper/recipes/lock/
also the tests are in
root/src/recipes/lock/test/org/apache/zookeeper/recipes/lock/
the new directory structure allows us to have both the java and c implementations in the same parent directory structure.
src/recipes/lock/
also -
- added a new interface LockListener
- changed the runnable into a LockListener interface whose methods lockAcquired and lockReleased are called when a lock is acquired and released
- refactored some code
- deleted public methods that were not required.
- added build files for the recipes directory
- changed the tests to work with new api's
i am implementing the recipes in c. Will have an updated patch up soon. comments are welcome.
sorry for my late response tom.. i haven't had a real close look at the interfaces and methods in this patch myself, so thanks for reviewing.. I was mainly looking at the handling of zookeeper events.
1) I think you are right that we should probably have callback methods lockAcquired and lockReleased. The current implementation is too restrictive.
2) I am with you on this one as well... I hadn't implemented the lock interface just because I had the same reservations as you.. I think for now we should just leave it as it is without implementing the lock interface and see what our users have to say..
3) agreed..
Mahadev,
I'm glad this is being worked on again. Some comments:
- Not sure if a Runnable is right for whenOwner, as Runnable implementations are often one-shot and can't be re-run, whereas it looks like WriteLock can be locked and unlocked repeatedly (is this right?). There's also no way to tell when the lock has been released. Might be better to have a standard listener interface called LockListener, with lockAcquired and lockReleased methods. Writing an implementation to run a Runnable would be trivial.
- I worry a little about WriteLock implementing java.util.concurrent.locks.Lock, since Lock seems oriented to single-process programs. By making WriteLock a Lock we make it easy for programmers to drop it in to programs that use Lock, without having to think about distributed error recovery. For example, Lock#lock() cannot throw checked exceptions, so we would have to wrap KeeperException in an unchecked exception, which could too readily be ignored. Perhaps the answer is to make the method naming and semantics as close as possible, without actually implementing Lock proper? (I think this is similar to the discussion in
ZOOKEEPER-104.)
- Lots of classes and methods are public when they don't need to be. Really only WriteLock should be public, with public methods lock(), unlock(), getDir() (better named as getLockPath() or similar?), getWhenOwner(), setWhenOwner() (or whatever replaces the last two).
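The LockListener suggestion above might look something like the following sketch; the interface shape is a guess at the proposal, not the code that was eventually committed.

```java
// Sketch of the proposed listener interface plus the trivial Runnable
// adapter mentioned above. All names here are illustrative.
public class Listeners {

    public interface LockListener {
        void lockAcquired();
        void lockReleased();
    }

    // Adapter: run the given Runnable each time the lock is acquired;
    // ignore release notifications.
    public static LockListener runOnAcquire(Runnable task) {
        return new LockListener() {
            public void lockAcquired() { task.run(); }
            public void lockReleased() { /* nothing to do */ }
        };
    }
}
```

Unlike a one-shot Runnable, a listener like this can be notified on every acquire/release cycle, which matches a WriteLock that can be locked and unlocked repeatedly.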
an updated patch.
- added acls to writelock
- removed ZookeeperTestSupport, used ClientBase instead of this new class
- made the package name org.apache.zookeeper.protocols.lock so that each protocol has its own directory
- moved the docs.html to package.html
it still does not implement the lock interface of java. I'll add it in the next patch.
it would be great to have this jira in 3.2.
this patch removes the zookeeper facade, and makes it work with the current trunk. I have to still go through all the corner cases and see if they have been handled. Also, I need to implement the lock interface in writelock.
I've documented on the wiki at why ZooKeeperFacade needs to be removed. Just use the ZooKeeper object. For this patch in particular it is very important not to use a facade.
Can we focus on the issue at hand? This patch is almost done; there is no reason for the metaphysical discussions or even an external SVN. It's simple enough. The only thing that must be done is to remove the use of the ZooKeeperFacade and use the ZooKeeper object correctly. If you want to discuss why ZooKeeperFacade is a bad idea, let's open an issue to discuss it there.
I really like the idea of using the Lock interface, but changing acquire to lock() should be good enough.
Let's get this thing committed so that we can move on.
Per the comments here:
and here:
I'm moving the status from patch available to open.
I agree, we do need to do a better job on the reviews. In our defense:
1) we're trying to complete the migration from SF to Apache. good thing is it looks like we're pretty close to the end on that
2) there's a ZooKeeper workshop @yahoo on Monday, just so happens that all commiters are presenters.
3) the continual push to change the build/wiki/patchsubmission/etc... processes are pulling me off things like patch review in order to track down answers (cuz I'm new to Apache as well - thanks Doug/Owen!)
> FWIW I've attached about 5 patch files so far, with none of them being committed anywhere [ ...]
Are they getting reviewed promptly? A primary job of committers is to keep the Patch Available queue short, by trying to review patches within a few days of their contribution, either committing them or rejecting them with a clear explanation. If the committers are unable to keep up with the contributors, then the project probably needs more committers. In this case, contributors should also do everything they can to make committers lives easier, so as not to further delay their patches.
Contributors should be nominated to become committers when they've provided a series (e.g., 3+) non-trivial, high-quality patches, and demonstrated an ability to work peacefully with existing committers, following procedures, etc. If someone provides patches that apply cleanly, fix real user problems, include unit tests, and the contributor responds to criticism in a positive manner, then they should be nominated as a committer in short order. Then they can start reviewing and grooming new committers themselves!
> Give me a few weeks or so to completely finish the code and documentation and I'll submit a single big patch for all the work to this JIRA.
If you want to get more feedback as you go, please attach interim patches too, rather than just the final patch. That way folks who monitor the -dev mailing list will see the updates and can try them in the usual manner. We generally don't put things into the "Patch Available" state until the contributor believes that the patch is in a final form.
> Everyone has access to ASF svn?
I think all that Nigel meant was that not everyone can submit patches by naming an Apache svn revision since not everyone has commit access to Apache's svn. We try to use a uniform mechanism, the same for committers as non-committers. Committers generally have to do more work, not less, than other contributors, since every patch committed must be reviewed and accepted by a committer. Committing is a duty, not a privilege.
Let's stick with a process for now that all contributors can use, not just ASF committers.
Huh? Everyone has access to ASF svn? Only committers can commit using either approach. I don't grok your point.
Fewer, simpler mechanisms generally include more people.
+1. Let's stick with a process for now that all contributors can use, not just ASF committers.
Wow I confess to be being kinda surprised by that response
I didn't realise you guys were so attached to the exact svn command line used to apply a patch - I thought you'd welcome all contributions and that the svn commit history would be more useful to you, given the large amount of changes and complexity of the code and large number of comments already on this JIRA - rather than focus purely on the minor couple of keystrokes required to apply the patch
FWIW I've attached about 5 patch files so far, with none of them being committed anywhere - then made about 7 patches since then in svn with history and I'm sure there'll be another 5 or so changes to go before this patch is done.
Never mind - I'll happily comply with your strict patch acceptance policy. Give me a few weeks or so to completely finish the code and documentation and I'll submit a single big patch for all the work to this JIRA. If you want you can get all the history too with a trivial alternative svn command - but if that offends you, please forget I'm using a sandbox svn area to work on this (pretend I'm just saving it on my hard drive and please disregard the links I've added to some JIRAs to refer to parts of this patch in a simple way) and just use the single patch file I'll attach in a few weeks or so.
Hi Patrick,
The "Notice on the "attach patch" JIRA page that it has the " Grant license to ASF for inclusion in ASF works .... " option. This has to be checked for us to consider a patch for inclusion. " is not accurate in this case.
James and I are are both Apache committers and Members, and as such, when we commit code to the ASF repository a license is granted to the ASF. The jira feature is really only there to be able to accept code from folks who have not filed an ICLA with the ASF.
Another way to view this development model is as if we were ZooKeeper committers who do not commit to trunk but who develop new features and bug fixes in development branches. This model of development is used extensively in projects that are averse to destabilizing the trunk. They develop and test new features in a branch and then merge back once folks are happy with it.
This model is also outlined at Rules for Revolutionaries
The project's patch submission mechanism is not inflexible, but neither should it be changed unilaterally on an issue-by-issue basis. The currently acceptable mechanism is to attach patch files to Jira, generated by 'svn diff'. When files are renamed, a shell script should be provided that performs the renames, where the script is run before the patch is applied. If some find this awkward, then a separate discussion should be launched on the mailing list. Some projects are now, e.g., using 'git' to handle things like this, but this needs to be done carefully so that the entire community is included in the process. Fewer, simpler mechanisms generally include more people..
Benjamin I added
ZOOKEEPER-89 and ZOOKEEPER-90 to track the dealing with loss of ownership/leader with connection reconnects and with session expiration. I've not been able to test out the latter yet; but I've tested the former and I think both are implemented now via the patch for
ZOOKEEPER-90 and ZOOKEEPER-89
Just added the WhenOwnerListener interface : I just need to figure out how to add notifications of loss of owner/leader status when the connection fails or the session expires etc.
Thanks for the great comments Benjamin! Have already added the constructor for you
BTW I was pondering about switching the whenOwner from a Runnable to some kinda interface that invokes you when you become the leader/owner - or when you stop being the leader/owner. Something like
public interface WhenOwnerListener {
    void whenOwner();
    void whenNotOwner();
}
Where only znodes that are the owner would be notified with the whenOwner() method; but then if a connection fails or session times out, they'd be notified with a call to whenNotOwner();
Spookily - I'd set myself the target today to properly implement the watches so that WriteLock gets a notification of it no longer being the leader/owner when a connection fails (which normally auto-reconnects anyway right now in the base ZooKeeper). Then I was gonna add a notification mechanism so we could notify the leader/owner is no longer the leader/owner when the session expired exception occurs.
So we're absolutely on the same page; once I'd grokked the proper watch code for dealing with normal connection failures & reconnects I was hoping to add something vaguely similar to the ZooKeeperFacade so that higher level protocols can be aware of both when ZooKeeper reconnects and when ZooKeeperFacade creates a whole new connection.
Does that make sense? I totally understand your concerns at making sure the WriteLock and associated helper classes like ProtocolSupport/ZooKeeperFacade do the right thing - I want exactly the same thing
I'd just not yet had the chance to go through all the different failure conditions and scenarios and make sure they all work properly
Moving patches for this issue to subversion for easier tracking
I've already submitted about five patches for this issue so far and I'm sure there's gonna be loads more coming. Developing higher level protocols is a much bigger job than I previously thought
particularly with having tests for all the various failure scenarios and adding support for the various other higher level protocols.
Its kinda time consuming creating loads of patches & attaching them to the same issue and deleting the old ones so its easy for committers to review - but more importantly, all the history of the many patches gets totally lost using the attach-patch-to-jira model - which also makes it harder for committers to watch progress on this issue.
I've never done this before on any other Apache project - and this approach is temporary and only reserved for the single
ZOOKEEPER-78 issue; but I've checked in this patch into an svn sandbox area at Apache that I have commit karma on and will continue to work on it there; so that all the history is preserved. I can then do many more frequent & smaller commits; any ZK committer can review and easily apply my patches whenever they feel like - and its gonna be much easier for anyone in the ZK community to track progress on this issue and see how the implementation has changed over time as some things work or I find better ways of solving the issue.
This approach is totally temporary; its not an attempt to move the code outside of the ZK community or anything like that. At any point feel free to commit (actually just copy in svn which will keep all the history & commit comments etc) to the ZK trunk. You could even mirror the code to the ZK tree in sandbox/contrib area if you like - just like Hiram did to mirror the ZK code to the maven-patch example in the activemq sandbox.
I'm hoping in a few weeks my hacking on this issue will near completion and we can permanently move the code back into the ZK tree; but in the meantime its trivial to reuse it where it is or mirror it into the ZK tree as folks in the ZK community see fit. Also if I ever earn committer karma on ZK I can just move it into some ZK contrib area myself
Building the code
In terms of sandbox - I ended up reusing Hiram's sandbox area that shows the maven build working on ZK; as I prefer to use maven and it was then super easy for me to create a new maven module, zookeeper-protocols that just includes the source and test cases for the high level protocols.
If you're new to maven and want to build it, I've checked in instructions here...
Whenever we move this code back into the ZK trunk am sure we can hack an Ant build for it.
Fantastic work! I second Hiram's suggestion to implement the Lock interface. (You could probably throw unsupported exception for the condition method.) It would make your naming more symmetric anyway
I love the whenOwner field! Excellent idea. I think it would be good to allow it to be passed in the constructor.
I have a problem with your use of the facade. Here is the problematic scenario: your application waits for a lock and once acquired becomes the master. This master thread services clients but more importantly updates stuff in ZooKeeper. If the session expires and reconnects automatically, it is very easy for your master thread to make changes without actually being a master using the new session. If you don't use the facade, things fail properly: the connection expires, the master is no longer the master in ZooKeeper, and (most importantly) the master thread cannot do any ZooKeeper operations with the ZooKeeper object.
I realize the facade may seem convenient, but I think we need to encourage safe behavior especially with the first high level primitives and since yours will be the very first I'd like to make sure we get off to a safe start. (It's a great first example by the way!)
Patch is now attached
This patch no longer requires
ZOOKEEPER-84, we now use a ZooKeeperFacade which wraps up the creation of the ZooKeeper instance and allows it to be replaced if a SessionExpiredException occurs.
The test case works in the current patch. To get the test case to hang closing the 3rd client, just edit WriteLockTest and set the workAroundClosingLastZNodeFails field to a value of false. You will then get this stack dump when the test hangs (on OS X at least
...
[junit] "main" prio=5 tid=0x01001710 nid=0xb0801000 in Object.wait() [0xb07ff000..0xb0800148]
[junit]     at java.lang.Object.wait(Native Method)
[junit]     - waiting on <0x096105e0> (a org.apache.zookeeper.ClientCnxn$Packet)
[junit]     at java.lang.Object.wait(Object.java:474)
[junit]     at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:822)
[junit]     - locked <0x096105e0> (a org.apache.zookeeper.ClientCnxn$Packet)
[junit]     at org.apache.zookeeper.ZooKeeper.close(ZooKeeper.java:329)
[junit]     - locked <0x0bd54108> (a org.apache.zookeeper.ZooKeeper)
[junit]     at org.apache.zookeeper.protocols.ZooKeeperFacade.close(ZooKeeperFacade.java:99)
[junit]     at org.apache.zookeeper.protocols.WriteLockTest.tearDown(WriteLockTest.java:146)
[junit]     at junit.framework.TestCase.runBare(TestCase.java:140)
[junit]     at junit.framework.TestResult$1.protect(TestResult.java:110)
[junit]     at junit.framework.TestResult.runProtected(TestResult.java:128)
[junit]     at junit.framework.TestResult.run(TestResult.java:113)
[junit]     at junit.framework.TestCase.run(TestCase.java:124)
[junit]     at junit.framework.TestSuite.runTest(TestSuite.java:232)
[junit]     at junit.framework.TestSuite.run(TestSuite.java:227)
[junit]     at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:81)
[junit]     at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:36)
This could maybe just be an OS X based timing issue
this patch depends on the reconnect() method on ZooKeeper to deal with session expired exceptions
Great catch Benjamin! I've a working patch using your algorithm; am using x-sessionId-sequenceNumber and its working a treat (though its a tad hard to force ZK to fail mid-create).
Am working on some unit tests to try out the server stopping/starting which I'll attach shortly once they're working a bit better...
There is a bug in this block of code:
while (!closed.get() && id == null) {
    retryDelay(attempt++);
    try {
        id = zookeeper.create(dir + "/x-", data, acl, EPHEMERAL | SEQUENCE);
        idName = new ZNodeName(id);
        if (LOG.isDebugEnabled()) {
            LOG.debug("Created id: " + id);
        }
    } catch (KeeperException e) {
        LOG.warn("Caught: " + e, e);
    } catch (InterruptedException e) {
        LOG.warn("Caught: " + e, e);
    }
}
zookeeper.create is not idempotent, so blindly retrying will land you into problems:
1) You start the create, a connection error happens, the create completes but you don't get a response
2) You retry your create
3) You may end up waiting on the znode from step 1) which will not go away
I've been thinking of easy ways of getting around this problem and the easiest seems to be constructing the name as x-identifier-sequenceNumber. You could use the hostname or the sessionid as the identifier. When you do the retry you have to do a getChildren() to see if there are any znodes with your identifier and then use that znode if it exists.
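The recovery check described here can be modeled with plain string logic. This is a sketch only (names invented); the real code would call getChildren() and create() against ZooKeeper.

```java
import java.util.List;
import java.util.Optional;

// Model of the retry-safe naming trick: nodes are named
// "x-<sessionId>-<sequence>", so after a connection loss a client can
// scan the children for a node it already owns before retrying create().
public class CreateRecovery {
    public static Optional<String> findExisting(List<String> children,
                                                String sessionId) {
        String prefix = "x-" + sessionId + "-";
        for (String child : children) {
            if (child.startsWith(prefix)) {
                // The earlier in-flight create actually succeeded.
                return Optional.of(child);
            }
        }
        return Optional.empty(); // safe to issue create() again
    }
}
```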
BTW I just deleted the other 2 patches to avoid confusion; the latest patch includes the previous changes etc
Here is an improved version.
- we use a more efficient comparison by using a ZNodeName object which caches the prefix & sequence number for ordering node names. We can also use this to order node names using different prefixes - maybe useful for read/write locks
- fixed a bug and enhanced the test case so that we now test that a leader is established; then when that leader fails another leader/owner is created
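The ZNodeName idea above can be sketched as a comparator that parses the numeric sequence suffix once and orders names by it, even across different prefixes (sketch only; not the committed class):

```java
import java.util.Comparator;

// Sketch: order sequential znode names such as "x-1-0000000002" or
// "write-2-0000000010" by their numeric sequence suffix, regardless
// of prefix.
public class SeqName {

    // For "x-12345-0000000042" the sequence number is 42.
    static int sequenceOf(String name) {
        int dash = name.lastIndexOf('-');
        return Integer.parseInt(name.substring(dash + 1));
    }

    public static final Comparator<String> BY_SEQUENCE =
            Comparator.comparingInt(SeqName::sequenceOf);
}
```

Ordering by the numeric suffix rather than the whole string is what makes mixed-prefix queues (e.g. read and write lock nodes in one directory) sort by arrival order.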
here's an updated patch which has better documentation and includes the recipe documentation linked to from the javadoc - but which could be used stand alone as well if required.
I've also included the description from
ZOOKEEPER-79 as well
Thanks Flavio!
Totally agreed on 1. Strictly speaking we should catch all exceptions and handle them properly (which may mean throwing some, or responding properly to others or whatever).
One of the main reasons for the retry logic was to avoid errors like trying to create a znode that already exists or losing the connection to the ZK server etc - but we should go through all possible exceptions and handle them much more cleanly.
In particular we really need test cases that show the server closing and restarting during the process of acquiring the lock or after a lock owner has the lock etc.
I figured I'd send a patch first and see if anyone else had a better implementation lying around - or knew a neater way to solve this - before I spent too much time getting it totally correct etc.
For 2) I just added that so that when running the unit tests you could see INFO or DEBUG level logging etc (particularly when running in your IDE)
This is a nice implementation, James. Good job! My two comments are:
1- It might be a good idea to throw the exceptions instead of trying to catch them and retry. You will end up with a cleaner code;
2- I'm not sure if it is necessary to add the log4j configuration. Is there a particular reason for including it or it is there by accident?
We should document the leader election on the recipes page first.
assiging this to james since he is working on this.
Integrated in ZooKeeper-trunk #289 (See)
. added a high level protocol/feature - for easy Leader Election or exclusive Write Lock creation
https://issues.apache.org/jira/browse/ZOOKEEPER-78?focusedCommentId=12697683&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
01 April 2008 23:32 [Source: ICIS news]
HOUSTON (ICIS news)--The planned shutdown of one of the two Copesul ethylene (C2) crackers at the Triunfo complex in Brazil should not disrupt polyethylene supplies in the country, a source with Copesul owner Braskem said on Tuesday.
Copesul, a Braskem company that produces basic petrochemicals, shut down its Plant 1 on Monday for planned maintenance and upgrades, a Braskem source said.
The 30-day shutdown was expected to result in a production loss of 60,000 tonnes of ethylene.
Braskem did not expect polyethylene (PE) supply disruptions in Brazil.
Copesul produces nearly 34.3% of the ethylene produced in Brazil.
During the shutdown, the company’s Plant 2 will operate normally. The Plant 1 facility was inaugurated in 1982 and produces 60% of the ethylene produced by Copesul.
The shutdown was estimated to cost Brazilian reais (R) 334m ($191m).
$1=R1.74
http://www.icis.com/Articles/2008/04/01/9112772/braskem-stops-copesul-c2-unit-for-maintenance.html
Bluetooth programming can solve several annoying problems outside of its principal domain areas. For example, it works well if you're working on a standalone device and you want to configure the device via a custom Bluetooth profile, or you have a device that speaks a standard protocol that your Linux machine doesn't yet support. In this article, I demonstrate how to code your way out of both problems using Bluetooth.
Linux has a mature and capable Bluetooth stack called BlueZ. Many Linux distributions including Ubuntu 10.10 ship with a Bluetooth-enabled kernel as well as a set of profiles that enable your machine to do things like transfer files to and from a Bluetooth-enabled phone.
There are three main steps to making Bluetooth devices communicate:
- Scan: Scan for devices that advertise certain Bluetooth services
- Pair devices: Exchange a key to authenticate the devices and initiate secure communication
- Connect to a service: After the devices are paired, either end can initiate a connection to a particular service.
We'll use built-in tools to accomplish these tasks, and we'll use the BlueZ API to write a program that will communicate with the device. In particular, we'll implement the Headset profile (HSP), which allows your smart phone to use your computer as a headset. Sample code is provided in C.
Before diving into how to accomplish these tasks, let's cover system setup.
System Setup
The given examples were written on Ubuntu 10.10. You'll need the following packages:
- gnome-bluetooth, a GNOME applet for administering Bluetooth devices
- bluez-hcidump, communication debugging tool
- bluez, Linux Bluetooth stack and associated tools
- libbluetooth3, the BlueZ library
- libbluetooth-dev, the development files for linking to the BlueZ library
- libasound2, ALSA API for using a sound card
- libasound2-dev, the development files for the ALSA library
Finally, you'll need a Bluetooth phone that supports the Bluetooth HSP (headset) profile and a Bluetooth adapter. I'm using a TRENDnet TBW-106UB USB Bluetooth adapter. After plugging the dongle into your machine, the Bluetooth-applet should start as indicated by the Bluetooth symbol on the top panel. (Second in from the left; see Figure 1.)
Figure 1.
Using the applet, you can scan for devices, initiate and complete the pairing process, and connect Bluetooth services between devices. As the topic is programming, I'm not going to go through the process of scanning and pairing devices, but note that you will have to pair your phone with your machine in order to follow along.
BlueZ API
BlueZ exposes a socket API that's similar to network socket programming. If you're familiar with network programming in Linux, learning the BlueZ API will feel familiar. To implement the headset profile (more on what that means in a bit), we'll use two different sockets: RFCOMM and SCO.
RFCOMM sockets are like TCP sockets: They're useful in situations where reliability trumps latency. SCO sockets provide a two-way stream of audio data at a rate of 64 kb/s. SCO packets are not retransmitted.
Bluetooth Profiles
What does it mean to implement a Bluetooth profile? The standard Bluetooth profiles and core specification are published on the bluetooth.org website. The top of the page lists different versions of the core specification. Below the core specification documents are the profiles.
We need two files from the website: Headset profile v1.1 (HSP), and the Core Specification Version 2.1 + EDR.
The core specification file tells you everything you need to know to create a Bluetooth stack. Weighing in at just over 1400 pages, it would be a hefty read. Fortunately, we'll only be using this document as a debugging aid. I chose version 2.1 + EDR because that's the same version that the dongle supports.
The HSP document is a much more manageable 27 pages. It shows how to implement the Bluetooth headset profile (HSP). With these documents in hand, let's configure the Bluetooth adapter, pair a smart phone, and finally get to coding.
Configure the Bluetooth Adapter
When scanning, Bluetooth looks for nearby devices that provide specific services. Services offered by a device are identified by the device class. The first thing we need to do is set the class of the adapter. We'll use the hciconfig tool to do this. Later, I'll show how to set the class in the code.
Setting the class requires creating a bitmask from a list of assigned constants. The Bluetooth.org site contains a series of assigned numbers documents. The definitions required to set the appropriate class are in the baseband document. I'm not going to go into detail about how the document is formatted, but I'll point you to the relevant pieces for the class that we need to build. Using the second document, I'll construct the class value.
From the major service class, we'll select bits 21 and 19 (audio and capturing), and from the major device class, we'll use audio/video (00100). Because we chose the audio/video device class, scroll down to Table 7 in the baseband document where the minor device classes are listed for audio/video major class. From this table, we want hands-free device (000010). Putting the selected values together, the value of the class in hex is 0x280404.
To see the current configuration of your Bluetooth hardware, run hciconfig -a. This command displays various data items about your hardware, most notably, the Bluetooth address and the class. Here's sample output from my machine:
hci0:   Type: BR/EDR  Bus: USB
        BD Address: 00:11:22:33:44:55  ACL MTU: 310:10  SCO MTU: 64:8
        UP RUNNING PSCAN ISCAN
        RX bytes:7383 acl:0 sco:0 events:175 errors:0
        TX bytes:3583 acl:0 sco:0 commands:175 errors:0
        Features: 0xff 0xff 0x8f 0xfe 0x9b 0xff 0x59 0x83
        Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
        Link policy: RSWITCH HOLD SNIFF PARK
        Link mode: SLAVE ACCEPT
        Name: 'CSR - bc4'
        Class: 0x280404
        Service Classes: Capturing, Audio
        Device Class: Audio/Video, Device conforms to the Headset profile
        HCI Version: 2.1 (0x4)  Revision: 0x149c
        LMP Version: 2.1 (0x4)  Subversion: 0x149c
        Manufacturer: Cambridge Silicon Radio (10)
Under the class line, the output contains a text description of the service classes and device class. My machine shows a class of 0x280404, service classes capturing and audio, and device classes audio/video, and device conforms to the headset profile.
Run the following command to set the class:
sudo hciconfig hci0 class 0x280404
After setting the class, run hciconfig -a again and verify that the class, service class, and device class lines match my output. If they do, your phone should recognize the machine as supporting the headset profile. You can scan for your computer from your phone and pair them. Of course, we haven't implemented the headset service, so your phone has nothing to connect to yet.
Bluetooth Addresses and Conversion Functions
A Bluetooth address is a 6-byte number, similar to an Ethernet MAC address. BlueZ provides convenience functions for converting between a string in the format 00:11:22:33:44:55 and the bdaddr_t struct. Here are the function prototypes:
int ba2str(const bdaddr_t *ba, char *str);
int str2ba(const char *str, bdaddr_t *ba);
The function ba2str converts from the internal bdaddr_t to a zero-terminated string (the str parameter should have at least 18 bytes), and str2ba provides the opposite conversion. The first example makes use of the ba2str function.
Implementing the Headset Profile (HSP)
The HSP profile document describes how to implement the HSP profile. The profile overview (on page 204) contains a diagram that shows two distinct components of the HSP: the audio gateway (AG) and the headset (HS); see Figure 2:
Figure 2: Diagram copied from page 204 of the HSP profile document.
We're going to implement the headset side (implementing the audio gateway side is almost identical; the audio gateway can be implemented by making a few small changes to the code).
Here are the steps:
- Set the class (as we did earlier, except through a function call)
- Register with the SDP server
- Listen for RFCOMM connections
- Listen for SCO connection
- Process data from the connections
The following sections provide sample code to accomplish these tasks.
Setting the Class
In a previous example, we used the hciconfig tool to set the device class. In this example, we'll set the class in code with a call to hci_write_class_of_dev, as shown in Listing One. Modifying the device attributes is a privileged operation, so this example must be run as root.
Listing One
#include "btinclude.h"

int main()
{
    int id;
    int fh;
    bdaddr_t btaddr;
    char pszaddr[18];
    unsigned int cls = 0x280404;
    int timeout = 1000;

    printf("this example should be run as root\n");

    // get the device ID
    if ((id = hci_get_route(NULL)) < 0)
        return -1;

    // convert the device ID into a 6 byte bluetooth address
    if (hci_devba(id, &btaddr) < 0)
        return -1;

    // convert the address into a zero terminated string
    if (ba2str(&btaddr, pszaddr) < 0)
        return -1;

    // open a file handle to the HCI
    if ((fh = hci_open_dev(id)) < 0)
        return -1;

    // set the class
    if (hci_write_class_of_dev(fh, cls, timeout) != 0) {
        perror("hci_write_class ");
        return -1;
    }

    // close the file handle
    hci_close_dev(fh);

    printf("set device %s to class: 0x%06x\n", pszaddr, cls);
    return 0;
}
http://www.drdobbs.com/embedded-systems/using-bluetooth/232500828
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
import numpy as np
def id(x):
    # This function returns the memory
    # block address of an array.
    return x.__array_interface__['data'][0]
a = np.zeros(10); aid = id(a); aid
b = a.copy(); id(b) == aid
a *= 2; id(a) == aid
c = a * 2; id(c) == aid
In-place operation.
%%timeit a = np.zeros(10000000)
a *= 2
With memory copy.
%%timeit a = np.zeros(10000000)
b = a * 2
a = np.zeros((10, 10)); aid = id(a); aid
Reshaping an array while preserving its order does not trigger a copy.
b = a.reshape((1, -1)); id(b) == aid
Transposing an array changes its order so that a reshape triggers a copy.
c = a.T.reshape((1, -1)); id(c) == aid
To return a flattened version (1D) of a multidimensional array, one can use flatten or ravel. The former always returns a copy, whereas the latter only makes a copy if necessary.
d = a.flatten(); id(d) == aid
e = a.ravel(); id(e) == aid
%timeit a.flatten()
%timeit a.ravel()
When performing operations on arrays with different shapes, you don't necessarily have to make copies to make their shapes match. Broadcasting rules allow you to make computations on arrays with different but compatible shapes. Two dimensions are compatible if they are equal or if one of them is 1. If the arrays have different numbers of dimensions, dimensions of size 1 are prepended to the smaller array until the shapes align.
n = 1000
a = np.arange(n)
ac = a[:, np.newaxis]
ar = a[np.newaxis, :]
%timeit np.tile(ac, (1, n)) * np.tile(ar, (n, 1))
%timeit ar * ac
Can you explain the performance discrepancy between the following two similar operations?
a = np.random.rand(5000, 5000)
%timeit a[0, :].sum()
%timeit a[:, 0].sum()
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter04_optimization/05_array_copies.ipynb
What is a value object?
A small simple object, like money or a date range, whose equality isn't based on identity.
Martin Fowler
Objects in Ruby are usually considered to be entity objects. Two objects may have matching attribute values but we do not consider them equal because they are distinct objects.
In this example, a and c are not equal:
class Panserbjorn
  def initialize(name)
    @name = name
  end
end

a = Panserbjorn.new('Iorek')
b = Panserbjorn.new('Iofur')
c = Panserbjorn.new('Iorek')

a == c #=> false

# Three distinct objects:
a.object_id #=> 70165973839880
b.object_id #=> 70165971554200
c.object_id #=> 70165971965460
Value objects on the other hand, are compared by value. Two different value objects are considered equal when their attribute values match.
Symbol, String, Integer and Range are examples of value objects in Ruby.
Here, a and c are considered equal despite being distinct objects:
a = 'Iorek'
b = 'Iofur'
c = 'Iorek'

a == b #=> false
a == c #=> true

# Three distinct objects:
a.object_id #=> 70300461022500
b.object_id #=> 70300453210700
c.object_id #=> 70300461053840
How can I create a value object?
Say I want a class to represent the days of the week and I also want instances of that class to be considered equal if they represent the same day. A Sunday object should equal another Sunday object. A Monday object should equal another Monday object, etc…
I might begin with the following class:
class DayOfWeek
  DAYS = {
    1 => 'Sunday',
    2 => 'Monday',
    3 => 'Tuesday',
    4 => 'Wednesday',
    5 => 'Thursday',
    6 => 'Friday',
    7 => 'Saturday'
  }.freeze

  def initialize(day)
    raise ArgumentError, 'Day outside range' unless (1..7).cover?(day)
    @day = day
  end

  def to_i
    day
  end

  def to_s
    DAYS[day]
  end

  private

  attr_accessor :day
end
Now, I am going to instantiate three objects to represent the days of the week on which I eat pizza, pay the milk man, and put out the recycling for collection:
pizza_day = DayOfWeek.new(5)
milk_money_day = DayOfWeek.new(2)
recycling_collection_day = DayOfWeek.new(5)
I know that I eat pizza for dinner the same day that I put out the recycling. I consider these objects to represent the same thing: Thursday. They should be equivalent. But they're not:
pizza_day == recycling_collection_day #=> false
That's because they're not yet value objects. #== compares the identities of the objects.
I should override #==. I will use pry to find out where the method comes from so we can see how it derives its current behaviour.
pizza_day.method(:==).owner #=> BasicObject
DayOfWeek inherits #== from BasicObject.
The page for BasicObject#== states:
== returns true only if obj and other are the same object. Typically, this method is overridden in descendant classes to provide class-specific meaning.
Aha! The class specific meaning in this case is I want to compare its instances by value.
I know that these objects expose an integer. It makes sense to compare against that but I don't want to compare a day with an actual integer. Thursday should not be equivalent to the number 5.
I also know that a DayOfWeek exposes a string as well. It follows that any equivalent days would return matching string and integer values:
class DayOfWeek
  # ...

  def ==(other)
    to_i == other.to_i && to_s == other.to_s
  end
  alias eql? ==

  # ...
end
I have aliased #eql? to #==. The BasicObject documentation explains:
For objects of class Object, eql? is synonymous with ==. Subclasses normally continue this tradition by aliasing eql? to their overridden ==
Bingo! We have value objects.
pizza_day and recycling_collection_day are considered equivalent:
pizza_day == recycling_collection_day #=> true
I could override other comparison methods, <=, <, ==, >=, > and between?, as it makes sense to say that Monday is less than Tuesday or Friday is greater than Thursday, but I have decided that's not needed for now.
There is, however, one more important step that I need to implement. These objects are equivalent, so when used as a hash key I would expect them to point to the same bucket.
The Hash documentation suggests:
Two objects refer to the same hash key when their hash value is identical and the two objects are eql? to each other.
A user-defined class may be used as a hash key if the hash and eql? methods are overridden to provide meaningful behavior. By default, separate instances refer to separate hash keys.
Following that advice, I need to change the default behaviour of #hash. I already know that integers in Ruby are value objects. I can see that equivalent integers always return the same #hash.
a = 1
b = 1

a.object_id #=> 3
b.object_id #=> 3
1.object_id #=> 3

1.hash == 2.hash #=> false
[a, b, 1].map(&:hash).uniq.count #=> 1
101.hash == (100 + 1).hash #=> true
The same goes for strings:
a = 'Svalbard'
b = 'Svalbard'

# Note the different object ids:
a.object_id #=> 70253833847520
b.object_id #=> 70253847146940
'Svalbard'.object_id #=> 70253847210020

# The hash values of equivalent strings match:
'Svalbard'.hash == 'Bolvanger'.hash #=> false
[a, b, 'Svalbard'].map(&:hash).uniq.count #=> 1
'Svalbard'.hash == ('Sval' + 'bard').hash #=> true
I will generate the hash using its string and integer properties.
def ==(other)
  to_i == other.to_i
end
alias eql? ==

def hash
  to_i.hash ^ to_s.hash
end
Per the example in the documentation, I've used the XOR operator (^) to derive the new hash value.
Now that I have overridden #hash, I can see that equivalent DayOfWeek instances point to the same bucket:
day_1 = DayOfWeek.new(1)
day_2 = DayOfWeek.new(1)

day_1 == day_2 #=> true

notes = {} #=> {}
notes[day_1] = 'Rest'
notes[day_2] = 'Party'

notes.length #=> 1
notes #=> {#<DayOfWeek:0x00007fa193e44170 @day=1>=>"Party"}
Structs
If I want multiple value objects, I might have to override #hash and #== for each class.
I could decide to use structs instead.
A Struct is a convenient way to bundle a number of attributes together, using accessor methods, without having to write an explicit class.
Structs are value objects by default. Of course we now have an idea of how this works. The documentation explains:
Equality—Returns true if other has the same struct subclass and has equal member values (according to Object#==).
Just as we thought!
DayOfWeek = Struct.new(:day) do
  DAYS = {
    1 => 'Sunday',
    2 => 'Monday',
    3 => 'Tuesday',
    4 => 'Wednesday',
    5 => 'Thursday',
    6 => 'Friday',
    7 => 'Saturday'
  }.freeze

  def to_s
    DAYS[day]
  end

  def to_i
    day
  end
end

a = DayOfWeek.new(1)
b = DayOfWeek.new(2)
c = DayOfWeek.new(1)

a == c #=> true
a == b #=> false
Summary
We now know the difference between an entity object and a value object. We have learned that we need to override both #hash and #== if our value objects are to be used as hash keys. And, we have learned that structs provide value object behaviour straight out of the box.
Discussion
You can also include the Comparable module and just implement a single spaceship operator method, and all comparison methods would work.
See doc here: docs.ruby-lang.org/en/2.5.0/Compar...
Thank you. That had slipped from my radar. Yes, Comparable gives me <, <= etc... just by defining <=>:
Note that if you have pry-doc installed, you can also use show-doc pizza_day.== in Pry, which will output the following. This is quite a bit more convenient than using owner and then having to look up the documentation online.
Yes, a lot more convenient. Thanks. I wish I'd known about it before.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/updated_tos/value-objects-in-ruby-13p
How to track a moving object — 971508, Oct 30, 2012 6:05 PM
There is a randomly moving object(obj1) on the panel. We have another object(obj2) that is supposed to track obj1 initiating from the (0,0) point on the panel. The obj2 must get closer to the obj1 as time passes.
1. Re: How to track a moving object — gimbal2, Oct 30, 2012 9:19 PM (in response to 971508): Sounds like basic 2D vector math to me, which is something you can Google for hundreds of articles on the matter. You have two points representing your two objects - now imagine an invisible line between those two points. One of the objects (your "terminator") should be periodically moving along that line until it collides with the other object.
2. Re: How to track a moving object — 971508, Oct 31, 2012 5:26 PM (in response to gimbal2): Hi. Thank you for your reply gimbal2. I've been searching this for a while now but unfortunately I couldn't find a working solution for this. There are some approaches discussed on some websites but they didn't show good results. I would really appreciate it if you could tell me how it works or at least drop some link in which I could find it myself.
I must also add that I'm not good at math.
thanks a lot.
3. Re: How to track a moving object — 971508, Oct 31, 2012 5:32 PM (in response to 971508): This is also the code I have written so far, which is not so good.
One thing to mention is the speed of the object (missile), which must be constant all the time.
package Test;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JPanel;
import javax.swing.Timer;
public class Tracking extends JPanel implements ActionListener{
private Timer t;
private int newX= 0, newY= 0;
private int currentX, currentY;
private int i=0, j=500;
private int movingY=800;
public Tracking(){
setBackground(Color.WHITE);
setOpaque(false);
}
public void paintComponent(Graphics g){
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g;
g2d.setPaint(Color.RED);
g2d.fillOval(600, movingY, 50, 50);
g2d.setPaint(Color.BLUE);
g2d.fillOval(newX, newY, 10, 10);
}
public Dimension getPreferredSize(){
return new Dimension(800, 1000);
}
public void doIt(){
t= new Timer(100, this);
t.start();
}
public void actionPerformed(ActionEvent e){
if(i<=100) {
i++;
}
currentX= newX;
currentY= newY;
newX= currentX + (i*((600 - currentX))/j);
newY= currentY + (i*((movingY - currentY))/j);
if(j>1){
j--;
}
else
t.stop();
movingY-=10;
//--------------------------
repaint();
}
}
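Following gimbal2's suggestion, the update step can keep the missile's speed constant by normalizing the chaser-to-target vector each tick and advancing a fixed distance along it. Here is a standalone sketch (the Pursuit class and its step helper are illustrative, not from the thread):

```java
// Minimal constant-speed pursuit: each tick, move `speed` units along the
// straight line from the chaser (x, y) toward the target (tx, ty).
public class Pursuit {
    static double[] step(double x, double y, double tx, double ty, double speed) {
        double dx = tx - x;
        double dy = ty - y;
        double dist = Math.sqrt(dx * dx + dy * dy);
        if (dist <= speed) {
            // Close enough to reach the target this tick: snap to it.
            return new double[] { tx, ty };
        }
        // Normalize the direction vector, then scale it by the constant speed.
        return new double[] { x + speed * dx / dist, y + speed * dy / dist };
    }

    public static void main(String[] args) {
        double x = 0, y = 0;      // chaser starts at the origin
        double tx = 30, ty = 40;  // target position this tick
        double[] p = step(x, y, tx, ty, 5.0);
        System.out.println(p[0] + "," + p[1]); // prints 3.0,4.0
    }
}
```

Calling step once per timer tick (with the target's current position) gives the closing behaviour the original poster describes, without the speed drifting as the distance shrinks.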
https://community.oracle.com/thread/2460402
take
signature:
take(count: number): Observable
Emit provided number of values before completing.
Why use take
When you are interested in only the first set number of emissions, you want to use take. Maybe you want to see what the user first clicked on when he/she first entered the page; you would want to subscribe to the click event and just take the first emission. Or there is a race you want to observe, but you're only interested in the first who crosses the finish line. This operator is clear and straightforward: you just want to see the first n emissions and do whatever it is you need.
If you want to take a variable number of values based on some logic, or another observable, you can use takeUntil or takeWhile!
take is the opposite of skip, where take will take the first n number of emissions while skip will skip the first n number of emissions.
Examples
Example 1: Take 1 value from source
import { of } from 'rxjs/observable/of';
import { take } from 'rxjs/operators';

//emit 1,2,3,4,5
const source = of(1, 2, 3, 4, 5);
//take the first emitted value then complete
const example = source.pipe(take(1));
//output: 1
const subscribe = example.subscribe(val => console.log(val));
Example 2: Take the first 5 values from source
import { interval } from 'rxjs/observable/interval';
import { take } from 'rxjs/operators';

//emit value every 1s
const source = interval(1000);
//take the first 5 emitted values
const example = source.pipe(take(5));
//output: 0,1,2,3,4
const subscribe = example.subscribe(val => console.log(val));
Example 3: Taking first click location
<div id="locationDisplay"> Where would you click first? </div>
import { fromEvent } from 'rxjs/observable/fromEvent';
import { take, tap } from 'rxjs/operators';

const oneClickEvent = fromEvent(document, 'click').pipe(
  take(1),
  tap(v => {
    document.getElementById('locationDisplay').innerHTML =
      `Your first click was on location ${v.screenX}:${v.screenY}`;
  })
);

const subscribe = oneClickEvent.subscribe();
Additional Resources
- take
- Official docs
- Filtering operator: take, first, skip
- André Staltz
https://www.learnrxjs.io/operators/filtering/take.html
Filter Options
Stefan Barsuhn
This is just a short post as I stumbled over this in the past: Problem You want to define an action for a business partner object (Customer, Contact Person, Employee) but this is not possible
Problem When you debug in C4C, you have an output window where you can see the sequence all the various script files are called. However, inside the ABSL script file you don’t have access to
Problem You may have noticed that it is not possible to convert between certain data types in C4C. This is especially bothersome since most data types are declared in multiple namespaces. Take the example CurrencyCode. Suppose you
https://blogs.sap.com/author/stefan.barsuhn/
The select package
While tinkering on a project, I frequently found myself having to make FFI calls to select(2). This package provides an interface to that system call.
Changed in version 0.4.0.1:
Minor internal cleanups.
TODO moved to file.
NOTE: I feel I'm occupying prime namespace real estate with a package name like select. I'll happily let myself be chased away if someone more qualified wants to use this package name. Let me know.
Properties
Modules
Downloads
- select-0.4.0.1.tar.gz [browse] (Cabal source package)
- Package description (included in the package)
Maintainer's Corner
For package maintainers and hackage trustees
http://hackage.haskell.org/package/select
Functionality – Using User Controls
While Dynamicweb has support for Custom Modules to create functionality not found in Dynamicweb, sometimes these modules are a bit overkill and require too much work for simple tasks. For example, imagine you want to develop a quick form that asks the user for some details, and then sends out an e-mail message to an account determined by the data the user entered. The standard Forms module doesn't support this, and the Data Management module may be a bit too much. In those cases, it’s good to know that you can use standard ASP.NET User Controls and embed them in a web page. Here’s how.
Creating the User Control
- In your Dynamicweb Custom Module solution (or in a separate ASP.NET Web Site or Web Application Project) create a new User Control.
- Add whatever code you need to the control to complete your tasks. In my case, I am adding a few simple controls like a TextBox, a DropDownList and a Button, to let the user determine the department to send the message to, and a body for the message. I also wrapped the controls in a panel that is hidden when the message is sent, while another panel with a Thank You message is shown. Finally, to demonstrate AJAX capabilities in User Controls I wrapped the entire code in an UpdatePanel with a ProgressTemplate. This avoids page flicker and improves the user experience. I ended up with this code:
<%@ Control <style type="text/css"> .label-column { width: 120px; } </style> <asp:ScriptManager <asp:UpdatePanel <ContentTemplate> <asp:PlaceHolder <p>Choose the department, add your message and click Send to send us your comments</p> <table> <tr> <td class="label-column">Department</td> <td> <asp:DropDownList <asp:ListItem>Marketing</asp:ListItem> <asp:ListItem>Sales</asp:ListItem> <asp:ListItem>Support</asp:ListItem> </asp:DropDownList> </td> </tr> <tr> <td>Message</td> <td> <asp:TextBox</asp:TextBox> </td> </tr> <tr> <td class="style2"> </td> <td> <asp:Button </td> </tr> </table> </asp:PlaceHolder> <asp:PlaceHolder Message sent</asp:PlaceHolder> <asp:UpdateProgress <ProgressTemplate> Please wait while we send your message. </ProgressTemplate> </asp:UpdateProgress> </ContentTemplate> </asp:UpdatePanel> </form>
The only thing that is different from any other User Control you would build yourself in a standard ASP.NET web site is the fact the complete code is wrapped in a server side <form />. This is required when using User Controls in Dynamicweb to work around the issue that a Dynamicweb page does not use a standard server side ASP.NET form.
- Next, in the code behind I added the following code that sends the message. Notice how I am using the selected value from the DropDownList control to determine the e-mail address to which the message needs to be sent; not something easily accomplished server side with the standard Forms or Data Management modules. Clearly, this is just an example; since everything you write here is 100% .NET, you can do whatever you can do in standard ASP.NET web applications and sites.
using System;
using System.Net.Mail;

namespace Dynamicweb.Samples.Web
{
  public partial class DataEntry : System.Web.UI.UserControl
  {
    protected void Send_Click(object sender, EventArgs e)
    {
      using (MailMessage message = new MailMessage())
      {
        message.From = new MailAddress("website@devierkoeden.com");
        message.To.Add(new MailAddress(
            string.Format("{0}@devierkoeden.com", Department.SelectedValue)));
        message.Subject = "Response from web site";
        message.Body = MessageText.Text;
        new SmtpClient().Send(message);
        MessageInput.Visible = false;
        MessageConfirmation.Visible = true;
        System.Threading.Thread.Sleep(5000);
      }
    }
  }
}
- I am using Thread.Sleep to halt page execution for five seconds. This makes it easier to see the AJAX behavior when the UpdateProgress control kicks in. Don’t use this in a production web site.
This is pretty much all you need for the User Control. As you can see, most of this is just standard ASP.NET stuff with all its bells and whistles such as ASP.NET AJAX and PostBacks. The only exception is the server side form wrapped around the markup of the User Control.
The next step is adding the module to a page in your web site.
Adding the User Control to your Site
To add the module to a page or paragraph, you have three options.
- When adding content to a paragraph, switch the Editor to Source mode and add the following code to the paragraph: <!--@LoadControl(DataEntry.ascx)--> as shown in Figure 1:
Figure 1 (click to enlarge)
The path you enter here is relative to the root of your site, so you could insert a leading / to make that more explicit.
If you now save the page and request it in your browser, the module should appear.
- Instead of entering the code directly in a paragraph, you can add it as a module. First, make sure the Insert .ascx Control module is activated in the system. Next, when editing a paragraph, click the Module button and choose Insert .ascx Control. You then get a text box where you can enter the path to a user control. You need to enter the path here manually instead of browsing to it using a FileManager control, because that control can only look in files under the Files folder. Typically, you have your User Controls directly in the root of your Application project or in a sub folder. For the output of the User Control to show up, your paragraph template needs to have a ParagraphModule template tag. Take a look at the Caveats and Problems section to read more about a problem with the Insert .ascx Control module.
- The final solution to add a User Control is to add it directly to a template. This is convenient if you want the control to appear on all pages, or in a region not normally editable through the Admin section such as a header or a footer. To add the control, you can add the following markup to a template file such as a paragraph or page template:
<!--@LoadControl(/DataEntry.ascx)-->
Again, the path you enter here is relative to the root.
No matter the way you added the control to the page, it should now appear when you request the page in the browser. You should see a drop down list, a text box and a button. When you add a message and hit the Send button, you should see the message you entered in the UpdateProgress control. After a few seconds the Message Sent text appears.
Caveats and Problems
When adding user controls and enabling AJAX scenarios, you may run into some issues. For example, instead of seeing the page, you may see this instead:
Figure 2 (click to enlarge)
I am not 100% sure, but it seems related to HTTP Compression which Dynamicweb has enabled by default. I found that removing the compression modules from the web.config seems to solve the problem. However, it may also be that just recompiling the application because of the changed web.config solves the problem as I've also seen it work with compression turned on. In future versions of Dynamicweb the implementation of compression may change, so this behavior may change as well. It’s up to you to determine if using User Controls makes up for the lack of HTTP Compression.
You may also find that using the Insert .ascx Control module does not add the module to the page. This seems to be a bug, and I will update this article when I find a solution.
If your AJAX functionality doesn't seem to work, check the web.config file and make sure that the mode property of the xhtmlConformance element is set to Transitional, like this:
<system.web> ... <xhtmlConformance mode="Transitional"/> ...
</system.web>
Older versions of Dynamicweb or the custom module project may have set this to Legacy, in which case the UpdatePanel control doesn't function correctly.
The final caveat you need to be aware of is the fact that the User Control uses a <form /> with its runat attribute set to server. Since the control now outputs a complete form, you can’t embed a user control within another form tag in a page or paragraph template as you’re not allowed to nest HTML forms. It’s typically easy to work around this by moving the form tags around a bit in your templates so the User Control doesn't interfere with them.
You can download the User Control here. If you want to use it, add it to a Custom Modules project, compile it and then register the control as explained in this article. If you also want to send an e-mail with it, you need to add the necessary SMTP configuration to web.config as you normally would.
TIP
The User Control I built in this article is based on a Web Application Project in Visual Studio. This means you also need to deploy the resulting DLL file with your User Controls as that’s where the code in the Code Behind ends up. To make deployment easier, you can create a new Web Site in Visual Studio instead (using File | New Web Site rather than File | New Project). User Controls you add to a Web Site are compiled on the fly on the server. That means that all you need to deploy to your production server is the .ascx file and its associated Code Behind file (or just the .ascx file if you’re not using Code Behind).
https://devierkoeden.com/articles/custom-functionality-using-user-controls
Hello Simon,

Wednesday, February 08, 2006, 2:19:07:

SM> The point is that it should already be optimised - both mapM_ and [n..m]
SM> work with foldr/build optimisation, but due to the problem reported in
SM> that ticket, foldr/build isn't working fully on this example. Better to
SM> fix the cause of the problem than work around it with a special RULE.

i understood this and therefore didn't write my function body. but now i
write it:

-- Faster equivalent of "mapM_ action [from..to]"
loop from to action = go from
  where
    go i | i > to    = return ()
         | otherwise = do action i
                          go $! (i+1)

for the following purpose - can you check that 'loop' is no more a "faster
equivalent", i.e. that the speed is really the same now?

--
Best regards,
Bulat                            mailto:bulatz at HotPOP.com
http://www.haskell.org/pipermail/glasgow-haskell-users/2006-February/009664.html
On Thu, Jul 10, 2008 at 05:25:35PM +1000, Nick Piggin wrote:
> On Wednesday 09 July 2008 07:37, Rafael J. Wysocki wrote:
> > Bug-Entry :
> > Subject : 2.6.26-rc1-$sha1: RIP __d_lookup+0x8c/0x160
> > Submitter : Alexey Dobriyan <adobriyan@gmail.com>
> > Date : 2008-05-05 09:59 (65 days old)
> > References :
> > Handled-By : Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>
> Attached is my fix for this problem. I don't think it is a regression
> as such, but it can't hurt to go into 2.6.26 IMO.
>
> PREEMPT_RCU without HOTPLUG_CPU is broken. The rcu_online_cpu is called to
> initially populate rcu_cpu_online_map with all online CPUs when the hotplug
> event handler is installed, and also to populate the map with CPUs as they
> come online. The former case is meant to happen with and without HOTPLUG_CPU,
> but without HOTPLUG_CPU, the rcu_online_cpu function is no-oped -- while it
> still gets called, it does not set the rcu CPU map.
>
> With a blank RCU CPU map, grace periods get to tick by completely oblivious
> to active RCU read side critical sections. This results in free-before-grace
> bugs.
>
> Fix is obvious once the problem is known. (Also, change __devinit to
> __cpuinit so the function gets thrown away on !HOTPLUG_CPU kernels).

I officially feel extremely stupid. Thank you -very- much for tracking
this down, Nick!!! And especially for the fix!

I will give this a good testing. In the meantime:

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Signed-off-by: Nick Piggin <npiggin@suse.de>
> ---
>
> Annoyed this wasn't a crazy obscure error in the algorithm I could fix :)
> I spent all day debugging it and had to make a special test case (rcutorture
> didn't seem to trigger it), and a big RCU state logging infrastructure to log
> millions of RCU state transitions and events. Oh well.
>
> Index: linux-2.6/kernel/rcupreempt.c
> ===================================================================
> --- linux-2.6.orig/kernel/rcupreempt.c 2008-07-10 17:08:56.000000000 +1000
> +++ linux-2.6/kernel/rcupreempt.c 2008-07-10 17:09:10.000000000 +1000
> @@ -925,26 +925,22 @@ void rcu_offline_cpu(int cpu)
>  	spin_unlock_irqrestore(&rdp->lock, flags);
>  }
>
> -void __devinit rcu_online_cpu(int cpu)
> -{
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&rcu_ctrlblk.fliplock, flags);
> -	cpu_set(cpu, rcu_cpu_online_map);
> -	spin_unlock_irqrestore(&rcu_ctrlblk.fliplock, flags);
> -}
> -
>  #else /* #ifdef CONFIG_HOTPLUG_CPU */
>
>  void rcu_offline_cpu(int cpu)
>  {
>  }
>
> -void __devinit rcu_online_cpu(int cpu)
> +#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */
> +
> +void __cpuinit rcu_online_cpu(int cpu)
>  {
> -}
> +	unsigned long flags;
>
> -#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */
> +	spin_lock_irqsave(&rcu_ctrlblk.fliplock, flags);
> +	cpu_set(cpu, rcu_cpu_online_map);
> +	spin_unlock_irqrestore(&rcu_ctrlblk.fliplock, flags);
> +}
>
>  static void rcu_process_callbacks(struct softirq_action *unused)
>  {
https://lkml.org/lkml/2008/8/1/381
Hi!
I am working on a project that mainly needs to have 4-5 objects, each with a different sound track on them, that will play upon touch (collision) with a first person character walking around.
Since all 5 sounds will be connected to each other (one will be bass, one vocals, one guitar etc.) and they actually need to play synchronized , I need them all to play on awake, but muted. When the character collides with an objects, the sound that is attached to that object can just un-mute.
I am an architect, I designed an amazing geometry and need an interactive way to display it. I have very very limited coding skills and even after looking at unity documentation, still don't have an exact idea how the script would work!
Any help would be enormously appreciated!
What would you like to do, continuously play the sound until the player touches them? Or play the sound only when the player touches them? Your description is conflicting.
I can help with the script, but I need to understand what you want to do.
At the beginning there is no sound. You have 5 objects. If you touch one of the objects, a specific sound plays continuously even if you go away from it. Every object plays a different sound.
Let's say you have already collided with two objects and you can hear two sounds: one of them is a drums sound, one of them is a guitar sound. If the audio starts playing upon collision, the second sound will most probably not match the first one (because it was triggered to start at a random time). That is why I want all sounds to start at the beginning, muted, and collision to un-mute them.
Answer by Harinezumi · May 11, 2018 at 07:30 AM
Thanks for the explanation, now I see what you want.
It is actually pretty easy to do this: you just need a script that sets volume to 0 in the beginning, and to 1 (or whatever you would like) when there is a collision with a specific object. For the collision to work, you also need to set up the objects correctly (at least one has to have a rigidbody). For example, your player would have a collider (CapsuleCollider or CharacterController) and a rigidbody (probably set to kinematic), while the sound objects just a collider, like a BoxCollider or SphereCollider.
Here is the script. I will also try to explain what each line does so you can learn from it and become better at programming ;)
using UnityEngine;
// make sure the game object you attach it to has an AudioSource on it
[RequireComponent(typeof(AudioSource))]
// create a script that you can attach to a game object
public class SetVolumeOnCollision : MonoBehaviour {
// the volume to set when collided; can be set in the Editor;
// not necessarily needed, but it will make the component more versatile
[SerializeField] private float volumeOnCollision = 1;
// the tag to which it should react; also not essential, but nice to have
// make sure you set the tag of your colliding object if you do use it!
[SerializeField] private string tagToReactTo = "Player";
// the audio source you want to control
private AudioSource audioSource;
// this is called in the first moment, even before sounds start to play
private void Awake () {
// get the audio source to control that is on this game object
audioSource = GetComponent<AudioSource>();
// set volume to 0, muting it; you can also do this in the Editor
audioSource.volume = 0;
// make sure it will start to play; this as well can be set in the Editor
audioSource.playOnAwake = true;
}
// this function is called on the frame when something collides with this game object
private void OnCollisionEnter (Collision collision) {
// check the tag; remove this if you don't want it
if (collision.collider.CompareTag(tagToReactTo))
// set the desired volume; just replace it with 1 if you don't want to use desired volume
audioSource.volume = volumeOnCollision;
}
}
I get
*Assets/volontrig.cs(6,14): error CS0101: The namespace `global::' already contains a definition for `SetVolumeOnCollision*
Remove the copy of the code from volontrig.cs. This script should go into its separate file, and the file must be called SetVolumeOnCollision.cs (don't add the extension if you create it from Unity).
Thank you so much for you help! I saved the script as SetVolumeOnCollision, it plays without any errors, but there is NO sound upon collision. I made a simple test with a Cube with an audio source and the script attached to it. For the player I use the FPSController prefab from Unity standard assets - it has a Rigidbody. If I remove the script, the audio is just playing from the start as it should, so I'm sure the audiosource is fine.
I just managed to make it run, used some strange FPS player and it works now!!! This is amazing, thank you so much for your help! BTW is there a simple way to mute it again upon second collision?
I'm glad it works!
Yes, there is a simple way: change the line audioSource.volume = volumeOnCollision; to audioSource.volume = (audioSource.volume > 0) ? 0 : volumeOnCollision;. This will always change the volume to 0 when it was greater than 0, and to the set volume when it is 0.
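The toggle is just a conditional reset. Restated in isolation (Python here, purely illustrative — `toggle_volume` and `volume_on_collision` are made-up names mirroring the C# above):

```python
def toggle_volume(current, volume_on_collision=1.0):
    """Mirror of the C# ternary: mute if audible, restore the set volume if muted."""
    return 0.0 if current > 0 else volume_on_collision

print(toggle_volume(1.0))  # muted
print(toggle_volume(0.0))  # restored
```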
https://answers.unity.com/questions/1504635/collier-multiple-sounds-script.html
Extending a module to set new defaults values. How to get started ?
In the module product, there is a 'type' field that has 2 options, consu or service. By default, it is set to consu.
/addons/product/product.py

_defaults = {
    'company_id': lambda s, cr, uid, c: s.pool.get('res.company')._company_default_get(cr, uid, 'product.template', context=c),
    'list_price': 1,
    'cost_method': 'standard',
    'standard_price': 0.0,
    'sale_ok': 1,
    'produce_delay': 1,
    'uom_id': _get_uom_id,
    'uom_po_id': _get_uom_id,
    'uos_coeff': 1.0,
    'mes_type': 'fixed',
    'categ_id': _default_category,
    'type': 'consu',
}
The module procurement.py extends this to add the value 'product' to the type field.
/addons/product/procurement.py
class product_template(osv.osv):
    _inherit = "product.template"

    _columns = {
        'type': fields.selection([('product','Stockable Product'),('consu','Consumable'),('service','Service')], 'Product Type', required=True, help="Consumable: Will not imply stock management for this $
        'procure_method': fields.selection([('make_to_stock','Make to Stock'),('make_to_order','Make to Order')], 'Procurement Method', required=True, help="Make to Stock: When needed, the product is take$
        'supply_method': fields.selection([('produce','Manufacture'),('buy','Buy')], 'Supply Method', required=True, help="Manufacture: When procuring the product, a manufacturing order or a task will be $
    }
I simply need to change the default to product.
I know that I could probably hard code it in procurement.py under
_defaults = { 'type' : 'product',
}
But I also know that by doing so, each time procurement.py is updated I will lose the change and will need to redo it.
Thus why I would like a tutorial (or example here) of how to create a module to extend procurement.
OpenERP is gaining a lot of ground. I am a hobbyist, doing this to help people at no charge, and a promoter of open source solutions. The only thing missing to help the community grow to where it could really be is a central point for all the learning resources, like a "Learn module creation for OpenERP with these teaching code examples".
Something that explains the module directory structure, __init__.py, etc., with code where each line is commented to explain why I do it, like:

import soandso  # needed to access _defaults to modify it
_inherit        # needed for this and that

etc. As you see, these should be small modules that teach one or two things each; if other things fit in, just modify it and explain properly what the line(s) do!
Thanx for the help !
You can start here and here
A good practical tutorials from Tony Galmiche
OpenERP 7 technical memento v0.7.4
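To sketch the idea: a minimal custom module would declare a class with `_inherit = "product.template"` and only `_defaults = {'type': lambda *a: 'product'}`; the ORM then merges that dict over the parent's, so the change survives updates to procurement.py. The snippet below is a plain-Python illustration of that merge, not the real OpenERP ORM (names are illustrative):

```python
# Plain-Python illustration of how an _inherit module's _defaults
# effectively override the parent's -- only the redeclared key changes.
parent_defaults = {'type': 'consu', 'sale_ok': 1, 'list_price': 1}
custom_defaults = {'type': 'product'}   # all your module needs to declare

effective = {**parent_defaults, **custom_defaults}
print(effective['type'])     # product
print(effective['sale_ok'])  # 1  (untouched defaults survive)
```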
https://www.odoo.com/forum/help-1/question/extending-a-module-to-set-new-defaults-values-how-to-get-started-49490
Thibaut,

As far as I know, URI escaping functions escape all non-alphanumerics
which are not in the following set of characters: {'-', '_', ':', '/', '?',
'=', '&', '#', '.'} (there may be others I can't think of right now). If a
character is in that set, the URI remains "legal" even if the
character is unescaped.
A reason for this:
If you start with a link (), there are a
number of special characters that are requred to parse the URI correctly.
Without these characters: {'/', ':'}, there can be no "http://".
Without this character; {'?'}, there is no query string... only a run-on
directory-path.
Without this character; {'#'}, there is no anchor... only an incorrectly
long GET parameter value.
This is not a bug; you need to manually escape any of the special
characters (probably called URI META characters or something like that) if
you expect them to be URL-encoded. If all '&' characters were URI-escaped
all of the time, there would be no way to create a GET parameter list; there
would never be more than one parameter.
As for a workaround, you will need to find a pool-friendly (assuming you
are using pools for memory allocation in this specific instance)
character/substring replacement function. You will likely want to do a
straight encode of all components of a URI seperately with this function
then use the ap_escape_uri(). I am not familiar with a particular function
that will do the trick, but I use a pool-modified version of a Yahoo!
C-library function for URL-encoding.
You can probably get this function to URL-encode all characters (or just the
'&' character) with mimimal effort. Just modify the "isurlchar(...)"
function to suit your needs. BTW - this function should be converted to use
pools when allocating C-string memory.
The following code is from yahoo_httplib.c (GNU Public License). I found it
through Google.com/codesearch
/* -------------------------------------------------- */
static int isurlchar(unsigned char c)
{
return (isalnum(c) || '-' == c || '_' == c);
}
char *yahoo_urlencode(const char *instr)
{
int ipos=0, bpos=0;
char *str = NULL;
int len = strlen(instr);
if(!(str = y_new(char, 3*len + 1) ))
return "";
while(instr[ipos]) {
while(isurlchar(instr[ipos]))
str[bpos++] = instr[ipos++];
if(!instr[ipos])
break;
snprintf(&str[bpos], 4, "%%%.2x", instr[ipos]);
bpos+=3;
ipos++;
}
str[bpos]='\0';
/* free extra alloc'ed mem. */
len = strlen(str);
str = y_renew(char, str, len+1);
return (str);
}
/* -------------------------------------------------- */
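For comparison, the same strict component-encoding idea can be written in a few lines of Python. This is illustrative only — `url_encode_strict` is a made-up name mirroring the `isurlchar()`/`yahoo_urlencode()` pair above, passing through only alphanumerics, '-' and '_' and percent-encoding everything else, '&' included:

```python
import string

# Same allow-list as isurlchar(): letters, digits, '-' and '_'
SAFE = frozenset(string.ascii_letters + string.digits + "-_")

def url_encode_strict(s: str) -> str:
    """Percent-encode every byte outside the SAFE set, byte by byte."""
    out = []
    for b in s.encode("utf-8"):
        c = chr(b)
        out.append(c if c in SAFE else "%%%02x" % b)  # lowercase hex, like %.2x
    return "".join(out)

print(url_encode_strict("a&b=c"))  # a%26b%3dc
print(url_encode_strict("x y"))    # x%20y
```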
Dave
On 5/5/07, Thibaut VARENE <T-Bone@parisc-linux.org> wrote:
>
> Hi,
>
> I'm writing mod_musicindex[0], and I have a problem I can't fix: "&" in
> filenames aren't escaped into "%26" with ap_escape_uri(), see [1].
>
> I've been digging apache source in search of a solution with little
> luck, and I was wondering if somebody could tell me 1) why
> ap_escape_uri() (or ap_os_escape_path() for that matter) doesn't escape
> '&', and what I'm supposed to do to work around that.
>
> This bug happens with apache 1.3.33 and apache 2.2.3, fwiw.
>
> TIA
>
> Thibaut
>
> PS: Please CC-me in replies
>
> [0]
> [1]
>
> --
> Thibaut VARENE
>
>
--
David Wortham
Senior Web Applications Developer
Unspam Technologies, Inc.
(408) 338-8863
http://mail-archives.apache.org/mod_mbox/httpd-modules-dev/200705.mbox/%3C5280fae50705051057md1cd5c5t423c3536dc6c18e5@mail.gmail.com%3E
Variables are used in C++ wherever we need storage for a value that will change during the program. Variables can be declared in multiple ways, each with different memory requirements and behaviour. A variable is the name of a memory location allocated by the compiler, depending upon the datatype of the variable.

Each variable must be given a datatype at declaration; the memory assigned to the variable depends on it. Following are the basic types of variables.

Variables must be declared before they are used. Usually it is preferred to declare them at the start of the program, but in C++ they can also be declared in the middle of the program, as long as this is done before they are used.
For example:
int i;        // declared but not initialised
char c;
int i, j, k;  // multiple declaration
Initialization means assigning a value to an already declared variable:
int i;   // declaration
i = 10;  // initialization
Initialization and declaration can be done in one single step also,
int i = 10;  // initialization and declaration in the same step
int i = 10, j = 11;
If a variable is declared but not initialized, by default it will hold a garbage value. Also, if a variable is declared once and we try to declare it again in the same scope, we will get a compile-time error.
int i, j;
i = 10;
j = 20;
int j = i + j;  // compile-time error: cannot redeclare a variable in the same scope
All variables have an area of functioning, and outside that boundary they don't hold their value; this boundary is called the scope of the variable. In most cases a variable exists only between the curly braces in which it is declared, not outside them. We will study storage classes later, but for now we can broadly divide variables into two main types,
Global variables are those which, once declared, can be used throughout the lifetime of the program by any class or any function. They must be declared outside the
main() function. If only declared, they can be assigned different values at different times in the program's lifetime. But even if they are declared and initialized at the same time outside the main() function, they can still be assigned any value at any point in the program.
For example: Only declared, not initialized
#include <iostream>
using namespace std;

int x;  // global variable declared

int main()
{
    x = 10;  // initialized once
    cout << "first value of x = " << x;
    x = 20;  // initialized again
    cout << "Initialized again with value = " << x;
}
Local variables are the variables which exist only between the curly braces in which they are declared. Outside of that they are unavailable, and using them leads to a compile-time error.
Example :
#include <iostream>
using namespace std;

int main()
{
    int i = 10;
    if (i < 20)          // if condition scope starts
    {
        int n = 100;     // local variable declared and initialized
    }                    // if condition scope ends
    cout << n;           // compile-time error: n not available here
}
There are also some special keywords that impart unique characteristics to the variables in the program. The following two, const and static, are the most used; we will discuss them in detail later.
Example :
#include <iostream>
using namespace std;

int main()
{
    const int i = 10;
    static int y = 20;
}
https://www.studytonight.com/cpp/variables-scope-details.php
We recently launched our latest product to great critical acclaim. We’re happy to see people climbing aboard the Copycopter, and feedback has been largely positive.
However, one question we’ve had to field a few times is, “how will Copycopter affect the performance of my application?” The answer is a good one: it won’t. However, that answer may seem too good to be true to some folks, so let’s find out how Copycopter manages to stay out of your application’s way.
Integration
One of Copycopter’s nicer features is that the Ruby client is deeply integrated into the Rails stack the second you install it. This is made possible by the excellent Rails I18n API. We hook in our own I18n backend so that whenever Rails looks for a string, we use the text you’ve set up on Copycopter. This all happens in copycopter_client’s I18nBackend class. If you read quickly through that class, you’ll see that fetching or storing copy doesn’t make a request to the Copycopter server; it looks for content in the hash-like sync object.
def lookup(locale, key, scope = [], options = {}) parts = I18n.normalize_keys(locale, key, scope, options[:separator]) key_with_locale = parts.join('.') content = sync[key_with_locale] || super sync[key_with_locale] = "" if content.nil? content end
Behind the scenes
The client’s performance is achieved by using a background thread. When your Rails application starts up, the client spins up a thread in the Sync class.
until @stop sync logger.flush if logger.respond_to?(:flush) sleep(polling_delay) end
Every five minutes, the background thread synchronizes with the Copycopter server. It uses mutexes to make sure it isn’t updating copy while the application is using it. However, the mutex is only locked when already downloaded copy is being swapped in, so the main thread won’t be waiting for a lock to release.
def download client.download do |downloaded_blurbs| downloaded_blurbs.reject! { |key, value| value == "" } lock { @blurbs = downloaded_blurbs } end rescue ConnectionError => error logger.error(error.message) end
HTTP friendly
We also don’t want to waste any bandwidth or cycles by repeatedly downloading unchanged copy. The Client class is responsible for actually talking to the Copycopter server, and it speaks fluent HTTP.
def download connect do |http| request = Net::HTTP::Get.new(uri(download_resource)) request['If-None-Match'] = @etag response = http.request(request) if check(response) log("Downloaded translations") yield JSON.parse(response.body) else log("No new translations") end @etag = response['ETag'] end end
Each running client tracks the latest ETag when it downloads copy, so most requests simply return a 304 Not Modified response without sending any copy data.
when Net::HTTPNotModified false
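The conditional-GET flow described above can be simulated end to end without a network. The sketch below (hypothetical names, plain Python, not the Copycopter client code) shows why repeated polls of unchanged copy cost almost nothing — the "server" answers a 304 with no body whenever the client presents the current ETag:

```python
# Toy simulation of ETag-based polling: 200 + body on change, 304 otherwise.
def fetch(server, etag=None):
    if etag == server["etag"]:
        return 304, None, server["etag"]      # not modified: no copy sent
    return 200, server["copy"], server["etag"]

server = {"etag": "v1", "copy": {"title": "Hello"}}

status, body, etag = fetch(server)        # first poll downloads the copy
print(status)                             # 200
status, body, _ = fetch(server, etag)     # unchanged copy: no body at all
print(status, body)                       # 304 None
```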
Developing
It should be mentioned that this passive behavior is only used in production environments. During development and on a staging server, we found a five minute delay between copy updates to be unacceptable. During development, we wrap each request using a little piece of Rack middleware:
def call(env) @sync.download response = @app.call(env) @sync.flush response end
Although this could potentially add a slight delay to local requests, we found that the faster feedback was worth the tradeoff, and the smart HTTP handling, caching, and timeouts ensure that developing still feels snappy.
Putting it all together
By integrating with I18n, using a background thread, and using the HTTP protocol to our advantage, we achieve a number of performance and stability benefits:
- Slow copy downloads don’t mean slow applications
- Server errors don’t mean application errors
- Lots of application traffic doesn’t mean many Copycopter requests
- Up-to-date copy doesn’t cost much in terms of bandwidth
- Rails uses the I18n stack, so Rails engines and plugins support Copycopter by default
If you haven’t given Copycopter a try yet, don’t let apprehensions about performance stop you. Check out the code, install the client, and see for yourself.
http://robots.thoughtbot.com/copycopters-client-so-fast
I'm wondering about the correct way of keeping my application in sync with a timebase master application.
Let's say I have Hydrogen running in master mode and I want to print a message on every 1/4 note Hydrogen is playing.
This is what I would do intuitive (using python):
Code: Select all
#!/usr/bin/env python3
import time
import jack
client = jack.Client('klicker')
def print_msg (last_tick):
state, pos = client.transport_query()
if state == jack.ROLLING:
if pos['tick'] < last_tick:
print ("klick")
return pos['tick']
with client:
last_tick = 0
while True:
last_tick = print_msg (last_tick)
time.sleep(0.00002)
So I'm running a loop with a very short sleep time and checking in every iteration whether the current beat is already over.
This seems a little bit dirty and imprecise to me. So what would be the right way of solving this problem?
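For reference, the wrap-around heuristic from the loop above can be isolated and exercised without a running JACK server. This only restates the check already in `print_msg` (a new beat is assumed whenever the tick counter drops below its previous value), with a made-up tick stream:

```python
def is_new_beat(tick, last_tick):
    # Same test as `pos['tick'] < last_tick` in the loop above
    return tick < last_tick

ticks = [0, 640, 1280, 1820, 60, 700]  # made-up tick stream with one wrap
beats = sum(is_new_beat(cur, prev) for prev, cur in zip(ticks, ticks[1:]))
print(beats)  # 1
```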
Would be very happy if someone could help
https://linuxmusicians.com/viewtopic.php?f=44&t=17488&p=84624&
From: Peter Dimov (pdimov_at_[hidden])
Date: 2004-12-06 19:21:16
Sean Parent wrote:
> On Dec 6, 2004, at 2:02 PM, Peter Dimov wrote:
>
>>
>> I get warning C4172 from VC++ 7.1, "returning address of local
>> variable or temporary", on this example (in
>> function_template.hpp:111).
>>
>
> In CodeWarrior 9.3 BOOST_NO_VOID_RETURNS is not defined so in
> function.hpp the code falls into the static_cast<> case
>
> -----
>
> # ifndef BOOST_NO_VOID_RETURNS
> return static_cast<result_type>(result);
> # else
> return result;
> # endif // BOOST_NO_VOID_RETURNS
>
> ------
>
> This silences the warning - and you get no indication that anything is
> wrong.
Not good. :-) FWIW, VC++ 7.0+ do not define BOOST_NO_VOID_RETURNS either.
> Just to make sure - you plan to leave it returning by value? If that
> is the case I'll update my code (I had patched my copy of boost).
Yes, return by value still seems to be the lesser of the two evils, since we
can deal with the problematic case on boost::function's side, if Doug
doesn't mind (the problem there is not bind-specific).
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/12/77298.php
Hi,
It looks like my initial attempt at this didn't work too well, as my
intention wasn't clear enough and the interface draft I included
seemed to raise mostly concerns about technicalities (too many
methods, etc.) instead of the fundamental design tradeoffs I was
trying to highlight. So let's try this again.
What I'm looking for is a clear, shared idea of what a jr3 content
tree looks like at a low level (i.e. before stuff like node types,
etc.) since the current MK interface leaves many of those details
unspecified. Here's what the MK interface currently says about this:
> * The MicroKernel <b>Data Model</b>:
> * <ul>
> * <li>simple JSON-inspired data model: just nodes and properties</li>
> * <li>a node is represented as an object, consisting of an unordered collection
> * of properties and an array of (child) node objects</li>
> * <li>properties are represented as name/value pairs</li>
> * <li>MVPs are represented as name/array-of-values pairs</li>
> * <li>supported property types: string, number</li>
> * <li>other property types (weak/hard reference, date, etc) would need to be
> * encoded/mangled in name or value</li>
> * <li>no support for JCR/XML-like namespaces, "foo:bar" is just an ordinary name</li>
> * <li>properties and child nodes share the same namespace, i.e. a property and
> * a child node, sharing the same parent node, cannot have the same name</li>
> * </ul>
There are a few complications and missing details with this model (as
documented) that I tried to address in my original proposal. The most
notable are:
* The data model specifies that a node contains an "an array of
(child) node objects" and seems to imply that child nodes are always
orderable. This is a major design constraint for the underlying
storage model that doesn't seem necessary (a higher-level component
could store ordering information explicitly) or desirable (see past
discussions on this). To avoid this I think child nodes should be
treated as an unordered set of name/node mappings.
* Another unspecified bit is whether same-name-siblings need to be
supported on the storage level. The MK implies that SNSs are not
supported (i.e. a higher level component needs to use things like name
mangling to implement SNSs on top of the MK), but the note about "an
*array* of (child) node objects" kind of leaves the door open for two
child nodes to (perhaps accidentally) have the same name. Also for
this reason I think child nodes should be treated as a map from names
to corresponding nodes.
* The data model doesn't specify whether the name of a node is an
integral part of the node itself. The implementation(s) clarify (IMHO
correctly) that the name of each child node is more logically a part
of the parent node. Thus, unlike in JCR, there should be no getName()
method on a low-level interface for nodes.
* Somewhat contrary to the above, the data model specifies properties
as "name/value pairs". The MK interface doesn't allow individual
properties to be accessed separately, so this detail doesn't show up
too much in practice. However, in terms of an internal API it would be
useful to keep properties mostly analogous to child nodes. Thus there
should be no getName() method on a low-level interface for properties
(or, perhaps more accurately, "values").
* The data model says that "properties and child nodes share the same
namespace" but treats properties and child nodes differently in other
aspects (properties as "an unordered collection", child nodes as "an
array"). This seems like an unnecessary complication that's likely to
cause trouble down the line (e.g. where and how will we enforce this
constraint?). From an API point of view it would be cleanest either to
treat both properties and child nodes equally (like having all as
parts of a single unordered set of name/item mappings) or to allow and
use completely separate name spaces for property and child node names.
* Finally, while the MK interface doesn't spell it out explicitly, the
implicit consequence of using MVCC and referencing revision
identifiers in method calls is that the underlying tree model is
essentially immutable. The content tree only changes when a new
revision is constructed, while all past revisions remain intact. To
reflect this, an internal tree API should be mostly immutable.
These are in my mind the key issues that I think we should try to
reach an agreement on. The exact form of the interface that expresses
such consensus is IMHO of lesser importance, which is why I don't feel
too strongly about things like the use of java.util.Map or the Visitor
pattern. Such details can be changed down the line based on
experience, but deeper features like addressing and the orderability
of nodes and properties are very expensive to change later on.
My proposal, as drafted in the Tree interface, essentially says:
1) Properties and child nodes are all addressed using an unordered
name->item mapping on the parent node.
2) Neither properties nor child nodes know their own name (or their
parent). That information is kept only within the parent node.
3) Content trees are immutable except in clearly documented cases.
Some concerns about especially the first and third items were raised
in the followup discussion. Based on those concerns, a possible
alternative for the first item could be:
1a) Properties are addressed using an unordered name->property mapping
on the parent node
1b) Child nodes are addressed using an unordered name->node mapping on
the parent node
1c) The spaces for property and child node names are distinct.
Possible restrictions on this need to be implemented on a higher
level.
An alternative for the third item could be:
3a) Content trees are always immutable.
3b) A separate builder API is used for constructing new or modified
content trees.
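For concreteness, here is a sketch of what alternatives 1a-1c combined with 3a/3b could look like. This is hypothetical: the names NodeState and NodeBuilder are made up, and Python is used here only as compact pseudocode for what would eventually be a Java interface.

```python
from types import MappingProxyType

class NodeState:
    """An immutable node (3a) with distinct name spaces (1c) for
    properties (1a) and child nodes (1b)."""

    def __init__(self, properties=None, children=None):
        # 1a: unordered name -> value mapping for properties
        self._properties = MappingProxyType(dict(properties or {}))
        # 1b: unordered name -> node mapping for child nodes
        self._children = MappingProxyType(dict(children or {}))

    def property(self, name):
        return self._properties.get(name)

    def child(self, name):
        # Note: a child has no name of its own; the name lives only
        # in this parent mapping, so there is no getName() on nodes.
        return self._children.get(name)

class NodeBuilder:
    """3b: a separate builder produces new immutable revisions."""

    def __init__(self, base=None):
        self._properties = dict(base._properties) if base else {}
        self._children = dict(base._children) if base else {}

    def set_property(self, name, value):
        self._properties[name] = value
        return self

    def set_child(self, name, node):
        self._children[name] = node
        return self

    def build(self):
        return NodeState(self._properties, self._children)

content = NodeBuilder().set_property("title", "Hello").build()
root = NodeBuilder().set_child("content", content).build()
print(root.child("content").property("title"))  # -> Hello
```

Any SNS support or the "properties and child nodes share the same namespace" restriction would then live in a higher-level component, not in this storage-level model.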
Can we reach consensus on some of these models (or yet another
alternative)? If yes, it should be fairly straightforward to draft an
interface that captures such consensus and addresses the more detailed
concerns people have expressed.
BR,
Jukka Zitting
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/201203.mbox/%3CCAOFYJNaJw2Dy6M4OhUpexp4U=yxZHqyqqf3CU9aa-Jgic8SW0Q@mail.gmail.com%3E
Opened 9 years ago
Last modified 8 years ago
#5551 defect new
struct.error: argument out of range for 4-bytes integer format
Description
I have a custom Names backend and everything works, but "some hours" after startup some domains seem to stop working. In the logs I can see the traceback below.
2012-03-14 10:02:33+0000 [DNSDatagramProtocol (UDP)] Unhandled Error
Traceback (most recent call last):
  File "/opt/dns/site-packages/twisted/names/server.py", line 192, in messageReceived
    self.handleQuery(message, proto, address)
  File "/opt/dns/site-packages/twisted/names/server.py", line 137, in handleQuery
    self.gotResolverResponse, protocol, message, address
  File "/opt/dns/site-packages/twisted/internet/defer.py", line 301, in addCallback
    callbackKeywords=kw)
  File "/opt/dns/site-packages/twisted/internet/defer.py", line 290, in addCallbacks
    self._runCallbacks()
  --- <exception caught here> ---
  File "/opt/dns/site-packages/twisted/internet/defer.py", line 551, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/opt/dns/site-packages/twisted/names/server.py", line 107, in gotResolverResponse
    self.sendReply(protocol, message, address)
  File "/opt/dns/site-packages/twisted/names/server.py", line 92, in sendReply
    protocol.writeMessage(message, address)
  File "/opt/dns/site-packages/twisted/names/dns.py", line 1804, in writeMessage
    self.transport.write(message.toStr(), address)
  File "/opt/dns/site-packages/twisted/names/dns.py", line 1686, in toStr
    self.encode(strio)
  File "/opt/dns/site-packages/twisted/names/dns.py", line 1587, in encode
    q.encode(body_tmp, compDict)
  File "/opt/dns/site-packages/twisted/names/dns.py", line 497, in encode
    strio.write(struct.pack(self.fmt, self.type, self.cls, self.ttl, 0))
struct.error: argument out of range for 4-bytes integer format
I checked the backend and it always returns the same values, that's the part where I'm building the A answer:
defer.returnValue([
    (dns.RRHeader(hostname, dns.A, dns.IN, self.ttl,
                  dns.Record_A(record['content'], self.ttl)),),
    (), ()
])
When I restart the Names server everything starts to work again.
Change History (12)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
self.ttl = 5
comment:3 Changed 9 years ago by
Have you ruled out hardware failure? I don't see how struct.pack could be failing like this if the values being passed to it are the expected values. A failing memory stick could cause valid values to magically transform into invalid values. A run of memtest86+ on this host might be illuminating.
comment:4 Changed 9 years ago by
Yes I did memtest86+ on it. No failures.
comment:5 Changed 9 years ago by
Okay. I'm out of guesses, then. If you can provide an example that reproduces this problem, it would be very helpful. Without that, I'm not sure if there's any way to proceed. Alternatively, if you can debug the problem further and track down the cause, relating that experience and what you learn from it in another comment on this ticket might be enough to let someone else fix the problem.
As another datapoint, I'm running a Twisted Names DNS server and I never encounter this error.
comment:6 Changed 9 years ago by
I added debug code... but since then Names won't crash...
So we have to wait, then I will provide the failing case I hope.
comment:7 Changed 8 years ago by
Ok,
It finally crashed again :)
So while the working DNS entries have
!HHIH 1 1 1 0
as values, the failing one has
!HHIH 1 1 -90356.4524617 0
or
!HHIH 1 1 -90386.4629548 0
As you can see, it's wrong.
That's how I collected those values:
def encode(self, strio, compDict=None):
    self.name.encode(strio, compDict)
    print self.fmt, self.type, self.cls, self.ttl, 0
    strio.write(struct.pack(self.fmt, self.type, self.cls, self.ttl, 0))
And it seems that it's self.ttl; the problem is... it's always the same value.
And again, restarting the Names server works fine.
For me it seems that after some number of requests it starts to fail. I have no idea why, but again, that self.ttl is hardcoded in the code.
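The out-of-range values in comment 7 can be reproduced outside Twisted with nothing but the stdlib struct module. This is a minimal sketch in Python 3 syntax (the ticket's code is Python 2, but struct behaves the same way for this case):

```python
import struct

# "!HHIH" packs the type, class, TTL and a trailing 2-byte field.
# "I" is an unsigned 4-byte integer, so a negative (or overflowed)
# TTL cannot be packed and struct raises struct.error.
FMT = "!HHIH"

ok = struct.pack(FMT, 1, 1, 1, 0)     # the healthy entry: ttl == 1
print(len(ok))                        # -> 10 (2 + 2 + 4 + 2 bytes)

try:
    struct.pack(FMT, 1, 1, -90356, 0) # a negative ttl like the logged one
except struct.error as exc:
    print("struct.error:", exc)
```

In the ticket the logged TTLs were negative floats (-90356.4524617); being negative alone is enough to put the value out of range for the unsigned "I" field, which matches the "argument out of range" message in the traceback.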
comment:8 Changed 8 years ago by
Perhaps you can make the ttl attribute in your code a property (if it is an attribute of a new-style class):

_ttl = 5

def ttl_get(self):
    return self._ttl

def ttl_set(self, value):
    import traceback
    print 'Changing ttl to', value, 'at:'
    traceback.print_stack()
    self._ttl = value

ttl = property(ttl_get, ttl_set)
This should report the call stack at the point when the ttl value is set out of bounds. If you have a lot of these objects, it might spam your startup logs quite a bit, but hopefully after that it will only report problem areas.
Regardless of where this bug ends up being, a change that perhaps we could make to twisted.names is to have it report out-of-bounds values more clearly than it does now.
comment:9 Changed 8 years ago by
I added it yesterday, so we have to wait now again. I will post info there when it will crash again.
comment:10 Changed 8 years ago by
comment:11 Changed 8 years ago by
Yes CacheResolver is present. But I'm not 100% sure if it's the reason (might be)
comment:12 Changed 8 years ago by
Ok, it seems that the original self.ttl is never changed but the overflowed values appear.
What's the value of self.ttl?
https://twistedmatrix.com/trac/ticket/5551
ECMAScript 4.0 Is Dead
Soulskill posted more than 5 years ago | from the long-live-ecmascript dept.... (5, Funny)
jfclavette (961511) | more than 5 years ago | (#24627355)
ECMA Script 3.11 for workgroups.
The joke works this time !
Re:I'll wait for... (0, Troll)
JebusIsLord (566856) | more than 5 years ago | (#24627385)
Every time there is a 3.x in a fucking version #, some asshole thinks this joke will be funny. IT NEVER IS. Slashdot, I love you, but you are NOT COMEDIANS. STOP TRYING.
Re:I'll wait for... (1, Offtopic)
odiroot (1331479) | more than 5 years ago | (#24627709)
Re:I'll wait for... (-1, Offtopic)
Anonymous Coward | more than 4 years ago | (#24628759)
parent is currently with 0, Insightful... mod_groupthink is behaving really strange these days...
I, for one... (-1, Offtopic)
Anonymous Coward | more than 5 years ago | (#24627715)
...welcome our unfunny slasher overlords.
Re:I'll wait for... (0, Offtopic)
Firehed (942385) | more than 5 years ago | (#24627879)
In Soviet Russia, Slashdot stops trying it's jokes on YOU!
Establishing de facto (open source) standard ? (0)
Anonymous Coward | more than 5 years ago | (#24627461)
It might be (a bit) naive, but maybe Mozilla (along with, say, Google or other companies) should implement a well-designed, language-independent virtual machine for a browser with modern features and the ECMA 4.0 spirit? Along with MSIE bindings. Shipped with Mozilla by default. Open source. Microsoft would have no chance to obstruct this effort. After a (successful) rollout it would be proposed to ECMA as a standard?
Just wondering.
Re:Establishing de facto (open source) standard ? (5, Interesting)
maxume (22995) | more than 5 years ago | (#24627591)
The Microsoft stuff in the summary is just trolling. Mozilla and Google are both on board with abandoning the current work called ES4.
Re:Establishing de facto (open source) standard ? (5, Interesting)
boorack (1345877) | more than 5 years ago | (#24627805)
Re:Establishing de facto (open source) standard ? (2, Interesting)
maxume (22995) | more than 5 years ago | (#24627835)
No, no ideas, but ES4 was only going to give you about 1/10 of what you want anyway, so you don't lose all that much here.
it's the libraries and frameworks (2, Insightful)
speedtux (1307149) | more than 5 years ago | (#24627925)
Re:it's the libraries and frameworks (2, Interesting)
Tim C (15259) | more than 4 years ago | (#24628611)
This environment could even be emulated in Silverlight, allowing things to run without any install on Windows.
Doesn't the CLR (as part of the
.Net Framework) ship with Windows as of Vista?
Re:Establishing de facto (open source) standard ? (2, Informative)
aliquis (678370) | more than 5 years ago | (#24627943) [sproutcore.com]
Re:Establishing de facto (open source) standard ? (2, Interesting)
andy9701 (112808) | more than 4 years ago | (#24628491)
I haven't looked into SproutCore much, but isn't it just a framework built around JavaScript? If that's the case, how does that solve the multimedia part of the GP's request?
Re:Establishing de facto (open source) standard ? (1)
Kent Recal (714863) | more than 4 years ago | (#24628589)
Re:Establishing de facto (open source) standard ? (1)
seanonymous (964897) | more than 4 years ago | (#24628665)
Sproutcore is yet another javaScript framework.
Re:Establishing de facto (open source) standard ? (1)
dn15 (735502) | more than 4 years ago | (#24629189)
Re:Establishing de facto (open source) standard ? (1)
ropak (1331105) | more than 4 years ago | (#24629405)
Re:Establishing de facto (open source) standard ? (1)
love_encounter_flow (907752) | more than 4 years ago | (#24628873)
Re:Establishing de facto (open source) standard ? (1)
Bill, Shooter of Bul (629286) | more than 4 years ago | (#24629253)
Re:Establishing de facto (open source) standard ? (2, Interesting)
Lerc (71477) | more than 4 years ago | (#24629575)
Re:Establishing de facto (open source) standard ? (1)
Stan Vassilev (939229) | more than 5 years ago | (#24627811)
Re:Establishing de facto (open source) standard ? (1)
maxume (22995) | more than 5 years ago | (#24627895)
My impression is that packages, namespaces and classes wouldn't have worked particularly better than the current mess at the same time that they made the language that much bigger.
Re:Establishing de facto (open source) standard ? (2, Insightful)
try_anything (880404) | more than 4 years ago | (#24628885)
... understand and use them.
Unfortunately, many of the people who write Javascript these days stubbornly identify themselves as non-programmers. They resist learning anything that smacks of a "real" programming language. There could have been an ugly struggle between people trying to force these features on web developers and web developers clinging to Javascript's current free-form dynamism. On the face of it, it might seem that web developers are just being lazy in avoiding a few new language features. It also sounds quite silly for them to say, "I'm a programming idiot; I can't write code," despite slinging around complicated DHTML, CSS, and Javascript. They're obviously intellectually capable of learning and using a "real" programming language for "real" programming.
I think their basic point is sound, though. The job of writing the Javascript for web pages often falls to the web designers, and they should be allowed to program in a simple and dynamic language that suits their artistic temperaments and lets them focus their minds on their creative specialties. They do have artistic design responsibilities that programmers do not, after all, so it doesn't make sense for them to invest as much time in learning about programming as programmers do.
Re:Establishing de facto (open source) standard ? (1)
TheRaven64 (641858) | more than 4 years ago | (#24629085)
Re:Establishing de facto (open source) standard ? (1)
try_anything (880404) | more than 4 years ago | (#24630033)
..., because it would make this news much less depressing.
Re:Establishing de facto (open source) standard ? (3, Insightful)
Dzonatas (984964) | more than 4 years ago | (#24628189)
Re:Establishing de facto (open source) standard ? (1)
mabhatter654 (561290) | more than 4 years ago | (#24628339)
Re:Establishing de facto (open source) standard ? (1)
Dzonatas (984964) | more than 4 years ago | (#24628447)
Microsoft does realize that only pure compilations are safer, but their take on the situation apropos of this topic was to invent a new language entirely, and not either 3.1 or 4.0. What happened is that 4.0 became a buzzword under Web2.0 technology, and so 4.0 was designed based on Web2.0 and semantics that are quickly getting outdated. In that regard, it is not like dlls.
Re:Establishing de facto (open source) standard ? (0)
Anonymous Coward | more than 4 years ago | (#24628691)
call me a nazi, but just a correction: script.aculo.us
Re:Establishing de facto (open source) standard ? (0)
Anonymous Coward | more than 4 years ago | (#24628877)
Of course it is (trolling). My first thought when I read it, was that a more accurate description would have been that if this proposal had made it, it would have been the result of Adobe's interests.
I've always had a feeling that Adobe's DNA contains more predator segments than Microsoft's.
MS got a tremendous jump start when IBM (with its reputation in the business world) picked them to provide the OS for the PC, and a second shove when IBM said PC clones couldn't use IBM's PC-DOS, forcing everyone who wanted to be compatible directly into MS's hands. If that hadn't happened, Gates & Co might still be selling BASIC interpreters as their principal product today.
Adobe never had such luck, they had to fight their way up all by themselves.
Re:Establishing de facto (open source) standard ? (1, Insightful)
Anonymous Coward | more than 4 years ago | (#24630011)
I have to disagree; the summary isn't trolling. Looking behind the scenes, it appears that Microsoft wanted to kill it so it wouldn't compete with Silverlight. (Notice how one of the things they said wouldn't be included in 3.1 that was included in 4 was namespaces and packages. It was claimed that they weren't good for the web, but oddly they appear in Silverlight.)
Also, as far as Mozilla and Google being on board - this seems mostly due to the fact that Microsoft almost unilaterally killed 4, claiming they would never support it.
I'm disappointed. If you've played with AS3 (based on an earlier draft of the ES4 spec), it allowed you to code in a more classical OOP fashion, but you still had access to the dynamic language features that people love about JavaScript.
At this point, many, many people have attempted to graft a more traditional OOP framework onto JS (going so far as to abstract away the prototype inheritance that exists in JS) - none of them have been completely satisfactory (which is why more keep coming out). The latest one of note was John Resig's, which was somewhat decent.
I am tired of people saying "You can do anything with JS". While JS's dynamic nature makes the simulation of a wide variety of language features possible, it would be far better if some of these were standardized into the language itself instead of having everyone invent hacks for things that would be available in almost any other language. This does not lead to increased productivity or efficiency.
Re:I'll wait for... (1)
BPPG (1181851) | more than 5 years ago | (#24627493)
Re:I'll wait for... (2, Funny)
mangu (126918) | more than 5 years ago | (#24627641)
Dude, that joke is so 1993!
The current version is going from 4.0 to 3.5.9
Re:I'll wait for... (0)
Anonymous Coward | more than 5 years ago | (#24627727)
Well, they should have used EMACS, not ECMAS...
Fucking Microsoft (-1, Troll)
Anonymous Coward | more than 5 years ago | (#24627371)
Now that ECMA is in the palm of their hand, they can fuck over as many standards as they like.
Fuck Microsoft.
No. Fuck you. (-1, Troll)
Anonymous Coward | more than 5 years ago | (#24627639)
Year after year, ECMAScript has been a pile of shit. I certainly don't have JavaScript activated when I browse. Hopefully, MSFT will kill the standard, causing ECMAScript to be superseded by... oh I don't know... SOMETHING THAT ACTUALLY WORKS.
Go Microsoft.
Re:No. Fuck you. (-1, Troll)
Anonymous Coward | more than 5 years ago | (#24627761)
ES4 not dead (5, Informative)
omfgnosis (963606) | more than 5 years ago | (#24627383)
Re:ES4 not dead (1)
Yvan256 (722131) | more than 5 years ago | (#24628125)
And it wants to go for a walk.
Re:ES4 not dead (1)
Z00L00K (682162) | more than 4 years ago | (#24628297)
And notice that the adoption of scripting on web pages is slow in order to allow the web pages to be useful even on older browsers.
Most of the functionality in JavaScript 1.5 is sufficient for what you normally want to do..
Re:ES4 not dead (1)
Hurricane78 (562437) | more than 4 years ago | (#24628457)
Wrong. It's the other way around: the adoption of new browsers is slow, because if a user does not absolutely have to, he's not going to upgrade.
It's like with making products easier: as soon as you "simplified" (aka. dumbed down) your product so that even the most stupid retard can use it, nature will develop an even bigger idiot to complain how "complicated" and "non-usable" it is.
My tip: if you wait, they will be extinct anyway. There's no need to support dumbing down, and to annoy more users than you help, when you can just take the warning labels off of everything and let the problem solve itself.
Re:ES4 not dead (1)
omfgnosis (963606) | more than 4 years ago | (#24629703)
Nice to see misanthropy is alive and well.
:P
Evidently the art of UI design isn't a strong point for you. Usability and good UI needn't dumb down an application.
The way I look at it is, good UI designers can produce two types of applications: easy to use and powerful, or specialized and intended for an audience that will need a UI more complex than for a general audience.
Re:ES4 not dead (4, Insightful)
rycamor (194164) | more than 4 years ago | (#24628565)
Re:ES4 not dead (1)
omfgnosis (963606) | more than 4 years ago | (#24629637)
"And notice that the adoption of scripting of web pages are slow in order to allow the web pages to be useful even on older browsers."
I don't really think that's the case. The adoption was slow because it took a long time for browsers with good support for scripting and the DOM (the API the script interacts with on web pages) to become dominant. Even now, the browsers in the vast majority (IE 6 and IE 7) have terrible, buggy and slow DOM support, and the only saving grace for powerful web apps - even now - is that there are really, brilliantly nerdy people who have taken the time to build compatibility layers like Prototype and jQuery.
"Most of the functionality in JavaScript 1.5 is sufficient for what you normally want to do."
And if it's not, extending it is not at all difficult.
"The only problem is that JavaScript/ECMAScript from a language point of view isn't really good."
Are you kidding? I think it's excellent. I don't agree on the benefits of static typing, and I find AS3 a real pain in the ass to write because of it. I know how to validate the data I'm receiving according to the standards I expect. The reason JS has been so problematic for so many people is because, like PHP, the language attracts a lot of unskilled, untrained developers. That's not a shortcoming of the language, it's just due to placement.
Let's pull a Microsoft (0)
Anonymous Coward | more than 5 years ago | (#24627387)
Mozilla and Adobe should just go ahead with v4.0, keeping it public so Apple and Opera can use it, too.
Re:Let's pull a Microsoft (2, Insightful)
omfgnosis (963606) | more than 5 years ago | (#24627417)
Why? There's consensus on Harmony.
Harmony is a good name.... (4, Insightful)
QuietLagoon (813062) | more than 5 years ago | (#24627395)
Re:Harmony is a good name.... (4, Insightful)
Goyuix (698012) | more than 5 years ago | (#24627427)
Re:Harmony is a good name.... (4, Insightful)
hedwards (940851) | more than 5 years ago | (#24627653)
Re:Harmony is a good name.... (1)
bussdriver (620565) | more than 4 years ago | (#24628625)
... but no errors if it's used.
Perl level RegExp (look ahead)
A few of the Mozilla additions
Hash: an Object without ANY properties, any methods would be 'Class methods' not object methods.
Re:Harmony is a good name.... (4, Insightful)
pembo13 (770295) | more than 5 years ago | (#24628001)
Re:Harmony is a good name.... (5, Informative)
omfgnosis (963606) | more than 5 years ago | (#24627437)
Re:Harmony is a good name.... (-1, Troll)
Anonymous Coward | more than 5 years ago | (#24627725)
Re:Harmony is a good name.... (1)
TheRaven64 (641858) | more than 4 years ago | (#24629121)
Another Win for Standards Based Innovation? (0)
Anonymous Coward | more than 5 years ago | (#24627405)
Another win for standards based "innovation". Let private companies innovate and submit to standards bodies. Otherwise you get crap like this and XHTML 2.0.
I'm skeptical (2, Funny)
joe 155 (937621) | more than 5 years ago | (#24627467)
Can I just point out (-1, Flamebait)
Colin Smith (2679) | more than 5 years ago | (#24627485)
That Javascript as a development platform, as it seems to have become, is evil. It's just horrible from an efficiency, performance, security and architectural point of view.
It seems to be the future.
Re:Can I just point out (4, Interesting)
kevin_conaway (585204) | more than 5 years ago | (#24627671)
Re:Can I just point out (1)
Metasquares (555685) | more than 5 years ago | (#24627853)
Re:Can I just point out (4, Interesting)
fimbulvetr (598306) | more than 4 years ago | (#24628311)
Re:Can I just point out (0)
Anonymous Coward | more than 4 years ago | (#24629901)
Re:Can I just point out (-1, Troll)
Anonymous Coward | more than 4 years ago | (#24628291)
It is a beautiful, expressive and quite powerful language that is just now starting to shine after years of being misunderstood by people like you.
Yeah, it's such a beautiful, expressive, powerful language that I have to use a browser plugin that disables it on all but a minuscule fraction of the websites that I surf. All just so that I can be reasonably certain that if my machine gets compromised through my browser, it won't be because of some bullshit javascript pwnage.
Poor misunderstood javascript, indeed.
:-(
Re:Can I just point out (0)
Anonymous Coward | more than 4 years ago | (#24629967)
Ha ha! (0)
Jane Q. Public (1010737) | more than 4 years ago | (#24628639)
If you want a beautiful language you would get a lot closer with Ruby. JavaScript is many things, but beautiful or easy to use it ain't.
Re:Can I just point out (1)
Colin Smith (2679) | more than 4 years ago | (#24628657)
Perhaps you'd like to read my post again.
Re:Can I just point out (-1, Troll)
Anonymous Coward | more than 4 years ago | (#24628793)
You can point that out, but you'd be wrong.
no, he's right.
now starting to shine after years of being misunderstood by people like you.
No.. he's right. It's a pos language. It's a god damned mess from all perspectives.
Re:Can I just point out (0)
Anonymous Coward | more than 4 years ago | (#24628909)
It is far from expressive. It's full of php-esque issues, such as no namespaces, and a very poor OOP system. Misunderstood, yes. As something it's not, yes. A good solution? No.
Re:Can I just point out (1)
Z00L00K (682162) | more than 4 years ago | (#24628393)
Re:Can I just point out (1, Troll)
Locutus (9039) | more than 4 years ago | (#24628481)
Re:Can I just point out (1)
labyrinth (65992) | more than 5 years ago | (#24630407)
You do realize that the HTTPRequest object which makes AJAX possible was introduced by Microsoft in the first place?
Re:Can I just point out (1)
Locutus (9039) | more than 5 years ago | (#24630551)
... because they've got Windows in a monopoly position and keeping it there with anti-competitive tricks and methods is what they do and have done.
If the cure for cancer came out on a Mac or Linux, Microsoft would crush that and the result would be more like a Kleenex for the common cold. IMO.
LoB
Scariest Part of the Article (0)
Anonymous Coward | more than 5 years ago | (#24627507)
What scares me most from reading that article was this part: "In fact The Adobe CEO has stated they are moving their entire application suite in the next 10 years to the Flash platform, so this language spec is serious stuff."
Our computers are getting faster but we're getting more power-hungry applications that don't do much more than what came before. This goes against the current popularity of small and cheap laptops.
Crockford and Standards (5, Informative)
kevin_conaway (585204) | more than 5 years ago | (#24627573)
Re:Crockford and Standards (1, Insightful)
lysse (516445) | more than 4 years ago | (#24628527)...
ECMAscript 4.0 is dead (1)
Daimanta (1140543) | more than 5 years ago | (#24627659)
Long live ECMAscript 5.0!
Re:ECMAscript 4.0 is dead (0)
Anonymous Coward | more than 5 years ago | (#24628113)
.... NetCraft Confirms it!
What a damn shame (3, Insightful)
Stan Vassilev (939229) | more than 5 years ago | (#24627741)
Re:What a damn shame (2, Informative)
DCstewieG (824956) | more than 5 years ago | (#24628015)
Re:What a damn shame (1, Informative)
Anonymous Coward | more than 4 years ago | (#24628397)
>modern web application typically has to download a ton of CSS/JS/image files
This is trivial to fix for JS/CSS, either via runtime tool or build system. It takes about 10 minutes to add a servlet to combine and compress JS/CSS files in Java via the JAWR library, for example.
Re:What a damn shame (1, Informative)
Anonymous Coward | more than 4 years ago | (#24628405)
Um, that's exactly what "sugar" means: a more readable syntax that hides the implementation details.
Harmony will have classes -- they will merely be implemented as syntactic sugar on top of lambdas and prototypes, instead of being new constructs altogether. This change is irrelevant from the point of view of anyone other than a language implementor.
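The "sugar" point can be illustrated with closures alone. This is a hypothetical sketch with Python standing in for ECMAScript: an "object" is just a record of functions sharing captured state, and a class construct merely packages the same pattern with nicer syntax.

```python
# Build a counter "object" from closures: no class keyword involved.
def make_counter(start=0):
    state = {"n": start}  # shared mutable state captured by the closures

    def increment():
        state["n"] += 1
        return state["n"]

    def value():
        return state["n"]

    # the "object" is just a record of closures over the same state
    return {"increment": increment, "value": value}

c = make_counter()
c["increment"]()
c["increment"]()
print(c["value"]())  # -> 2
```

A class declaration desugars to essentially this: a constructor producing a record of functions over shared state, which is why adding class syntax changes nothing observable for anyone but a language implementor.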
Just another example (1)
Jane Q. Public (1010737) | more than 4 years ago | (#24628675)
Not that we need another example. How many have we had already? Does anybody doubt?
Classes are not out of the question. (3, Informative)
Tiles (993306) | more than 4 years ago | (#24630025)
Mozilla in the Open Web Podcast [openwebpodcast.com] already discussed how script versioning will allow coders to use different language versions in the future. So syntax changes are not out of the question, they're simply being postponed until after ES3.1.
Until then, users will still have all the power of classes in their code, just without the ease-of-use of a "class" keyword (which will be a lot easier to implement after ES3.1 is proven and tested).
for those of you who complain, (0)
circletimessquare (444983) | more than 5 years ago | (#24627865)
Re:for those of you who complain, (1)
Locutus (9039) | more than 4 years ago | (#24628555)
A real pity (4, Insightful)
shutdown -p now (807394) | more than 5 years ago | (#24627883)
Re:A real pity (1)
Tacvek (948259) | more than 5 years ago | (#24628075)
... that static members of a class look a lot like just items in the namespace of the class [except for the access specifier implications]).
But the literature makes it sound like the AS 3 namespaces are more than just access specifiers, so I'm not sure what they really are.
Early binding may not be appropriate for the web, so I have no real idea how that should be handled. Making it completely optional would make porting code between AS 3 implementations potentially much harder than it should be.
I hope that, instead of killing ES4, they just basically pause it, to return to it once they have released 3.1 as a stop-gap measure.
I just fear that they will implement some ES4-style features in 3.1 in a way that is not compatible with ES4.
Re:A real pity (1, Interesting)
Anonymous Coward | more than 4 years ago | (#24628231)
Barf. What we need is a trim, lightweight browser scripting language. You want to write a desktop app, write a bleeding desktop app; don't bloat up javascript.
Re:A real pity (2, Informative)
shutdown -p now (807394) | more than 4 years ago | (#24628847)
JS was never "lightweight" in any sense of the word - neither in terms of language features, nor in terms of performance of typical implementations. If you want a truly lightweight scripting language, designed as such, see Lua.
get some language experts (-1, Flamebait)
speedtux (1307149) | more than 5 years ago | (#24627897)
People who write bullshit like "But early binding in any dynamic code loading scenario like the web requires a prioritization or reservation mechanism to avoid early versus late binding conflicts." have no business being associated with language design. It's no wonder that JavaScript sucks so badly as a language.
Re:get some language experts (1)
rycamor (194164) | more than 4 years ago | (#24628797)
Re:get some language experts (1)
TheRaven64 (641858) | more than 4 years ago | (#24629219)
Who on earth wants a simple, dynamic, low-cruft language with first-class functions, closures, lambdas and a prototype object system
Hardly anyone, it seems. Otherwise, how do you account for the popularity of C++?
Is it coincidence? (0, Offtopic)
dvh.tosomja (1235032) | more than 5 years ago | (#24627937)
Why is JavaScript so popular? Lamda Functions (1)
Anik315 (585913) | more than 5 years ago | (#24627951)
For instance, functions can be dynamically defined to return the derivative function of another function.
Not many languages allow you to do this. Ruby, Python, Perl, JavaScript and ActionScript all allow lambda coding, but Java, C and C++ don't, probably for performance reasons, since the runtime environment of the compiled program would itself require a compiler. The latest edition of C Sharp I think does, though.
In any case, since JavaScript allows this it would be nice to just have a compiled version of JavaScript with lambda coding and classes. I think that's what they are shooting for with Harmony, so I'm not really that disappointed with this decision.
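The derivative idea mentioned above is easy to sketch in one of the lambda-friendly languages the comment lists. This is only an illustration: the helper name `derivative` and the step size `h` are choices made here, not something from the thread.

```python
# Sketch: a function that builds and returns the (numerical) derivative of
# another function, using a central-difference approximation. The name
# `derivative` and step size `h` are illustrative choices.
def derivative(f, h=1e-6):
    """Return a new function approximating f's derivative."""
    def df(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return df

square = lambda x: x * x
d_square = derivative(square)  # d/dx x^2 = 2x

print(abs(d_square(3.0) - 6.0) < 1e-3)  # True: approximates 2*3 = 6
```

The returned `df` is a closure over both `f` and `h`, which is exactly the first-class-function capability being discussed.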
Re:Why is JavaScript so popular? Lamda Function (2, Interesting)
Z00L00K (682162) | more than 4 years ago | (#24628443)
Re:Why is JavaScript so popular? Lamda Function (2, Insightful)
rycamor (194164) | more than 4 years ago | (#24628865) to be useful.
Re:Why is JavaScript so popular? Lamda Function (1)
TheRaven64 (641858) | more than 4 years ago | (#24629277)
you pass them as parameters and introspect them. The fact that they are closures allows them to reference variables in their enclosing scope even after the enclosing scope has exited.
JavaScript is a Self derivative, and Self is a Smalltalk derivative (with Lieberman prototypes instead of classes). All languages in this family have closures as first-class objects, including newer ones like Io. For Étoilé, I have written a Smalltalk compiler which generates code ABI-compatible with the GNU Objective-C runtime, which allows us to mix Smalltalk and Objective-C in the same project (actually, in the same object), so we get exactly this advantage from a compiled language (for various other reasons, Smalltalk is still slower than Objective-C).
Re:Why is JavaScript so popular? Lamda Function (2, Interesting)
Anonymous Coward | more than 4 years ago | (#24629711) from mathematical calculus which may have added to the confusion.
JavaFx (1)
thetoadwarrior (1268702) | more than 4 years ago | (#24629143)
Oh sweet jesus! (0, Troll)
FlyingGuy (989135) | more than 4 years ago | (#24629227)
JavaScript is a really bad joke. Not that there is anything better at hand, but it is utterly and completely in drastic need of a "lift up the radiator cap and drive a new one underneath it" treatment, and damn soon.
Right along next to it is the Document Object Model, yet another thing needing the same treatment.
And right along with those two is CSS, a good idea gone horribly wrong.
So what do we do? How do we fix this, how do we build something that everyone can agree on? The short answer is, more than likely we cannot. Why? Because there are too many egos involved, too many pet languages, too many pet methods and styles.
But with a lot of "We are going to ram this right up your ass, everyone's ass, because it will be so clean and functional you won't have a choice but to bend over and take it." attitude AND aptitude we can succeed.
The idea of the DOM is great, because it creates a known method of manipulating the bits and parts of a web page to effect the best possible user experience, but the loosely coupled nature of JS, the DOM and CSS keeps making the whole thing too overwrought with complexity and errors.
The way to fix this, IMNSHO, is to rebuild it into one entity.
Wait for it...... Ohhh the horror!!!!! No kids, it is not a horror, it is supremely logical. The very notion that you treat a web page as if it was a bit of paper is ludicrous! It is not a piece of paper, it is a digital screen capable of all sorts of neat tricks, and the engine that drives it is superb. The notion of paper needs to be simply left behind; move on, it was a hell of a go, but it is time to grow up. But into what?
How do we accomplish this? Well, not without a lot of yelling and screaming. But consider this: if you had the same level of control over your web page presentation as you have over any GUI application, how happy would you be? Personally I would be ecstatic! I mean Wooooo-Whooooooo! Just think of the level of control you would have; you could build perfectly interactive web pages that would bow to your, and more importantly your audience's, every whim.
As a set of verbs and nouns JS is not that bad. I would tweak it a little, but not much. What I would do, though, is make the DOM (or an equivalent name) be derived from JS instead of having JS simply be able to connect to it. For example, a web page would begin with a call to JS (or the equivalent name) MyPage = NewDocument(...) and that would pop open the new page, fresh and clean. After that you start laying out the sections, eg: UpperLeft = MyPage.CreateArea(...); rinse and repeat until you have all your areas defined. At that point, you begin to fill in all the areas by making calls to UpperLeft for things like controls, backgrounds, colors, scrolling text areas and the like. At that point you could then have a successor to CSS be a value passed to UpperLeft that would then style it as you desire.
This would eliminate some of the biggest problems with CSS, ie: the box model and DIVs, freeing you up to really concentrate on the content. In addition there would be a menu model in there. Trying to do menus in CSS is at best a dark art and at worst damn near impossible, depending on what you want to do. How about something like TopMenu = MyPage.New.H[V]menu, which could then be fed a very small XML hierarchy (and by the way, I HATE XML with a passion you can only imagine, but in this case it makes sense) that would populate the menu; it handles the layout of the thing, you simply provide it colors, sizes, content and the OnClick calls.
But this will make web pages WAY too complicated, you say? Not so. For the most simple web page, ie: text of a single page, it is as simple as MyPage = NewDocument, and after that it is MyPage.Text = "Welcome to Sally's web page, gee I am SO cute!" and that is all it takes.
Now you can do this one better by implementing a set of pages as a collection of pages, eg: MyCollection = Collections.New(...), and from there on all your pages are now accessible from MyCollection, so creating a new page is as simple as MyCollection.NewDocument(...); but for single web pages a collection is not required.
These concepts would bring a new set of functionality to the web, a level that cannot be achieved with the current way of doing things. A new paradigm is required to bring the web into the future, to provide the same functionality as you get with any desktop.
http://beta.slashdot.org/story/105567
Ticket #8463 (closed defect: fixed)
Protocol error when using strip over shared folders.
Description (last modified by aeichner) (diff)
Steps to reproduce: Share a folder to a Linux guest. Copy a dll file to the shared folder. (I used taskschd.dll to test.) On the Linux guest run "strip /path/to/mounted/share/taskschd.dll" and you get "strip:./st9tAIQu: Protocol error". Now, to prove a point, copy that dll to somewhere on the guest disk: "cp /path/to/mounted/share/taskschd.dll /tmp ; strip /tmp/taskschd.dll" and notice that there is no error.
I have absolutely no idea why this only happens with dll files. I am able to strip shared object files without issue. I am also able to strip dll files on Windows using mingw from shared folders with no issues. This happens with every dll I have tried. Both the guest and host OS are Gentoo.
Change History
comment:2 Changed 5 years ago by larskanis
I have the same problem when running Ubuntu-12.04 64 Bit VM on Ubuntu-13.10 64 Bit. strip does not work on a shared folder:
$ i686-w64-mingw32-strip zlib1.dll
i686-w64-mingw32-strip:st7VXEcU: Protocol error
Module for shared folders is:
vagrant@precise64:/vagrant/nokogiri$ modinfo vboxsf
filename:       /lib/modules/3.2.0-23-generic/misc/vboxsf.ko
version:        4.2.0 (interface 0x00010004)
license:        GPL
author:         Oracle Corporation
description:    Oracle VM VirtualBox VFS Module for Host File System Access
srcversion:     7C0A7927C2C19F0B88EB55A
depends:        vboxguest
vermagic:       3.2.0-23-generic SMP mod_unload modversions
parm:           follow_symlinks:Let host resolve symlinks rather than showing them (int)
comment:3 Changed 2 years ago by aeichner
Please reopen if still relevant with a recent VirtualBox release.
comment:4 Changed 20 months ago by jasonmbrown
I am getting this problem as well, using MXE to cross-compile. Strip appears to run fully, but it fails to overwrite the file being stripped, so it exits with an error and leaves a randomly named file in the shared folder that works fine once renamed.
x86_64-w64-mingw32.static-strip:stqcFQZL: Protocol error
cddadev@ubuntu:~$ modinfo vboxsf
filename:       /lib/modules/4.8.0-27-generic/kernel/ubuntu/vbox/vboxsf/vboxsf.ko
version:        5.1.6_Ubuntu r110634
license:        GPL
author:         Oracle Corporation
description:    Oracle VM VirtualBox VFS Module for Host File System Access
srcversion:     308B21C2D1816AC5CAD8A3A
depends:        vboxguest
intree:         Y
vermagic:       4.8.0-27-generic SMP mod_unload modversions
parm:           follow_symlinks:Let host resolve symlinks rather than showing them (int)
comment:5 Changed 20 months ago by jasonmbrown
- Status changed from closed to reopened
- Resolution obsolete deleted
comment:6 Changed 19 months ago by kirr
With VirtualBox 5.1.12 I think I've hit something similar:
In short: the following test program, when run twice in /media/sf_shared/, gives EPROTO for the read syscall:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

void die_errno(const char *msg)
{
    perror(msg);
    exit(1);
}

static const char data[1024] = "0123456789...........";
static char data2[1024];

int main()
{
    int fd, err;
    off_t off;

    fd = open("testfile", O_RDWR|O_CREAT|O_TRUNC, 0666);
    if (fd == -1) {
        die_errno("open testfile");
    }

    // fstat
    // fstat

    off = lseek(fd, 0, SEEK_SET);
    if (off == -1) {
        die_errno("lseek");
    }

    err = write(fd, data, 645);
    if (err == -1) {
        perror("write");
    }

    off = lseek(fd, 0, SEEK_SET);
    if (off == -1) {
        die_errno("lseek2");
    }

    //err = read(fd, data2, 4096);
    err = read(fd, data2, 645);
    if (err == -1) {
        die_errno("read");
    }

    return 0;
}
kirr@test1:/media/sf_shared$ gcc -Wall -o vboxsf-eproto vboxsf-eproto.c
kirr@test1:/media/sf_shared$ rm -f testfile
kirr@test1:/media/sf_shared$ ./vboxsf-eproto    # first time ok - when the file was not there initially
kirr@test1:/media/sf_shared$ ./vboxsf-eproto    # second time -> EPROTO on read
read: Protocol error
comment:7 Changed 18 months ago by sunlover
kirr, thanks for the testcase. The fix for the guest additions will be available in the next VirtualBox release.
I am running into the same problem. I have not yet figured out a fix or a work-around. I am running VirtualBox 4.12 on a Mac OSX 10.8 host with a Debian Squeeze guest.
EDIT: I am using the 4.12 version of the Guest Additions as well:
https://www.virtualbox.org/ticket/8463
To get the latest build of T4MVC, go to the T4MVC home page! 🙂
Strongly typed support for MapRoute
This is similar to the RedirectToAction/ActionLink support, but applied to route creation. The original Nerd Dinner routes look like this:
routes.MapRoute(
    "UpcomingDinners",
    "Dinners/Page/{page}",
    new { controller = "Dinners", action = "Index" }
);

routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" }
);

With the strongly typed MapRoute support, the default route can instead be written as:

routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    MVC.Home.Index(),
    new { id = "" }
);
BeginForm support.
Support for Dependency Injection containers! 🙂
The protected constructor is a very interesting approach. Now I have to pull it down and do some testing around my container of choice: StructureMap.
hi david,
the protected constructor fixed the problem for me. (I’m using StructureMap) THANK YOU!
but why do you even create this constructor with your dummy parameter? My workaround yesterday was to just remove this one and make sure that there's always a default constructor. (I think this is done by your template anyway.)
StructureMap for example always calls the constructor with most parameters, so this already fixes the problem.
thanks for working on this,
christian
the ExecuteResult()-method of your ControllerActionCallInfo class should not throw a "NotImplementedException" because ReSharper for example shows this as an open entry in the TODO-list. I think it would be better if you throw a "NotSupportedException".
This is a really useful template, it has become an integral part of my current project. Thanks for sharing.
Love the new strongly typed support for MapRoute…
My only concern is that when one uses "MVC.Home.Index()", what this does is not 100% clear to a reader who doesn't know about the templates or how they work. As in, when a user sees "MVC.SomthingOrOther", the MVC doesn't mean much to me.
What this template is all about and the fluid interface it generates is strongly typed names/references. So couldn’t we come up with something a little more suggestive than just "MVC" that helps the user out a little more like "RouteFor.SomthingOrOther"???
Now in reality it doesn't have to be "RouteFor", but if someone saw RouteFor.Home.Index() vs. MVC.Home.Index() it might be a little more clear about what the intended usage is in a given scenario.
I just think there must be a better name than just MVC that helps describe to the user whats going on.
Just another quick comment. The members and classes that you add to the partial class of each controller (with the exception of RedirectToAction), why wouldn't you put them into the sub type "T4MVC_xyz" that you generate for a given controller and change the static reference in the MVC class to "public static T4MVC_SquadProfileController = new T4MVC_SquadProfileController();"?
Just a thought.
Christian: we need the dummy ctor because if the Controller already has a default ctor, it may have logic we don’t want to execute. So I guess technically, we only need the Dummy when there is already an explicit default ctor, so we could optimize that a bit.
Christian, new build 2.2.01 on CodePlex throws NotSupportedException as you suggest. Thanks!
Anthony: I agree that the syntax on routes doesn't make it super clear what it's doing. What's tricky is that 'MVC.Home.Index()' is the standard T4MVC syntax to encapsulate an action call with params, so it looks the same here as in ActionLink, RedirectToAction, …
Of course, we can change the MVC prefix if we find a better one, but it would be used in those other places as well.
And finally, we could rename the MapRoute() extension method to something like MapRouteToController() if we think that’s better.
Anthony: I actually started out this way, with new members added in the derived class. However, I found two reasons to do it as I did:
1. I wanted ‘Go To Definition’ on MVC.Home.Index() to go to the user’s Index method, and not the generated derived method.
2. Having it on the base lets your shorten MVC.Dinner.Views.InvalidOwner to simply Views.InvalidOwner when writing code in the Dinners controller.
Hien Khieu: see my reply in the other post.
Hi,
I made the following changes at line 282 to get the code to compile with FileResult.
<# foreach (string resultType in ActionResultTypes) { #>
[CompilerGenerated]
public class T4MVC_<#= resultType #>: <#= resultType #>, IT4MVCActionResult {
public T4MVC_<#= resultType #>(string controller, string action) <# if (resultType == "FileResult") { #>: base("")<# } #> {
this.InitMVCT4Result(controller, action);
}
public override void ExecuteResult(ControllerContext context) {
throw new NotSupportedException();
}
public string Controller { get; set; }
public string Action { get; set; }
public RouteValueDictionary RouteValues { get; set; }
<# if (resultType == "FileResult") { #>
protected override void WriteFile(HttpResponseBase response) {
throw new NotImplementedException();
}
<# } #>
}
<# } #>
Richard: I just fixed this issue in build 2.2.02 on CodePlex. Thanks!
How difficult would it be to add support for ASP.Net WebForms pages as well, similar to how static content files are stored under Links.
Many projects I work on are slowly migrating to MVC and will contain both WebForms and MVC for the foreseable future. I’d love to be able to reference the files as Pages.Home_aspx instead of "~/Home.aspx".
David,
This doesn’t work if the controllers are in a different assembly
Mark Leistner: that’s an interesting idea. I’ll put that on the list of future scenarios to look at.
I can't get the point. How to use protected constructor to solve the problem?
I'm using StructureMap with ASP.NET MVC2.
It will be nice if someone provide us with an example as i'm new to MVC, StructureMap, and T4
https://blogs.msdn.microsoft.com/davidebb/2009/06/30/t4mvc-2-2-update-routing-forms-di-container-fixes/
IIS Express: HTTP and HTTPS
Continuing on the previous article, this post discusses the creation of a simple web application hosted by IIS Express using a feature which Cassini doesn’t support, namely HTTPS.
The built-in Cassini web server (ASP.NET Development Server) only supports HTTP. So if you want to, for instance, use HTTPS when developing a WCF service you are stuck and you’ll have to resort to hosting it in IIS.
Using IIS Express life gets simpler…
Let’s quickly setup a WCF-powered service for demonstration purposes. Start Visual Studio 2010 (SP1), create a new blank solution called “IISExpressHttps” and add a new empty ASP.NET MVC 2 project to it. Let’s be original and call the MVC application “MvcApplication”.
Delete all of the default content (Content, Controllers, Models, Scripts & Views). Next add an interface (ICalculator.cs), a class (Calculator.cs) and a Service.svc file (just add a class and rename its extension to ".svc").
Your project’s layout should now resemble the following screenshot:
The ICalculator interface only contains one method.
[ServiceContract]
interface ICalculator
{
    [OperationContract]
    int Add(int x, int y);
}
The Calculator class implements the ICalculator interface and is equally simple.
public class Calculator : ICalculator { public int Add(int x, int y) { return x + y; } }
The Service.svc file specifies which service to host and only contains one line.
<%@ ServiceHost Language="C#" Service="MvcApplication.Calculator" %>
Next add a reference to the System.ServiceModel assembly and add the following section to your application’s configuration file (Web.config).
<system.serviceModel>
  <services>
    <service name="Calculator">
      <endpoint address="Calculator" binding="wsHttpBinding" contract="ICalculator" />
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceMetadata httpGetEnabled="True" />
        <serviceDebug includeExceptionDetailInFaults="False" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
Right-click on the MVC application in the solution explorer and select “Use IIS Express…”.
Hit F5 to run your application and navigate to the Service.svc file (or set it as the default start page).
Voila, the calculator service is now up and running.
Now for the "hard" easy part: enabling SSL. Go back to Visual Studio and select the MVC application in the solution explorer. In the properties window you need to set the property "SSL Enabled" to true.
Hit F5 to run your application again. By default the HTTP protocol will be used. Right-click on the IIS Express icon displayed in the system tray and you can see which sites are hosted. If you select the MvcApplication you’ll notice that you now have the option to use the HTTPS protocol. Of course a different port will be used.
Now you can visit the site over HTTPS. Just select the URL to open the site. You'll be greeted by the following message:
Just click the option “Continue to this website” and voila…you are now running your service via HTTPS on IIS Express.
Does IIS Express support non-HTTP protocols such as net.tcp, MSMQ…etc? Nope, unfortunately not at the moment. More information can be found on the IIS Express FAQ.
I started writing this post trying to setup a WCF service with NetTcpBinding hosted by IIS Express. Only to find out, halfway through, that it only supports HTTP protocols. Alas, maybe in a future version?
If you require more information on SSL and IIS Express I suggest you check out the following article:
Written by Microsoft's Scott Hanselman, it's a more in-depth article and discusses some more advanced options.
Published at DZone with permission of Christophe Geers , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/iis-express-http-and-https
Code Splitting in Create React App
Code Splitting is not a necessary step for building React apps. But feel free to follow along if you are curious about what Code Splitting is and how it can help larger React apps.
Code Splitting
While working on React.js single page apps, there is a tendency for apps to grow quite large. A section of the app (or route) might import a large number of components that are not necessary when it first loads. This hurts the initial load time of our app.
You might have noticed that Create React App will generate one large .js file while we are building our app. This contains all the JavaScript our app needs. But if a user is simply loading the login page to sign in, it doesn't make sense that we load the rest of the app with it. This isn't a concern early on when our app is quite small but it becomes an issue down the road. To address this, Create React App has a very simple built-in way to split up our code. This feature, unsurprisingly, is called Code Splitting.
Create React App (from 1.0 onwards) allows us to dynamically import parts of our app using the import() proposal. You can read more about it here.
While the dynamic import() can be used for any component in our React app, it works really well with React Router. Since React Router is figuring out which component to load based on the path, it would make sense to dynamically import those components only when we navigate to them.
Code Splitting and React Router v4
The usual structure used by React Router to set up routing for your app looks something like this.
/* Import the components */
import Home from "./containers/Home";
import Posts from "./containers/Posts";
import NotFound from "./containers/NotFound";

/* Use components to define routes */
export default () =>
  <Switch>
    <Route path="/" exact component={Home} />
    <Route path="/posts/:id" exact component={Posts} />
    <Route component={NotFound} />
  </Switch>;
We start by importing the components that will respond to our routes. And then use them to define our routes. The Switch component renders the route that matches the path.
However, we import all of the components in the route statically at the top. This means, that all these components are loaded regardless of which route is matched. To implement Code Splitting here we are going to want to only load the component that responds to the matched route.
Create an Async Component
To do this we are going to dynamically import the required component.
Add the following to src/components/AsyncComponent.js.
import React, { Component } from "react";

export default function asyncComponent(importComponent) {
  class AsyncComponent extends Component {
    constructor(props) {
      super(props);
      this.state = { component: null };
    }

    async componentDidMount() {
      const { default: component } = await importComponent();
      this.setState({ component: component });
    }

    render() {
      const C = this.state.component;
      return C ? <C {...this.props} /> : null;
    }
  }

  return AsyncComponent;
}
We are doing a few things here:
- The asyncComponent function takes an argument: a function (importComponent) that when called will dynamically import a given component. This will make more sense below when we use asyncComponent.
- On componentDidMount, we simply call the importComponent function that is passed in, and save the dynamically loaded component in the state.
- Finally, we conditionally render the component if it has completed loading. If not, we simply render null. But instead of rendering null, you could render a loading spinner. This would give the user some feedback while a part of your app is still loading.
Use the Async Component
Now let’s use this component in our routes. Instead of statically importing our component.
import Home from "./containers/Home";
We are going to use the asyncComponent to dynamically import the component we want.
const AsyncHome = asyncComponent(() => import("./containers/Home"));
It’s important to note that we are not doing an import here. We are only passing in a function to
asyncComponent that will dynamically
import() when the
AsyncHome component is created.
Also, it might seem weird that we are passing a function here. Why not just pass in a string (say
./containers/Home) and then do the dynamic
import() inside the
AsyncComponent? This is because we want to explicitly state the component we are dynamically importing. Webpack splits our app based on this. It looks at these imports and generates the required parts (or chunks). This was pointed out by @wSokra and @dan_abramov.
We are then going to use the AsyncHome component in our routes. React Router will create the AsyncHome component when the route is matched, and that will in turn dynamically import the Home component and continue just like before.
<Route path="/" exact component={AsyncHome} />
Now let’s go back to our Notes project and apply these changes.
Your src/Routes.js should look like this after the changes.
import React from "react";
import { Route, Switch } from "react-router-dom";
import asyncComponent from "./components/AsyncComponent";
import AppliedRoute from "./components/AppliedRoute";
import AuthenticatedRoute from "./components/AuthenticatedRoute";
import UnauthenticatedRoute from "./components/UnauthenticatedRoute";

const AsyncHome = asyncComponent(() => import("./containers/Home"));
const AsyncLogin = asyncComponent(() => import("./containers/Login"));
const AsyncNotes = asyncComponent(() => import("./containers/Notes"));
const AsyncSignup = asyncComponent(() => import("./containers/Signup"));
const AsyncNewNote = asyncComponent(() => import("./containers/NewNote"));
const AsyncNotFound = asyncComponent(() => import("./containers/NotFound"));

export default ({ childProps }) =>
  <Switch>
    <AppliedRoute path="/" exact component={AsyncHome} props={childProps} />
    <UnauthenticatedRoute path="/login" exact component={AsyncLogin} props={childProps} />
    <UnauthenticatedRoute path="/signup" exact component={AsyncSignup} props={childProps} />
    <AuthenticatedRoute path="/notes/new" exact component={AsyncNewNote} props={childProps} />
    <AuthenticatedRoute path="/notes/:id" exact component={AsyncNotes} props={childProps} />
    {/* Finally, catch all unmatched routes */}
    <Route component={AsyncNotFound} />
  </Switch>;
It is pretty cool that with just a couple of changes, our app is all set up for code splitting. And without adding a whole lot more complexity either! Here is what our src/Routes.js looked like before.
import React from "react";
import { Route, Switch } from "react-router-dom";
import AppliedRoute from "./components/AppliedRoute";
import AuthenticatedRoute from "./components/AuthenticatedRoute";
import UnauthenticatedRoute from "./components/UnauthenticatedRoute";
import Home from "./containers/Home";
import Login from "./containers/Login";
import Notes from "./containers/Notes";
import Signup from "./containers/Signup";
import NewNote from "./containers/NewNote";
import NotFound from "./containers/NotFound";

export default ({ childProps }) =>
  <Switch>
    <AppliedRoute path="/" exact component={Home} props={childProps} />
    <UnauthenticatedRoute path="/login" exact component={Login} props={childProps} />
    <UnauthenticatedRoute path="/signup" exact component={Signup} props={childProps} />
    <AuthenticatedRoute path="/notes/new" exact component={NewNote} props={childProps} />
    <AuthenticatedRoute path="/notes/:id" exact component={Notes} props={childProps} />
    {/* Finally, catch all unmatched routes */}
    <Route component={NotFound} />
  </Switch>;
Notice that instead of doing the static imports for all the containers at the top, we are creating these functions that are going to do the dynamic imports for us when necessary.
Now if you build your app using npm run build, you'll see the code splitting in action.
Each of those .chunk.js files corresponds to one of the dynamic import() calls that we have. Of course, our app is quite small and the various parts that are split up are not significant at all. However, if the page that we use to edit our note included a rich text editor, you can imagine how that would grow in size. And it would unfortunately affect the initial load time of our app.
Now if we deploy our app using npm run deploy, you can see the browser load the different chunks on-demand as we browse around in the demo.
That’s it! With just a few simple changes our app is completely set up to use the code splitting feature that Create React App has.
Next Steps
Now this seems really easy to implement but you might be wondering what happens if the request to import the new component takes too long, or fails. Or maybe you want to preload certain components. For example, a user is on your login page about to login and you want to preload the homepage.
It was mentioned above that you can add a loading spinner while the import is in progress. But we can take it a step further and address some of these edge cases. There is an excellent higher order component that does a lot of this well; it’s called react-loadable.
All you need to do to use it is install it.
$ npm install --save react-loadable
Use it instead of the asyncComponent that we had above.
const AsyncHome = Loadable({
  loader: () => import("./containers/Home"),
  loading: MyLoadingComponent
});
And AsyncHome is used exactly as before. Here the MyLoadingComponent would look something like this.
const MyLoadingComponent = ({ isLoading, error }) => {
  // Handle the loading state
  if (isLoading) {
    return <div>Loading...</div>;
  }
  // Handle the error state
  else if (error) {
    return <div>Sorry, there was a problem loading the page.</div>;
  }
  else {
    return null;
  }
};
It’s a simple component that handles all the different edge cases gracefully.
To add preloading and to further customize this; make sure to check out the other options and features that react-loadable has. And have fun code splitting!
According to the docs, using print statements for debugging is a key benefit of pytest. However, when running from within PyCharm, print statements appear mixed in with the test results, and they appear even for tests that pass. This is different from pytest's default behaviour. I've not been able to use Additional Arguments to get the default behaviour.
def test_func1():
    print("debug output from test 1")
    assert True

def test_func2():
    print("debug output from test 2")
    assert False
Run from PyCharm produces:
simple_test.py::test_func1 PASSED                        [ 50%]debug output from test 1
simple_test.py::test_func2 FAILED                        [100%]debug output from test 2
pytest run from Terminal produces:
debug output from test 2
Is there a way to get PyCharm to output the print statements for failed tests only?
I think once I have a large number of tests containing lots of print calls, having the passing tests always printing, and doing so interleaved with the test results, is going to be a deal-breaker for using pytest as an integrated tool. Which is a shame, since I like the PyCharm UI a lot.
I'm using Ubuntu 20.04.2, PyCharm 2021.1.2 (Community), Python 3.9.2, pytest-6.2.4, conda 4.10.1
Hi, unfortunately this behavior doesn't seem to be configurable, but I wouldn't say it's a bug either (except for the broken formatting). Feel free to submit a usability issue to our issue tracker.
Andrey, The link I provided to the pytest documentation says:
"One primary benefit of the default capturing of stdout/stderr output is that you can use print statements for debugging and running this module will show you precisely the output of the failing function and hide the other one."
In light of that I don't know how one can conclude that this is anything other than a bug in PyCharm's integration with pytest.
Suppose you have 100 pytest unit tests, and you have followed pytest's recommendation and used print statements for debugging, so each of your 100 tests calls print a number of times. You then run the unit tests and 99 pass and 1 fails. In order to fix the problem, do you want to see the handful of debugging statements from the 1 failure, or the thousand lines of output from all 100?
PyCharm is the best Python IDE imho and pytest currently seems to be the leading test runner. If the two don't work together properly it's going to cause problems for a lot of users, especially if you don't classify it as a bug.
Robin Carter
Unfortunately, I'm not aware whether this behavior was designed intentionally or is an oversight. The latter would certainly make it a bug. Anyway, submitting the bug report / usability issue is the best option to escalate it further.
I went ahead and submitted the report
Feel free to support with vote/comments.
Thanks Andrey that's very clear and good to use the code directly from the docs. I've upvoted and added a supporting comment.
Input is more varied and challenging than output. In this extract from a new book on using GPIO Zero on the Pi in Python we look at its variety.
These input devices are not complex in the sense of being difficult; they just have more features than a simple button. All of the devices inherit from SmoothedInputDevice, and the general idea is that a single reading isn't sufficient to act on. What is required is an averaged reading, which reduces noise.
The base class for all of this set of input devices is SmoothedInputDevice.
The SmoothedInputDevice has all of the properties of InputDevice and, in addition, it sets up a queue of readings. The actual value of the device is obtained by applying a function, usually an average, to the readings in the queue. A background thread is used to take regular readings, and the function is computed when you read the value property.
The constructor has a large number of parameters:
SmoothedInputDevice(pin, *, pull_up=False, active_state=None,
threshold=0.5, queue_len=5, sample_wait=0.0,
partial=False, average=median, ignore=None,
pin_factory=None)
The parameters concerned with setting up the queue are:
queue_len The length of the queue, i.e. the number of readings that the function is applied to. The default is 5.
threshold The value that is used as a cut-off for the device being activated. If the read value is greater than threshold, the device is active.
sample_wait The time between readings in seconds. The default is 0, which means read as fast as possible.
partial If True an attempt to read the device returns with a value, even if the queue isn’t full. If False, the default, a read blocks until the queue is full.
ignore A set of values that represents non-readings and should be ignored in the calculation of the average. Defaults to None.
average The function applied to the values in the queue to get the final value. It defaults to median, which is the middle value in the queue.
Using a SmoothedInputDevice is just a matter of setting up the degree of smoothing needed – i.e. the speed of reading, the number of data points to average, and the function that should be used to average them.
It should be obvious that, whatever the SmoothedInputDevice is reading, it can't usefully be treated as a single GPIO line that returns a 0 or a 1. Averaging such readings using median simply returns 0 or 1, whichever is in the majority. However, averaging using mean gives you a value that is proportional to the ratio of ones to zeros in the measurement interval. In the same way, you can create a device which returns a number proportional to the time a GPIO line is high.
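The smoothing idea can be demonstrated with a stdlib-only sketch. This is not gpiozero's SmoothedInputDevice, which reads a GPIO pin on a background thread; here the readings are supplied directly, purely to illustrate the queue, the averaging function, and the threshold:

```python
# Stdlib-only sketch of the smoothing idea behind SmoothedInputDevice.
# (Illustration only; the real class samples a GPIO pin in the background.)
from collections import deque
from statistics import median, mean

class SmoothedReading:
    def __init__(self, queue_len=5, threshold=0.5, average=median):
        self.queue = deque(maxlen=queue_len)   # fixed-length reading queue
        self.threshold = threshold
        self.average = average                 # function applied to the queue

    def add(self, reading):
        self.queue.append(reading)

    @property
    def value(self):
        return self.average(self.queue)

    @property
    def is_active(self):
        return self.value > self.threshold

noisy = SmoothedReading(queue_len=5, average=median)
for r in [0, 1, 0, 0, 0]:      # one spurious 1 among zeros
    noisy.add(r)
print(noisy.value)             # median smooths the spike away -> 0

duty = SmoothedReading(queue_len=5, average=mean)
for r in [1, 1, 1, 0, 0]:      # line high 3/5 of the time
    duty.add(r)
print(duty.value)              # mean gives the proportion -> 0.6
```

With median, a single noisy spike is ignored outright; with mean, the value becomes a proportion, which is the "time the line is high" behaviour described above.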
The LineSensor device makes use of a module based on the TCRT5000 infrared emitter and sensor.
To make this suitable for use with a GPIO line you need to add some electronics:
The IR diode sends a beam of IR out of the device and if it is reflected back, the photo transistor switches on. The LM324 U1 outputs a high signal when the voltage from the IR sensor is greater than that set on the potentiometer, R3. That is, R3 acts as a sensitivity control.
You don’t need to build the circuit because you can buy ready-built modules:
You simply connect the three pins to 3.3V, ground and a GPIO line of your choice. It is worth knowing that some modules output high when over a white line and some go low.
Once you have the hardware connected you can create a LineSensor using the constructor:
LineSensor(pin, *, queue_len=5, sample_rate=100, threshold=0.5,
partial=False, pin_factory=None)
You can read the value property to discover if the sensor is over a reflective surface or not:
from gpiozero import LineSensor

sensor = LineSensor(4)
while True:
    if sensor.value > 0.5:
        print('Line detected')
    else:
        print('No line detected')
You can also use the wait_for_line, wait_for_no_line, when_line and when_no_line events.
In practice setting the threshold control on the hardware is likely to be more important than tuning the software.
How To Make a Game Like Space Invaders with Sprite Kit Tutorial: Part 1
Learn how to make a game like Space Invaders!
Update Note 10/29/2014: Check out the updated version of this tutorial using iOS 8 and Swift!
Space Invaders is one of the most important video games ever developed. Created by Tomohiro Nishikado and released in 1978 by Taito Corporation, it earned billions of dollars in revenue. It became a cultural icon, inspiring legions of non-geeks to take up video games as a hobby.
Space Invaders used to be played in big game cabinets in video arcades, chewing up our allowances one quarter at a time. When the Atari 2600 home video game console went to market, Space Invaders was the “killer app” that drove sales of Atari hardware.
In this tutorial, you’ll build an iOS version of Space Invaders using Sprite Kit, the 2D game framework newly introduced in iOS 7.
This tutorial assumes you are familiar with the basics of Sprite Kit. If you are completely new to Sprite Kit, you should go through our Sprite Kit tutorial for beginners first.
Also, you will need an iPhone or iPod Touch running iOS 7 and an Apple developer account in order to get the most out of this tutorial. That is because you will be moving the ship in this game using the accelerometer, which is not present on the iOS simulator. If you don’t have an iOS 7 device or developer account, you can still complete the tutorial — you just won’t be able to move your ship.
Without further ado, let’s get ready to blast some aliens!
Getting Started
Apple provides an Xcode 5 template named Sprite Kit Game, which is pretty useful if you want to create your next smash hit from scratch. However, in order to get you started quickly, download the starter project for this tutorial. It's based on the Sprite Kit template and already has some of the more tedious work done for you.
Once you’ve downloaded and unzipped the project, navigate to the SKInvaders directory and double-click SKInvaders.xcodeproj to open the project in Xcode.
Build and run the project by selecting the Run button from the Xcode toolbar (the first button on the top left) or by using the keyboard shortcut
Command + R. You should see the following screen appear on your device or your simulator:
Creepy – the invaders are watching you already! However, if you see the screen above, this means you’re ready to move forward.
The Role of GameScene
To complete your Space Invaders game, you’ll have to code several independent bits of game logic; this tutorial will serve as a great exercise in constructing and refining game logic. It will also reinforce your understanding of how Sprite Kit elements fit together to produce the action in a game.
Most of the action in your game takes place in the stubbed-out GameScene class. You'll spend most of this tutorial filling out GameScene with your game code.
Let's take a look at how the game is set up. Open GameViewController.m and scroll down to viewDidLoad. This method is key to all UIKit apps and runs after GameViewController loads its view into memory. It's intended as a spot for you to further customize your app's UI at runtime.

Take a look at the interesting parts of viewDidLoad below:
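(The original listing did not survive extraction; the sketch below is reconstructed from the numbered description that follows, so treat the exact names and the scale mode as assumptions.)

```objc
SKView* skView = (SKView*)self.view;
// 1: Create the scene with the same dimensions as its containing view.
GameScene* scene = [GameScene sceneWithSize:skView.bounds.size];
scene.scaleMode = SKSceneScaleModeAspectFill;
// 2: Present the scene to draw it on-screen.
[skView presentScene:scene];
```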
The section of code shown above creates and displays the scene as follows:
- First, create the scene with the same dimensions as its containing view.
scaleModeensures that the scene is large enough to fill the entire view.
- Next, present the scene to draw it on-screen.
Once GameScene is on-screen, it takes over from GameViewController and drives the rest of your game.
Open GameScene.m and take a look at how it's organized:

You'll notice that there are a lot of #pragma mark - Something or Other type lines in the file. These are called compiler directives since they control the compiler. These particular pragmas are used solely to make the source file easier to navigate.

How do pragmas make source navigation easier, you ask? Notice the area in the bar next to GameScene.m that says "No Selection", as below:

If you click on "No Selection", a little menu pops up, as so:

Hey — that's a list of all of your pragmas! Click on any pragma to jump to that section of the file. This feature doesn't look like it has much value at present, but once you've added a bunch of invader-killing code, these pragmas will be a really... er... pragmatic way of navigating through your file! :]
Creating the Evil Invaders from Space
Before you start coding, take a moment to consider the GameScene class. When is it initialized and presented on screen? When is the best time to set up your scene with its content?

You might think the scene's initializer, initWithSize:, fits the bill, but the scene may not be fully configured or scaled at the time its initializer runs. It's better to create a scene's content once the scene has been presented by a view, since at that point the environment in which the scene operates is "ready to go."

A view invokes the scene's didMoveToView: method to present it to the world. Navigate to didMoveToView: and you'll see the following:
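(The listing here was lost in extraction; based on the description that follows, it plausibly looked like this, so treat it as a reconstruction.)

```objc
- (void)didMoveToView:(SKView*)view
{
    // Only build the scene's content once.
    if (!self.contentCreated) {
        [self createContent];
        self.contentCreated = YES;
    }
}
```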
This method simply invokes createContent using the BOOL property contentCreated to make sure you don't create your scene's content more than once. This property is defined in an Objective-C class extension found near the top of the file, as below:

As the pragma points out, this class extension allows you to add "private" properties to your GameScene class without revealing them to other external classes or code. You still get the benefit of using Objective-C properties, but your GameScene state is stored internally and can't be modified by external classes. As well, it doesn't clutter the namespace of datatypes that your other classes see.
Just as you did with your private scene properties, you can define important constants as private definitions within your file. Navigate to #pragma mark - Custom Type Definitions and add the following code:
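(The code block itself was stripped during extraction; a sketch consistent with the description below might look like the following. The specific numeric values are assumptions.)

```objc
// Possible invader types, used to vary appearance per row.
typedef enum InvaderType {
    InvaderTypeA,
    InvaderTypeB,
    InvaderTypeC
} InvaderType;

// Invader size and grid layout (values are illustrative assumptions).
#define kInvaderSize CGSizeMake(24, 16)
#define kInvaderGridSpacing CGSizeMake(12, 12)
#define kInvaderRowCount 6
#define kInvaderColCount 6

// Name used to look invaders up in the scene.
#define kInvaderName @"invader"
```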
The above type definition and constant definitions take care of the following tasks:

- Define the possible types of invader enemies. You can use this in switch statements later when you need to do things such as displaying different sprite images for each enemy type. The typedef also makes InvaderType a formal Objective-C type that is type-checked for method arguments and variables. This ensures that you don't pass the wrong method argument or assign it to the wrong variable.
- Define the size of the invaders and that they'll be laid out in a grid of rows and columns on the screen.
- Define a name you'll use to identify invaders when searching for them in the scene.
It's good practice to define constants like this rather than using raw numbers like 6 (also known as "magic numbers") or raw strings like @"invader" ("magic strings") that are prone to typos. Imagine mistyping @"Invader" where you meant @"invader" and spending hours debugging to find that a simple typo messed everything up. Using constants like kInvaderRowCount and kInvaderName prevents frustrating bugs — and makes it clear to other programmers what these constant values mean.

All right, time to make some invaders! Add the following method to GameScene.m directly after createContent:
makeInvaderOfType:, as the name implies, creates an invader sprite of a given type. You take the following actions in the above code:

- Use the invaderType parameter to determine the color of the invader.
- Call the handy convenience method spriteNodeWithColor:size: of SKSpriteNode to allocate and initialize a sprite that renders as a rectangle of the given color invaderColor with size kInvaderSize.
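(Again, the listing itself was lost in extraction; reconstructed from the two steps above, the method plausibly reads as follows. The color choices are assumptions.)

```objc
- (SKNode*)makeInvaderOfType:(InvaderType)invaderType
{
    // 1: Pick a color for each invader type (colors are illustrative).
    SKColor* invaderColor;
    switch (invaderType) {
        case InvaderTypeA: invaderColor = [SKColor redColor];   break;
        case InvaderTypeB: invaderColor = [SKColor greenColor]; break;
        case InvaderTypeC:
        default:           invaderColor = [SKColor blueColor];  break;
    }
    // 2: Render the invader as a plain colored rectangle for now.
    SKSpriteNode* invader = [SKSpriteNode spriteNodeWithColor:invaderColor
                                                         size:kInvaderSize];
    invader.name = kInvaderName;
    return invader;
}
```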
Okay, so a colored block is not the most menacing enemy imaginable. It may be tempting to design invader sprite images and dream about all the cool ways you can animate them, but the best approach is to focus on the game logic first, and worry about aesthetics later.
Adding makeInvaderOfType: isn't quite enough to display the invaders on the screen. You'll need something to invoke makeInvaderOfType: and place the newly created sprites in the scene.

Still in GameScene.m, add the following method directly after makeInvaderOfType::
The above method lays out invaders in a grid of rows and columns. Each row contains only a single type of invader. The logic looks complicated, but if you break it down, it makes perfect sense:

- Loop over the rows.
- Choose a single InvaderType for all invaders in this row based on the row number.
- Do some math to figure out where the first invader in this row should be positioned.
- Loop over the columns.
- Create an invader for the current row and column and add it to the scene.
- Update invaderPosition so that it's correct for the next invader.
Now, you just need to display the invaders on the screen. Replace the current code in createContent with the following:
Build and run your app; you should see a bunch of invaders on the screen, as shown below:
The rectangular alien overlords are here! :]
Create Your Valiant Ship
With those evil invaders on screen, your mighty ship can’t be far behind. Just as you did for the invaders, you first need to define a few constants.
Add the following code immediately below the #define kInvaderName line:

kShipSize stores the size of the ship, and kShipName stores the name you will set on the sprite node, so you can easily look it up later.

Next, add the following two methods just after setupInvaders:
Here are the interesting bits of logic in the two methods above:

- Create a ship using makeShip. You can easily reuse makeShip later if you need to create another ship (e.g. if the current ship gets destroyed by an invader and the player has "lives" left).
- Place the ship on the screen. In Sprite Kit, the origin is at the lower left corner of the screen. The anchorPoint is based on a unit square with (0, 0) at the lower left of the sprite's area and (1, 1) at its top right. Since SKSpriteNode has a default anchorPoint of (0.5, 0.5), i.e., its center, the ship's position is the position of its center. Positioning the ship at kShipSize.height/2.0f means that half of the ship's height will protrude below its position and half above. If you check the math, you'll see that the ship's bottom aligns exactly with the bottom of the scene.
To display your ship on the screen, add the following line to the end of createContent:
Build and run your app; you should see your ship arrive on the scene, as below:
Fear not, citizens of Earth! Your trusty spaceship is here to save the day!
Adding the Heads Up Display (HUD)
It wouldn’t be much fun to play Space Invaders if you didn’t keep score, would it? You’re going to add a heads-up display (or HUD) to your game. As a star pilot defending Earth, your performance is being monitored by your commanding officers. They’re interested in both your “kills” (score) and “battle readiness” (health).
Add the following constants at the top of GameScene.m, just below #define kShipName:

Now, add your HUD by inserting the following method right after makeShip:
This is boilerplate code for creating and adding text labels to a scene. The relevant bits are as follows:
- Give the score label a name so you can find it later when you need to update the displayed score.
- Color the score label green.
- Position the score label near the top left corner of the screen.
- Give the health label a name so you can reference it later when you need to update the displayed health.
- Color the health label red; the red and green indicators are common colors for these indicators in games, and they’re easy to differentiate in the middle of furious gameplay.
- Position the health label near the top right corner of the screen.
Add the following line to the end of createContent to call the setup method for your HUD:
Build and run your app; you should see the HUD in all of its red and green glory on your screen as shown below:
Invaders? Check. Ship? Check. HUD? Check. Now all you need is a little dynamic action to tie it all together!
Adding Motion to the Invaders
To render your game onto the screen, Sprite Kit uses a game loop which searches endlessly for state changes that require on-screen elements to be updated. The game loop does several things, but you'll be interested in the mechanisms that update your scene. You do this by overriding the update: method, which you'll find as a stub in your GameScene.m file.

When your game is running smoothly and renders 60 frames per second (iOS devices are hardware-locked to a max of 60 fps), update: will be called 60 times per second. This is where you modify the state of your scene, such as altering scores, removing dead invader sprites, or moving your ship around.

You'll use update: to make your invaders move across and down the screen. Each time Sprite Kit invokes update:, it's asking you "Did your scene change?", "Did your scene change?"... It's your job to answer that question — and you'll write some code to do just that.
Insert the following code at the top of GameScene.m, just below the definition of the InvaderType enum:

Invaders move in a fixed pattern: right, right, down, left, left, down, right, right, ... so you'll use the InvaderMovementDirection type to track the invaders' progress through this pattern. For example, InvaderMovementDirectionRight means the invaders are in the right, right portion of their pattern.

Next, find the class extension in the same file and insert the following properties just below the existing property for contentCreated:
Add the following code to the very top of createContent:
This one-time setup code initializes invader movement as follows:
- Invaders begin by moving to the right.
- Invaders take 1 second for each move. Each step left, right or down takes 1 second.
- Invaders haven’t moved yet, so set the time to zero.
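(The three-line setup block was lost in extraction; from the description above it plausibly read as follows. Property names are assumptions.)

```objc
// 1: Invaders begin by moving to the right.
self.invaderMovementDirection = InvaderMovementDirectionRight;
// 2: Each step (left, right or down) takes 1 second.
self.timePerMove = 1.0;
// 3: Invaders haven't moved yet, so set the time to zero.
self.timeOfLastMove = 0.0;
```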
Now, you're ready to make the invaders move. Add the following code just below #pragma mark - Scene Update Helpers:
Here's a breakdown of the code above, comment by comment:

- If it's not yet time to move, then exit the method. moveInvadersForUpdate: is invoked 60 times per second, but you don't want the invaders to move that often, since the movement would be too fast for a normal person to see.
- Recall that your scene holds all of the invaders as child nodes; you added them to the scene using addChild: in setupInvaders, identifying each invader by its name property. Invoking enumerateChildNodesWithName:usingBlock: only loops over the invaders because they're named kInvaderName; this makes the loop skip your ship and the HUD. The guts of the block moves the invaders 10 pixels either right, left or down depending on the value of invaderMovementDirection.
- Record that you just moved the invaders, so that the next time this method is invoked (1/60th of a second from now), the invaders won't move again until the set time period of one second has elapsed.
To make your invaders move, replace the existing update: method with the following:
Build and run your app; you should see your invaders slowly walk their way across the screen. Keep watching, and you’ll eventually see the following screen:
Hmmm, what happened? Why did the invaders disappear? Maybe the invaders aren’t as menacing as you thought!
The invaders don’t yet know that they need to move down and change their direction once they hit the side of the playing field. Guess you’ll need to help those invaders find their way!
Controlling the Invaders’ Direction
Add the following code just after #pragma mark - Invader Movement Helpers:
Here's what's going on in the above code:

- Since local variables accessed by a block are by default const (that is, they cannot be changed), you must qualify proposedMovementDirection with __block so that you can modify it in //2.
- Loop over all the invaders in the scene and invoke the block with the invader as an argument.
- If the invader's right edge is within 1 point of the right edge of the scene, it's about to move offscreen. Set proposedMovementDirection so that the invaders move down then left. You compare the invader's frame (the frame that contains its content in the scene's coordinate system) with the scene width. Since the scene has an anchorPoint of (0, 0) by default, and is scaled to fill its parent view, this comparison ensures you're testing against the view's edges.
- If the invader's left edge is within 1 point of the left edge of the scene, it's about to move offscreen. Set proposedMovementDirection so that invaders move down then right.
- If invaders are moving down then left, they've already moved down at this point, so they should now move left. How this works will become more obvious when you integrate determineInvaderMovementDirection with moveInvadersForUpdate:.
- If the invaders are moving down then right, they've already moved down at this point, so they should now move right.
- If the proposed invader movement direction is different than the current invader movement direction, update the current direction to the proposed direction.
Add a call to determineInvaderMovementDirection within moveInvadersForUpdate:, immediately after the conditional check of self.timeOfLastMove:
Why is it important that you add the invocation of determineInvaderMovementDirection only after the check on self.timeOfLastMove? That's because you want the invader movement direction to change only when the invaders are actually moving. Invaders only move when the check on self.timeOfLastMove passes — i.e., the conditional expression is true.

What would happen if you added the new line of code above as the very first line of code in moveInvadersForUpdate:? If you did that, then there would be two bugs:

- You'd be trying to update the movement direction way too often — 60 times per second — when you know it can only change at most once per second.
- The invaders would never move down, as the state transition from InvaderMovementDirectionDownThenLeft to InvaderMovementDirectionLeft would occur without an invader movement in between. The next invocation of moveInvadersForUpdate: that passed the check on self.timeOfLastMove would be executed with self.invaderMovementDirection == InvaderMovementDirectionLeft and would keep moving the invaders left, skipping the down move. A similar bug would exist for InvaderMovementDirectionDownThenRight and InvaderMovementDirectionRight.
Build and run your app; you’ll see the invaders moving as expected across and down the screen, as below:
Note: You might have noticed that the invaders’ movement is jerky. That’s a consequence of your code only moving invaders once per second — and moving them a decent distance at that. But the movement in the original game was jerky, so keeping this feature helps your game seem more authentic.
Adding Motion to your Ship
Good news: your supervisors can see the invaders moving now and have decided that your ship needs a propulsion system! To be effective, any good propulsion system needs a good control system. In other words, how do you, the ship’s pilot, tell the ship’s propulsion system what to do?
The important thing to remember about mobile games is the following:
Mobile games are not desktop/arcade games and desktop/arcade controls don’t port well to mobile.
In a desktop or arcade version of Space Invaders, you’d have a physical joystick and fire button to move your ship and shoot invaders. Such is not the case on a mobile device such as an iPhone or iPad.
Some games attempt to use virtual joysticks or virtual D-pads but these rarely work well, in my opinion.
Think about how you use your iPhone most often: holding it with one hand. That leaves only one hand to tap/swipe/gesture on the screen.
Keeping the ergonomics of holding your iPhone with one hand in mind, consider several potential control schemes for moving your ship and firing your laser cannon:
Single-tap to move, double-tap to fire:
Suppose you single-tapped on the left side of the ship to move it left, single-tapped on the right of the ship to move it right, and double-tapped to make it fire. This wouldn’t work well for a couple of reasons.
First, recognizing both single-taps and double-taps in the same view requires you to delay recognition of the single-tap until the double-tap fails or times out. When you’re furiously tapping the screen, this delay will make the controls unacceptably laggy. Second, single-taps and double-taps might sometimes get confused, both by you, the pilot, and by the code. Third, the ship movement single-taps won’t work well when your ship is near the extreme left- or right-edge of the screen. Scratch that control scheme!
Swipe to move, single-tap to fire:
This approach is a little better. Single-tapping to fire your laser cannon makes sense as both are discrete actions: one tap equals one blast from your canon. It’s intuitive. But what about using swipes to move your ship?
This won’t work because swipes are considered a discrete gesture. In other words, either you swiped or you didn’t. Using the length of a swipe to proportionally control the amount of left or right thrust applied to your ship breaks your user’s mental model of what swipes mean and the way they function. In all other apps, swipes are discrete and the length of a swipe is not considered meaningful. Scratch this control scheme as well.
Tilt your device left/right to move, single-tap to fire:
It’s already been established that a single-tap to fire works well. But what about tilting your device left and right to move your ship left and right? This is your best option, as you’re already holding your iPhone in the palm of your hand and tilting your device to either side merely requires you to twist your wrist a bit. You have a winner!
Now that you’ve settled on the control scheme, you’ll first tackle tilting your device to move your ship.
Controlling Ship Movements with Device Motion
You might be familiar with UIAccelerometer, which has been available since iOS 2.0 for detecting device tilt. However, UIAccelerometer was deprecated in iOS 5.0, so iOS 7 apps should use CMMotionManager, which is part of Apple's CoreMotion framework.

The CoreMotion library has already been added to the starter project, so there's no need for you to add it.
Your code can retrieve accelerometer data from CMMotionManager in two different ways:

Pushing accelerometer data to your code

In this scenario, you provide CMMotionManager with a block that it calls regularly with accelerometer data. This doesn't fit well with your scene's update: method, which ticks at regular intervals of 1/60th of a second. You only want to sample accelerometer data during those ticks — and those ticks likely won't line up with the moments that CMMotionManager decides to push data to your code.

Pulling accelerometer data from your code

In this scenario, you call CMMotionManager and ask it for data when you need it. Placing these calls inside your scene's update: method aligns nicely with the ticks of your system. You'll be sampling accelerometer data 60 times per second, so there's no need to worry about lag.
Your app should only use a single instance of CMMotionManager to ensure you get the most reliable data. To that effect, add the following property to your class extension:

Now, add the following code to didMoveToView:, right after the self.contentCreated = YES; line:

This new code creates your motion manager and kicks off the production of accelerometer data. At this point, you can use the motion manager and its accelerometer data to control your ship's movement.

Add the following method just below moveInvadersForUpdate::
Dissecting this method, you'll find the following:

- Get the ship from the scene so you can move it.
- Get the accelerometer data from the motion manager.
- If your device is oriented with the screen facing up and the home button at the bottom, then tilting the device to the right produces data.acceleration.x > 0, whereas tilting it to the left produces data.acceleration.x < 0. The check against 0.2 means that the device will be considered perfectly flat/no thrust (technically data.acceleration.x == 0) as long as it's close enough to zero (data.acceleration.x in the range [-0.2, 0.2]). There's nothing special about 0.2; it just seemed to work well for me. Little tricks like this will make your control system more reliable and less frustrating for users.
- Hmmm, how do you actually use data.acceleration.x to move the ship? You want small values to move the ship a little and large values to move the ship a lot. The answer is — physics, which you'll cover in the next section!
Translating Motion Controls into Movement via Physics
Sprite Kit has a powerful built-in physics system based on Box2D that can simulate a wide range of physics like forces, translation, rotation, collisions, and contact detection. Each
SKNode, and thus each
SKScene and
SKSpriteNode, has an
SKPhysicsBody attached to it. This
SKPhysicsBody represents the node in the physics simulation.
Add the following code right before the final
return ship; line in
makeShip:
Taking each comment in turn, you'll see the following:
- Create a rectangular physics body the same size as the ship.
- Make the shape dynamic; this makes it subject to things such as collisions and other outside forces.
- You don't want the ship to drop off the bottom of the screen, so you indicate that it's not affected by gravity.
- Give the ship an arbitrary mass so that its movement feels natural.
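The missing listing probably mirrored the four comments just described; here is a sketch (the mass value is an arbitrary placeholder, as the text notes):

```objc
// In makeShip, right before the final return ship; line:
// 1: Create a rectangular physics body the same size as the ship.
ship.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:ship.frame.size];
// 2: Make the shape dynamic so it responds to collisions and outside forces.
ship.physicsBody.dynamic = YES;
// 3: Don't let gravity pull the ship off the bottom of the screen.
ship.physicsBody.affectedByGravity = NO;
// 4: Give the ship an arbitrary mass so its movement feels natural.
ship.physicsBody.mass = 0.02;
```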
Now replace the
NSLog statement in
processUserMotionForUpdate: (right after comment //4) with the following:
The new code applies a force to the ship's physics body in the same direction as
data.acceleration.x. The number
40.0 is an arbitrary value to make the ship's motion feel natural.
Finally, add the following line to the top of
update::
Your new
processUserMotionForUpdate: now gets called 60 times per second as the scene updates.
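The two snippets described here were stripped out; reconstructed from the prose (not copied from the original), they likely amounted to something close to:

```objc
// Replacing the NSLog in processUserMotionForUpdate: (comment //4):
[ship.physicsBody applyForce:CGVectorMake(40.0 * data.acceleration.x, 0)];

// At the top of update::
[self processUserMotionForUpdate:currentTime];
```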
Note: If you've been testing your code on the simulator up until now, this is the time to switch to your device. You won't be able to test the tilt code unless you are running the game on an actual device.
Build and run your game and try tilting your device left or right; your ship should respond to the accelerometer, as follows:
What do you see? Your ship will fly off the side of the screen, lost in the deep, dark reaches of space. If you tilt hard and long enough in the opposite direction, you might get your ship to come flying back the other way. But at present, the controls are way too flaky and sensitive. You'll never kill any invaders like this!
An easy and reliable way to prevent things from escaping the bounds of your screen during a physics simulation is to build what's called an edge loop around the boundary of your screen. An edge loop is a physics body that has no volume or mass but can still collide with your ship. Think of it as an infinitely-thin wall around your scene.
Since your
GameScene is a kind of
SKNode, you can give it its own physics body to create the edge loop.
Add the following code to
createContent right before the
[self setupInvaders]; line:
The new code adds the physics body to your scene.
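Again, the listing itself is missing; given the description, it was almost certainly a one-liner creating an edge-loop body around the scene's frame:

```objc
// In createContent, right before the [self setupInvaders]; line:
self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
```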
Build and run your game once more and try tilting your device to move your ship, as below:
What do you see? If you tilt your device far enough to one side, your ship will collide with the edge of the screen. It no longer flies off the edge of the screen. Problem solved!
Depending on the ship's momentum, you may also see the ship bouncing off the edge of the screen instead of just stopping there. This is an added bonus that comes for free from Sprite Kit's physics engine — it's a property called
restitution. Not only does it look cool, but it is what's known as an affordance since bouncing the ship back towards the center of the screen clearly communicates to the user that the edge of the screen is a boundary that cannot be crossed.
Where to Go From Here?
Here is the example project for the game up to this point.
So far, you've created invaders, your ship, and a Heads Up Display (HUD) and drawn them on-screen. You've also coded logic to make the invaders move automatically and to make your ship move as you tilt your device.
In part two of this tutorial, you'll add firing actions to your ship as well as the invaders, along with some collision detection so you'll know when you've hit the invaders — and vice versa! You'll also polish your game by adding both sound effects as well as realistic images to replace the colored rectangles that currently serve as placeholders for invaders and your ship.
In the meantime, if you have any questions or comments, please feel free to join in the discussions below!
Given a Python object of any kind, is there an easy way to get the list of all methods that this object has?
Or,
if this is not possible, is there at least an easy way to check if it has a particular method other than simply checking if an error occurs when the method is called?
For many objects, you can use this code, replacing 'object' with the object you're interested in:
object_methods = [method_name for method_name in dir(object) if callable(getattr(object, method_name))]
I discovered it at diveintopython.net (Now archived). Hopefully, that should provide some further detail!
If you get an
AttributeError, you can use this instead:
getattr() is intolerant of pandas-style Python 3.6 abstract virtual sub-classes. The following code does the same as above but ignores exceptions.
import pandas as pd

df = pd.DataFrame([[10, 20, 30], [100, 200, 300]],
                  columns=['foo', 'bar', 'baz'])

def get_methods(object, spacing=20):
    methodList = []
    for method_name in dir(object):
        try:
            if callable(getattr(object, method_name)):
                methodList.append(str(method_name))
        except:
            methodList.append(str(method_name))
    processFunc = (lambda s: ' '.join(s.split())) or (lambda s: s)
    for method in methodList:
        try:
            print(str(method.ljust(spacing)) + ' ' +
                  processFunc(str(getattr(object, method).__doc__)[0:90]))
        except:
            print(method.ljust(spacing) + ' ' + ' getattr() failed')

get_methods(df['foo'])
You can use the built in
dir() function to get a list of all the attributes a module has. Try this at the command line to see how it works.
>>> import moduleName
>>> dir(moduleName)
Also, you can use the
hasattr(module_name, "attr_name") function to find out if a module has a specific attribute.
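To tie these together, here's a small runnable example combining dir(), callable() and hasattr() on an ordinary string object (the helper name list_methods is just for illustration):

```python
def list_methods(obj):
    """Return the names of all callable attributes of obj,
    skipping any attributes that raise when accessed."""
    names = []
    for name in dir(obj):
        try:
            if callable(getattr(obj, name)):
                names.append(name)
        except AttributeError:
            # Some attributes (e.g. abstract properties) raise on access.
            pass
    return names

methods = list_methods("hello")
print("upper" in methods)         # strings have an upper() method
print(hasattr("hello", "upper"))  # equivalent single-attribute check
print(hasattr("hello", "flub"))   # no such attribute
```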
See the Guide to Python introspection for more information.
First Tutorial
From Nemerle Homepage
Introduction
This tutorial describes the basics of programming in Nemerle. It is also a quick tour of some of the features and programming concepts that distinguish Nemerle from other languages.
We assume the reader is familiar with C#, Java or C++. But even if you are new to programming, you should find the code easy to understand.
Nemerle, .NET, and Mono
Nemerle is a .NET compatible language. As such, it relies heavily on the .NET Framework, which not only defines how critical parts of all .NET languages work, but also provides these services:
- A runtime environment, called the Common Language Runtime (CLR), which is shared among all .NET languages. This is similar to the Java VM
- A set of libraries, called the Base Class Libraries (BCL), which contain thousands of functions that perform a wide range of services required by programs
The BCL and CLR ensure that programs written in any .NET language can be easily used in any other .NET language. These features of language neutrality and interoperability make .NET an attractive platform for development.
Further, Nemerle is compatible with Mono, an open-source implementation of the published .NET Common Language Infrastructure (CLI) standard. This opens up the exciting possibility of writing .NET programs that not only work across different languages, but span different operating systems. With Mono, you can design programs that run on Linux, Mac OS X, and the BSD Unix flavors, as well as Windows.
So, while Nemerle is defined (and constrained) in part by the .NET Framework, it is very much its own language. It offers a unique combination of features not found in other .NET languages, which give it advantages of conciseness and flexibility that should prove attractive to a wide range of programmers, from students to seasoned developers.
Getting Started
To run the examples in this tutorial, you will need to install Nemerle. Hacker types will want to download the source.
Simple Examples
This section lists simple examples that look almost the same in C# (or Java, or C++).
Hello World
We start with a classic first example:
System.Console.WriteLine ("Hello world!");
To run this example:
- Write it with your favorite text editor and save it as hello.n
- Get to the console by running cmd in Windows, or the terminal window in Linux/BSD
- Run the Nemerle compiler by typing ncc hello.n
- The output goes to out.exe
- Run it by typing out or mono out.exe depending on your OS
Observe how an individual code statement ends with a semi-colon;
This program writes "Hello world!" to the console. It does this by calling System.Console.WriteLine, a function in the .NET Framework.
As this example shows, you can write a bunch of statements in a file (separated by semi-colons), and Nemerle will execute them. However, this example really isn't a proper, stand-alone program. To make it one, you need to wrap it in a class.
Classes: a First Look
Lets expand our example to include a class. Enter these lines in your hello.n file:
class Hello {
  static Main () : void {
    System.Console.WriteLine ("Hello world!");
  }
}
Notice how blocks of code are grouped together using curly braces { }, typical of C-style programs.
When you compile and run this program, you get the same results as before. So why the extra lines? The answer is that Nemerle, like most .NET languages, is object-oriented:
- class Hello simply means that you are defining a class, or type of object, named Hello. A class is a template for making objects. Classes can be used standalone, too, as is the case here.
- static Main defines a function Main, which belongs to class Hello. A function that belongs to a class is called a method of the class. So, here you would say "function Main is a method of class Hello."
- By convention, program execution starts at static Main. The keyword static means the method can be called directly, without first creating an object of type Hello. Static methods are the equivalent of public or module-level functions in non-object languages.
This example is much closer to what a C# programmer would write. The only difference is that in Nemerle we write the method's return type on the right, after the colon. So, static Main():void specifies that method Main returns void, or no usable type. This is the equivalent of a subroutine in Basic.
The Adder
Adder is a very simple program that reads and adds two numbers. We will refine this program by introducing several Nemerle concepts.
To start, enter and compile this code:
/* Our second example.
   This is a comment. */
using System;  // This is also a comment

public class Adder  // As in C#, we can mark classes public.
{
  public static Main () : void  // Methods, too.
  {
    /* Read two lines, convert them to integers
       and return their sum. */
    Console.WriteLine ("The sum is {0}",
      // System.Int32.Parse converts string into integer.
      Int32.Parse (Console.ReadLine ()) +
      Int32.Parse (Console.ReadLine ()));
  }
}
When run, Adder lets you type in two numbers from the console, then prints out the sum.
As you can see, a lengthy statement can be continued on multiple lines, and mixed with comments, as long as it ends with a semi-colon;
The using declaration imports identifiers from the specified namespace, so they can be used without a prefix. This improves readability and saves typing. Unlike C#, Nemerle can also import members from classes, not only from namespaces. For example:
using System;
using System.Console;

public class Adder
{
  public static Main () : void
  {
    WriteLine ("The sum is {0}",
      Int32.Parse (ReadLine ()) +
      Int32.Parse (ReadLine ()));
  }
}
You probably noticed that the code that reads and converts the integers is needlessly duplicated. We can simplify and clarify this code by factoring it into its own method:
using System;

public class Adder
{
  // Methods are private by default.
  static ReadInteger () : int
  {
    Int32.Parse (Console.ReadLine ())
  }

  public static Main () : void
  {
    def x = ReadInteger ();  // Value definition.
    def y = ReadInteger ();
    // Use standard .NET function for formatting output.
    Console.WriteLine ("{0} + {1} = {2}", x, y, x + y);
  }
}
Within the Main method we have defined two values, x and y. This is done using the def keyword. Note that we do not write the value type when it is defined. The compiler sees that ReadInteger returns an int, so therefore the type of x must also be int. This is called type inference.
There is more to def than just declaring values: it also has an impact on how the value can be used, as we shall see in the next section.
In this example we see no gain from using def instead of int as you would do in C# (both are 3 characters long :-). However def will save typing, because in most cases type names are far longer:
FooBarQuxxFactory fact = new FooBarQuxxFactory (); // C#
def fact = FooBarQuxxFactory ();                   // Nemerle
When creating objects, Nemerle does not use the new keyword. This aligns nicely with the .NET concept that all types, even simple ones like int and bool, are objects. That being said, simple types are a special kind of object, and are treated differently during execution than regular objects.
Counting Lines in a File
class LineCounter
{
  public static Main () : void
  {
    // Open a file.
    def sr = System.IO.StreamReader ("SomeFile.txt");  // (1)
    mutable line_no = 0;                               // (2)
    mutable line = sr.ReadLine ();
    while (line != null) {                             // (3)
      System.Console.WriteLine (line);
      line_no = line_no + 1;                           // (4)
      line = sr.ReadLine ();
    };                                                 // (5)
    System.Console.WriteLine ("Line count: {0}", line_no);
  }
}
Several things about this example require further remarks. The first is the very important difference between the lines marked (1) and (2).
In line (1) we define an immutable variable, sr. Immutable means the value cannot be changed once it is defined. The def statement is used to mark this intent. This concept may at first seem odd, but quite often you will find the need for variables that don't change over their lifetime.
In (2) we define a mutable variable, line_no. Mutable values are allowed to change freely, and are defined using the mutable statement. This is the Nemerle equivalent of a variable in C#. All variables, mutable or not, have to be initialized before use.
In (3) we see a while loop. While the line is not null (end of file), this loop writes the line to the console, counts it, and reads the next. It works much like it would in C#. Nemerle also has do ... while loops.
We see our mutable counter getting incremented in (4). The assignment operator in Nemerle is =, and is similar to C#.
Lastly, in (5), we come to the end of our while loop code block. The line count gets written after the while loop exits.
Functional examples
This section introduces some of the more functional features of Nemerle. We will use the functional style to write some simple programs, that could easily be written in the more familiar imperative style, to introduce a few concepts of the functional approach.
Functional Programming: a First Look
Functional programming (FP) is a style in which you do not modify the state of the machine with instructions, but rather evaluate functions yielding newer and newer values. That is, the entire program is just one big expression. In purely functional languages (Haskell being the main example) you cannot modify any objects once they are created (there is no assignment operator, like = in Nemerle). There are no loops, just recursive functions. A recursive function calls itself repeatedly until some end condition is met, at which time it returns its result.
Nemerle does not force you to use FP. However you can use it whenever you find it necessary. Some algorithms have a very natural representation when written in functional style -- for example functional languages are very good at manipulating tree-like data structures (like XML, in fact XSLT can be thought of as a functional language).
We will be using the terms method and function interchangeably.
Rewriting Line Counter without the loop
Let's rewrite our previous Line Counter example using a recursive function instead of the loop. It will get longer, but that will get fixed soon.
class LineCounterWithoutLoop
{
  public static Main () : void
  {
    def sr = System.IO.StreamReader ("SomeFile.txt");
    mutable line_no = 0;

    def read_lines () : void {              // (1)
      def line = sr.ReadLine ();
      when (line != null) {                 // (2)
        System.Console.WriteLine (line);    // (3)
        line_no = line_no + 1;              // (4)
        read_lines ()                       // (5)
      }
    };

    read_lines ();                          // (6)
    System.Console.WriteLine ("Line count: {0}", line_no);  // (7)
  }
}
In (1) we define a nested method called read_lines. This method simulates the while loop used in our previous example. It takes no parameters and returns a void value.
(2) If the line wasn't null (i.e. it was not the last line), we (3) write the line we just read, (4) increase the line number, and finally (5) call ourselves to read the rest of the lines. The when expression is explained below.
Next (6) we call read_lines for the first time, and finally (7) print the line count.
The read_lines method will get called as many times as there are lines in the file. As you can see, this is the same as the while loop, just expressed in a slightly different way. It is very important to grok this concept of writing loops as recursion in order to program functionally in Nemerle.
If you are concerned about the performance of this form of writing loops -- fear not. When a function body ends with a call to another function, no new stack frame is created. This is called a tail call. Thanks to it, the example above is as efficient as the while loop we saw before.
In Nemerle the if expression always needs to have an else clause. It's done this way to avoid stupid bugs with a dangling else:
// C#, misleading indentation hides real code meaning
if (foo)
  if (bar)
    m1 ();
else
  m2 ();
If you do not want the else clause, use when expression, as seen in the example. There is also unless expression, equivalent to when with the condition negated.
Rewriting line counter without mutable values
Our previous rewrite of the line counter removed the loop and one mutable value. However, one mutable value remains, so we cannot say the example is written functionally. We will now eliminate it.
class FunctionalLineCounter
{
  public static Main () : void
  {
    def sr = System.IO.StreamReader ("SomeFile.txt");

    def read_lines (line_no : int) : int {    // (1)
      def line = sr.ReadLine ();
      if (line == null)                       // (2)
        line_no                                // (3)
      else {
        System.Console.WriteLine (line);      // (4)
        read_lines (line_no + 1)              // (5)
      }
    };

    System.Console.WriteLine ("Line count: {0}", read_lines (0));  // (6)
  }
}
In (1) we again define a nested method called read_lines. However this time it takes one integer parameter -- the current line number. It returns the number of lines in the entire file.
(2) If the line we just read is null (that was the last line), we (3) return the current line number as the number of lines in the entire file. As you can see, there is no return statement; the return value of a method is its last expression.
(4) Otherwise (it was not the last line) we write the line we just read. Next (5) we call ourselves to read the next line. We need to increase the line number, since it is the next line that we will be reading. Note that as the return value from this invocation of read_lines we return what the next invocation of read_lines returned. It in turn returns what the next invocation returned, and so on, until, at the end of file, we reach (3) and the final line count is returned back through each invocation of read_lines.
In (6) we call the read_lines nested method, with initial line number of 0 to read the file and print out line count.
Type inference
We have already seen type inference used to guess the types of values defined with def or mutable. It can also be used to guess the types of function parameters and return types. Try removing the : int constraints from the line marked (1) in our previous example.
Type inference only works for nested functions. Type annotations are required in top-level methods (that is, methods defined in classes, not in other methods). This is a design decision, made so that external interfaces do not change by accident.
It is sometimes quite hard to tell the type of parameter, from just looking how it is used. For example consider:
class HarderInference
{
  static Main () : int
  {
    def f (x) { x.Length };
    f ("foo");
  }
}
When compiling the f method we cannot tell whether x is a string, an array, or something else. Nevertheless, we can tell later (by looking at the invocation of f), and the Nemerle type inference engine does exactly that.
If a function with incomplete type information is not used or its type is ambiguous, the compiler will refuse to compile it.
More info
Now, once you read through all this, please move to the Grokking Nemerle tutorial, that is much more complete. You can also have a look at The Reference Manual if you are tough.
Sometimes we need to know the width of the browser window in our component’s class file. You could have a
@HostListener in a component for the window’s resize event, but you’d have to maintain that code yourself, write tests to make sure it was working properly, handle observables correctly so there aren’t memory leaks, etc. The Angular CDK’s
LayoutModule has a
BreakpointObserver class that you can inject into a component, provide a breakpoint (or breakpoints) that you want to watch for, and it will emit each time you cross the threshold of a breakpoint (i.e. if I want to know when the screen is smaller than 700px, it will emit a value when I cross under 700px, and then again when I cross back over 700px).
The Problems This Solves
First off, I’ve been using Angular daily for almost 4 years now, since before the first Angular 2 Beta release, and this is the first time I’ve needed this functionality. So you might be wondering when or why you would ever need this. But there are times when it is very useful, which is why the class exists in the
@angular/cdk library.
I ended up needing this functionality because I am using the
ngx-gauge library, and one of the options to set in the library is the thickness of the line. You pass it in as an input to the component from the library, and it draws the chart at that thickness. In addition, you pass in a
size attribute that sets the height and width. Because of that, I couldn’t change the thickness or the height and width of the chart using CSS media queries. I was left with two options: 1) have two charts on the page, one for bigger screens and one for smaller screens or 2) change the values of those inputs depending on screen size. I chose the latter, because I didn’t want to have to update two charts every time I made a change.
The solution (we’ll go over that in a minute) worked great for me, but I will say that I think the first thing you should try is using CSS media queries before doing this. It would be unnecessary, for example, to keep track of the screen width in your component’s class file, and then apply a class to an HTML element (with
ngClass, for example) based on the screen width. Maybe there is a scenario where that is what you need to do, but generally I think you’ll be better off just using CSS media queries to style your components. Having said that, however, if you feel this is the best option for your component, go ahead and do it! I don’t want you to feel bad about using
BreakpointObserver.
The Solution
The solution to this problem is pretty simple, thanks to the Angular CDK. The first step is to install the CDK to your project:
npm i @angular/cdk
Next, import the
LayoutModule from
@angular/cdk/layout to your module. For example, like this in the
app.module.ts file:
import { LayoutModule } from '@angular/cdk/layout';

@NgModule({
  imports: [..., LayoutModule, ...]
})
export class AppModule {}
After importing the
LayoutModule, inject the
BreakpointObserver into your component like any other service you would use:
import { BreakpointObserver } from '@angular/cdk/layout';

export class MyComponent {
  constructor(private observer: BreakpointObserver) {}
}
Alright, now everything is set up and ready to use. There’s one more thing to figure out, though, before using the service, and that is what breakpoints you want to be notified of. For example, if your site is using Bootstrap and you want to know when you are on the small screen or lower, you would use ‘(max-width: 767px)’. The string you provide is the parenthesized part of a CSS media query, parentheses included. You can provide a single string value, or an array of strings.
Once you’ve determined your breakpoints, you have two options to use from the
BreakpointObserver. The first one is the
isMatched method. It simply evaluates the breakpoints you provided and tells you if the values are matched or not. If you pass in an array to this method, all conditions have to be met for the function to return true.
const matched = this.observer.isMatched('(max-width: 700px)');
// OR
const matched = this.observer.isMatched(['(max-width: 700px)', '(min-width: 500px)']);
The
isMatched function works great if you only need to check the first time the page is loaded, or if you only want to check by calling that function occasionally. If you want to be constantly alerted of the matching of your breakpoints, you can use the
observe method. The
observe method allows you to subscribe and get an update every time the window passes one of the widths that you’ve defined. If you’ve only defined one width to the function, then it will emit a value each time you go above or below that single width. If you provided an array of widths, then each time you cross over the threshold of one of the widths in the array, a value is emitted.
this.observer.observe('(max-width: 700px)').subscribe(result => {
  console.log(result);
  // Do something with the result
});
The
result that’s outputted here looks like this:
{
  "matches": true | false,
  "breakpoints": {
    "(max-width: 350px)": true | false,
    "(max-width: 450px)": true | false
  }
}
The
matches attribute is true if any conditions are met. You can also check for individual breakpoints to see if they have been met if you need.
This function is great if you need to change some layout on the page each time the browser width crosses a certain value. Like I mentioned in my example above, I needed to change the size of a chart and the thickness of the chart depending on the browser width, and since it could be different on landscape vs portrait, I decided to subscribe to the observer method and change it each time it emitted a value.
The last thing to know about the
BreakpointObserver is that the CDK provides some built in breakpoints that you can use if you want. They are based on Google’s Material Design specification, and the values are:
- Handset
- Tablet
- Web
- HandsetPortrait
- TabletPortrait
- WebPortrait
- HandsetLandscape
- TabletLandscape
- WebLandscape
You can use them by importing
Breakpoints from the CDK’s
layout folder:
import { Breakpoints } from '@angular/cdk/layout';
You can then use a breakpoint, like
Breakpoints.Handset, in the
observe or
isMatched functions. They can be used as the only input, or added to the array that is passed in to those functions. You can also mix your own breakpoints with those built-in ones.
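As a worked example, here is a hedged sketch of a component that adjusts a chart option based on screen width, mixing a built-in breakpoint with a custom query. The component and the `chartThickness` property are made up for illustration; only `BreakpointObserver`, `Breakpoints`, and `observe` come from the CDK:

```typescript
import { Component, OnInit } from '@angular/core';
import { BreakpointObserver, Breakpoints } from '@angular/cdk/layout';

@Component({
  selector: 'app-chart',
  templateUrl: './chart.component.html'
})
export class ChartComponent implements OnInit {
  chartThickness = 8; // illustrative option, e.g. for a gauge chart

  constructor(private observer: BreakpointObserver) {}

  ngOnInit() {
    this.observer
      .observe([Breakpoints.Handset, '(max-width: 700px)'])
      .subscribe(result => {
        // result.matches is true if any of the provided queries match.
        this.chartThickness = result.matches ? 4 : 8;
      });
  }
}
```

In a real app you’d also want to unsubscribe (or use the async pipe) when the component is destroyed, to avoid the memory leaks mentioned earlier.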
Conclusion
As I said before, you may never use this class or need this functionality. It doesn’t come up very often, but it’s nice to know that this is there and you can reach for it when you do need it. One thing that I want to look into is to see if you could use this function to swap out the template file that the Angular component is using. Again, that would be a rare condition, but you might need it to swap out the content for a mobile experience if it is very different from the desktop experience. I don’t know if you could exactly do that, but you could at least show and hide child components based on the results of the
BreakpointObserver.
If you have used this before, make sure to let me know on Twitter or via email how you’ve used it. If you haven’t yet, but need to in the future, also let me know! I like to hear about how other developers use things like this so I can learn more!
I am having a problem including this line though: <?xml version="1.0" encoding="UTF-8"?>
Any ideas how to pharse this? Or am I just doing it wrong?
"Note: many XHTML pages begin with an optional XML prologue ( <?xml> ) that precedes the DOCTYPE and namespace declarations. Unfortunately, this XML prologue causes problems in many browsers..."
print("<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
Nick
toadhall, I know of the errors it can cause, but my user base uses newer browsers. I have tested all my websites with that encoding and it works on all of mine (IE, Opera, Mozilla).
print("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
I use PHP and XHTML 1.0 strict. I do this:
<?php echo '<?xml version="1.0" encoding="iso-8859-1"?>'; ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns="" xml:
Toadhall is wrong about this. You *want* the brackets. If you do not use brackets, but instead use &lt; it will not recognize your xml tag as a tag, but as text, and you don't want that.
Incidentally, I had trouble with Unicode and validation (because I don't have an editor that doesn't add the... what is it? the BOM?) and W3C chokes on validation. What have you done?
Tom
<added>Its not PHP but I havent gotten far enough in development to do that</added>
<addeded> WOO HOO Mayed Preferred Member </addeded>
Parse error: parse error in localhostinfo/site.php on line 1
I don't understand. The site you listed looks perfect and you say the problem is not with your XHTML.
Then you say it isn't with PHP either, but you get a parse error, so that can only be PHP.
The PHP code that I posted works fine and generates valid XHTML. If you are getting a parse error, it's coming from somewhere else.
Try creating "skeleton" pages that just call stub functions (routines that just return without doing anything). Get rid of complexity until you locate the problem. If you still can't figure it out, post the code.
If I had to guess, upon thinking about it, I bet you're missing a semicolon after line 1 or something like that.
<?PHP print("<?xml version=\"1.0\" encoding=\"UTF-8\"?>"; ?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" ""> <html xmlns="" xml: <head>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" ""> <html xmlns="" xml: <head>
Also tried: <?PHP echo '<?xml version="1.0" encoding="UTF-8"?>'; ?>
The reason for the URL in Profile was in reference to this
1. Your parentheses don't match.
2. I'm not certain, but the standard is <?php. I'm not sure whether or not uppercase is allowed, though it would take two seconds to test (but I'm too lazy).
That is exactly what I have, except for lowercase tags for invoking php, and it validates.
Thanks for yalls help.
Syntax highlighting, code completion, all that sort of stuff is just eye candy. It can make things easier to read, but it doesn't help much in the end. Brace matching is such a help when your eyes just won't work. There are a lot of editors that support it.
- I've once had emacs set up for it (just parens for Scheme programming)
- I mostly use HAPedit, which does it pretty nicely, but I'm sort of biased there because I'm the one who made the feature request and so it's sort of done to the way I like it.
- I think TextPad, EditPad and UltraEdit can all do this, but I don't use them so I don't know.
Cheers,
Package Details: iortcw-git 1.51c.r33.gefd26577-1
Dependencies (17)
- git (git-git) (make)
- iortcw-de (optional) – Deutsch Language
- iortcw-es (optional) – Espanol Language
- iortcw-fr (optional) – Francais Language
- iortcw-it (optional) – Italian Language
Latest Comments
r4v3n6101 commented on 2020-08-03 12:24
cmake isn't required while building, remove it plz
alexbrinister commented on 2019-10-27 20:14
libjpeg-turbo should be part of makedepends. When built in a chroot, the package fails to build with an error about missing
jpeglib.h.
browniesrgut commented on 2019-10-14 20:12
Hello,
The package failed at the symbolic links parts
ln: cannot create symbolic link '.../iortcw-git/pkg/iortcw-git/opt/realrtcw/main'...
I had to add "mkdir -p $pkgdir/opt/realrtcw/main" before "ln" commands in PKGBUILD. Package builds fine after that.
Neros commented on 2017-08-25 10:53
I can't compile anymore:
```
In file included from code/client/cl_main.c:35:0:
code/client/../sys/sys_loadlib.h:42:12: fatal error: SDL.h: No such file or directory
# include <SDL.h>
^~~~~~~
compilation terminated.
make[2]: *** [Makefile:2502: build/release-linux-x86_64/client/cl_main.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory '/tmp/yaourt-tmp-neros/aur-iortcw-git/src/iortcw/SP'
make[1]: *** [Makefile:1342: targets] Error 2
make[1]: Leaving directory '/tmp/yaourt-tmp-neros/aur-iortcw-git/src/iortcw/SP'
make: *** [Makefile:1260: release] Error 2
==> ERROR: A failure occurred in package().
Aborting...
==> ERROR: Makepkg was unable to build .
```
robertfoster commented on 2015-11-10 16:53
moved to iortcw-git.
mpz commented on 2015-11-08 23:45
build() blindly prints "pak0.pk3 doesn't exist. This process will be terminated" and then:
==> Starting package()...
/tmp/iortcw-svn/PKGBUILD: line 38: cd: SP: No such file or directory
==> ERROR: A failure occurred in package().
Aborting...
mpz commented on 2015-11-08 23:14
==> ERROR: Cannot find the git package needed to handle git sources.
bebehei commented on 2014-12-29 15:13
Hi, I had to install the package openal to compile the package successfully. Could you please add this to the dependency-array?
Thanks.
Here is the error-message:
In file included from code/client/snd_openal.c:30:0:
code/client/qal.h:45:21: fatal error: AL/al.h: No such file or directory
#include <AL/al.h>
^
compilation terminated.
Makefile:2458: recipe for target 'build/release-linux-x86_64/client/snd_openal.o' failed
make[2]: *** [build/release-linux-x86_64/client/snd_openal.o] Error 1
|
https://aur.archlinux.org/packages/iortcw-git/
|
CC-MAIN-2020-50
|
refinedweb
| 422
| 52.76
|
Introduction to Regular Expression in C#
Pattern matching in C# is done using regular expressions, and the Regex class of C# is used for creating them. A regular expression sets a standard for pattern matching in strings (and for replacement): it tells the computer how to look for a specific pattern in a string and what the response must be when it finds that pattern. "Regex" is the common abbreviation for regular expression. Overall, regular expressions in C# are a powerful way to identify and replace text in strings that is defined in a particular format.
Syntax
The following is a list of the basic syntax used for regular expressions in C#. They are:
1. Quantifiers
The list of important quantifiers are as follows:
- *: The preceding character is matched zero or more times. Consider the regular expression c*d. This expression matches d, cd, ccd, cccd, and so on (any number of c's followed by d).
- +: The preceding character is matched one or more times. Consider the regular expression c+d. This expression matches cd, ccd, cccd, and so on (one or more c's followed by d).
- ?: The preceding character is matched zero or one time. Consider the regular expression c?d. This expression matches d, cd.
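These quantifiers are not C#-specific; the same patterns behave identically in most regex engines. As an illustrative cross-check (shown with Python's `re` module only because it shares this syntax — a sketch, not C# code):

```python
import re

# '*' matches the preceding character zero or more times: c*d matches d, cd, ccd, ...
assert re.fullmatch(r"c*d", "d") is not None
assert re.fullmatch(r"c*d", "cccd") is not None

# '+' matches it one or more times: c+d matches cd but not d
assert re.fullmatch(r"c+d", "d") is None
assert re.fullmatch(r"c+d", "ccd") is not None

# '?' matches it zero or one time: c?d matches d and cd only
assert re.fullmatch(r"c?d", "cd") is not None
assert re.fullmatch(r"c?d", "ccd") is None
```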
2. Special Characters
The list of important special characters are as follows:
- ^: The beginning of the string is matched using this special character. Consider the example ^Karnataka. This expression matches Karnataka is our state.
- $: The end of the string is matched using this special character. Consider the example Karnataka$. This expression matches Our state is Karnataka.
- Dot (.): Any single character is matched using this special character. Consider the example l.t (a three-character pattern). This expression matches lit, lot, let.
- \d: A digit character is matched using this special character; it is equivalent to [0-9]. This expression matches 123, 456, 254, etc.
- \D: Any non-digit character is matched using this special character; it is equivalent to [^0-9]. This expression matches everything except digit characters.
- \w: An alphanumeric character plus "_" is matched using this special character; it is equivalent to [A-Za-z0-9_].
- \W: Any non-word character is matched using this special character. Consider the example \W. This expression matches "." in "IR B2.8"
- \s: White space characters are matched using this special character. Consider the example \w\s. This expression matches "C " in "IC B1.5"
- \S: Non-white space characters are matched using this special character. Consider the example \s\S. This expression matches " B" in "IC B1.5" (a space followed by a non-space character)
3. Character Classes
The characters can be grouped by putting them between square brackets. By doing this, any single character in the class can match one character of the input.
[]: A range of characters can be matched using []. Consider the example [xyz]. This expression matches any one of x, y, and z.
Consider the example [c-r]. This expression matches any of the characters between c and r.
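Character classes can be checked the same way; a small Python sketch of the two examples above (illustrative only, since the class syntax is shared across engines):

```python
import re

# [xyz] matches any one of the characters x, y, or z
assert re.fullmatch(r"[xyz]", "y") is not None
assert re.fullmatch(r"[xyz]", "a") is None

# [c-r] matches any single character in the range c..r
assert re.fullmatch(r"[c-r]", "m") is not None
assert re.fullmatch(r"[c-r]", "s") is None
```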
4. Grouping and Alternatives
Things can be grouped together using the parentheses ( and ).
- (): Expressions can be grouped using (). Consider the example (ab)+. This expression matches ab and abab, but does not match aabb.
- {}: Matches the preceding element a specific number of times. The count can be specified using the following:
- {n}: The previous element is matched exactly n times. Consider the example ",\d{3}". This expression matches ",123" in "1,123.40"
- {n,m}: The previous element is matched at least n times but not more than m times. Consider the example ",\d{2,3}". This expression can match ",12" or ",123"; the greedy default matches ",123" in "1,123.40"
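Grouping and counted repetition can be verified quickly as well; another hedged Python sketch of the examples above:

```python
import re

# (ab)+ matches one or more repetitions of the whole group "ab"
assert re.fullmatch(r"(ab)+", "abab") is not None
assert re.fullmatch(r"(ab)+", "aabb") is None

# ,\d{3}: a comma followed by exactly three digits
assert re.search(r",\d{3}", "1,123.40").group() == ",123"

# ,\d{2,3}: two or three digits after the comma; the greedy default takes three
assert re.search(r",\d{2,3}", "1,123.40").group() == ",123"
```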
Working of Regular Expressions in C#
Basically, there are two types of regular expression engines: text-directed engines and regex-directed engines. A regex-directed engine scans through the regex expression, trying to match the next token in the regex against the next character of the input. If a match is found the engine advances; otherwise it backtracks to a previous position in the regex and in the string, where it can try a different path through the expression. A text-directed engine scans through the string, trying all permutations of the regex expression before moving to the next character in the string; there is no backtracking in a text-directed engine. The regex engine always returns the leftmost match, even if an exact match could also be found later in the string. Whenever a regex is applied to a string, the engine begins with the first character, tries all possible permutations there, and if they fail, moves on to the second character, continuing this process until it finds a match.
Consider the sentence "Check the water in the bathtub before going to bath", and ask the regex engine to find the word bath. The engine first tries to match b against the first character, C, and fails. It then tries the next character, h, and fails again. This continues until the engine reaches the 24th character, b, which matches. From there it matches the word bath inside bathtub, reports that as the match, and stops; it does not continue through the rest of the sentence looking for other matches. This is how the regex engine works internally.
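The leftmost-match rule in that walkthrough is easy to demonstrate; a short Python sketch (the behavior is the same in the .NET engine):

```python
import re

sentence = "Check the water in the bathtub before going to bath"

# The engine reports the leftmost match: "bath" inside "bathtub",
# even though a standalone "bath" appears later in the sentence.
m = re.search(r"bath", sentence)
assert m.group() == "bath"
assert sentence[m.start():m.start() + 7] == "bathtub"  # the match sits inside "bathtub"
```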
Methods of Regular Expression in C#
The regular expression in C# makes use of the following methods. They are:
- public bool IsMatch(string input): The regular expression specified by the regex constructor is matched with the specified input string using this method.
- public bool IsMatch(string input, int startat): The regular expression specified by the regex constructor is matched with the specified input string with the starting position specified, using this method.
- public static bool IsMatch(string input, string pattern): The method matches the regular expression specified with the input string specified.
- public MatchCollection Matches(string input): All the occurrences of a regular expression are searched in the specified input string, using this method.
- public string Replace(string input, string replacement): All substrings matching the regular expression are replaced by the replacement string, using this method.
- public string[] Split(string input): The input string is split into an array of substrings at the positions matched by the regular expression, using this method.
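These Regex methods have direct counterparts in most regex libraries; as an illustrative mapping (a Python sketch, not the C# API), IsMatch, Matches, Replace and Split correspond roughly to search, findall, sub and split:

```python
import re

pattern = re.compile(r"\d+")

# IsMatch ~ search: is there any match in the input?
assert pattern.search("room 42") is not None

# Matches ~ findall: every occurrence in the input
assert pattern.findall("a1 b22 c333") == ["1", "22", "333"]

# Replace ~ sub: swap each match for the replacement string
assert pattern.sub("#", "a1 b22") == "a# b#"

# Split ~ split: cut the input at each match
assert pattern.split("a1b22c") == ["a", "b", "c"]
```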
Example on Regular Expression in C#
C# program to demonstrate the use of regular expressions for the verification of mobile numbers.
Code:
using System;
using System.Text.RegularExpressions;
class Check
{
    static void Main(string[] args)
    {
        // Mobile numbers are given as input in an array of strings
        string[] nos = { "9902147368", "9611967273", "63661820954" };
        foreach (string s in nos)
        {
            Console.WriteLine("The mobile number {0} {1} a valid number.", s,
                checkvalid(s) ? "is" : "is not");
        }
        Console.ReadKey();
    }

    // Regex expressions are verified through this code block
    public static bool checkvalid(string Number)
    {
        string cRegex = @"(^[0-9]{10}$)|(^\+[0-9]{2}\s+[0-9]{2}[0-9]{8}$)|(^[0-9]{3}-[0-9]{4}-[0-9]{4}$)";
        Regex res = new Regex(cRegex);
        return res.IsMatch(Number);
    }
}
Output:
The mobile number 9902147368 is a valid number.
The mobile number 9611967273 is a valid number.
The mobile number 63661820954 is not a valid number.
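The same validation logic can be cross-checked outside C#; here is a hedged Python port of checkvalid using the identical pattern (with the stray space after `[0-9]` removed):

```python
import re

# 10 plain digits, or "+CC" plus 2 digits and 8 digits, or 3-4-4 dashed groups
PHONE_RE = re.compile(
    r"(^[0-9]{10}$)|(^\+[0-9]{2}\s+[0-9]{2}[0-9]{8}$)|(^[0-9]{3}-[0-9]{4}-[0-9]{4}$)"
)

def checkvalid(number: str) -> bool:
    return PHONE_RE.match(number) is not None

for s in ["9902147368", "9611967273", "63661820954"]:
    print(f"The mobile number {s} {'is' if checkvalid(s) else 'is not'} a valid number.")
```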
|
https://www.educba.com/regular-expression-in-c-sharp/
|
CC-MAIN-2020-45
|
refinedweb
| 1,308
| 57.77
|
Charts and graphs are definitely a step up from viewing data in a spreadsheet. Let's take a look at how Bokeh can take visualizations even further.
Country coordinate URL:
Code for parsing JSON Data:
def get_coordinates(features):
    depth = lambda L: isinstance(L, list) and max(map(depth, L)) + 1
    country_id = []
    longitudes = []
    latitudes = []
    for feature in features:
        coordinates = feature['geometry']['coordinates']
        number_dimensions = depth(coordinates)
        # one border
        if number_dimensions == 3:
            country_id.append(feature['id'])
            points = np.array(coordinates[0], 'f')
            longitudes.append(points[:, 0])
            latitudes.append(points[:, 1])
        # several borders
        else:
            for shape in coordinates:
                country_id.append(feature['id'])
                points = np.array(shape[0], 'f')
                longitudes.append(points[:, 0])
                latitudes.append(points[:, 1])
    return country_id, longitudes, latitudes
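The depth lambda in get_coordinates is what tells single-polygon countries (depth 3) apart from multi-polygon ones; as a standalone sketch with made-up sample coordinates:

```python
# Nesting depth of a list structure: a flat list is depth 1, a list of
# lists is depth 2, and so on (non-lists contribute depth 0).
depth = lambda L: isinstance(L, list) and max(map(depth, L)) + 1

single_border = [[[30.0, 3.5], [31.0, 3.6], [30.5, 4.0]]]   # one polygon
multi_border = [[[[120.0, 14.0], [121.0, 14.5], [120.5, 15.0]]],
                [[[122.0, 10.0], [123.0, 10.5], [122.5, 11.0]]]]

print(depth(single_border))  # -> 3, handled by the "one border" branch
print(depth(multi_border))   # -> 4, handled by the "several borders" branch
```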
Now that we have seen how we can plot data to see if there's potentially 0:00
a correlation between two factors, like population and life expectancy. 0:04
It might be interesting to see if we can visualize our data in a different way, 0:09
something beyond the normal bar chart, or scatter plot. 0:13
First though, now that we have a basic understanding of Bokeh, 0:17
let's take a slight step back and 0:21
see a few other features that Bokeh provides in order to see if we can 0:23
generate a potentially more interesting visualization of a population data. 0:27
We have been using circle glyphs for 0:31
our scatter plot, which have been very relevant for our work this far. 0:33
Another method we can call in our figure object is called patches, 0:38
which allows us to draw shapes based on parsed-in coordinates. 0:42
We can then use the fill color parameter to set the color of our shape. 0:47
Let's start with some basic shapes to see how this works, 0:52
along with some additional parameters we can use and see if we can't generate 0:55
something that might be interesting with our raw population data. 0:59
To get started, let's go into a new file called shapes.py, 1:03
and set up our imports from bokeh.plotting, 1:08
we'll want figure, output_file and show. 1:11
Now we need to name our output file. 1:17
Lets call it shapes.html. 1:19
Next, we need to create our figure and assign it a name. 1:22
For now, lets just call it plot. 1:25
We'll make the width and height 400 pixels. 1:28
And we'll give it a title called shape. 1:39
Great, now we're wanting to draw a shape. 1:42
For a single shape, we will use the patch method. 1:46
But we'll see the patches method here shortly as well. 1:49
We provide a list of x values to use along with an equal length list of y values into 1:52
the patch method, which results in that x, y coordinate pairing again. 1:57
Then let's assign a color, an alpha value, and a line width of two pixels. 2:01
For patch, we'll do 1, 2, 3 and 4 for x values, and try 7, 12, 9, and 3 for our y values. 2:08
For our color, we'll use #2B5B84 for 2:18
the hex value, and 0.7 for the alpha. 2:25
Line_width = 2, which looks like it puts us or 2:32
our [INAUDIBLE] for line length so we'll knock that down there. 2:35
That's looking good. 2:41
Now we add our method to show our plot. 2:43
And run our script. 2:47
We see that Bokeh generates our shapes.html output file and 2:53
displays it in our browser thanks Bokeh. 2:57
We should see a four sided polygon, 3:00
filled with a slightly transparent blue color displayed in our browser. 3:03
But what if we want to draw multiple shapes on the same figure? 3:06
That's where patches comes in. 3:10
As the names would indicate, we will use patch for single polygons and 3:12
patches for displaying multiple polygons. 3:16
Let's generate another plot. 3:19
Plot multiple. 3:24
Will use the same plot width and height. 3:24
And we'll call it multiple shapes. 3:36
Now, to generate multiple shapes, 3:43
we need to parse in multiple arguments into our parameters in the form of list. 3:45
For our coordinates, we parse in a list of the x values for 3:50
our shapes, then a list of our y values. 3:53
Again, the list of the x and 3:56
y coordinates have to be the same length overall for each shape itself. 3:58
Since we'll be generating multiple polygons, we will want the patches method, 4:02
plot_multiple.patches. 4:07
As values for our first shape, we'll use [1, 1, 4:12
4, 4], [3, 5, 9] for a second shape. 4:17
y values [1, 4, 4, 1], [1, 4:22
9, 3] for our second shape. 4:27
It should come as no surprise here. 4:31
They will need lists of the same length for our color, alpha, 4:33
and line width values as well. 4:37
Hash first color, For our second color, 4:39
First alpha and the second alpha. 4:55
And line widths of 2 and 3. 5:03
Let's display this new plot at that same time next to the previous plot. 5:05
For that, we'll pass in row into the show method. 5:09
And we can't forget to import row. 5:23
From bokeh.layout import row. 5:33
Let's run our script and take a look. 5:38
There, we have our two plots. 5:43
One with a single blue tone polygon and the latest one with a green square and 5:45
overlapping gray triangle. 5:50
Now that we have some experience with plotting shapes, 5:52
let's get into drawing something a little more complex. 5:55
We should be able to use longitude and latitude coordinates as our x and 5:58
y values to generate a world map. 6:02
I have included a link to a JSON resource that has longitude and 6:05
latitude coordinates for most of the countries and land masses of the world. 6:09
We'll read in the coordinates as x and y values for 6:13
a patches plot and then have Bokeh draw our map. 6:16
Let's move into a new file called the world_map.py. 6:20
To get started, we will want to import, request, and numpy. 6:24
We'll use request to get our JSON data, 6:28
and we'll want to bring it on our Bokeh import as well. 6:30
We'll need output_file and show, and since we'll be using HoverTool, 6:32
column data source and figure, we'll want to import those as well. 6:36
ColumnDataSource and figure. 6:46
We'll name our output_file world-map.html and 6:52
we'll give it a title of World Map We'll 6:58
create a URL convenience variable for our JSON string. 7:05
Again, check the teachers notes. 7:10
We'll be using the same tools we have been using. 7:13
So let's make it tools variable for those two. 7:16
Pan, wheel_zoom, box_zoom, 7:21
reset, hover and save. 7:27
Next, we need to use request to get the JSON data from our URL, 7:33
request.get our (url) and our json_data 7:39
We get our json_data. 7:46
Okay, this data set is a little interesting in that the coordinates for 7:48
each country can be represented at different depths in the JSON object, 7:52
dependent on the continuity of the country's borders. 7:57
If a country is a single polygon such as Uganda, 8:00
it is represented in a single list of coordinates. 8:04
If however, it is a multi-polygon, for example, the Philippines with multiple 8:07
different land masses, it is represented in a multi-depth list in the JSON data. 8:12
I have provided a function that parses through these different cases and 8:17
generates the required x and y pairs for Bokeh to use. 8:21
Check the teachers notes for a copy of the code. 8:25
Now we can get our coordinates and set up our plot. 8:28
We'll configure our plot, call this plot world_map_plot. 8:45
We'll make the plot width a little wider here to make the map look correct. 8:52
Our tools will be our TOOLS. 9:11
X_range of the graph will be -180 to 180. 9:15
y_range -90 to 90. 9:22
There we go. 9:28
Now we assign it the patches method. 9:31
For our x values we'll use longs, for y values we'll use lats. 9:35
Our fill_color, 9:42
let's use #F1EEF6, 9:46
fill_alpha 0.7 and 9:50
line_width will be 2. 9:55
Now we can show our plot. 10:04
And run. 10:10
We are presented with a nice visualization of a map of the world. 10:14
We can definitely use this map for our world population data visualization. 10:17
We'll need to convert our country data with latitudes and 10:22
longitudes into a pandas data frame, 10:25
merge it with our population data, and use that as a column data source. 10:29
Let's create an empty list to handle our country geospacial data. 10:33
Country_coords, and it populates 10:36
that list with our information. 10:42
Land_mass. 10:58
Country_code is gonna be country. 11:04
Longitudes Longs. 11:21
We'll append land_mass to our country coordinates. 11:36
Then, we can convert that into a pandas data frame to prepare it for 11:44
an upcoming merge. 11:48
But we didn't import pandas. 11:58
So let's go back up and do that. 12:00
There we go. 12:07
All right, next, we bring in our population data we were working with in 12:11
an earlier stage, as a data frame. 12:15
Input_file, these are country-pops.csv. 12:20
Country populations. 12:33
There we go. 12:40
And then we merge the two datasets together using pandas merge function for 12:42
use as a column data source. 12:47
We can merge on the three letter ISO value from country-pops.csv. 12:49
And then country code value from the JSON file in an outer join cell merge. 12:54
Three from our country codes. 13:09
Country code and how is outer. 13:17
With our merged data, we can create a column data source. 13:27
Then, we can figure our patches plot to utilize our column data source 13:41
information. 13:46
So we'll replace our x values here. 13:48
And then add in our source, As our map data. 14:08
Finally, we generate our hover tools. 14:14
In our hover tools, we'll keep the same data as we've seen before, 14:17
country name, population, and life expectancy. 14:21
Feel free to add in additional data to the tool tips on your own. 14:25
We want country name in English To map to our country English, right? 14:56
Population. 15:12
Map to population. 15:19
Life Expectancy in years. 15:23
To map to life expectancy. 15:32
Let's try it out. 15:39
Awesome. 15:43
Now when we hover over a country, we get our data included with the hover tool. 15:44
We can examine the different countries on the map and 15:48
visually see, in an interactive fashion, what the data is for each country. 15:52
Further, since we are all generally familiar with the world map, 15:57
it provides much better context for 16:01
us about where population centers are on a worldwide level. 16:03
That is in my opinion a pretty nice 16:07
upgrade to visualizing our world population data in a spreadsheet. 16:10
This is just the beginning of your journey with data visualization though. 16:16
There are many, many different ways to explore datasets and 16:19
we have just scratched the surface for what Bokeh can do. 16:22
Leave a message in the community forum or contact me on Twitter and 16:25
let me know how you have used Bokeh to better understand data. 16:29
I'll see you next time, and happy coding. 16:32
|
https://teamtreehouse.com/library/plotting-the-world
|
CC-MAIN-2021-31
|
refinedweb
| 2,199
| 74.69
|
Roman Zippel wrote:
> On Tue, 11 Jul 2006, Andrew Morton wrote:
> [...]
>> What, actually, is the problem?
>
> It changes the behaviour, it will annoy the hell out of people like me who
> have to deal with different kernels and expect this to just work. :-(
> Since then has it been acceptable to just go ahead and break stuff? This
> problem doesn't really look unsolvable, so why is my request to fix the
> damn thing so unreasonable?

Ok, what about this one?

I don't have time to test it (it compiles, at least), but it seems the
logic is pretty clear: once you have pressed both "Alt" and "SysRq",
sysrq mode becomes active until you release *both* keys. In this mode
any regular key press triggers handle_sysrq.

This allows for all the combinations mentioned before in this thread
and makes the logic simpler, IMHO.

-- 
Paulo Marques

Boss: I don't see anything that could stand in our way.
Dilbert: Sanity? Reality? The laws of physics?

Subject: allow the old behavior of Alt+SysRq+<key>

This should allow any order of Alt + SysRq press followed by any key
while still holding one of SysRq or Alt.

Signed-off-by: Paulo Marques <pmarques@grupopie.com>

--- ./drivers/char/keyboard.c.orig	2006-07-12 13:03:32.000000000 +0100
+++ ./drivers/char/keyboard.c	2006-07-12 14:18:52.000000000 +0100
@@ -150,7 +150,7 @@ unsigned char kbd_sysrq_xlate[KEY_MAX +
 	"230\177\000\000\213\214\000\000\000\000\000\000\000\000\000\000" /* 0x50 - 0x5f */
 	"\r\000/";					/* 0x60 - 0x6f */
 static int sysrq_down;
-static int sysrq_alt_use;
+static int sysrq_active;
 #endif
 static int sysrq_alt;
@@ -1164,16 +1164,17 @@ static void kbd_keycode(unsigned int key
 		printk(KERN_WARNING "keyboard.c: can't emulate rawmode for keycode %d\n", keycode);
 #ifdef CONFIG_MAGIC_SYSRQ	       /* Handle the SysRq Hack */
-	if (keycode == KEY_SYSRQ && (sysrq_down || (down == 1 && sysrq_alt))) {
-		if (!sysrq_down) {
-			sysrq_down = down;
-			sysrq_alt_use = sysrq_alt;
-		}
+	if (keycode == KEY_SYSRQ) {
+		sysrq_down = down;
 		return;
 	}
-	if (sysrq_down && !down && keycode == sysrq_alt_use)
-		sysrq_down = 0;
-	if (sysrq_down && down && !rep) {
+
+	if (sysrq_down && sysrq_alt)
+		sysrq_active = 1;
+	else if (!sysrq_down && !sysrq_alt)
+		sysrq_active = 0;
+
+	if (sysrq_active && down && !rep) {
 		handle_sysrq(kbd_sysrq_xlate[keycode], regs, tty);
 		return;
 	}
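The state machine this patch introduces is small enough to simulate in user space. Here is an illustrative Python sketch (not kernel code; names are paraphrased) of the intended behavior: once Alt and SysRq are both down, sysrq mode stays active until both are released, regardless of press order:

```python
def make_handler():
    # Mirrors sysrq_alt, sysrq_down and sysrq_active from the patch.
    state = {"alt": False, "sysrq": False, "active": False}

    def key_event(key, down):
        if key == "alt":
            state["alt"] = down
        elif key == "sysrq":
            state["sysrq"] = down

        # Latch on when both keys are held; clear only when both are up.
        if state["sysrq"] and state["alt"]:
            state["active"] = True
        elif not state["sysrq"] and not state["alt"]:
            state["active"] = False

        # While active, any other key press would trigger handle_sysrq().
        return state["active"]

    return key_event

ev = make_handler()
ev("alt", True)
print(ev("sysrq", True))   # both held -> True
print(ev("sysrq", False))  # Alt still held -> still True
print(ev("alt", False))    # both released -> False
```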
|
http://lkml.org/lkml/2006/7/12/129
|
CC-MAIN-2014-10
|
refinedweb
| 349
| 58.28
|