CCPN Tkinter Graphical Objects

General Principles

This last part of the programming tutorial involves the creation of graphical user interfaces. Writing a Python macro script for Analysis may be sufficient for simple, linear tasks, but a bespoke graphical interface allows for more complex workflows.

Widget Classes

The graphical user interfaces demonstrated here are constructed from graphical objects, which will be referred to as widgets. A widget is a Python object that has a graphical representation. Examples of widgets include popup windows (separate moveable areas on your screen that contain other widgets), buttons that you click to perform operations, and entry fields that you can type text into. The basic idea is that you create graphical widgets to enable a user to set various parameters, perform operations and display results. All of the graphical widgets that we will demonstrate for use with CCPN are based on the Tkinter library (the Python interface to the Tcl/Tk system); however, the CCPN widgets add a layer of Python that mostly shields you from the underlying Tkinter complexities and awkwardness.

Geometry Management

An important concept when building graphical interfaces is arranging the widgets in an appropriate way. With the Tkinter-based graphics system that CCPN currently uses there are three possible ways of specifying the locations of different widgets. Of these, CCPN mostly uses the grid geometry manager (the alternatives being place and pack). As the name suggests, this system allows you to locate widgets inside their window (or other parent) by specifying the row and column of an internal grid.
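As a conceptual model only (this is neither Tkinter nor CCPN code), a grid geometry manager can be thought of as a mapping from (row, column) cells to widgets, where the grid is only as large as the cells actually occupied. The hypothetical sketch below just tracks placements:

```python
# Hypothetical sketch of the idea behind grid geometry management;
# not part of the Tkinter or CCPN libraries.
class GridModel:

    def __init__(self):
        self.cells = {}  # maps (row, column) -> widget name

    def place(self, name, row, column):
        # Placing a widget claims one cell of the grid.
        self.cells[(row, column)] = name

    def shape(self):
        # The grid adapts to its contents: its size is just
        # one more than the largest occupied row and column.
        rows = 1 + max(r for r, c in self.cells)
        cols = 1 + max(c for r, c in self.cells)
        return rows, cols

grid = GridModel()
grid.place('label', 0, 0)
grid.place('entry', 0, 1)
grid.place('button', 1, 0)
# grid.shape() -> (2, 2): two rows, two columns
```

A real grid manager additionally handles cell sizes, expansion weights and sticky edges, which the tutorial covers below.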
The grid is not of a fixed size and adapts to the components it contains, although you often have to be aware of how a widget inside a grid cell expands; sometimes you want a widget to stick to the edges of its cell and sometimes not, and given a whole grid you might want some rows and columns to respond to resizing while others remain fixed.

Updates & Callbacks

Having pretty graphical widgets arranged on your screen is just the start. Eventually you want your widgets to actually do something. A widget may change its appearance (i.e. update) to reflect the state of your data, or it may respond to the actions of the user, or both. All of this is controlled by calling Python functions. Accordingly, when building graphical interfaces you have to be mindful of which events should call the update functions that change your graphics, and which functions are called (i.e. callbacks) when the user selects or clicks on something.

Widget Construction

Initially this part of the tutorial will give a simple guide to the basic placement and construction of widgets, without giving a proper scientific example; that will follow in the next section. The first thing that is required when using the CCPN widgets is to create a top-level object to which all of the other graphical elements belong. This is usually called the root, and below we explicitly create a new one. However, it is also common to extend an existing graphical setup. In such circumstances the top-level object will already exist and you simply have to add your new components to it (or to one of its children). We will create a new root by using a function (method) called directly from the Tkinter library, but this will be the only time that we use the Tkinter object directly. All following construction will be with the CCPN widget library.
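The update-and-callback idea described above can be sketched without any graphics at all: a widget simply stores a function and calls it when an event arrives. The following is a hypothetical illustration of the pattern, not the CCPN or Tkinter implementation:

```python
# Hypothetical illustration of the callback pattern; not real widget code.
class EventSource:

    def __init__(self, callback=None):
        self.callback = callback  # function to call when an event fires
        self.history = []         # record of events, for inspection

    def fire(self, event):
        # A real widget would do this in response to a mouse click,
        # a key press, a selection change, etc.
        self.history.append(event)
        if self.callback is not None:
            self.callback(event)

received = []
source = EventSource(callback=received.append)
source.fire('clicked')
# received == ['clicked']
```

The CCPN widgets below follow this shape: you hand them a function (via command=, callback= or select_callback=) and they call it for you when the user acts.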
import Tkinter
root = Tkinter.Tk()

Or you could do:

from Tkinter import Tk
root = Tk()

This root object is the parent window to which everything else will belong and be embedded within. Note that it is possible to make new windows which are not inside the root, but we will come to that later. After you have issued this at the Python command line you will hopefully see a new box/window appear on screen which you can resize and close. Note that if you are entering commands into a file and running the Python as a script, you will have to add the following command to make the graphics persist; otherwise your program will complete its execution and remove the graphics immediately after rendering them.

# If running from a script
root.mainloop()

Given our top-level box, we will place a text label inside. This means we import a CCPN widget class called Label and then make an instance of this kind of object, specifying the container it goes in (root in this case), the text it should carry and the location on the geometry grid.

from memops.gui.Label import Label
label = Label(root, text='This is a text example.', grid=(0,0))

Hopefully you will now see your new Label object appear as text within the existing window. Observe how you can change the text of an existing label:

label.set('Some totally different text')

Note that if you did not include the grid information initially then your object would not appear on screen; there would be no information on how to locate it. However, you could easily specify the location after the widget is created by using the grid method of the widget (part of its Tkinter specification), as follows:

label2 = Label(root, text='A second example.')
label2.grid(row=1, column=0)

Note that this second Label appears below the first because it is in row 1 rather than row 0. Now put the labels side by side, with different grid parameters:

label2.grid(row=0, column=1)

Grab the corner of the root window and enlarge it.
You will see that the Labels remain at the centre of the window. In essence the grid system has given the widgets the minimum space they need and positioned them in the middle. We will now change this by giving the first column (number zero) priority:

root.grid_columnconfigure(0, weight=1)

You will see that this command separates the Labels, because the first column has expanded. Next we will move the first label to the right-hand side of its column (as you can see, the default is to the left). Note that the way of specifying directions in the Tkinter grid system is with letters representing the cardinal compass directions, like 'N', 'S', 'SE', 'NW' etc.

label.grid(sticky='E')

Now try:

label.grid(sticky='NW')

This does not move the Label to the top-left as you might expect, only to the left. This is because our row does not expand. If you give the row weight, it will move:

root.grid_rowconfigure(0, weight=1)

Frames

Frames are simple widgets that contain other widgets, and they can be very useful in creating arrangements. They must exist inside a given window (like the Label above) but they have their own internal widget placement (grid in this case). Close any existing windows you may have created and construct a new one as follows, noting that we are using a new type of widget called Entry, into which you can type text. The second column (1) is set to expand and we make sure the Entry widget expands to touch at either end (i.e. East and West).

import Tkinter
from memops.gui.Label import Label
from memops.gui.Entry import Entry

root = Tkinter.Tk()
root.grid_columnconfigure(1, weight=1)

label = Label(root, text='Label A', grid=(0,0))
entry = Entry(root, text='Type here', grid=(0,1), sticky='EW')

Now we have a grid system with two columns.
Next we expand the second row and add a Frame with a red background to the window such that it covers both columns, by specifying gridSpan (one row, two columns):

from memops.gui.Frame import Frame

root.grid_rowconfigure(1, weight=1)
frame = Frame(root, grid=(1,0), bg='red', gridSpan=(1,2))

We could also have used the long form, noting that unless we specify that the frame sticks to all sides ('NSEW') we won't actually see the frame until something is placed inside it, causing it to expand.

# Long, traditional Tkinter form
frame = Frame(root, bg='green')
frame.grid(row=1, column=0, columnspan=2, sticky='NSEW')

Now inside our frame we will place a Button widget that we can press to execute a command. In this case the command is a very simple function we write for demonstration purposes. Take special note that we don't use root for constructing the button; we use the frame we just made, and so the grid location for the button (0,0) is relative to the inside of the frame.

from memops.gui.Button import Button

def clickFunc():
  print "Button was pressed"

button = Button(frame, text='Press', command=clickFunc, grid=(0,0))

Left-click on the button and see that it produces output at the Python command line, as specified by the clickFunc() call. If we wanted a really big button that expands with the frame, we would need to make the button stick to the sides:

button.grid(sticky='NSEW')

And then expand the frame:

frame.expandGrid(0,0)

# or use the long, traditional form
frame.grid_columnconfigure(0, weight=1)
frame.grid_rowconfigure(0, weight=1)

Widget Variety

The next example is just to illustrate the variety of different widgets that you can use. We will actually make some of these work properly in the next section.
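The command= wiring shown for the Button above can be mimicked in plain Python. The stand-in below is hypothetical (it is not the CCPN Button class); it just stores the function and calls it on a simulated press, which is essentially what the widget does when you click it:

```python
# Hypothetical stand-in for a Button's command= wiring; not the real widget.
class FakeButton:

    def __init__(self, command=None):
        self.command = command  # the function to run when pressed

    def press(self):
        # A real Button does this in response to a left mouse click.
        if self.command is not None:
            self.command()

presses = []
button = FakeButton(command=lambda: presses.append('Button was pressed'))
button.press()
button.press()
# presses == ['Button was pressed', 'Button was pressed']
```

Note that the command function takes no arguments; any data it needs must come from the enclosing scope, which is why callbacks are often written as small dedicated functions like clickFunc() above.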
So first import the widget classes:

from memops.gui.CheckButton import CheckButton
from memops.gui.Frame import Frame
from memops.gui.LabelDivider import LabelDivider
from memops.gui.LabelFrame import LabelFrame
from memops.gui.LinkChart import LinkChart
from memops.gui.Menu import Menu
from memops.gui.PulldownList import PulldownList
from memops.gui.RadioButtons import RadioButtons
from memops.gui.ScrolledGraph import ScrolledGraph
from memops.gui.ScrolledMatrix import ScrolledMatrix
from memops.gui.Text import Text

Then make a window:

import Tkinter
root = Tkinter.Tk()
root.grid_columnconfigure(0, weight=1)

Define some functions to call:

def functionA(*value):
  print "Called function A", value

def functionB(*value):
  print "Called function B", value

And make some widget instances inside the root window. First there will be a menu at the top of the window. Note that we do not put the menu into the grid system; instead it is passed as a special menu option to the root window:

menu = Menu(root)
menu.add_command(label='Func A', shortcut='A', command=functionA)
menu.add_command(label='Func B', shortcut='B', command=functionB)
root.config(menu=menu)

A large text entry area, five rows high:

blurb = """ If you want to see how to setup the widgets in more detail, look at the example code at the bottom of the files in the $CCPN_HOME/python/memops/gui/ directory."""

text = Text(root, grid=(1,0), height=5, text=blurb)

Radio buttons. Clicking on the buttons calls the functions we defined earlier; clicking the different RadioButtons changes the selection to 'one', 'two' or 'three'.

radioButtons = RadioButtons(root, ['one', 'two', 'three'],
                            select_callback=functionA, grid=(3,0))

A bordered, labelled frame with a pulldown list inside. Note that the pulldown list takes a list of text strings for the display, but (if specified) passes back an object from a separate list.
labelFrame = LabelFrame(root, text='This is a LabelFrame', grid=(4,0), gridSpan=(1,2))
labelFrame.expandGrid(2,0)

texts = ['One','Two','Three']
objects = [1,2,3]
pulldown = PulldownList(labelFrame, callback=functionA, texts=texts,
                        objects=objects, grid=(0,0))

A labelled separator, to split the frame in two:

divider = LabelDivider(labelFrame, text='New Section', grid=(1,0))

A table that gets filled in with some numbers and text (including Unicode for Greek characters) using the update() function:

headingList = ['#','Name','Square','Greek']
table = ScrolledMatrix(labelFrame, headingList=headingList,
                       callback=functionB, grid=(2,0))

textMatrix = [[1,'One',1.00,u'\u03B1'],
              [2,'Two',4.00,u'\u03B2'],
              [3,'Three',9.00,u'\u03B3'],
              [4,'Four',16.00,u'\u03B4']]
objectList = [1,2,3,4]
table.update(objectList=objectList, textMatrix=textMatrix)

A graph, showing some maths:

import math

numbers = [float(x/10.0) for x in range(1,11)]
dataSets = []
dataSets.append([(x,x*x) for x in numbers])
dataSets.append([(x,math.exp(x)) for x in numbers])
# The third value of 0.6 in the tuple is for error bars.
dataSets.append([(x,1/x,0.6) for x in numbers])

dataNames = ['x^2','e^x','1/x']
colors = ['#A00000','#008000','#0000C0']
graph = ScrolledGraph(labelFrame, dataSets=dataSets, title='Demo Graph',
                      width=300, height=200, dataNames=dataNames,
                      symbolSize=5, xLabel='x', yLabel='f(x)',
                      dataColors=colors, graphType='line', grid=(3,0))

Further Examples

If you want to see how to set up the widgets in more detail, look at the example code at the bottom of the Python files in the $CCPN_HOME/python/memops/gui/ directory. Most of the files can be run as Python scripts from the shell command line to demonstrate something useful, for example:

> python $CCPN_HOME/python/memops/gui/ScrolledMatrix.py
https://www.ccpn.ac.uk/v2-software/software/tutorials/python-api-course/ccpn-tkinter-graphical-objects
Chip Rosenthal Wins Unicom Domain Name Case 170

Seth Schoen writes "As seen last month, Chip Rosenthal (whom many people know for Reply-to Munging Considered Harmful, among other projects) was threatened with the loss of his domain name unicom.com. He's now won in court and will get to keep the domain, at least for the time being."

Why was there even a court case? (Score:3, Informative)
According to the plaintiff's charges, Rosenthal was being accused of cyberpiracy. Why? It's almost like a child throwing a tantrum when he can't get his way.

Evil Priority (Score:2, Offtopic)

Re:Why was there even a court case? (Score:2, Informative)
As for the suit, OK, the California part is over, but it's up to UnicomSI to decide if they want to pursue it in the Western District of Texas. Until such time as they throw in the towel or Chip is victorious in Texas, it ain't really over.

Leave him alone... (Score:4, Insightful)
Sure, cybersquatting just to extort money out of a company or to otherwise do harm to a company is wrong; in these situations I think companies have a legitimate beef with cybersquatters. However, let's not ignore the fact that this guy registered the domain in 1990!

Re:Leave him alone... (Score:1)

Re:Leave him alone... (Score:1)

Re:Leave him alone... (Score:1)
Of course, their failure to "protect their mark" for 12 years is extremely damning evidence. I say let him keep his domain name AND nullify the trademark as well *grin*

Re:Leave him alone... (Score:1)
Our next step is to register "tx" as a trademark in Uganda, then sue the state of Texas and force them to surrender their

Cyber squatters (Score:3, Insightful)

Cyber squatters SUCK (Score:2)
Actually, I have a real problem with that. There were all sorts of speculators who tried to sign up domains for recognizable trademarks in the hopes of making a lot of money selling the domain back to the trademark holder. To me, that is unacceptable. Now, the people who registered generic terms for domains (i.e.
drugs.com, beer.com, etc.) - those were the smart people, because the domain could properly be sold to the highest bidder. In this case (and the example you quote with the NHL Wild), with companies trying to get a domain that was registered long before the company existed, these people are no better than the cybersquatters and do not deserve to be treated any better. It's too bad that they are the ones with the money and the lawyers, though. It would be nice if ICANN and their *cough* impartial *cough* dispute mechanism could take this sort of thing into account. Mind you, it would be nice if ICANN had the broad interests of the entire Internet community at heart as opposed to the corporate interests it seems to represent.

gTLD's SUCK (Score:1)
Again, this is why the generic TLDs (.com, .net, etc.) were a bad idea from the beginning and should be done away with. We still live in a world with nations that have their own laws. Every domain should have a ccTLD, so the applicable laws apply. If you register a common word as a domain (beer.com.us) then you can auction it off. Individual countries would be responsible for 2LD management so that McDonald's (the restaurant chain) and McDonald's (the hardware store) could both have their name. You shouldn't be allowed to squat on your name across every namespace when others have a legitimate claim as well. If you are truly international, register your .int domain, give up rights/claims to any ccTLD domain you have, and agree to settle disputes with WIPO or a similar agency. These stupid domain-name trademark infringement cases would become drastically more scarce. Seems fair to me.

Re:gTLD's SUCK (Score:1)
Yes, but individual states also have their own laws. And in many countries individual towns have their own laws. So by your argument, every domain should be geography-specific enough that it is in one, undebatable jurisdiction, even if it means going several subdomain layers down.
Most people would consider that an unacceptable sacrifice. The internet is a global network. That's one of its strongest benefits.

Re:gTLD's SUCK (Score:1)
In legal terms I think the phrase that applies is, "It's the law. Deal with it." Convenience might have to be sacrificed. TBL never intended URLs to be exposed to users anyway. Remembering that long string is only slightly less clumsy than remembering the IP. But if your states/districts want their own control and their own subset of laws to apply, then you will have to deal with reduced convenience. If the citizens don't like it, vote in somebody who will change it. Just because the internet is a global network doesn't mean we have to throw out all rules (which are inconvenient) in favor of anarchy (which is convenient... until someone takes your domain name, at which point you wish you had some rules).

Re:gTLD's SUCK (Score:1)
One cannot rely completely on "first come" or trademarks or copyright. Domain disputes are complex problems. The biggest problem is all the lies and posturing associated with almost all disputes. In this case, he got there first and went without incident for *12* years.

Re:gTLD's SUCK (Score:1)
Yeah, it would help if groups would actually stay in the TLD that best suited them. That's what I meant when I said they shouldn't squat across every namespace. Still, there are different "areas" for copyright. For instance, Acme Computers does not infringe on Acme Coffee, even though they have the same name. Both have a legitimate claim to be the company that "rightfully" gets acme.com. That's why we should move to ccTLDs exclusively, and let nations sort out the rest by their own (copyright) laws.

Re:Cyber squatters SUCK (Score:2)

Re:Cyber squatters (Score:2)
No, it was entirely a speculative venture. The squatter buys up a domain name and sits on it, doing nothing (or they COULD make noises like they were going to use it).
The entire intent, start to finish, was to prompt an interested body (a corporation, a celebrity) to pay top dollar to get the rights to the name. Squatters essentially have no intention of actually using the names they registered for any purpose other than to drive a legitimate entity with an interest in the domain to cough up lots of money to acquire it.

Re:Cyber squatters (Score:1)
So what?

Re:Cyber squatters (Score:2)
So what... is that it is now illegal. You don't get to be a cybersquatter anymore. If you want a domain name, get it, but you actually need to have a use for it beyond extorting money from deep pockets. You do not have an inherent internet right to the name pepsico.com, britneyspears.org, chevy.com. That's the way it is.

Re:Cyber squatters (Score:1)

Re:Cyber squatters (Score:1)
In some cases, such as the theos.com issue, companies have argued that because they couldn't find a web page the holder was cybersquatting. (Theo used it only for e-mail.) Now in Chip's case the company is arguing that BECAUSE he is using the web page he is either cybersquatting or infringing on the trademark. Sounds like you can't win either way.

Re:Cyber squatters (Score:2)
You are doing something I am interested in setting up. I have a couple of domains of use to me that I want to use for email. How do you go about it? I already have the registered domains; I am just not sure of the next step(s) so I can set up my personal email address as you did - my permanent, never-changing email with a name I gave it.

The good guy wins (sometimes) (Score:1)

Now that's irony.. (Score:4, Funny)

Re:Now that's irony.. (Score:2)
Web site is Slashdotted. Please visit again later. That's *hilarious*!

Funny (Score:2, Interesting)

Re:Funny (Score:1)
It's an interesting read and a very professional job, aside from the shout-outs, of assembling a track. Kudos to Chip for keeping people informed and placing the relevant information in an easy-to-read and -access format.
Others could learn from him, or heck, even hire him as a consultant; I expect he could do with some employ. *hint* *hint*

A rare bit of sanity (Score:3, Insightful)

Re:A rare bit of sanity (Score:1)
I agree with you on new companies, but that wasn't the case here. This is a case of a company with a poor sense of timing. According to their site they have existed since 1981, but they didn't register a trademark until 1997? (Although they did mention something about a prior trademark that was abandoned in the suit.) Then after registering a trademark they decide to register their domain name, find it exists, get pissed off and sue.

Interesting analysis of "commercial" sites (Score:4, Informative)
Unfortunately, this is a limited decision, but hopefully others (like WIPO!) would consider some of this ruling to be reasonable when deciding other domain name battles.

Re:Interesting analysis of "commercial" sites (Score:2)
Remember that WIPO is a for-profit thing. They make their money off of people who choose them to arbitrate intellectual property issues. Therefore, it's in their best interest to *always* decide for the plaintiff rather than the defendant in IP disputes, so that more plaintiffs bring their cases to WIPO. Indeed, this seems to be their business model, because something like 80% of WIPO cases are decided for the plaintiff... complaint bringer... whatever. It's a clear case of conflict of interest, but it continues nonetheless.

Re:Interesting analysis of "commercial" sites (Score:1)

He didn't REALLY win--jurisdictional issues (Score:5, Informative)
The court did not render a judgment stating he had the right to his domain. Rather, it said that suing in California was not permissible due to a lack of jurisdiction over him.
There are several ways to establish jurisdiction over an out-of-state defendant:
- If Chris had "systematic and continuous" contacts with the State of California
- If his website was of an ambiguous (courts have a nebulous examination standard) level of interactivity and accessible from California (contacts with California established via the Internet)

Because they didn't find either of those, the court determined that he couldn't be tried in that court. This does not preclude the plaintiff from bringing a case in Texas against him. Basically he just won this battle. It's possible the war is still going on.

Re:He didn't REALLY win--jurisdictional issues (Score:5, Insightful)
I think the big value in the Unicom v. Rosenthal decision is that it provides the independent web publisher some peace of mind that some company cannot reach out, claim jurisdiction, and make them fight a long-distance lawsuit. That's very expensive and very difficult.

Re:He didn't REALLY win--jurisdictional issues (Score:1)
Just my 2 cents...

Re:He didn't REALLY win--jurisdictional issues (Score:1)
I really think what's important coming up about these types of jurisdictional issues is the distinction courts are making with regard to the interactivity of websites determining whether or not you've "entered" the forum state. Your page is rather static, but if you'd offered a product for sale, or possibly the ability to directly communicate with another individual in California, the jurisdictional issue may have been decided otherwise. Actually, the Northern District of Texas set out a description of whether or not a website creates "sufficient contacts" in the forum state in Mink v. AAAA (190 F.3d 333). Basically the court described three points on a spectrum of websites: 1) fully interactive and/or commercial, 2) somewhat interactive, 3) static pages.
The court is the determiner of where a website falls, and if it's somewhere within 1 or 2, a website operator may be under jurisdiction in a state where the site is viewable. Granted, this is one court's decision, but the extreme difficulties in quantifying interactivity make this a sticky issue. I really think this highlights the need for a clear legislative or Supreme Court ruling on these types of issues. Again, I'm just a first-year law student, about a month and a half into civil procedure--but I have been following the growth of the Internet and the arising legal issues since 1994. I do think the correct result was reached on this issue, and hope that they don't pursue the case any further.

Re:He didn't REALLY win--jurisdictional issues (Score:2)
True, the judgement is based on jurisdiction. But the court goes on to explain why UNICOM wouldn't have a claim to the domain anyway: that a name, phone number, and resume do not make a site "highly commercial", and that Chip was (obviously So they establish some precedent that would make it very hard for UNICOM to appeal under a different jurisdiction.

Re:He didn't REALLY win--jurisdictional issues (Score:2)
Probably, but I'm not a lawyer either. Or a law student. I figure even if the stuff I was referring to doesn't bear legal clout, it's still a warning to UNICOM of what they'll be up against if they move the fight to Texas.

Appealing jurisdictional issues (Score:2)

Re:Chip, not Chris--My Bad (Score:1)

Re:Chip, not Chris--My Bad (Score:1)
Here, there obviously wasn't.

Re:Chip, not Chris--My Bad (Score:1)

Re:Chip, not Chris--My Bad (Score:1)

Re:He didn't REALLY win--jurisdictional issues (Score:1, Redundant)
Which explains why he was not asleep. He was surfing instead ;-)

yeah, this aint over... (Score:4, Informative)
At any rate, Unicom Systems Inc will find a way to keep things going against Unicom Systems Development; we'll hear more about this in a few months.

Re:yeah, this aint over...
(Score:2)

Clarifying the Win (Score:5, Informative)
Chip is in Austin, Texas, but the Plaintiff sued him in Los Angeles. When we responded to the Complaint, we made several alternative motions, one being that a court in California lacked personal jurisdiction over Chip, not only because he's in Texas, but also because he does not have sufficient contacts with California to make it reasonable for him to be dragged into court here. The Court granted our motion to dismiss for lack of personal jurisdiction. That's a big victory; there's much to be said for the proposition that courts do not have unlimited reach, even when the Internet is involved (think Matt Pavlovich and the California DVDCCA case, for example), but it isn't a ruling on the merits. If the Plaintiff should choose to file a new action against Chip in Austin, we have plenty of ammunition for arguing the merits of his rightful claim to the unicom.com domain name, but readers should not assume that this win addressed that issue. The Court's ruling is here [unicom.com].

Have they tried (Score:2)
You might want to write up an article about the case for this place or k5 [kuro5hin.org].

Re:Have they tried (Score:5, Informative)
No, they didn't go through ICANN. (Allegedly) aggrieved domain name owners can either use the ICANN UDRP or go to court; they're not required to use the ICANN procedures first. These folks chose to go to court first. As for Chip, he would have no reason to go to ICANN. unicom.com is his domain name; he isn't contesting that the Plaintiff can keep its name, unicomsi.com.

OK (Score:2)

Re:Clarifying the Win (Score:2)
OK. So in the ruling, it says: Is this saying that had unicom.com been offering the sale of products or services to anywhere, including California, that would have justified jurisdiction for California courts in this case? That seems strange to me. Why would his selling products and services to California justify jurisdiction?
Hypothetically, what if a website offered products that are legal in the state where it resides but illegal in California? For example, a cheap system for disconnecting California emissions systems on automobiles. This is something that might be valuable and legal to someone living outside of CA. Is it now the responsibility of the seller to refuse to sell the product to CA residents? If so, what does that say about the Sklyarov case? Does California, or anywhere in the US, justifiably have jurisdiction, and justification for arresting Sklyarov based on his selling software that is (stupidly) illegal in California and the rest of the US? My concern is that this seems to set a precedent for making it impossible to sell things over the web. The seller has to take into account not just the laws that govern the selling of the product where he lives; he has to be aware of the laws that govern the buyer of his product. This seems to me to be an unreasonable burden for the seller of products and services over the web to carry. IANAL. Did I get this wrong?

Re:Clarifying the Win (Score:2)

Re:Clarifying the Win (Score:2)
The Complaint alleged that Chip tried to sell the name. That's very different from the actual fact, which is that Chip never tried to sell the name. Read the e-mail exchanges between Chip and Corry Hong, attached to Chip's declaration in support of the motion to dismiss, up at save.unicom.com

Re:Clarifying the Win (Score:1)

Re:Clarifying the Win (Score:1)
It's still not cyberpiracy (ack, what an ugly and misleading term) if he created the site before the trademark existed. He got the site first-come, first-served, without any trademark issues, and has a perfect right to sell it to whomever he wants for however much they can agree on. Perhaps Unicom should have looked for others who were using the name "Unicom" before picking a trademark that would conflict with a pre-established domain name?
Re:Clarifying the Win (Score:3, Informative)
Yes, that allegation was in the complaint. I deny it. You don't have to take my word for it. All of my communication with USI (prior to the cease-and-desist letter) was by email, and that email is available online [unicom.com]. You can read it and decide for yourself.

Decision doesn't answer the question (Score:1)

HAve you noticed (Score:4, Interesting)

Re:HAve you noticed (Score:3)
Actually, ICANN does have power, in that possession is often 9/10 of the law, and an ICANN decision will yank the name and put it in the plaintiff's hands right away. What is really interesting is that they chose not to use the ICANN "arbitration" procedure (I use the term in quotes for a reason), particularly in light of the fact that the ICANN "arbitration" procedure is designed to favor the plaintiff. The plaintiff pays for the procedure and chooses the arbiter out of several competitors (obviously the ones who tend to rule for the plaintiff are outcompeting the fairer alternatives), and the defendant has no recourse once the domain name is taken away (aside from a civil suit to get the name back). One could speculate either way on why they would go to the courts rather than use a remedy procedure that costs less and is clearly slanted in their favor. It is interesting, in any event.

Lawsuits vs UDRP: monetary damages and fines (Score:2)
Also, violations of the various trademark and cybersquatting acts can lead to up to a $100K fine! The worst a UDRP hearing can do is a domain transfer. Maybe the company wanted to line its pockets with the defendant's money and/or make him bleed. Note: even middle-class defendants have houses that can be foreclosed on and sold to pay a judgement - so a lawsuit is useful to hurt/destroy an opponent and can be profitable even if the defendant doesn't have any spare money lying around (since the courts can sell everything one owns). Just because it isn't moral doesn't mean it isn't legal.
Plaintiffs get undeserved judgements that bankrupt defendants so often it is commonplace.

Re:Have you noticed (Score:2)
Only if a domain is registered after a trademark is created do they start figuring out the purpose of the domain and such (and they usually side with the plaintiffs, but that's another story.) At least if you go through the courts, there's a good chance they won't understand and will side for you due to that.

Not a ruling on merits, but interesting anyway (Score:2, Redundant)
This decision, though good for Chip, does not address the merits of the case, only the question of whether a California court has jurisdiction to hear it. Presumably, the plaintiffs could go to a Federal court somewhere and get the case heard (disclaimer: I don't know all of the facts, so that is merely a presumption on my part). That doesn't mean the decision isn't interesting. The judge includes a nice discussion of purposeful availment and the standards used to decide when the operator of a web site has or has not made him/herself subject to a jurisdiction's laws. Very good to see that mere presence doesn't trip the wire.

Re:Not a ruling on merits, but interesting anyway (Score:1)

Re:Not a ruling on merits, but interesting anyway (Score:3, Interesting)
Thanks for the support, Dino. I think the decision goes beyond interesting, and really will be valuable. The jurisdiction question is an area of the law that needed clarification, and I'm really proud that we were able to do that. This decision will help shield the independent web publisher from "long arm" tactics that would pull them into a long-distance lawsuit they couldn't fight. (By the way ... I was sued in a Federal court, not State court. If they want to come back after me, they are going to have to come to Austin and do it.) You are right that there were other matters in question, but once jurisdiction was settled they all became moot. Somebody, someday, is going to have to litigate those issues too.
(Hey, why you looking at me!!?!)

Re:Not a ruling on merits, but interesting anyway (Score:2)
Oops! Gotta get those glasses fixed. ;0) Like you, I was pleased by the Court's decision. Web access may be worldwide, but the Web itself is as local as it is global. Following a snowstorm the other day, I hopped onto the local high school web site to see if my daughter's driver's ed class would be cancelled. My local library publishes its schedule on the web. My neighbor across the street advertises his local heating and cooling business on the web. People in California or Timbuktu can look at these sites, but the intended audience is right here in St. Charles, IL. Well, Tony's happy to service customers in the surrounding towns, but not California. Reasonable standards of contact are required to activate long-arm statutes, and I'm glad to see the courts making sense when applying those standards to this case. Side note: putting the check in the mail this week to renew my law license. I may not do much with it, but at least I'll be able to reply to any nastygrams as "Attorney at Law". The way things are going these days, that can only be a good thing.

one more thing (Score:1)

countersuit? (Score:1)
If I were him, I would file a countersuit. He registered the domain in 1990, and since the company didn't file for the trademark until 1997, he might have a lawsuit against *them* for trying to name their company to take advantage of his domain's popularity.

still penalized (Score:2, Funny)

The thing.. (Score:5, Insightful)

Re:The thing.. (Score:1)
The anti-reform people say that the US system is good because it guarantees that the little guy gets his day in court. The problem with that stance is that the little guy can't afford his time in court anyway.

Re:The thing.. (Score:2)
This bothers me too. Chip, how 'bout it? Care to disclose how much this has cost you? I'd like to know.
Personally, I think the plaintiff should be forced to reimburse Chip for all of his expenses + some percentage of that for wasting his time. Otherwise, what's to stop companies from simply dragging things out until you can no longer afford your legal bills? I for one find it bullshit that Chip should have to spend a single penny of his own money on this. If someone came after my domain, I don't know what I'd do. I don't have many thousands of dollars lying around to defend myself. I suppose I could go into debt and spend the next 10 years paying it off, but how fair is that?

./ time... (Score:3, Informative)
Try the main page, and I see it now says: Okay, the google cache for his main page is at: Tom.

Changed meaning of top domains ? (Score:1)
Shouldn't a commercial company like Unicom have the rights to the Unicom.com address, rather than an individual exploiting the weak control of And with a new top domain like

Re:Changed meaning of top domains ? (Score:2)

Re:Changed meaning of top domains ? (Score:1)
I seem to recall reading Chip's posts on usenet back then, and UNICOM.COM was his address and the address of his 'company'. That alone gives him the right to register and use unicom.com. One of the 'rules' for sitenames even back then was All the good names are taken.

Re:Changed meaning of top domains ? (Score:2)
THAT would've shown it's a Texas based company..

Re:Changed meaning of top domains ? (Score:1)
A squatter, as has been pointed out, is one who takes the name of a company in hopes to sell it. Obviously, in this case, the term does not, in any stretch of the imagination, apply. The problem with .com is not the fault of Chip Rosenthal, but rather the management of domain names. An interesting example is Microsoft. Go to either [microsoft.com] or [microsoft.net]. Both result in the same site, both are commercial sites, but one is .net. Unicom.com also can justify being a .com in that it is a service to the commercial community.
I came across it while seeking information on mailing lists (granted, by the time I ran across it, it was slashdotted, and I can't see the article). As to Unicom having a right to the domain, there is no justification for that. They waited all this time to set up a website, they made no attempt to take care of this facet of their interests, and in no way affected Rosenthal's decision to register the address. I'm also rather disgusted by the reasoning behind the squatter rulings. The issue is not that they tried to profit from the URL, but copyright/trademark infringement. Take for instance AltaVista.com. It was registered before the search engine, the owner had legal claim to it, yet he sold it. This is no different from companies buying a valuable piece of real estate they have no intention of developing. They take the risk and expense on themselves, and profit if it pays off.

Re:Changed meaning of top domains ? (Score:1)

The site (Score:2)
I am Slashdotted Sorry ... this web site is Slashdotted at the moment. Here is the Google cache version of my main page. Until my new shipment of bandwidth arrives, you may want to visit the Save Unicom.Com web site to read about the lawsuit. Now how's *that* for Karma Whorin! ;-)

Priority? (Score:4, Informative)
Lots of other companies have used trademarks including the word "unicom". Rosenthal says he searched the federal trademark registry and found more than 20 registrations besides unicomsi's 3. Or in a search I just ran myself, Thomas Register (, registration required) lists 3 companies whose names start with "Unicom", not including Unicom Systems Inc. There's a maker of industrial air filters in Oregon, a printer in Alberta, and a "LAN products manufacturer" in California. I wonder if that one has heard from Unicomsi? IANAL, but it certainly looks to me that, no matter when unicomsi registered their various trademarks, they've never had priority to just the name "unicom", or even to that name in a computer-related market.
According to Rosenthal, unicomsi's registered marks are graphic designs including the word "UNICOM" -- that makes the whole mark a valid trademark (assuming the graphics are unique), but it hardly gives them the right to the name itself. And if they did own the name as related to software, still they failed to defend it for 11 years. Anyway, this round was only about whether u-si could sue a Texas website in a California court. If u-si wants to hire a Texas lawyer, they can start over again in Texas -- of course, this is more expensive for them, and I'd certainly be amazed if they won in _any_ court on the facts I know of. If they do continue in Texas, a suggested settlement: Rosenthal puts their banner ad at the top of his web page "Were you looking for Unicom Systems, Inc., support for legacy...". Not that he couldn't beat them entirely, but it would save time and money. The great thing about that decision is that it tends against all the various silly lawsuits claiming that because your web site can be seen, or is mirrored in, or links to sites in some other jurisdiction, you can be yanked into court in a different state or country.

Prior prior use (Score:2, Interesting)
At O'Hare airport [airnav.com] UNICOM is on the 122.95 frequency. Fighting over the first use of the term UNICOM is like fighting over who owns "home page."

Unicom is slashdotted, but (Score:1)

Cybersquatting and Reverse-Cybersquatting (Score:2, Interesting)
However, consider a reverse case. Consider if a smart large bank -- like JP Morgan and Co -- buys tons and tons of land, which is now cheap. Despite the land now being cheap, it will eventually be valuable, as the US population is increasing and more space will be required to house future populations.
Once over-crowding starts occurring, and people experience the need to perhaps live on the inter-city land that populates our expressways/highways/throughways/whatever, the banks will be in prime-time position to sell that land at outrageous prices. That doesn't seem so fair, and for good reason. Why? Because it is the powerful using their resources to take advantage of the disempowered. Though these cases are relevant to the internet-case of cybersquatting and reverse-cyber-squatting, they don't map directly. These cases deal with real-world examples, real world property. The internet is more metaphysical, abstract: in the realm of ideas.

(1) Cybersquatting is registering a domain name with no intent to use it, but simply the intent to use the name as leverage to get a company, organization, or person to buy it at the highest price possible; alternatively, the site may be used for some constructive purpose, but as a temporary location for that constructive purpose, with the end goal being using that domain name to extract maximum money from another entity.

a. Against a company. An example would be my registering the domain name and never using it for anything, but simply hoping that IBM would pay me money to get the rights to it. This brings up an important point. As IBM already has a website -- ibm.com -- its claim to take that site from the original owner based solely on cybersquatting is diminished. IBM already has a recognizable domain name which will bring most people to it: in fact, the most recognizable domain name. Company.com is what you type as a standard to get to a company's home page. A case where the company would have a strong claim would be where it had no internet site before, and someone put up site w, and put nothing on it; it's clearly to extract money.
However, if they put up such a site and provided a message board about company products, criticisms, etc, as well as information and hints from ppl who've bought their products, then it's not cybersquatting and the organization has no claim.

b. Against an organization. For example, registering the domain name in hopes of extracting money from NARAL. Again, the same as above applies. NARAL has the most obvious, most recognizable website for what they are, so their claim is diminished. But if someone puts up a site w

c. Against a person. This hasn't occurred much yet, but it may in the future. For example person A, named John Doe, puts up a website named. He has no intention of using that website, but knows Jane Doe is rich and will eventually want to have her own website after her own name; so he simply holds onto the website, in hopes that eventually he can squeeze her. This is cybersquatting. But if another woman with the same name, Jane Doe, puts up a website and uses it, it's not cybersquatting. Finally, if a company or organization puts up a website with a person's name -- unless it be an organization member -- that's cybersquatting. Organizations/companies have no business putting up sites named with people's names. The only exception would be if that person is a member of the organization, or if they want to use that person as a positive example; i.e., an anarchist organization putting up the domain name KattieSierra.com to honor her. There's nothing malevolent about that; though, of course, if she doesn't want it, she has the right to claim it. Every individual should be able to claim a domain name named after them. In cases where individuals share the same first, middle, and last names, first come first serve (unless one David Cassidy puts up a website titled David Heinrich to try to extract money from all the other David Heinrichs). These are the easy cases. What about the hard cases?
What if someone who hates you puts up a website with your first, middle, and last name -- johnxdoe -- and spews about how much of a jerk you are, makes hateful remarks about you, and otherwise demeans you on the site? Or worse, what if said person puts up a website with your name and pretends to be you, except misrepresenting you? I think that these cases are unacceptable. And I realize that's iffy. If someone wants to put up a website trying to masquerade as me or insult me, they should have to in some way put "anti" or something similar in the address: i.e.,. This is a minor restriction on freedom of speech which serves to prevent misrepresentation.

Now, back to the comparisons with people hoarding potential rail-road land back in the day, or buying "2nd tier" beach property in California. There is a clear difference between those cases and strategic registration of domain names. Those cases apply to physical property and must be strategically made; one can't simply buy all land. Furthermore, one is actually extracting the real value the land will hold in the future. That property is in fact that valuable, and would cost that much to the hotels. But if I squat a domain name, the company might have to pay me a million dollars for something that would've cost only a few bucks otherwise. It's not that I'm for big corporations. It's that this type of game-playing demeans the usefulness of the internet and domain names. And it's not to say that big corporations don't play this game too. Corporations usually don't engage in cybersquatting; though they could if they wanted to. Cybersquatting is really a riskless activity, as I believe it should be. Do you really want to fine someone or put them in jail for that? The worst that can happen is the person loses his domain name, and doesn't get to sell it to the corporation for a high price.
But back to corporations -- what they do do is distort cybersquatting norms to allow them to strangle competition or prevent sites from displaying that are critical of them, and otherwise abuse domain-name norms. A site opens up with the domain name,, and uses it to harshly criticize the RIAA. The RIAA sues for "cybersquatting". Plainly ludicrous. Cybersquatting implies that the "target" had the intent or motive to want to use the domain-name. The RIAA would never use that domain-name. Yet, they want to claim it in order to prevent criticism. This is a kind of reverse cybersquatting. It furthermore diminishes the functionality of domain names. People expect that if they type in such a domain name, they'll get a website against the RIAA, not a blank page.

Another case is where companies try to take away competing companies' domain names, or individuals' domain names, based on "trademark similarities". Prime example, Lindows.com. Do they really think that people will confuse Windows with Lindows? Most intelligent people wouldn't. But even if they would, that's not Lindows' fault -- that's the fault of ppl who are so dumb. Furthermore, Lindows' intent isn't to confuse people, making them think it's an MS product. It's simply to let them know that it should work fine with MS software. If anyone is confused, they'll be straightened out once they look at the site. More disturbing is the implication by MS that they have trademark rights to anything that rhymes with Windows, or is of a similar sound.

I think it's obvious to most COMMON-SENSE people that something is or is not cybersquatting when they see it. But that ridiculous definition of "I'll know it when I see it" doesn't do. The public has a right to know EXACTLY what is and is not acceptable; EXACTLY what is and is not, for example, "PORNOGRAPHY" (one of the more brilliant quotes by one of the 9 wise men: "I can't define it, but I'll know it when I see it").
If we cannot define precisely what is not an acceptable activity, we have no right to expect people not to do it. People need to know the rules of the game before they play. There's no reason why norms, laws, customs, etc can't be as precisely defined as the rules of chess. For example, in chess, there are a few official rules, clearly defined, and there are also some "unofficial" rules which any two professionals understand:

(1) The official rules. I.e., how each piece moves, exceptions to the normal movement of pieces, conditions in which the king must move, stalemate conditions, and checkmate conditions.

(2) The unofficial rules. A typical set goes something like this: 1. You touch a piece, you have to move it; 2. No taking back moves; 3. No talking; 4. No motions, positions, etc that would distract the opponent and detract from his/her ability to think.

The rules in chess are clearly defined. There is no ambiguity. The rules governing law and domain-name resolution should be the same: precisely clear. I will attempt to propose some here. I do not pretend that they are perfectly clear, nor that they are comprehensive. But I will try to make them as much so as I can. Obviously, a real set of rules needs to be thoroughly thought out. Each rule must be stated as clearly as possible, as elegantly as possible, and with as few words as possible. There must be a sufficient number of rules to cover all "inappropriate activity". Here's my rough draft:

1. IF someone registers a domain name (entity-name.com) BEFORE entity-name does, assuming entity-name exists at the time of registration, AND that someone has no intent of using that domain name, but is only trying to extract money from entity-name, THEN it is cybersquatting. Entity-name should be able to obtain entity-name.com from the cybersquatter at the price of domain-name registration.

2.
IF someone registers a domain-name (entity-name.com) BEFORE entity-name does, AND actually uses it for some purpose, whether connected to the domain-name or not, AND has no intent of using it to extract money from entity-name, THEN that is not cybersquatting. Entity-name can always register the domain-name Entity_name.com.

3. IF someone registers a domain-name (entity-name.com) BEFORE entity-name does, AND uses it for some purpose, whether connected to the domain-name or not, BUT has the intent of nevertheless using it to extract money from entity-name, AND is thus simply using that "purpose" as a front, THEN that is cybersquatting. The individual can copy the web-site content to his hard drive and post it at another domain-name. Meanwhile, entity-name should be able to get entity-name.com from that individual by paying him the cost of registration.

4. IF someone registers a domain name (entity-name.com) before entity-name exists, THEN no matter the post-entity-name-existence activity of that someone, it is not cybersquatting. Whether or not the individual makes use of that domain-name, it is clearly not his intent to use the domain-name to extract money from entity-name. Simply because the person has not yet used entity-name.com by the time entity-name comes into existence does not mean the person should be deprived of his site. There has been no planned extortion. Should entity-name offer the individual money to get that domain-name, so be it.

5. IF entity-name already owns a domain-name (entity-name.com) AND an individual creates a site with a similar domain-name (i.e., entity_name.com), AND that individual's end intent is to extract money from entity-name for entity_name.com, THEN that is cybersquatting. However, entity-name hasn't as strong a claim to have the domain-name taken away. Entity-name already has the best domain name possible (as they themselves have affirmed by registering that as their domain-name).
They have no real need to obtain entity_name.com when they already have entity-name.com.

6. IF entity-name already has a domain-name (entity-name.com) AND an individual creates a similar domain-name (i.e., entity_name.com or anti-entity-name.com), AND uses that domain name either to offer useful information about entity-name from a member/customer's pov, or to criticize entity-name, THEN that is not cybersquatting. Entity-name has no claim to take away that domain-name.

7. Dormancy time limit. I believe that all "intellectual property" -- if we are to have such a draconian thing -- should last a maximum of five years. Thus, for non-users of a domain-name, the domain-name is automatically relinquished from their control after 5 years if they do nothing with it. "Nothing" is a very high standard. If an individual uses the domain name for nothing other than saying, "I like blah blah blah blah blah", then that is NOT nothing. Nothing means either no page has been put there, or it's just been a "for sale" sign for 5 years, or it's just been an "under construction" sign for 5 years.

8. Assumption of innocence. The party bringing the complaint must prove beyond a reasonable doubt that the other has done what is alleged.

9. The power tilt modifier. Naturally, in resolving disputes, the balance should be tilted towards the side of the less powerful, as the less powerful is more likely to be the innocent side in any given case, and the side less able to defend itself. If the less powerful is the person bearing the complaints, then it's tilted towards them. If the less powerful is the person bringing the complaint, then it's tilted towards them. This does not overturn rule #8, but only modifies it slightly.

If you have a business domain, get a trademark (Score:3, Interesting)
Trademarks can be registered on either the "principal" or the "supplemental" register. Trademarks on the principal register can be enforced against others.
Trademarks on the supplemental register can't be enforced against others, but prevent others from claiming you are infringing their trademark. If your application for registration on the principal register is rejected, you can often get a registration on the supplemental register, for which the standards are lower. In particular, you can usually get a supplemental register trademark on a commonly used word, which is valuable for domain purposes. Either way, you get to use the ® symbol, and you're protected against any trademark-related claims on a domain.

This case is just one of thousands (Score:2)
Because of the way the Internet is being mismanaged, conflict is impossible to avoid. The solution to exclusively identify all trademark domains was always self-evident. It was ratified by honest attorneys - including the honourable G. Gervaise Davis III, UN WIPO panellist judge. I truly believe that the United Nations World Intellectual Property Organization and the United States Department of Commerce hide it for reasons of money and power - that they are corrupt. They wish to abridge people's right to use these words - they violate the First Amendment. Please visit WIPO.org.uk [wipo.org.uk] to see the simple solution - no connection with United Nations WIPO.org

A Question About DeCSS... (Score:1)
Did the kid who wrote DeCSS in another country have any contact with California? (Which is where I thought the DeCSS case was litigated...) Is there a way that they can use this ruling?

Re:Some info about IP. (Score:1)
You are full of shit. Where did you get these numbers? Out of your arse?

Re:Some info about IP. (Score:1)
Product cost is extremely difficult to measure in a firm that produces more than one product and which engages in non-trivial research and development. Simply, you cannot measure products like this. The correct level for this type of socio-economic analysis is the household and the firm.
Firms have expenses, some of which they can link directly to a good sold (i.e. the cost of the raw materials) and some which are extremely fluid (i.e. the salary of the CEO). But unlike expenses, revenues usually have one type: sales. It is therefore extremely easy to assess the revenue a product/service generates, but because of the issue with cost analysis, accounting for profit is only truly possible at the firm level. What I really don't understand is: what's your point? "Intellectual property" is supposedly good for workers, and therefore must be given priority protected legal status? Or are you saying we don't need to bother, since the workers are what's important because without them "intellectual property" is mostly useless anyhow?

Re:Some info about IP. (Score:2)
Well...I don't think I agree with that. Intel fits about 180 Pentium 4's onto a single wafer. Your "estimated" manufacturing cost of $20 per chip comes out to $3600 per wafer. A bare 8" wafer with an epitaxially grown SiO2 layer goes for about $3000. Are you seriously suggesting that by the time the chip emerges at the end of the process, only $600 of value is added to the entire wafer? Bear in mind that in order to develop up to 12 layers of material on the chip, a ton of photolithography, deposition and etching has to take place. And following that, the die have to be cut, tested, packaged...it's a very long process. From start to finish, it will take several days to process a wafer from bare silicon to finished package. To top it off, while 180 P4's may come off the wafer, they don't all work. A very mature process with substantially fewer process steps than that of a processor would cheer for a 90% yield rate. P4's aren't mature...draw your own conclusions from that. Also, Intel is cash flow positive. And their investment in R&D for Q4 2001 was fairly typical at about $950 million, but the cost of goods sold came in at well over $3 billion.
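The per-wafer arithmetic in the comment above is easy to check. Here is a quick sketch using only the figures the poster quotes, which are rough back-of-the-envelope estimates, not official Intel numbers:

```python
# Sanity-check the per-wafer figures quoted in the comment above.
# All inputs are the poster's rough estimates, not official Intel data.
chips_per_wafer = 180         # P4 dies per 8" wafer, per the comment
claimed_cost_per_chip = 20.0  # the parent post's $20/chip estimate
bare_wafer_cost = 3000.0      # quoted price of a bare epitaxial wafer

total_at_claimed_cost = chips_per_wafer * claimed_cost_per_chip
value_added = total_at_claimed_cost - bare_wafer_cost
print(total_at_claimed_cost)  # 3600.0 -- the "$3600 per wafer" figure
print(value_added)            # 600.0  -- the "$600 of value added" figure

# Even at a (generous) 90% yield, the bare-wafer cost alone per good
# chip already eats most of the claimed $20 budget:
good_chips = chips_per_wafer * 0.90
print(round(bare_wafer_cost / good_chips, 2))  # 18.52
```

In other words, at the parent post's numbers, all the photolithography, deposition, etching, test and packaging together would have to add only about $600 of value per wafer, which is the poster's point.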
In fact, it costs about $50 to produce the very cheapest Pentiums...the slow Celerons and PIII's that have been in production long enough to work out the kinks in the process. Nonetheless, your point about IP is well taken. I'm an engineer in R&D for a big semiconductor company...we spend hundreds of millions of dollars every year on R&D. While that's a significant portion of the company's budget (in Intel's case, it was 14% of revenues last quarter), it's safe to say that it isn't the major expense of the cost of the chips that we produce. Manufacturing expenses are the brunt of the expense. It simply costs a lot of money to build the chips. I don't know about stereos or freezers, but I do know about semiconductors. -h-

You have the numbers backwards (Score:1)
I worked at one of Ford Motor Company's top-earning manufacturing plants in 1997, and the numbers that I saw indicated that they were earning about $5,000 on a $30,000 vehicle ... *before* things like warranty costs were taken into account. So, instead of 90% profit 10% cost, the real numbers are more like 10% profit, 90% cost. There was a time when you could earn more money by keeping your money in the bank ...

Re:Jim Tyre, Age 32, Found Dead This Morning (Score:2)
The reports of my death are greatly .... When did this happen? Why wasn't I told? I am not rolling cigars [fujipub.com] in Tampa, or anywhere.

Ouch (Score:1)
Oh well, everyone finds something different funny, I'll just try to be more obvious next time. (Am I the only one who thinks Offtopic should be +1? The most and probably only interesting conversations I've been in are the ones that wander over all kinds of different topics.)

Re:Laws need to be changed. (Score:2)
There are also torts of abuse of process and malicious prosecution. The problem with these is that the companies that bring this, like Mattel [sorehands.com], are not concerned that a judge may order costs and fees.
Costs and fees may amount to $10,000-$20,000, where such a company would spend less than that in a month on outside counsel. In some companies, the in-house counsel gets additional monies for assisting outside counsel.

Re:Laws need to be changed. (Score:1)
I wanted to repeat this (minus an ad hominem attack on lawyers ;-) so that people who are filtering out anonymous cowards could read it. I happen to agree with the logic. I think a better solution would be to have the losing party's legal team pay the winning party's legal fees. This penalty should be no greater than what the losing party paid for their own legal counsel. I wouldn't want to see situations evolve where a corp can simply threaten a legitimate lawsuit with "you'll owe us $1M if you lose, better settle now." But I would like to see a situation where Joe Public has a legitimate beef so he pays $100k for lawyers against a corp who spends $10M on their lawyers. Joe wins his legitimate beef so he gets the award and his lawyer gets a $10M bonus. This rewards the lawyer who takes up legitimate causes and punishes the lawyer that sues for the [insert comical frivolous lawsuit (i.e. hot McD's coffee)]. Corps would also benefit since they would have the possibility of actually recouping their defense against frivolous lawsuits instead of settling.

Re:Laws need to be changed. (Score:1)
Maybe you find third-degree burns comical if you've only heard the Rush Limbaugh version of the case instead of what actually happened [atlanet.org].

Re:Laws need to be changed. (Score:1)
From the information you posted, the next question is why she didn't sue the sweatpant manufacturer, since "the sweatpants Liebeck was wearing absorbed the coffee and held it next to her skin." Sounds like gross negligence on the part of the sweatpant manufacturer. There should have been a warning label.
Obviously McD's had deeper pockets. Oh, and BTW, I'm no ignorant, conservative, corporation-loving McD's fan, but the sign on the McD's down the road reads 85 billion served. This works out to a roughly 99.9999177% chance of successfully NOT being burned. I wish my NT server had such reliability.

Re:Laws need to be changed. (Score:1)
And to think that I made tea this morning by pouring DANGEROUS BOILING WATER directly from the tea kettle into my cup. My fingers were mere inches from that stream of hazardous fluid. You have opened my eyes to the treacherous position I place myself in every day. If I don't see a warning label on the side of that kettle when I get home, I'm going to sue the bastards for the sheer emotional trauma they have put me through contemplating how I might have been injured.

P.S. I was really hoping someone would comment on my original idea, not rehash old arguments about coffee temperatures. J. Random. Aexia. Do YOU think tort reform that doesn't give all the power to the corporations is a good idea? Do you think the current system is perfect? Do you think a system that tries to keep frivolous lawsuits from being dragged out, but rewards lawyers that undertake cases for the underdog, should be implemented? Please no more coffee comments. I'm sorry. I'm so, so sorry.

Re:Laws need to be changed. (Score:1)
Did a McDonalds employee place the cup where it could spill on Ms. Liebeck's loins? Or did she? How the jury found McD's 80% liable for Ms. Liebeck's choice of coffee holder mystifies me. Much of this country's legal system mystifies me.

Re:So what do I need to do to protect MY domain? (Score:2)
these bullshit suits? Start saving and investing wisely now, so when the time comes you'll have a large bankroll to spend on more expensive lawyers than the plaintiff's. ~Philly
https://slashdot.org/story/02/02/05/1354231/chip-rosenthal-wins-unicom-domain-name-case
For most programmers today, our jobs require us to integrate some sort of data into our application. Often, we have to take data from multiple sources, be they in-memory collections, a database like SQL Server or Access, an XML file, Active Directory, the File System, etc. With today's languages and technologies, getting to this data is often tedious. For databases, using ADO.NET is just a bunch of plumbing code that gets really boring really fast. The story for dealing with XML is even worse, as the System.Xml namespace is very cumbersome to use. Also, all data sources have different means of querying the data in them: SQL for databases, XQuery for XML, LDAP queries for Active Directory, etc. In short, today's data access story is a mess. The folks at Microsoft aren't oblivious to the problems of today's data access story. And so, now that C# 2.0 is almost about to be released, they've given us a look at C# 3.0 in the form of the LINQ project. LINQ stands for Language Integrated Query framework. The LINQ project's stated goal is to add "general purpose query facilities to the .NET Framework that apply to all sources of information, not just relational or XML data". The beauty of the LINQ project is twofold. First, LINQ is integrated directly into your favorite language. Because the underlying API is just a set of .NET classes that operate like any other .NET class, language designers can integrate the functionality these classes expose directly into the language. Second, and perhaps most importantly, the query functionality in LINQ extends to more than just SQL or XML data. Any class that implements IEnumerable<T> can be queried using LINQ. That should elicit a feeling of absolute joy in you. Or maybe I'm just weird. In this article, I only want to focus on the language features of LINQ. Too many people confuse LINQ with DLinq and XLinq. They are not one and the same. LINQ is the set of language features and class patterns that DLinq and XLinq follow.
That being said, in this article, we will only work with in-memory collections of data. So, without further ado, let's take a look at a basic LINQ program that serves absolutely no practical purpose (those are the best types, aren't they?):

using System;
using System.Query;

namespace LinqArticle
{
    public static class GratuitousLinqExample
    {
        public static void Main()
        {
            // The most active list on CP
            var mostActive = new string[] {
                "Christian Graus", "Paul Watson", "Nishant Sivakumar",
                "Roger Wright", "Jörgen Sigvardsson", "David Wulff",
                "ColinDavies", "Chris Losinger", "peterchen", "Shog9"
            };

            // Get only the people whose name begins with D
            var namesWithD = from poster in mostActive
                             where poster.StartsWith("D")
                             select poster;

            // Print each person out
            foreach (var individual in namesWithD)
            {
                Console.WriteLine(individual);
            }
        }
    }
}

There we go. Now, at this point, you're probably looking at that saying, "That serves no practical purpose!" I would like to remind you: That's the point. So what we have here is a list of the most active CPians. Then we write this funky query in what looks kinda like SQL to get only the CPians whose names start with the letter D. And we write their names to the console. In this case, we've got everybody's favorite Tivertonian, David Wulff. There's not a whole lot special here. You could be sitting there thinking that you could just replace the entire thing with code that looks like this:

// Print each person out
foreach (var someone in mostActive)
{
    if (someone.StartsWith("D"))
    {
        Console.WriteLine(someone);
    }
}

And you'd be right; you could replace it with that. But that wouldn't be cool, because it wouldn't have the gratuitous LINQ usage in the previous example. Now let's take a longer and closer look at LINQ. We'll also be looking at the new language features in C# 3.0. I'd like to point out first that these new language features run on the .NET 2.0 CLR.
This is the key because, unlike the C# 2.0 features of iterators, anonymous methods, etc., there isn't any deep plumbing work that goes into making these 3.0 features possible. This will, most likely, mean a shorter release cycle for C# 3.0 and LINQ. (At least, that's how the rumor goes.) Anyway, now that we've got that out of the way, let's take a closer look at our LINQ code. First, we've got the cool new System.Query namespace reference:

using System.Query;

This namespace is all you'll need to get started with LINQ. Contained within it are vast troves of treasure, innumerable tomes of knowledge, and power beyond your wildest imagination…or maybe just a few classes and delegates. Then we've got this weird var keyword that keeps popping up all over the place. The JavaScript people in the audience should feel right at home now. var is a new keyword introduced in C# 3.0 that has a special meaning. First, let's talk about what var is not. var is not a variant datatype (the JavaScript people are no longer at home now) nor another keyword for object. var is used to signal the compiler that we're using the new Local Variable Type Inference in C# 3.0. So, unlike in JavaScript, where var means that this variable can hold absolutely anything we want, this var keyword tells the C# compiler to infer the type from the assignment. What do I mean by that? Well, let's look at a little snippet of code to demonstrate:

var myInt = 5;
var myString = "This is pretty stringy";
var myGuid = new System.Guid();

In the above example, the compiler sees the var keyword, looks at the assignment to myInt, and determines that it should be an Int32, then assigns 5 to it. When it sees that we assign our string to the myString variable, it determines that myString should be of type System.String. Same goes for myGuid. Pretty cool, huh?
If you then try to do something stupid like:

myInt = "Haha, let's see if we can trick the compiler!";

We're going to get a nice compiler error message telling us how foolish we are: Cannot implicitly convert type 'string' to 'int'. (I'm sure if the compiler had seen Napoleon Dynamite, it would be saying, "Gosh! Freakin' idiot!" right about now.) Now, moving on, we can see that after we have our string array, we have this funky piece of code:

// Get only the people whose name begins with D
var namesWithD = from poster in mostActive
                 where poster.StartsWith("D")
                 select poster;

This is where the real fun begins. What we see here is a variable being assigned to something that looks a lot like a SQL query. It's got the select (albeit in the wrong place), the from, the where; it's SQL, right? No. These keywords are some of LINQ's Standard Query Operators. When the C# compiler sees these keywords, it maps them to a set of methods that perform the appropriate operations. Alternatively, you could also have written the query given above like this:

// Get only the people whose name begins with D
var namesWithD = mostActive
    .Where(person => person.StartsWith("D"))
    .Select(person => person);

This is what the LINQ people call Explicit Dot Notation. It's the same exact query and you can write your queries either way. And this, of course, leads to the side question of what those funny "=>" marks are. Those are another C# 3.0 feature: Lambda Expressions. Lambda Expressions are the natural evolution of C# 2.0's Anonymous Methods. Essentially, a Lambda Expression is a convenient syntax we use to assign a chunk of code (the anonymous method) to a variable (the delegate).
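As a cross-language aside (my own sketch, not part of the original article, and not C#): the same query-syntax/dot-notation duality exists in Python, where a comprehension plays the role of the query syntax and filter/map with a lambda plays the role of the explicit dot notation.

```python
# Stand-in data mirroring the article's CPian list (abbreviated).
most_active = ["Christian Graus", "Paul Watson", "David Wulff", "Shog9"]

# "Query syntax" analogue: a list comprehension.
names_with_d = [poster for poster in most_active if poster.startswith("D")]

# "Explicit dot notation" analogue: filter/map with a lambda, mirroring
# .Where(person => person.StartsWith("D")).Select(person => person).
names_with_d_2 = list(map(lambda p: p,
                          filter(lambda p: p.startswith("D"), most_active)))

print(names_with_d)
```

Both spellings produce the same sequence, just as the two LINQ spellings compile down to the same method calls.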
In this case, the delegates we use in the above query are defined in the System.Query namespace as such:

public delegate T Func<T>();
public delegate T Func<A0, T>(A0 arg0);

So this code snippet:

person => person.StartsWith("D")

Could be written as:

Func<string, bool> person = delegate (string s) { return s.StartsWith("D"); };

The lambda version is a lot more compact, isn't it? Lambda Expressions are basically just syntactic sugar around Anonymous Methods, and you can use either of them or even regular named methods when creating filters for these query operators. Lambda Expressions, though, have the benefit of being compiled either to IL or to an Expression Tree, depending on how they're used. That stuff's a bit too much for the current discussion though. Suffice it to say that Lambda Expressions are way cool. Next subject! The astute reader will notice that, till now, there's been no discussion as to where these methods come from that the standard query operators map to. I mentioned before that LINQ worked on anything that implemented IEnumerable<T>. One could reasonably assume, therefore, that these methods reside in the new C# 3.0 definition of the IEnumerable<T> interface. That assumption, however, would be wrong. These methods, which reside in the System.Query.Sequence class (whose source is available in the LINQ Preview install, by the way), are part of a new feature in C# 3.0 called Extension Methods. An Extension Method is a new way of extending existing types. Basically, this works by adding a "this" modifier on the first argument, like so (example code shamelessly stolen from the Sequence class):

public static IEnumerable<T> Where<T>(
    this IEnumerable<T> source, Func<T, bool> predicate)
{
    foreach (T element in source)
    {
        if (predicate(element))
            yield return element;
    }
}

There's nothing really special here, except for that "this" modifier on the first argument.
The compiler sees this and treats it as a new method on the specified type. So now IEnumerable<T> gets the Where() method. Pretty cool, huh? Something to remember is that "real" methods get first priority. If you call Where() on an object, then the compiler goes to find Where() on the object itself first. If Where() doesn't exist, then it goes off to find an Extension Method. Clearly, while this feature is cool and really powerful, extension methods should be used extremely sparingly. Anders Hejlsberg warned those of us in the LINQ talk at the PDC not to add our favorite 10 methods to System.Object. This feature is probably the one that gives you the most potential to shoot yourself in the foot. Now that we've seen the basics of LINQ and C# 3.0, let's look at a slightly more interesting example. First, let's define a new Poster class for ourselves:

public class Poster
{
    public string name;
    public int numberOfPosts;
    public int numberOfArticles;

    public Poster(string name, int numberOfPosts, int numberOfArticles)
    {
        this.name = name;
        this.numberOfPosts = numberOfPosts;
        this.numberOfArticles = numberOfArticles;
    }
}

Now let's modify the previous example to utilize our new Poster class (clearly these values are going to change):

public static void Main()
{
    // The most active list on CP, with
    // names, posts, and message count
    var mostActive = new Poster[] {
        new Poster("Christian Graus", 22215, 32),
        new Poster("Paul Watson", 20185, 7),
        new Poster("Nishant Sivakumar", 18608, 99),
        new Poster("Roger Wright", 16790, 1),
        new Poster("Jörgen Sigvardsson", 14118, 7),
        new Poster("David Wulff", 13748, 4),
        new Poster("ColinDavies", 12919, 0),
        new Poster("Chris Losinger", 11970, 18),
        new Poster("peterchen", 11163, 9),
        new Poster("Shog9", 10605, 3)
    };

    // Get only the people who have ridiculously
    // large post counts
    var peopleWithoutLives = from poster in mostActive
                             where poster.numberOfPosts > 15000
                             select new { poster.name, poster.numberOfPosts };

    // Print each person out
    foreach (var individual in peopleWithoutLives)
    {
        Console.WriteLine("{0} has posted {1} messages",
            individual.name, individual.numberOfPosts);
    }
}

Now, we've got an array of the most active CPians by message count and their articles. In our query, we specify that we only want those CPians with more than 15000 posts…but the select clause is different. Since we only want their name and their message count, not the number of articles they've posted, we just specify those two fields. This is a new feature of C# 3.0 called Anonymous Types (what's with all the anonymity in .NET now? Good grief!). Usually we only want certain fields from the collections we query, so this is a nice, easy way to query out just those fields. But, you say, what is that type called? Well, the CLR assigns it a name. It's probably something horribly unpronounceable too. But just accept the fact that it's a new type and has just those fields you asked for. Let's jazz up the sample a little and include some new operators: groupby and orderby.

// Group the people with really large post counts
var peopleWithoutLives = from poster in mostActive
                         group poster by (poster.numberOfPosts / 5000) into postGroup
                         orderby postGroup.Key descending
                         select postGroup;

// Print each person out in their respective group
Console.WriteLine("Posters by group");
foreach (var group in peopleWithoutLives)
{
    Console.WriteLine("{0}-{1}", (group.Key + 1) * 5000, group.Key * 5000);
    foreach (var person in group.Group)
    {
        Console.WriteLine("\t{0}", person.name);
    }
}

So we see here that we've got the ability to group people into categories by a certain criterion: the Key. In this case, our criterion is the number of posts they've made divided by 5000, so we can see who fits into each 5000-post block. The value of the criteria expression is then stored in the group's Key field. The difference between this query and the others is the return value.
This query returns groups, which then contain the Poster items. Pretty nifty, eh? Well, there's a quick look at LINQ. In summary, we looked at the current, rather sad, state of today's data access story. Then we looked at how LINQ and the new language features in C# 3.0 solve these issues by giving us a consistent set of Standard Query Operators that we can use to query any collection that implements IEnumerable<T>. In this article, we only focused on in-memory collections of data in order to avoid the confusion that most people have when mixing LINQ with DLinq and XLinq, but rest assured that there's a way to access relational and XML data with LINQ. Otherwise there wouldn't be much point, now, would there? Furthermore, because LINQ is just a set of methods that adhere to the naming conventions for the Standard Query Operators, anybody can implement their own LINQ-based collections for accessing any other type of data. For instance, the WinFS team is going to be making their product LINQ-enabled. If you're as totally stoked about LINQ as I am, and want to read more about it, I'd recommend heading over to the LINQ Preview Site. There, you can download the LINQ preview package which integrates into Visual Studio 2005 Beta 2 to provide the new LINQ features and you can read more about DLinq and XLinq and the new C# 3.0.
http://www.codeproject.com/Articles/11715/A-Look-at-LINQ?fid=217911&df=90&mpp=10&sort=Position&spc=None&tid=1541259
Okay, last update. I finally got everything to work. It turns out the problems that I had earlier with f2py were due to intel's -ipo flag. So the only place this flag works is with the C++ code, not fortran or c. Also, I forgot to mention -- the qhull_a.h header has a workaround for some aspect of intel's compiler that is no longer needed and in fact causes an error. It's for a macro that simply suppresses unused variable warnings. In my opinion, it could be removed, as it's only used two places, and scipy spits out enough warnings that that is hardly an issue. Thus my change was around line 102 in qhull_a.h. Replace

#if defined(__INTEL_COMPILER) && !defined(QHULL_OS_WIN)
template <typename T>
inline void qhullUnused(T &x) { (void)x; }
#  define QHULL_UNUSED(x) qhullUnused(x);
#else
#  define QHULL_UNUSED(x) (void)x;
#endif

with

#define QHULL_UNUSED(x)

Also, I could still not get the CloughTocher2DInterpolator to not segfault. Thus I had to disable it by raising an exception in the init method. With this in place, everything compiles and the unit tests pretty much all run, with 5 failures mostly due to numerical accuracy stuff and 9 errors due to the interpolator. In summary, my final environment variables that give the flags for compiling stuff are:

export FLAGS='-xHOST -static -fPIC -g -fltconsistency'
export CFLAGS="$FLAGS -O2 -fno-alias"
export CPPFLAGS="$FLAGS -fno-alias -ipo -O3"
export CXXFLAGS="$CPPFLAGS"
export FFLAGS="$FLAGS -O3"
export F77FLAGS="$FFLAGS"
export F90FLAGS="$FFLAGS"
export LDFLAGS="-xHOST -O1 -openmp -lpthread -fPIC"

And the arguments given to the fortran compiler in fcompiler/intel.py are:

compiler_opt_flags = '-static -xHOST -fPIC -DMKL_LP64 -mkl -g -O3'

I'd be happy to answer any more questions about the process as needed. Now, back to my real work.

-- Hoyt

++++++++++++++++++++++++++++++++++++++++++++++++
+ Hoyt Koepke
+ University of Washington Department of Statistics
+ hoytak at gmail.com
++++++++++++++++++++++++++++++++++++++++++
https://mail.python.org/pipermail/numpy-discussion/2011-March/055640.html
WSGI Host Service

Crossbar.io is able to host WSGI-based Python applications, such as Flask, Pyramid or Django. This allows whole systems to be built and run from Crossbar.io, where classic Web parts are served from the established Web frameworks, while the reactive parts of the application run as WAMP components. The WSGI Web application runs on a pool of worker threads, unmodified and, as with all WSGI applications, in a synchronous, blocking mode. The WSGI application cannot directly interact with the WAMP router, due to the difference in synchronous versus asynchronous operation. However, full bidirectional WAMP integration can be achieved using the HTTP Bridge.

Configuration

To configure a WSGI Web service, attach a dictionary element to a path in your Web transport.

Example

See here for a complete example. Here is a minimal example using Flask. The overall files involved are:

myapp.py
templates/index.html
.crossbar/config.json

Create a file myapp.py with your Flask application object:

from flask import Flask, render_template

##
## Our WSGI application .. in this case Flask based
##
app = Flask(__name__)

@app.route('/')
def page_home():
    return render_template('index.html', message="Hello from Crossbar.io")

Create a Jinja template file templates/index.html (note the templates subfolder):

<!DOCTYPE html>
<html>
  <body>
    <h1>{{ message }}</h1>
  </body>
</html>

Add a Web Transport with a WSGI Host Service on a subpath within your node configuration:

{
  "controller": {},
  "workers": [
    {
      "type": "router",
      "options": {
        "pythonpath": [".."]
      },
      "transports": [
        {
          "type": "web",
          "endpoint": {
            "type": "tcp",
            "port": 8080
          },
          "paths": {
            "/": {
              "type": "wsgi",
              "module": "myapp",
              "object": "app"
            },
            "ws": {
              "type": "websocket"
            }
          }
        }
      ]
    }
  ]
}
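Flask's app object is itself just a WSGI callable. For illustration (a generic WSGI sketch, not Crossbar-specific code), a framework-free myapp.py that the same "wsgi" path service could host — the configuration only needs the module name ("myapp") and object name ("app") — might look like this:

```python
# A minimal WSGI application: any callable with this signature works.
def app(environ, start_response):
    body = b"<h1>Hello from Crossbar.io</h1>"
    status = "200 OK"
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ]
    start_response(status, headers)
    # WSGI applications return an iterable of byte strings.
    return [body]
```

Since the worker pool calls the application synchronously, the callable may block freely — which is also exactly why it cannot talk to the WAMP router directly.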
https://crossbar.io/docs/WSGI-Host-Service/
03 October 2012 14:37 [Source: ICIS news] LONDON (ICIS)--Crude oil futures extended losses on Wednesday, pressured by downbeat economic data from China and the eurozone. By 12:50 GMT, the front-month November ICE Brent contract had touched an intra-day low of $109.25/bbl, a loss of $2.32/bbl from the settlement on Tuesday. The contract then recuperated some of its losses to trade around $109.65/bbl. At the same time, the front-month November NYMEX WTI contract was trading around $90.55/bbl, having touched an intra-day low of $90.31/bbl, a loss of $1.58/bbl. China's purchasing managers index (PMI) for the service sector weakened in September to 53.70 from 56.30 in the previous month, suggesting the country's demand for oil could weaken. In the eurozone, data showed the Markit eurozone composite PMI edged lower in September to 46.10, from 46.30 in
http://www.icis.com/Articles/2012/10/03/9600809/crude-weakens-further-on-weak-china-eurozone-economic-data.html
Method Overriding in Java means a subclass (which uses the extends keyword) provides its own implementation of a method already defined in its superclass. In Overriding, the subclass method and the superclass method must have the same name and the same parameters. Method Overriding in Java is used so that a subclass can implement a parent class method and then modify its behavior as needed. While using Method Overriding, the following points should be kept in mind:

The overriding method must have the same name, parameter list, and return type as the method in the superclass.
The access modifier of the overriding method cannot be more restrictive than that of the overridden method.
Methods declared static or final cannot be overridden.

Example of Method Overriding in Java:

package Overriding;

class A {
    public void call() {
        System.out.println("Print of A ");
    }
}

class AB extends A {
    @Override
    public void call() {
        System.out.println("Print of AB ");
    }
}

class AC extends A {
    @Override
    public void call() {
        System.out.println("Print of AC ");
    }
}

public class Example2 {
    public static void main(String[] args) {
        A a = new A();
        AB ab = new AB();
        AC ac = new AC();
        //a.call();
        ab.call();
        ac.call();
    }
}

Output:

Print of AB
Print of AC
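The point of overriding is that the call is resolved at run time from the object's actual class, not from the reference type (dynamic dispatch). A small illustrative sketch — the class names here are made up for the example:

```java
class Shape {
    String describe() {
        return "generic shape";
    }
}

class Circle extends Shape {
    @Override
    String describe() {
        return "circle"; // overrides the superclass version
    }
}

public class DispatchDemo {
    public static void main(String[] args) {
        Shape s = new Circle();           // reference type Shape, object type Circle
        System.out.println(s.describe()); // the Circle version runs, not the Shape one
    }
}
```

Even though the variable is declared as Shape, the overridden Circle method is invoked.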
http://roseindia.net/java/javatutorial/method-overriding-in-java.shtml
Pneuma

The vital spirit, soul, or creative force of a person. "In Stoic philosophy, the pneuma penetrates all things, holding things together." Wikipedia

Basically, yet another server framework that provides a middleware architecture and the possibility to write MVC backend applications. Fast and very simple. Kickoff in a minute. If you have used expressjs before, then this framework will not be hard for you to understand.

Installation & setup

Put the dart_pneuma package into your pubspec.yaml dependencies section and run pub get. Import the dependency inside your main file:

import 'package:pneuma/pneuma.dart';

void main() {
  Pneuma app = Pneuma();
  app.start();
}

This will start a basic HTTP server on the default host and port, 127.0.0.1:8080. You can either provide host and port named parameters to the constructor or use the environment variables IP and PORT.

Usage

Pneuma can be used in three ways:

- Writing routed handlers
- Using middlewares
- Writing controllers with route maps

Writing routed handlers

The idea is the same as in nodejs's expressjs library. You can just map paths to a specific handler which will process the request or pass it forward by calling the next callback.

import 'package:pneuma/pneuma.dart';

final RegExp allRoutes = RegExp('.*');

void main() {
  Pneuma app = Pneuma()
    ..get('/user', (req, res, next) {
      res.send('Hello user');
    })
    .post('/user', (req, res, next) async {
      dynamic body = await req.body;
      print(body);
      res.send('User has been updated');
    })
    .match(allRoutes, (req, res, next) {
      res.send('Not found');
    });

  app.start();
}

Using middlewares

The Middleware abstract class should be extended to create your middleware.
You will need to override the run method, which receives Request and Response instances as arguments and should return a Future of either null or the next Middleware, usually accessible via the this.next property, as Middleware is a LinkedListEntity.

import 'package:pneuma/pneuma.dart';

class LogMiddleware extends Middleware {
  @override
  Future<Middleware> run(Request req, Response res) {
    DateTime start = DateTime.now();
    res.done.then((_res) {
      DateTime sent = DateTime.now();
      String stamp = sent.toIso8601String();
      double timeTaken = (sent.millisecondsSinceEpoch - start.millisecondsSinceEpoch) / 1000;
      print('[$stamp]: ${req.method.name} ${_res.statusCode} ${req.uri.toString()} took ${timeTaken} sec.');
    });
    return new Future.value(this.next);
  }
}

void main() {
  Pneuma app = Pneuma()
    ..use(LogMiddleware())
    ..get('/user', (req, res, next) {
      res.send('Hello user');
    });

  app.start();
}

Writing controllers with route map

The once popular and still widely used MVC architectural pattern is very useful for building large-scale applications. Extended from Middleware, the Controller class can also be extended to let you map application routes to specific action methods, which should process the requests as endpoints of the app.

class UserController extends Controller {
  @override
  get routeMap => {
    RegExp(r'^\/user'): {
      RequestMethod.GET: indexAction,
    },
  };

  void indexAction(Request req, Response res) {
    res.send('Index Page');
  }
}

void main() {
  Pneuma app = Pneuma()
    ..use(UserController());

  app.start();
}

Plans

routeMap is a Map which should be defined in order for actions to be mapped to the specified paths. The next step for the controllers will be defining annotations to mark actions for a specific route in a friendlier manner.

class UserController extends Controller {
  @Route(r'^\/user')
  void indexAction(Request req, Response res) {
    res.send('Index Page');
  }
}
https://pub.dev/documentation/pneuma/latest/index.html
In the Multi-stage docker build of Haskell webapp blog post I briefly mentioned data-files. They are problematic. A simpler way is to use e.g. file-embed-lzma or similar functionality to embed data into the final binary. You can also embed secret data if you first encrypt it. This reduces the pain when dealing with (large) secrets. I personally favor configuration (of running Docker containers) through environment variables. Injecting extra data into containers is inelegant: that's another way to "configure" a running container, when one would be enough. In this blog post, I'll show that dealing with encrypted data in Haskell is not too complicated. The code is in the same repository as the previous post. This post is based on Tutorial: AES Encryption and Decryption with OpenSSL, but is updated and adapted for Haskell. To encrypt a plaintext using AES with OpenSSL, the enc command is used. The following command will prompt you for a password, encrypt a file called plaintext.txt and Base64 encode the output. The output will be written to encrypted.txt.

openssl enc -aes-256-cbc -salt -pbkdf2 -iter 100000 -base64 -md sha1 -in plaintext.txt -out encrypted.txt

This will result in a different output each time it is run, because a different (random) salt is used. The salt is written as part of the output, and we will read it back in the next section. I used HaskellCurry as a password, and placed an encrypted file in the repository. Note that we use the -pbkdf2 flag. It's available since OpenSSL 1.1.1, which is available in Ubuntu 18.04 at the time of writing. Update your systems! We use 100000 iterations. The SHA1 digest is chosen because pkcs5_pbkdf2_hmac_sha1 exists directly in HsOpenSSL. We will use it to derive the key and IV from a password in Haskell. Alternatively, you could use the -p flag, so openssl prints the used Key and IV, and provide these to the running service.
To decrypt the file on the command line, we'll use the -d option:

openssl enc -aes-256-cbc -salt -pbkdf2 -iter 100000 -base64 -md sha1 -d -in encrypted.txt

This command is useful to check "what's there". Next, the Haskell version. To decrypt the output of an AES encryption (aes-256-cbc) we will use the HsOpenSSL library. Unlike on the command line, each step must be explicitly performed. Luckily, it's a lot nicer than using C. There are 6 steps: embed the encrypted file, decode the Base64, extract the salt, read the password from the environment, derive the key and IV, and decrypt.

To embed the file we use Template Haskell, embedByteString from the file-embed-lzma library.

{-# LANGUAGE TemplateHaskell #-}
import Data.ByteString (ByteString)
import FileEmbedLzma (embedByteString)

encrypted :: ByteString
encrypted = $(embedByteString "encrypted.txt")

Decoding Base64 is a one-liner in Haskell. We use decodeLenient because we are quite sure the input is valid.

import Data.ByteString.Base64 (decodeLenient)

encrypted' :: ByteString
encrypted' = decodeLenient encrypted

Note: HsOpenSSL can also handle Base64, but doesn't seem to provide a lenient variant. HsOpenSSL throws exceptions on errors. Once we have decoded the ciphertext, we can read the salt. The salt is identified by the 8 byte header (Salted__), followed by the 8 byte salt. We start by ensuring the header exists, and then we extract the following 8 bytes:

extract
    :: ByteString     -- ^ password
    -> ByteString     -- ^ encrypted data
    -> IO ByteString  -- ^ decrypted data
extract password bs0 = do
    when (BS.length bs0 < 16) $ fail "Too small input"
    let (magic, bs1) = BS.splitAt 8 bs0
        (salt, enc)  = BS.splitAt 8 bs1
    when (magic /= "Salted__") $ fail "No Salted__ header"
    ...

Probably you have some setup to extract configuration from environment variables. The following is a very simple way, which is enough for us. We use the unix package, and System.Posix.Env.ByteString.getEnv to get an environment variable as a ByteString directly. The program will run in Docker on Linux: depending on unix is not a problem.
{-# LANGUAGE OverloadedStrings #-}
import System.Posix.Env.ByteString (getEnv)
import OpenSSL (withOpenSSL)

main :: IO ()
main = withOpenSSL $ do
    password <- getEnv "PASSWORD" >>= maybe (fail "PASSWORD not set") return
    ...

We also initialize the OpenSSL library using withOpenSSL. Once we have extracted the salt, we can use the salt and password to generate the Key and Initialization Vector (IV). To determine the Key and IV from the password we use the pkcs5_pbkdf2_hmac_sha1 function. PBKDF2 (Password-Based Key Derivation Function 2) is a key derivation function. We (like openssl) derive both the key and IV simultaneously:

import OpenSSL.EVP.Digest (pkcs5_pbkdf2_hmac_sha1)

iters :: Int
iters = 100000

    ...
    let (key, iv) = BS.splitAt 32 $ pkcs5_pbkdf2_hmac_sha1 password salt iters 48
    ...

With the Key and IV computed, and the ciphertext decoded from Base64, we are now ready to decrypt the message.

import OpenSSL.EVP.Cipher (getCipherByName, CryptoMode(Decrypt), cipherBS)

    ...
    cipher <- getCipherByName "aes-256-cbc" >>= maybe (fail "no cipher") return
    plain  <- cipherBS cipher key iv Decrypt enc
    ...

In this post we embedded an encrypted file into a Haskell application, which is then decrypted at run time. A complete copy of the code is in the same repository, and the changes done for this post are visible in a pull request.
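The key/IV derivation can be cross-checked outside Haskell. Here is a sketch using only Python's standard library; the password HaskellCurry and the iteration count match the post, but the salt below is a made-up placeholder, since the real one is read from the Salted__ header at run time:

```python
import hashlib

password = b"HaskellCurry"
salt = b"\x01\x02\x03\x04\x05\x06\x07\x08"  # placeholder: really the 8 bytes after "Salted__"
iterations = 100000

# Same derivation as pkcs5_pbkdf2_hmac_sha1: 48 bytes = 32-byte key + 16-byte IV.
key_iv = hashlib.pbkdf2_hmac("sha1", password, salt, iterations, dklen=48)
key, iv = key_iv[:32], key_iv[32:]

print(key.hex())
print(iv.hex())
```

For a given salt, this 32/16 split should match what `openssl enc -aes-256-cbc -pbkdf2 -iter 100000 -md sha1 -P` prints as Key and IV.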
https://oleg.fi/gists/posts/2019-07-20-embedding-secret-data-into-docker-images.html
crazy guy on the airplane

This is a recursion problem. Let's call P(M) the probability that the last passenger gets his assigned seat, given a 'crazy guy' situation of size 'M', i.e., with M passengers and M seats. We want P(100). For a crazy guy ('CG') situation of size 'M', let P(M,n) denote the probability that the last passenger gets his assigned seat when the crazy guy ('CG') takes seat 'n'. Since CG's (equally likely) choices partition the outcome space, the final result is: P(M) = [P(M,2) + P(M,3) + ... + P(M, M-1)]/(M-1). (Note: P(M,M)=0)

How do we compute each P(M, n)? If CG chooses seat 'n' then passengers 2..(n-1) get their assigned seats, and passenger 'n' must randomly choose a new one. If he chooses seat 1 (probability = 1/(M+1-n)) then each passenger behind him - notably the last one! - gets his assigned seat. On the other hand, if passenger 'n' rejects seat 1 then the original scenario recurs on a smaller scale: passenger 'n' leads a new virtual 'crazy guy' situation of size (M-n+1). Thus, P(M, n) = 1/(M-n+1) + [(M-n)/(M-n+1)] * P(M-n+1)

This process is recursed until the last passenger is seated. (Side note: if the last passenger doesn't get his assigned seat, then he ends up in seat 1; at least he gets to be first one off the plane! :)

Back to the original problem (100 passengers led by the crazy guy):

P(100) = [P(100, 2) + P(100,3) + .. + P(100,99)]/99

where

P(100,2) = 1/99 + (98/99) * P(100,3)
P(100,3) = 1/98 + (97/98) * P(100,4)
...
P(100,98) = 1/3 + (2/3) * P(100,99)
P(100,99) = 1/2.

(someone with a stack deeper than mine can crunch this! :)

Note: Assigning passengers to sequential seats is equivalent to requiring that no passenger may sit until everyone ahead of him is seated. (hope I didn't screw this up!! :)

Chuck Boyer Sunday, August 8, 2004

The last guy will never get to sit on any seat except 1 or 100. This fact leads to the probability being 1/2. In fact this is true for any number of passengers more than one.
Aryabhatta Monday, August 9, 2004

Thanks for the comment! :)

"The last guy will never get to sit on any seat except 1 or 100." True.

"This fact leads to the probability being 1/2." Indeed there are two outcomes, but that doesn't necessarily mean they are equally likely.

"In fact this is true for any number of passengers more than one." Here are two counterexamples:

For 2 passengers: the crazy guy will certainly take seat 2, so there is zero probability that passenger 2 will get his assigned seat.

For 3 passengers: passenger 3 gets seat 3 only if the crazy guy takes seat 2 (p = 1/2) and passenger 2 takes seat 1 (p = 1/2). Thus passenger 3 gets his assigned seat with probability 1/4. The possible seating permutations are (3,2,1), (3,1,2), and (2,1,3); their respective probabilities are: P(3,2,1)=1/2, P(3,1,2)=1/4, and P(2,1,3)=1/4.

Chuck Boyer Tuesday, August 10, 2004

"The last guy sits in 1 or 100. This leads to probability being 1/2." I didn't say that this is because there are only 2 outcomes. I just said that this fact leads us to the probability being 1/2. Here is why:

For every combination in which the last guy sits in his own seat we have a combination in which the last guy sits in seat 1 and vice versa. Consider a combination in which the 100th guy sits in his own seat. Suppose person X is sitting at seat 1 because Y is sitting in X's seat. Consider the combination where Y is sitting at 100 and X is sitting in X and the 100th guy is sitting in 1. The cases when 1 sits in his own seat or where Y is 1 are easy to map. We can give a similar mapping in the converse case. That is why the probability is 1/2 and *not* because there are only 2 outcomes. I was just being lazy.

The crazy guy chooses a seat at random, so he could choose his own seat. That is how I interpreted it. I think you interpreted it as he chooses a seat apart from his own. So in the way I interpreted it, the probability will be 1/2 for 2 people, 3 people etc.

Aryabhatta Tuesday, August 10, 2004

Think in a simple way.
When the last person arrives, everything is shuffled: 100 seats, 1 vacant, and it is random. P(100th in 100) is 1/100 or 1%.

NK
Tuesday, August 10, 2004

"The crazy guy chooses a seat at random, so he could choose his own seat. That is how I interpreted it. I think you interpreted it as he chooses a seat apart from his own." Yes, we each considered a different problem even though we read the same 'requirements document'. I wonder what Humpty* really meant by "... will ignore the seat number on their ticket, ...". :)

* I think I'm with chuck's solution here....

Actually I made a little program to calculate the formula and get the exact number. It's taking awfully long because of the recursion; it's been 30 minutes and it's not done yet on my PIII 933 processor. Will update you with the number when it comes out :)

Christian Kamel
Wednesday, August 11, 2004

If what Aryabhata says is true, then with Chuck's interpretation the probability won't be 1/2. Will it?

Major
Thursday, August 12, 2004

Aryabhata's reasoning yields a simpler way to solve my interpretation of the problem. Our interpretations are related as follows: If the crazy guy takes seat F and the last passenger ends up in seat L then, for an M passenger problem, F is either in {1, 2, ..., M} (Aryabhata's interpretation), or in {2, 3, ..., M} (my interpretation). In either case L is in {1, M} and

P(L=M) = P(L=M|F=1)P(F=1) + P(L=M|F!=1)P(F!=1)

('!=' means 'is not equal to'; '|' means 'given that')

My interpretation computes P(L=M|F!=1). Aryabhata's interpretation says that the sum is 1/2. Setting P(L=M) to 1/2 yields:

1/2 = (1)(1/M) + P(L=M|F!=1)[(M-1)/M]

or

P(L=M|F!=1) = (1/2)*(M-2)/(M-1)

I hope Christian's calculations get something close to 49/99. :)

One extra comment, if I may... Aryabhata's solution hinges on: "For every combination in which the last guy sits in his own seat we have a combination in which the last guy sits in seat 1 and vice versa."
This reasoning leads to an answer of 1/2 because, for every such permutation pair, each half of the pair has the same probability - i.e., whenever a passenger finds his assigned seat occupied, he is as likely to choose seat 1 as he is to choose seat 100. This might have been obvious to some, but I had to think about it for a minute.

Follow-up question: On average, how many passengers get their assigned seat? ;)

Chuck Boyer
Thursday, August 12, 2004

I'm sorry, I left the thing running for 10 hours and it didn't reach a result. I'm sure the code is OK with no infinite loops or anything cuz I got results for up to 30 ppl.
For 5 ppl the probability worked out to 0.125 or 1/8
For 10 it's 0.05555
For 15 it's 0.03571
For 20 it's 0.026316
Do these numbers ring any bells? They don't for me... I am beginning to think there is gotta be a more elegant solution....

Christian Kamel
Friday, August 13, 2004

And here is the code in C++ in case anyone wants to take a look at it...

float SolveCrazyGuy(int iPassengerNumber)
{
    float fProbSoFar = 0;
    if (iPassengerNumber <= 2)
        ;
    else
    {
        for (int i = 2; i < iPassengerNumber; i++)
        {
            fProbSoFar += GetProb(iPassengerNumber, i);
        }
        fProbSoFar /= iPassengerNumber - 1;
    }
    return fProbSoFar;
}

float GetProb(int iPassengerNumber, int iCrazyGuySeat)
{
    if (iCrazyGuySeat == iPassengerNumber - 1)
        return 0.5;
    else
        return (1/(iPassengerNumber+1-iCrazyGuySeat)
              + ((iPassengerNumber-iCrazyGuySeat)/(iPassengerNumber-iCrazyGuySeat+1)
              * SolveCrazyGuy(iPassengerNumber-iCrazyGuySeat+1)));
}

While chin-scratching for a more elegant solution I think I may have found one. Let's try to find the probability that the last guy DOES NOT sit in his place, that is, seat 100 is already taken by anyone by the time he gets on the plane. If we get this P(100) we can easily get the other one as 1 - P. If seat 100 is already taken then it must have been taken by any of the previous 99 passengers. So the combined probability is P(1, 100) + P(2, 100) + P(3, 100) + ......
+ P(99, 100), where P(1, 100) is the probability of the first guy (the crazy guy) to sit in chair no. 100, P(2, 100) is the probability of the second guy sitting in chair 100, P(3, 100) is the probability of the 3rd guy sitting in chair 100, and so on.

The crazy guy would sit in the 100th chair only if he chooses it from among the other 99, so P(1, 100) = 1/99, since the crazy guy has 99 seats to choose from.
Now the second guy would only sit in chair 100 if the #1 guy (cz) sat in his chair and then he chose chair no. 100 to sit in, that is P(2, 100) = 1/99*1/98.
The 3rd guy has more cases: first guy sat in 2, 2nd guy sat in 3 and then he chose 100, or first guy sat in 3 and he chose 100, so P(3, 100) = 1/99 * 1/98 * 1/97 + 1/99 * 1/97.
The 4th guy's probability is P(4, 100) = 1/99*1/98*1/97*1/96 + 1/99*1/98*1/96 + 1/99*1/96.
In words that is... After writing it all down I'm wondering if this one is really more elegant :) Does anybody think this has a chance?

I've been flooding this thread with messages, sorry about that, but I realized a minor glitch in my previous calculations. The concept should be correct but the numbers should be:
P(1, 100) = 1/99
P(2, 100) = 1/99*1/99 (not 98, because guy #2 has all seats to choose from except his, so 99 seats)
Accordingly P(3, 100) = 1/99 * 1/99 * 1/98 + 1/99 * 1/98
and P(4, 100) = 1/99*1/99*1/98*1/97 + 1/99*1/99*1/97 + 1/99*1/97
So basically I just forgot one chair out of all the cases. And it may be worth mentioning that numbers from this method coincide nicely with the numbers from Chuck's method; I tested it for small cases of 2 to 5 people.

Christian, I admire your perseverance! :) If my alternative approach is right, though, I'd have expected the solution sequence you calculated to converge to 1/2. There may be a subtle error on one line of your 'SolveCrazyGuy' function.
Instead of
fProbSoFar /= iPassengerNumber-1;
try
fProbSoFar /= (iPassengerNumber-1);
I'm also thinking the number of iterations is some function of M factorial (M!), in which case this problem could take a *very* long time to compute. :) Not surprisingly, your proposed approach looks fine to me. ;)

Chuck Boyer
Saturday, August 14, 2004

Thanks Chuck :) Well, I'm afraid the numbers do not converge to 0.5; actually the number is already 1/8 for a plane of 5 people, and it keeps getting lower. I changed the line but the results are the same, so I guess operator precedence took care of this one; I like to always make sure though, so I'm keeping this change. I am still working on implementing my formula in code to see if we can get a result faster. I am expecting the new code will be of complexity O(n squared) instead of the O(n to the power n) we're looking at now. I've never actually written any useful code of this order of complexity; this should be the first. That is of course if you consider solving the puzzle useful :)

Christian Kamel
Saturday, August 14, 2004

Surprise surprise, I turned my solution to code and I got 0.947 something probability that the 100th guy sits in his own seat. Actually the results are the exact opposite of the earlier solution: the probability for the last guy to be seated in his own seat increases with the number of passengers in my solution. Chuck, your solution yields decreasing probability with increasing passenger number. I guess there is gotta be some mistake there..... And I've lost all hope that my code for your solution will give us a number; 5 hrs is not enough for it to compute the probability for 40 passengers. The other solution's (mine that is) code works like a charm though; the code is of O(n squared) complexity.

I still admire your perseverance, Christian!
;) FWIW, I've only encountered one occasion in which a recursive function was the clean way to solve a particular problem, and it rarely had to go more than 3 iterations deep. Even the canonical example of computing N! is usually done more easily by iteratively multiplying up from 1 than by recursion. In doing some manual calculations for low M, I did notice that my approach produced decreasing probabilities. So something's certainly wrong somewhere with one of the solutions. I think the crux is to be sure the sample space gets correctly defined and partitioned. At least I'll have something to think about during my morning runs. ;)

Chuck Boyer
Sunday, August 15, 2004

Christian, I think you might have overlooked something in implementing your approach. :( If your program follows the pattern you laid out above, I don't think your P(n,100) calculations account for all of the seating permutations when n > 3. Specifically, it looks like your expression for P(4,100) skipped one of the permutations that place guy 3 in seat 4. Here's the pattern, revisited: (Note: extended n-tuples, e.g. (100,1, ... ,2), represent seating permutations.)

P(1,100) = P(100, ... ,1) = 1/99
P(2,100) = P(100,1, ... ,2) = 1/99 * 1/99
P(3,100) = P(100,1,2, ... ,3) + P(100,2,1, ... ,3) = (1/99 * 1/99 * 1/98) + (1/99 * 1/98)
P(4,100) = P(100,2,3,1, ... ,4) + P(100,1,3,2, ... ,4) + P(100,1,2,3, ... ,4) + P(100,2,1,3, ... ,4)
         = (1/99 * 1/97) + (1/99 * 1/99 * 1/97) + (1/99 * 1/99 * 1/98 * 1/97) + (1/99 * 1/98 * 1/97)

Looking ahead, P(5,100) will have 7 permutations:
(100,2,3,4,1, ... ,5)
(100,1,3,4,2, ... ,5)
(100,1,2,4,3, ... ,5)
(100,2,1,4,3, ... ,5)
(100,1,2,3,4, ... ,5)
(100,2,1,3,4, ... ,5)
(100,2,3,1,4, ... ,5)

It looks to me like each P(n+1, 100) will have to account for 'n' more permutations than P(n,100). It wasn't clear to me that your approach was doing that.
Chuck Boyer Monday, August 16, 2004 I forgot one P(5,100) permutation; the (reordered) permutations for seats 2..5 are: 2 3 4 1 2 3 1 4 2 1 3 4 2 1 4 3 1 3 4 2 1 3 2 4 1 2 4 3 1 2 3 4 This renders my conjecture about the permutation count incorrect. Guess my brain was still hypoxic from running. ;) you're pretty much perseverant yourself :) Well I see your point , but I guess my code implementation already accounts for that, it seems like I just forgot to put it in when explaining the formula. So I'm wondering does that mean the numbers I get are correct? I tested on 5 ppl plane and I think my solution got 1/3 probability, while my code for yours gave 1/6. Trying the permutations by hand it seemed 1/3 is about right. Also I checked other forums, it seems many other people understood the "requirements document" as such that the crazy guy may end up on his own seat after all. I think this would be a minor tweak in my formula to start with P(1, 100) to be 1/100 instead of 1/99..... Christian Kamel Monday, August 16, 2004 Some think I'm obsessive. ;) We *are* bludgeoning a dead mouse with a sledgehammer, aren't we! ;) But it bugs me to miss a logic error and I frankly don't see a flaw in either approach. :/ I should reinstall my compiler and do some exploratory crunching of my own to look for clues and sharpen the edge a little. I shouldn't have confused the operator precedence of '/=' with that of '/' ! ;) Ya know, if the 'spec' had said the crazy guy ignored 'the number' on his ticket rather than 'the seat number' on his ticket, I might have leaned toward the other version of the problem. But it's also possible my understanding of 'ignore' might be warped. (I love that Humpty Dumpty dialog in TTLG! ;) Well, I've used up the rest of my spare time for the year on the puzzles here so I should probably quit scribbling and do something useful. ;) Nice puzzling with you, Christian! :) Still obsessing, but almost done... 
;) First, I erred in my original comment when writing the illustration for M = 100; it should have read thus:

P(100) = [P(100, 2) + P(100,3) + .. + P(100,99)]/99 where
P(100,2) = 1/99 + (98/99) * P(99)
P(100,3) = 1/98 + (97/98) * P(98)
...
P(100,98) = 1/3 + (2/3) * P(3)
P(100,99) = 1/2. (= P(2) )

~~~~~~~~

I enumerated the permutations for the 5-seat problem and used your approach to manually calculate their probabilities. This yielded P = 3/8 for the permutations with guy 5 in seat 5, while the remaining permutations added up to P = 5/8, as they should have. If you use the formula I got when melding Aryabhatta's result with mine, you get:

P(L=5 | F!=1) = 1/2 * (5-2)/(5-1) = 3/8

which matches the result obtained by enumeration; not a proof of correctness, but at least a small comfort. ;) I may repeat for M=4, and do the recursion manually to see how it goes. If I got things wrong, I want to know why. (what else does a geek have to spend his personal time on? ;)

Alrighty then.... ;) Well, I manually solved for M=3, M=4 and M=5 using all three approaches - enumeration, recursion and the formula. In each case, all 3 approaches gave the same answer: P(3) = 1/4; P(4) = 1/3; P(5) = 3/8. I have a little confidence that my approach wasn't too far afield and that the formula P(M) = 1/2 * (M-2)/(M-1) gives the correct result for this interpretation of the problem. FWIW, I found it easier to compute successive P(M) results "bottom up", using each P(k) to compute P(k+1) - like computing a factorial by iteration rather than literal recursion. :)

Proof by Induction:

2 seats on an airplane: Crazy guy gets on first, chooses a random seat.
1/2 chance he sat in his own seat
1/2 chance he sat in the last guy's seat
Chances the last guy sits in his own seat: 50%

3 seats on an airplane: Crazy guy gets on first, chooses a random seat.
1/3 chance he sat in his own seat
1/3 chance he sat in the last guy's seat
1/3 chance he sat in passenger 2's seat.
In this last case, it reduces to the 2-passenger scenario, which we already know is 50% Chances the last guy sits in his own seat: 50% .... 100 seats on an airplane: Crazy guy gets on first, chooses a random seat. 1/100 chance he sat in his own seat 1/100 chance he sat in the last guy's seat 98/100 chance he sat in another passenger's seat. In this last case, it reduces to the n-1 passenger scenario, which we have already determined has a chance of 50% Chances the last guy sits in his own seat: 50% ----- Another simple description: Any passenger who chooses a seat at random has an equal chance of choosing the seat of passenger 1 or 100. If they choose 1, then all remaining passengers can sit in their own seat. If they choose 100, then the last passenger will not be able to sit in their assigned seat. Any OTHER choice (one of the other open seats) simply defers the decision to a later point in time, where THAT deposed passenger will have to make the same random choice. Chance: 50% Brad Corbin Wednesday, August 18, 2004 Concerning the "can the crazy guy pick his own seat" debate: Technically, if he's ignoring the seat number on his ticket, he has to be able to pick his own seat. In other words, in order for him to NOT be able to pick his own seat, he'd have to know what his own seat is, and therefore not ignore it. We know that he ignores it however, and therefore should be able to pick it. That's my take. Zach Wily Tuesday, August 24, 2004 Good point, Zach! I saw it more as a matter of interpretation than debate. :) To me it hinges on whether crazy guy is ignoring the number or the seat. FWIW, I colloquially use "seat number" and "seat" synonymously when referring to a specific seat... e.g., "seat 3" and "seat number 3" mean the same thing to me, but I recognize that others may (and obviously do) use the language differently. If he's ignoring the seat, then to him that seat doesn't exist -- which was my (apparently weird ;) interpretation. 
In that case, the number tells him which seat to 'ignore'. ;) OTOH, if he's ignoring the number then what you said makes sense to me. I'm happy with either interpretation. When in Rome, ... ;)

Chuck Boyer
Tuesday, August 24, 2004

I threw together some C code to actually run the scenario millions of times and see how many times the last guy ends up in his seat. For 100 seats run 100M times, I keep getting the last guy in his seat 49.94% of the time. Here's the code:

#include <stdio.h>
#include <stdlib.h>   /* malloc, free, random, atoi, exit */
#include <string.h>   /* memset */

int run(int numseats)
{
    int *seats = (int *)malloc(sizeof(int) * numseats);
    memset(seats, 0, sizeof(int) * numseats);

    // special case crazy guy
    int seat = random() % numseats + 1;
    seats[seat - 1] = 1;

    int i;
    // continue with second dude
    for (i = 2; i <= numseats; i++) {
        if (seats[i - 1]) {
            // his seat is taken, randomly choose another
            int newseat;
            do {
                newseat = random() % numseats + 1;
            } while (seats[newseat - 1]);
            seats[newseat - 1] = i;
        } else {
            // sits in his own seat
            seats[i - 1] = i;
        }
    }

    // is the last guy in his own seat?
    int ret = (seats[numseats - 1] == numseats);
    free(seats);
    return ret;
}

int main(int argc, char *argv[])
{
    if (argc < 3) {
        printf("usage: %s number_of_seats iterations\n", argv[0]);
        exit(0);
    }
    int numseats = atoi(argv[1]);
    int iterations = atoi(argv[2]);

    srandomdev();  /* BSD/macOS only; use srandom(time(NULL)) elsewhere */

    int succeed = 0;
    int i;
    for (i = 0; i < iterations; i++) {
        if (run(numseats)) {
            succeed++;
        }
    }
    printf("last guy sat in last seat %.2f%% of the time\n",
           (float)succeed / iterations * 100);
    return 0;
}

Thanx for the code, Zach. It's now established that for your interpretation of the problem (the one most ppl understood and what probably was meant by the author) the probability is a constant at 0.5 regardless of the no. of ppl on the plane. I was anxious to look at results for the other interpretation though, the one Chuck and I understood from the puzzle when we first read it.
So I just replaced this line in the code of yours and ran it:

// special case crazy guy
// Let's comment this one out and add our interpretation
// int seat = rand() % numseats + 1;
int seat = (rand() % (numseats-1)) + 2;

This causes the crazy guy to always ignore his own seat. The results comply with earlier calculations but they don't seem to follow a specific trend: they grow fast until they hit 0.5, at which limit they keep growing but very slowly. For example at 100 passengers the probability was around 49.5, with 1000 passengers it was around 50.5, and with 10,000 passengers it's around 57%. I guess this is fair enough, and I think Chuck and I "misinterpreted" the puzzle, because obviously the other interpretation has the more elegant solution so it wins :) I am happily letting go of this one now :)

Christian Kamel
Tuesday, August 24, 2004

Now that we have settled on a common interpretation: Here is another way of looking at it which might be more convincing. Take any arrangement of the people which could occur. It is a permutation of (1,2,...,n) with the following properties:
1) There is at most one cycle of length more than 1.
2) If there is a cycle of length more than 1, then person 1 is in it.
Any permutation is a set of disjoint cycles. Start at a particular person and follow the chain till you get back. E.g. (3 4 2 1) is a permutation which has the cycle 1->4->2->3->1, which is of length 4 (read the arrow as 'goes to').
(3 2 1 5 4 6) is composed of cycles
1->3->1 of length 2
2->2 of length 1
4->5->4 of length 2
6->6 of length 1
Any permutation of (1,2,...,n) satisfying properties 1 and 2 is a valid seating arrangement. We are looking for the probability that a permutation has the cycle 100->100.

Aryabhatta
Wednesday, August 25, 2004

It is simple !!
when the 100th person boards in... he will either get the 100th seat empty or occupied... so probability is 1/2 :D

Akhil Gupta
Thursday, September 2, 2004

For the 1000th time this is mentioned here: having 2 possible solutions does not "automagically" make each solution's probability = 1/2 !! And just because this one's probability is really actually half doesn't mean your line of reasoning is right; there are plenty of other problems with 2 outcomes but with each outcome's probability different than 1/2!

Christian Kamel
Friday, September 3, 2004

I am not sure I get the .5 probability reasoning... The crazy guy can randomly sit in 100 different seats. Which means there is one and only one chance that he might sit in his own seat out of a 100 chances. So therefore the chance that the last guy sits in his own seat is just 1/100, i.e. 1%

SM
Monday, September 6, 2004

I think stating that the probability is 1/2 is not totally correct. All conditions are not equal. For example, the crazy chain of events occurs only if the first guy does not take his own seat. The first guy makes an independent decision and other people don't. So there is a 1/100th chance that the last guy gets his own seat (if the first guy sits at his own seat). In the other 99/100 cases there is a 50% chance, so the total probability is 99/200 + 1/100 = 50.5%

Krish
Thursday, September 16, 2004

Please ignroe my last mesange. I realsie that it was incorrect reasoning. The answer of 0. seems to fit the pattern for 2,3,4 passengers but I am still not convinced that it is 0.5 Someone suggested mathematical induction. That seems the best way to prove it. Thanks Krish

Krish
Saturday, September 18, 2004

Poor spelling in my last mail: Please ignore my last 2 messages. I realise that the reasoning was incorrect. The answer of 0.5 seems to fit the pattern for 2,3,4 passengers but I am still not convinced with any of the proofs offered for this result. Someone suggested mathematical induction.
That seems the best way to prove it.
Thanks
Krish

-----
"For the 1000th time this is mentioned here, having 2 possible solutions does not "automagically" make each solution's probability = 1/2 !!"
----

But the answer is 1:2 because at any one stage there are only two seats that matter: the 100th seat and the seat that should have been taken by an earlier passenger and hasn't. The odds of either being taken are equal and remain that way throughout. The original crazy guy has a 1:100 chance of taking his own seat and a 1:100 chance of taking seat 100. If he takes his own seat everything is over, just as it is over if he takes seat 100. If he takes neither, then the next guy that comes and finds his seat taken has a 1:n chance of taking the seat of a person sitting in the wrong place and a 1:n chance of taking seat 100.

Stephen Jones
Sunday, September 19, 2004

Formal proof by induction:

Theorem: For all numbers of passengers, n>=2, the chance of the final passenger getting his or her assigned seat is 50%.

Method: Prove for n=2, and prove that n implies n+1.

n=2: With only two seats, the first passenger can only choose his own seat or the seat of the final passenger. A 50% chance to choose his own seat means a 50% chance the final passenger will get his own seat.

n->n+1: Let us assume that for n passengers, the chance of the final passenger choosing his own seat is 50%. For n+1: The first passenger has n+1 seats to choose from: his own, the final passenger's, and the n-1 other seats. Probability of the first passenger randomly choosing each:
His own: 1/(n+1)
Final passenger's: 1/(n+1)
Other passenger's: (n-1)/(n+1)
But we know that if he sits in one of the other seats, then we are reduced to the problem with n passengers: we have n remaining seats, n remaining passengers, and one passenger is displaced by the first passenger and therefore must choose a seat at random.
So the chance of the final passenger sitting in his own seat is equal to:
1/(n+1) + 50% of (n-1)/(n+1) = (2+(n-1))/2(n+1) = (n+1)/2(n+1) = 1/2
So n implies n+1, QED.

Brad Corbin
Monday, September 20, 2004

Hi Brad Corbin, thanks for the proof, that sounds pretty solid. Mathematical induction is simple yet really powerful.

krish
Monday, September 20, 2004

We could find this out based on the probabilities that seat-100 will be empty after each guy takes a seat.
Prob that crazy guy does not take seat-100 = 99/100
Prob that guy#2 does not take seat-100 = 98/99
...
Prob that guy#98 does not take seat-100 = 2/3
Prob that guy#99 does not take seat-100 = 1/2
Prob that guy#100 gets his own seat = product of all the above probabilities = 1/100

Dhanju
Saturday, October 2, 2004

Regarding the FORMAL PROOF BY INDUCTION: There is a flaw in your logic. You said:
>But we know that if he sits in one of the other seats, then we are reduced to the problem with n passengers: We have n remaining seats, n remaining passengers, and one passenger is displaced by the first passenger and therefore must choose a seat at random.
The problem is you have only shown that it is 0.5 if the FIRST passenger is displaced. In this case the first few passengers can take their correct seats and some later passenger will be displaced. So the formulation is really more complicated than what you show. (I believe that it comes out correctly in the end, but I have not found a nice, clean, formal way to express the proof given this extra wrinkle. Can anyone else?)

rahs
Thursday, October 7, 2004

I had my class conduct trials of the scenario with 20 students and seats in a classroom. In 10 trials, the last student to enter the room got his correct seat 5 times. So I am convinced it is fairly close to 50% probability, though I can't prove it.

jones
Friday, October 8, 2004

50% doesn't seem logical to me. And at the same time it does.
There are 100 possible seatings (or instances of a person putting their butt in a chair). There is only really one seat we are worried about - the 100th passenger's; any other seat can be occupied by any other butt, no issues. So, what we need to calculate is the odds of each passenger sitting in the 100th passenger's seat (#100, let's guess). So the odds of the crazy guy sitting in seat #100 is 1/100. Assuming he doesn't sit in seat #100 (otherwise, the odds are 0), the odds of the second passenger sitting in seat #100 is 1/99 (choices being completely random). ...having gotten this far, I have changed my mind. What needs to be considered is the probability of EACH passenger having their own seat available, down to the last passenger (ie: two people have randomly chosen each OTHER's seats). So the odds of the second person having their seat available is 1/100 (the crazy guy might have chosen his own seat, after all). The odds of the 3rd person having his own seat available is 1/100 (it relies on the first condition being true) * 1/99, and so on. I think one of the folks earlier who remembers more of math than I do has arrived at a similar conclusion with incorrect arithmetic. The final answer should be: 1.0715E-158. It's been a long time since high school - someone let me know if there is a flaw in my logic?

80083r
Saturday, October 9, 2004

wait, now... those are the odds of EVERYONE being in the correct seat. d'oh!

There are two cases:
1) CG takes own seat, then passenger #100 gets her assigned seat, probability 1/100
2) CG takes other seat, then seating is random; probability that #100 gets her seat is 1/99
Answer: 1/100 + 1/99

rai
Wednesday, October 20, 2004

Why don't people read first !!! Reinvent the square wheel.

Wednesday, October 20, 2004

Crazy guy has a 1/100 chance of selecting his assigned seat. All other passengers fall in line. The Nth guy's chances hinge on the crazy guy's chances, making the Nth guy's chances of getting his own seat 1/100.
Also, think this way: Crazy guy does not pick his own seat; assume every subsequent passenger to board the plane picks the NEXT person's seat (#1 takes #2's seat, #2 takes #3's seat etc), so nobody is sitting in their own seat. Or even just assume complete chaos. After all is settled, there's 1 seat open out of 100 - the Nth guy's chances are 1/100 either way you look at it.

Randall Lynch
Wednesday, October 27, 2004
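Editor's note: for anyone who wants the exact numbers debated above without the overnight recursive runs, the recursion can be evaluated bottom-up (iteratively), as Chuck suggested. This is a sketch written for this summary in Java, not any poster's original code; it assumes the never-picks-seat-1 interpretation and reproduces both the hand-computed values (P(3)=1/4, P(4)=1/3, P(5)=3/8) and the closed form P(M) = (1/2)(M-2)/(M-1), which for M=100 gives 49/99, about 0.495:

```java
// Bottom-up evaluation of the thread's recursion (illustrative sketch,
// not any poster's original code).
public class CrazyGuy {

    // P[m] = probability the last passenger gets his seat in a size-m
    // "crazy guy" situation, under the interpretation where the crazy guy
    // never picks his own seat (seat 1).
    static double crazyProb(int M) {
        double[] P = new double[M + 1];
        P[2] = 0.0;                       // with 2 seats he must take seat 2
        for (int m = 3; m <= M; m++) {
            double sum = 0.0;
            for (int n = 2; n < m; n++) { // crazy guy takes seat n (n = m contributes 0)
                int k = m - n + 1;        // size of sub-problem passenger n now leads
                sum += 1.0 / k + ((double) (k - 1) / k) * P[k];
            }
            P[m] = sum / (m - 1);         // m-1 equally likely choices
        }
        return P[M];
    }

    public static void main(String[] args) {
        for (int m : new int[] {3, 4, 5, 100}) {
            System.out.printf("P(%d) = %.6f, closed form = %.6f%n",
                    m, crazyProb(m), 0.5 * (m - 2) / (m - 1));
        }
    }
}
```

Each P[m] depends only on smaller P[k], so the whole table for M = 100 fills in O(M^2) arithmetic operations - the same complexity Christian predicted for his summation approach.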
Details
- Type: Improvement
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 3.1
- Component/s: modules/analysis
- Labels: None
- Lucene Fields: New, Patch Available

Description

The ISOLatin1AccentFilter takes Unicode characters that have diacritical marks and replaces them with a version of that character with the diacritical mark removed. For example é becomes e. However, another equally valid way of representing an accented character in Unicode is to have the unaccented character followed by a non-spacing modifier character (like this: é). The ISOLatin1AccentFilter doesn't handle the accents in decomposed Unicode characters at all. Additionally, there are some instances where a word will contain what looks like an accented character that is actually considered to be a separate unaccented character, such as Ł, but which, to make searching easier, you want to fold onto the Latin-1 lookalike version L. The UnicodeNormalizationFilter can filter out accents and diacritical marks whether they occur as composed or decomposed characters; it can also handle cases where, as described above, characters that look like they have diacritics (but don't) are to be folded onto the letter that they look like (Ł -> L).

Issue Links
- is related to LUCENE-1390 add ASCIIFoldingFilter and deprecate ISOLatin1AccentFilter - Closed

Activity

backported to 3x, revision 941694.

Yes, as this contrib package is called "ICU". If you don't want to use ICU, don't use this contrib. You can always use ASCIIFoldingFilter, it will not get removed.

Very useful for unicode normalization/folding. But after trying this package in the nightly build I looked back at the patch and realized that it has a dependency on IBM ICU: import com.ibm.icu.text.Normalizer2; Is this intentional? Will it remain dependent?

Committed revision 936657. updated datafile.

attached is a modified patch (i will upload the new datafile too).
- applied ICU or Unicode copyright headers to any datafiles where I sourced from their data, and added a mention to NOTICE.txt to that effect.
- added some additional punctuation mappings to ensure it contains all ASCIIFoldingFilter foldings

As noted previously, there are 5 places where this disagrees with ASCIIFoldingFilter:
U+1E9B: LATIN SMALL LETTER LONG S WITH DOT ABOVE (should be s)
U+2033: DOUBLE PRIME (should be two single quotes)
U+2036: REVERSED DOUBLE PRIME (same as above)
U+2038: CARET (folds to CIRCUMFLEX ACCENT, which should be deleted as it's [:Diacritic:])
U+FF3E: FULLWIDTH CIRCUMFLEX ACCENT (same as above)

I plan to commit in a few days if no one objects.

By the way, I have been running this with the ASCIIFoldingFilter tests and ensuring it's a superset (e.g. we have at least all their mappings). But there are some bugs in ASCIIFoldingFilter that should be fixed. For example, U+1E9B (LATIN SMALL LETTER LONG S WITH DOT ABOVE): in Unicode this is canonically equivalent to U+017F (LONG S) U+0307 (COMBINING DOT ABOVE). ASCIIFoldingFilter folds U+1E9B (LONG S WITH DOT) to an F, but it folds U+017F (LONG S) to an S. Unicode defines this character as a compatibility equivalent to S anyway, but it's worse that ASCIIFoldingFilter is canonically inconsistent with itself.

attached is the binary file that goes in the resources/ directory. Although I provide the ant logic to regenerate this, it's kind of a pain because:
- you must download/compile ICU4c (version 4.4), there is no java gennorm2
- you must run this on a big-endian machine.

Attached is a patch that implements UTR#30 as a tailored unicode normalization form. Essentially it acts as a combined "Internationalized ASCIIFoldingFilter" + NFKC_CaseFold (Unicode Case Folding, Default Ignorable removal, and NFKC normalization). This is a nice alternative to just using ICUNormalizer2Filter in the case that you want "fuzzy matching" (e.g. ignore diacritical marks).
The patch is large because it contains all the source data files necessary for gennorm2 to regenerate the 41KB binary trie file... the java implementation is trivial.

OK! I think we have a good solution here! We can use ICU's Normalizer2 to implement this, by simply creating a custom normalization mapping. This way we can meet multiple use-cases, e.g. someone wants to remove diacritics, someone else doesn't. And we get solid unicode behavior and high performance to boot. So I will keep this issue open; I think the best solution is to take the accent-folding mappings here (or use the ones in AsciiFoldingFilter?) and create a .txt file of mappings, passing it to gennorm2 along with NFKC case fold mappings. This way we can implement this on top of LUCENE-2399, all compiled to an efficient binary form with no code. I'll take a shot at this once LUCENE-2399 is resolved.

I guess I brought this up because this is where you have several situations where case folding and normalization interact, e.g. applying the FC_NFKC set when case folding so that later NFK[CD] normalization will be closed. I know this is supposed to solve various ways the YPOGEGRAMMENI can be implemented, but I forget the details... This is why I think the general purpose contribution should be case folding, normalization, and the stuff like this (FC_NFKC set) to make sure they work together... If you later want to apply something more specialized like StringPrep, you need this logic anyway, see (especially section 3.2).

The GreekLowerCaseFilter appears to only do some of the work and only works on composed characters. My question is not whether I'd find the filter useful, but whether it'd be a useful addition to Lucene. I have a terrible habit of not being exact or using the proper terms. Shame on me. I meant that the latter strips all other marks. But this is going to depend on the user, and I think every person will need their own; they can use CharFilter or other ways of defining these tables.
If there is no general-purpose contribution, then it should not be part of Lucene and I'll have my own. When I do work them up, I'll create an issue or two and attach the results. If they are deemed useful then they can be added to Lucene, otherwise ignored.

But this is going to depend on the user, and I think every person will need their own; they can use CharFilter or other ways of defining these tables.

I also am dubious about a general-purpose folding filter that maps letters to their ASCII look-alikes, and agree that folding is language dependent. Many Americans are illiterate when it comes to text with diacritics and NSMs. Personally I'm nearly illiterate. I think having prominent folding filters without adequate explanation about their pitfalls or usefulness may lead illiterates into a false sense of sufficiency. If it makes sense to have a filter for TR39, I think that should be a separate issue. If that's what this issue is all about, then its description should be modified. I think this should otherwise be closed as a bad idea.

Robert Muir, would it make sense to have a Greek filter that strips diacritics? My thought is that if the letter is Greek then the diacritics would be removed, but otherwise it would not. Similar question for Hebrew: I see value in two filters, one would strip cantillation and the other, vowel points. Or would it be better to have one that can do both depending on flags?

Hi Ken, such functionality does exist, although it is new and I think still changing (you are talking about StringPrep/IDN/etc?). If a filter for this is desired, we can do it with ICU, though I think it's relatively new (probably not optimized, only works on String, etc.). I still think even this is stupid, because Unicode encodes characters, not glyphs.

Just to make sure this point doesn't get lost in the discussion over normalization - the issue of "visual normalization" is one that I think ISOLatin1AccentFilter originally was trying to address.
Specifically, how to fold together forms of letters that a user, when typing, might consider equivalent. This is indeed language specific, and re-implementing support that's already in ICU4J is clearly a Bad Idea. I think there's value in a general normalizer that implements the Unicode Consortium's algorithm/data for normalization of int'l domain names, as this is intended to avoid visual spoofing of domain names. Don't know/haven't tracked if or when this is going into ICU4J. But (similar to ICU generic sorting) it provides a useful locale-agnostic approach that would work well enough for most Lucene use cases.

The big picture here, and in all these other duplicated normalization issues across JIRA, is related to the outdated Unicode support in the JDK. This issue speaks of removing diacritical marks / NSMs, but the underlying issue is missing Unicode normalization, duplicated here (incorrectly named): LUCENE-1215, and also here: LUCENE-1488 (disclaimer: my impl).

Speaking for the accent removal: in truth I do not think we should be simply removing NSMs, because in most cases they are there for a reason. For example, they are diacritics in a lot of European languages, but for many eastern languages they are the actual vowels (i.e. all the Indic scripts). We need to separate the issue of missing Unicode normalization (which is clearly something Lucene needs) from the issue of removing diacritics (which is language-specific, and doing it based on Unicode properties is inappropriate).

Finally, just normalizing Unicode in Lucene by itself is not very useful, because there is a careful interaction with other processes and attention needs to be paid to the order in which filters are run. For example, its interaction with case folding can be a bit tricky. If you are interested in this issue, I urge you to read the javadocs writeup I placed in the ICUNormalizationFilter in LUCENE-1488.

Mr Muir, can you take a look at this? Offer anything over the ASCIIFoldingFilter?
If not, we should close; if so, what do you recommend?

Hi Robert, So given that you and the Unicode consortium seem to be working on the same problem (normalizing visually similar characters), how similar are your tables to the ones that have been developed to deter spoofing of int'l domain names? – Ken

The UnicodeNormalizationFilter does use the decompose normalization portion of the ICU4J library as a starting point. However, even with that, there are several instances where the normalizer code does not decompose a character into an unaccented character and an accent mark, a notable one being ( Ł -> L ). So the UnicodeNormalizationFilter starts with the approach you outlined - perform a decompose normalization followed by discarding all non-spacing modifier characters - and then can go on from there to further normalize the data by folding the additional characters that aren't handled by the decompose normalization onto their Latin-1 look-alikes. -Robert

Unit tests are the best way to document the many ways this thing can work.

Gets a judges' score of 11 from me. Gold for Lance for Quote of the Day.

Hi Robert, FWIW, the issues being discussed here are very similar to those covered by the Unicode Security Considerations technical report #36, and associated data found in the Unicode Security Mechanisms technical report #39. The fundamental issue for int'l domain name spoofing is detecting when two sequences of Unicode code points will render as similar glyphs... which is basically the same issue you're trying to address here, so that when you search for something you'll find all terms that "look" similar. So for a more complete (though undoubtedly slower & bigger) solution, I'd suggest using ICU4J to do an NFKD normalization, then toss any combining/spacing marks, lower-case the result, and finally apply mappings using the data tables found in the technical report #39 referenced above. – Ken
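Ken's recipe above (NFKD-normalize, toss the combining marks, then lower-case) can be sketched with nothing but Python's standard library. This is an illustrative approximation, not the UnicodeNormalizationFilter itself, and the `fold` helper name is invented here; note that it also exhibits the ( Ł -> L ) gap Robert mentions, because Ł has no Unicode decomposition:

```python
import unicodedata

def fold(text: str) -> str:
    """NFKD-decompose, drop combining marks, then case-fold."""
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return stripped.casefold()

print(fold("Café"))  # -> "cafe"
print(fold("Łódź"))  # -> "łodz"  (Ł survives: it never decomposes)
```

This is exactly why the thread concludes that decomposition alone is not enough and the remaining look-alikes need an explicit mapping table.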
Hi Robert, My comments below assume you're interested in having this code hosted in the Lucene source repository - please disregard if that's not the case. Have you seen the HowToContribute page on the Lucene wiki? It outlines some of the basics concerning code submissions. A couple of things I noticed that need to be addressed before the code will be accepted:

- Tab characters should be converted to spaces
- Indentation increment should be two spaces
- Test(s) should be moved from the UnicodeNormalizationFilterFactory.main() method into standalone class(es) that extend LuceneTestCase
- More/more explicit javadocs - for example, you should describe the set of provided transformations (e.g. Cyrillic diacritic stripping is included)
- Solr is a separate code base, so the UnicodeNormalizationFilterFactory should be moved to a Solr JIRA issue
- Because it has a dependency on the ICU jar, this contribution will have to live in the contrib/ area - the Java package names should be adjusted accordingly
- The submission should be repackaged as a patch (instructions available on the above-linked wiki page)

Random related comment (just because this issue seemed like a good place to put it): people may also want to consider constructing a Filter based on the substitution tables from the Perl Text::Unidecode module... I have no idea how its behavior compares to the UnicodeNormalizationFilter, just that it seems to have similar goals.

Java 6 contains a class named java.text.Normalizer that is able to perform Unicode normalization; earlier versions of Java do not have that class, and therefore need the code in this jar (which is a subset of the ICU4J library) to be able to perform Unicode normalization. The UnicodeNormalizationFilter can work with either the Java 6 class java.text.Normalizer or the class com.ibm.icu.text.Normalizer in the jar here.

Source code for UnicodeNormalizationFilter

Bulk close for 3.1
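The canonical-equivalence point raised earlier in this thread - that U+1E9B decomposes to U+017F + U+0307 and is compatibility-equivalent to a plain s - can be checked directly against the standard Unicode data that ships with Python; a folding filter that sends U+1E9B and U+017F to different letters is therefore canonically inconsistent with itself:

```python
import unicodedata

long_s_with_dot = "\u1E9B"  # LATIN SMALL LETTER LONG S WITH DOT ABOVE

# Canonical decomposition: LONG S (U+017F) followed by COMBINING DOT ABOVE (U+0307)
print(unicodedata.normalize("NFD", long_s_with_dot) == "\u017F\u0307")  # True

# Compatibility decomposition maps the LONG S on to a plain 's'
print(unicodedata.normalize("NFKD", long_s_with_dot) == "s\u0307")      # True
```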
https://issues.apache.org/jira/browse/LUCENE-1343?focusedCommentId=13013289&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
JDOM 1.0 (Beta 7)

JDOM 1.0 (beta 7), detailed in Chapter 7 and Chapter 8, provides a complete view of an XML document within a tree model. Although this model is similar to DOM, it is not as rigid a representation; this allows the content of an Element, for example, to be set directly, instead of setting the value of the child of that Element. Additionally, JDOM provides concrete classes rather than interfaces, allowing instantiation of objects directly rather than through the use of a factory. SAX and DOM are only used in JDOM for the construction of a JDOM Document object from existing XML data, and are detailed in the org.jdom.input package.

Package: org.jdom

This package contains the core classes for JDOM 1.0[31]. These consist of XML objects modeled in Java and a set of Exceptions that can be thrown when errors occur.[32]

Attribute

Attribute defines behavior for an XML attribute, modeled in Java. Methods allow the user to obtain the value of the attribute as well as namespace information about the attribute. An instance can be created with the name of the attribute and its value, or the Namespace and local name, as well as the value, of the attribute. Several convenience methods are also provided for automatic data conversion of the attribute's value.

    public class Attribute {
        public Attribute(String name, String value);
        public Attribute(String name, String value, Namespace ns);
        public Element getParent( );
        public String getName( );
        public Namespace getNamespace( );
        public ...
https://www.oreilly.com/library/view/java-and-xml/0596001975/apas04.html
    #include <mpi.h>
    int MPI_Init_thread(int *argc, char ***argv, int required, int *provided)

    INCLUDE 'mpif.h'
    MPI_INIT_THREAD(REQUIRED, PROVIDED, IERROR)
        INTEGER REQUIRED, PROVIDED, IERROR

    #include <mpi.h>
    int MPI::Init_thread(int& argc, char**& argv, int required)
    int MPI::Init_thread(int required)

MPI_Init_thread, as compared to MPI_Init, has a provision to request a certain level of thread support in required:

    MPI_THREAD_SINGLE      Only one thread will execute.
    MPI_THREAD_FUNNELED    If the process is multithreaded, only the thread
                           that called MPI_Init_thread will make MPI calls.
    MPI_THREAD_SERIALIZED  If the process is multithreaded, only one thread
                           will make MPI library calls at one time.
    MPI_THREAD_MULTIPLE    If the process is multithreaded, multiple threads
                           may call MPI at once with no restrictions.

The level of thread support available to the program is set in provided, except in C++, where it is the return value of the function. In Open MPI, the value is dependent on how the library was configured and built. Note that there is no guarantee that provided will be greater than or equal to required. Also note that calling MPI_Init_thread with a required value of MPI_THREAD_SINGLE is equivalent to calling MPI_Init.

All MPI programs must contain a call to MPI_Init or MPI_Init_thread. Open MPI accepts the C/C++ argc and argv arguments to main, but neither modifies, interprets, nor distributes them:

    int main(int argc, char *argv[])
    {
        /* declare variables */
        MPI_Init_thread(&argc, &argv, req, &prov);
        /* parse arguments */
        /* main program */
        MPI_Finalize();
    }

implementation, it should do as little as possible. In particular, avoid anything that changes the external state of the program, such as opening files, reading standard input, or writing to standard output.

    shell$ ompi_info | grep -i thread
              Thread support: posix (mpi: yes, progress: no)
    shell$

The "mpi: yes" portion of the above output indicates that Open MPI was compiled with MPI_THREAD_MULTIPLE support. Note that MPI_THREAD_MULTIPLE support is only lightly tested. It likely does not work for thread-intensive applications. Also note that only the MPI point-to-point communication functions for the BTL's listed below
http://icl.cs.utk.edu/open-mpi/doc/v1.5/man3/MPI_Init_thread.3.php
At the base, the library defines and exports the Object type which is used to represent .NET object references:

    data Object a = ...abstract...

    instance Eq (Object a) where {...}
    instance Show (Object a) where {...}

The Object type is parameterised over a type that encodes the .NET class reference it is representing. To illustrate how, the Dotnet.System.Object and Dotnet.System.String modules define the following:

    -- module providing the functionality of System.Object
    module Dotnet.System.Object
      ( Dotnet.Object
      , module Dotnet.System.Object
      ) where

    import Dotnet ( Object )

    getHashCode :: Object a -> IO Int
    getHashCode = ...
    ...

    -- module providing the functionality of System.Xml.XmlNode
    module Dotnet.System.Xml.XmlNode where

    import Dotnet.System.Object
    ...
    data XmlNode_ a
    type XmlNode a = Dotnet.System.Object.Object (XmlNode_ a)
    ...
    foreign import dotnet
      "method System.Xml.XmlNode.get_InnerXml"
      get_InnerXml :: XmlNode obj -> IO (String)
    ...

    -- module providing the functionality of System.Xml.XmlDocument
    module Dotnet.System.Xml.XmlDocument where

    import Dotnet
    import Dotnet.System.Xml.XmlNode

    data XmlDocument_ a
    type XmlDocument a = XmlNode (XmlDocument_ a)
    ...
    foreign import dotnet
      "method System.Xml.XmlDocument.LoadXml"
      loadXml :: String -> XmlDocument obj -> IO (())
    ...

[The reason why Dotnet. is prefixed to each Haskell module is to avoid naming conflicts with other common Haskell modules. See the tools for more details.]

Notice the type given to Dotnet.System.Xml.XmlNode.get_InnerXml's argument -- XmlNode obj -- capturing precisely that the method get_InnerXml is supported on any object that is an instance of System.Xml.XmlNode or any of its sub-classes (like XmlDocument.)
If we expand a type like XmlDocument (), we get:

    XmlDocument () == Dotnet.Object (XmlNode_ (XmlDocument_ ()))

Notice how the type argument to Dotnet.Object encodes the inheritance structure: System.Xml.XmlDocument is a sub-class of System.Xml.XmlNode, which again is a sub-class of System.Object. The unit type, (), all the way to the right is used to terminate the chain and state that the type represents just XmlDocument (but none of its sub-classes.) If instead of () a type variable had been used, like what was done for get_InnerXml's argument type, the type is a subtype. So, if you've got a System.Xml.XmlNode or one of its sub-classes (like XmlDocument), you can safely invoke the method get_InnerXml -- the type variable obj permitting the use of any subtype of System.Xml.XmlNode. This type trick is a good way to safely represent .NET object references using Haskell's type system. If you're already familiar with the work on integrating COM with Haskell, you'll have already recognised that the type encoding used here mirrors that used for COM interface hierarchies.

To support the syntax for conventional OO-style method invocation, the Dotnet module exports the following two combinators:

    infix 8 #
    infix 9 ##

    ( # )  :: a -> (a -> IO b) -> IO b
    obj # method = method obj

    ( ## ) :: IO a -> (a -> IO b) -> IO b
    mObj ## method = mObj >>= method

Using these, method invocation can be expressed as follows:

    l <- str # Dotnet.System.String.lengthString
    putStrLn ("Length of string: " ++ show l)

The main way to bind to the .NET object model is to use FFI declarations, but the Dotnet library provides an alternate way (which used to be the only way prior to the integration of .NET interop into the FFI). This route is mainly provided for backwards compatibility, so unless you have a specific reason not to employ the FFI route, the next couple of sections of this document are of limited interest.
To support the automatic marshaling of values between the .NET and Haskell worlds, Dotnet provides two Haskell type classes:

    class NetType a where
      arg    :: a -> InArg
      result :: Object () -> IO a

    type InArg = IO (Object ())

    class NetArg a where
      marshal :: a -> IO [Object ()]

Both NetType and NetArg work in terms of Dotnet.Object () -- an untyped representation of object references. The following instances are provided:

    instance NetType (Object a)
    instance NetType ()
    instance NetType Int
    instance NetType {Int8, Int16, Int32}
    instance NetType {Word8, Word16, Word32}
    instance NetType Bool
    instance NetType Char
    instance NetType String
    instance NetType Float
    instance NetType Double

In addition to object references, instances also let you convert to/from the 'standard' unboxed types that the .NET framework provides. The NetType class takes care of marshaling single arguments to/from their .NET representations; the NetArg class deals with marshaling a collection of such arguments:

    instance NetArg ()                                -- no args
    instance NetType a => NetArg a                    -- one arg
    instance (NetArg a1, NetArg a2) => NetArg (a1,a2) -- 2 args
    ...
    instance (NetArg a1, NetArg a2, NetArg a3, NetArg a4,
              NetArg a5, NetArg a6, NetArg a7)
          => NetArg (a1,a2,a3,a4,a5,a6,a7)            -- 7 args

The idea here is to use tuples to do uncurried method application; details of which will be introduced in the next section.
    type ClassName = String

    new       :: ClassName -> IO (Object a)
    newObj    :: (NetArg a) => ClassName -> a -> IO (Object res)
    createObj :: ClassName -> [InArg] -> IO (Object a)

To call the nullary constructor for an object, simply use new:

    main = do
      x <- new "System.Object"
      print x -- under-the-hood this calls ToString() on 'x'

To use a parameterised constructor instead, you can use newObj or createObj:

    newXPathDoc :: String -> System.Xml.XmlSpace -> IO (System.Xml.XPath.XPathDocument ())
    newXPathDoc uri spc = newObj "System.Xml.XPath.XPathDocument" (uri,spc)

    newBitmap :: Int -> Int -> IO (System.Drawing.Bitmap ())
    newBitmap w h = createObj "System.Drawing.Bitmap" [arg w, arg h]

createObj lets you pass a list of arguments, but you have to explicitly apply arg to each of them. newObj takes care of this automatically, provided you 'tuple up' the arguments. new can clearly be expressed in terms of these more general constructor actions:

    -- new cls = newObj cls ()
    -- or
    -- new cls = createObj cls []

Note: the reason why both createObj and newObj, which perform identical functions, are provided is to gain experience as to what is the preferred invocation style. Unsurprisingly, these two different forms of marshaling arguments are also used when dealing with method invocation, which we look at next.
    type MethodName = String

    invokeStatic  :: (NetArg a, NetType res) => ClassName -> MethodName -> a -> IO res
    staticMethod  :: (NetType a) => ClassName -> MethodName -> [InArg] -> IO a
    staticMethod_ :: ClassName -> MethodName -> [InArg] -> IO ()

invokeStatic uses the NetArg type class, so you need to tuple the arguments you pass to the static method:

    doFoo :: String -> Int -> IO String
    doFoo x y = invokeStatic "System.Bar" "Foo" (x,y)

staticMethod uses a list to pass arguments to the static method, requiring you to apply the (overloaded) marshaling function first:

    urlEncode :: String -> IO String
    urlEncode url = staticMethod "System.Web.HttpUtility" "UrlEncode" [arg url]

Instance method invocation is similar, but of course requires an extra 'this' argument:

    invoke  :: (NetArg a, NetType res) => MethodName -> a -> Object b -> IO res
    method  :: (NetType a) => MethodName -> [InArg] -> Object b -> IO a
    method_ :: MethodName -> [InArg] -> Object a -> IO ()

For example,

    main = do
      obj <- new "System.Object"
      x   <- obj # invoke "GetHashCode" ()
      print ("The hash code is: " ++ show (x::Int))

    type FieldName = String

    fieldGet :: (NetType a) => FieldName -> Object b -> IO a
    fieldSet :: (NetType a) => FieldName -> Object b -> a -> IO ()
    staticFieldGet :: (NetType a) => ClassName -> FieldName -> IO a
    staticFieldSet :: (NetType a) => ClassName -> FieldName -> a -> IO ()

    newDelegator :: (Object a -> Object b -> IO ())
                 -> IO (Object (Dotnet.System.EventHandler ()))

When the System.EventHandler object reference is passed to another .NET method, it can invoke it just like any other EventHandler delegate. When that happens, the Haskell function value you passed to newDelegator is invoked. (The way this is done under the hood is kind of funky, requiring some dynamic code (and class) generation, but I digress.) To see the delegator support in action, have a look at the UI examples in the distribution.
    defineClass :: Class -> IO (Object b)

    data Class
     = Class String         -- type/class name
             (Maybe String) -- Just x => derive from x
             [Method]

    data Method
     = Method MethodName       -- .NET name (unqualified).
              Bool             -- True => override.
              String           -- Haskell function to call.
              [BaseType]       -- Argument types
              (Maybe BaseType) -- result (Nothing => void).

See examples/class/ in the distribution for more.

To assist in the interfacing to .NET classes, a utility HsWrapGen is provided. Given the name of a .NET class, it generates a Haskell module wrapper for the class, containing FFI declarations that let you access the methods and fields of a particular class. See the dotnet/tools/ directory, if interested.

Note: Hugs98 for .NET makes good use of the hierarchical module extension to Haskell, so if you do write / generate your own class wrappers, you may want to consider putting them inside the library tree that Hugs98 for .NET comes with. To demonstrate where and how, supposing you had written a Haskell wrapper for System.Xml.Schema.XmlSchema, you need to name the Haskell module just that, i.e.:

    module Dotnet.System.Xml.Schema.XmlSchema where { .... }

and place it in the dotnet/lib/Dotnet/System/Xml/Schema/ directory inside the Hugs98 for .NET installation tree. You can then utilise the wrapper module in your own code by importing it as

    import Dotnet.System.Xml.Schema.XmlSchema

To avoid naming conflicts with Haskell's hierarchical library tree, we prefix each of the .NET specific modules with Dotnet..
http://galois.com/~sof/hugs98.net/dotnet-lib.html
Seam 2.0.2.SP1 vs 2.1.1.GA Performance
Jason Long, Jan 14, 2009 8:33 PM

I am using Seam 2.0.2.SP1 and would like to switch to 2.1.1.GA. Would someone please comment on the performance of 2.0.2.SP1 vs 2.1.1.GA?

1. Re: Seam 2.0.2.SP1 vs 2.1.1.GA Performance
Troy Sellers, Jan 19, 2009 10:35 PM (in response to Jason Long)

Hi Jason, We are just in the process of finishing off a Seam application for a client of ours and have found the performance difference between 2.0.2 (we are using Red Hat Seam) and 2.1.1.GA remarkably different. So different, in fact, that we have asked Red Hat to support us on 2.1.1.GA. In the 2.1.1 release the Seam developers introduced the @BypassInterceptors tag, which allows parts of the code to skip the Seam processing for a lifecycle. While we didn't use this much in our code, I think the Seam developers have made heavy use of it, which would account for the noticeable performance gains we have seen switching to 2.1.1 (at least I think this is the reason; no hard evidence on this!). We have just started performance testing our application now and the initial results are not the best. We are yet to profile and look at streamlining, though it seems that memory usage of Seam is rather heavy, although this is probably more a JSF thing as well. Cheers, Troy

2. Re: Seam 2.0.2.SP1 vs 2.1.1.GA Performance
Jason Long, Jan 20, 2009 2:08 AM (in response to Jason Long)

I am using

    @Name("profitSummaryFormat")
    @Scope(ScopeType.EVENT)
    @BypassInterceptors
    public class FormatProfitSummaryBean implements Serializable {
        public String getRowColor() {
            Map pipeSummary = (Map) Component.getInstance("saleSummary");
            Double percentProfit = (Double) pipeSummary.get("dynPerProfit");
            if (percentProfit == null)
                return null;
            if (percentProfit < 0.06)
                return "red";
            if (percentProfit > 0.16)
                return "green";
            return "";
        }
    }

with 2.0.2.SP1, so this is not new. That is great to hear the performance is better. I have been waiting for 2.1.1.GA to come out to switch. Please post back any other findings.

1. So you migrated an existing application or wrote a new one that was just a lot faster than you expected?
2. Has the Seam team profiled any of the examples vs the latest branches from 2.0.x and 2.1.x?
3. What did Red Hat say about supporting 2.1.1.GA?
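The optimisation discussed in this thread boils down to skipping a per-invocation interceptor chain for components that opt out. This is not Seam's actual code path - all names below are invented, and the sketch is in Python purely for illustration of the shape of the idea:

```python
calls = []

def logging_interceptor(component, method):
    # stand-in for the per-invocation work an interceptor chain does
    calls.append((type(component).__name__, method))

def bypass_interceptors(cls):
    """Class decorator marking a component to skip the chain (cf. @BypassInterceptors)."""
    cls._bypass = True
    return cls

class Container:
    def __init__(self, interceptors):
        self.interceptors = interceptors

    def invoke(self, component, method, *args):
        fn = getattr(component, method)
        if not getattr(component, "_bypass", False):
            for icpt in self.interceptors:  # the cost a bypassed component avoids
                icpt(component, method)
        return fn(*args)

class Intercepted:
    def row_color(self):
        return "green"

@bypass_interceptors
class Bypassed:
    def row_color(self):
        return "red"

container = Container([logging_interceptor])
container.invoke(Intercepted(), "row_color")  # interceptor chain runs
container.invoke(Bypassed(), "row_color")     # chain skipped entirely
print(calls)  # [('Intercepted', 'row_color')]
```

The more invocations go through bypassed components (as the getRowColor bean above does, being called once per table row), the more of this per-call overhead disappears.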
https://developer.jboss.org/thread/185822
Ghost from Bodaniel Jeanes is a Ruby gem that lets you manage your local host names without futzing with /etc/hosts. To install:

    gem install ghost

And then from the command line:

    $

Aside from basic list, add, and delete options, Ghost provides powerful import and delete_matching operations to import files or delete entries based on pattern matching.

UPDATE: We covered ghost-ssh, a hidden feature of ghost which lets you manipulate your ~/.ssh/config file as well.
http://thechangelog.com/ghost-means-never-having-to-touch-etc-hosts-again/
Ok, there will probably be much flaming about this, but here goes. I've seen many people using classes, but I've never used them, simply because I don't see a reason to do it in my code. I have seen where people use them to access different styles of databases, kinda like phpBB does. I've programmed forums, content management systems, trouble ticket systems, and store systems, but never once used a class. So here's my question: when (please use a real world example) would I really need to use them? How much more efficient is it to use a class instead of functions with global variables?

2 replies to this topic

#1 Posted 23 July 2006 - 02:26 AM

#2 Posted 23 July 2006 - 02:28 AM

Well, if you're like me you started out with Java, and then learned PHP, soooo it's a habit. But I've always wondered the same thing... is there a reason? So far all I've found out is that it's a matter of habit.

#3 Posted 23 July 2006 - 02:31 AM

The larger the project, the more global variables and functions collide. What if you wanted to use a friend's code that had a bunch of global variables that overwrote yours? I haven't done a lot of OOP, and the little I have done has been in Perl, not PHP. Before I understood what it was, I had the same notion as you - what's the point? This is understood in time. Any basic OOP primer should lay out the advantages for you, such as abstraction and inheritance. I see it as a containment system. You have your own "namespace" in which you cannot run into anyone else, and you can also control what the user can and cannot do within this area - private or public methods, e.g.
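A concrete version of the "containment" argument in the last reply, shown in Python rather than PHP purely for illustration: with functions plus a global, every caller shares one piece of state, while two instances of a class each keep their own:

```python
# Global-variable style: every caller shares a single counter.
count = 0

def increment():
    global count
    count += 1
    return count

# Class style: state is contained per instance, so nothing collides.
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

increment()
increment()
print(count)  # 2 - both calls hit the same global

a, b = Counter(), Counter()
a.increment()
a.increment()
b.increment()
print(a.count, b.count)  # 2 1 - independent state
```

The same reasoning applies to mixing in someone else's code: their Counter instances cannot overwrite yours, whereas their globals can.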
https://forums.phpfreaks.com/topic/15369-why-use-classes/
Hi,

On Thu, 2007-11-22 at 22:08 -0500, [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] said on Nov 22, 2007 5:51 -0500 (in part):
> > (script-fu-register "script-fu-save-all-images"
> >                     "<Image>/File/Save ALL"
> >                     "Save all opened images"
> >                     "Saul Goode"
> >                     "Saul Goode"
> >                     "11/21/2006"
> >                     ""
> > )
> Now I'm being picky :-) ... is there a way to make this appear in the
> File menu with the other Save options (between Save as Template and
> Revert)?
>
> Currently it appears at the bottom after Quit (using WinXP Gimp 2.4.2
> in case it's different on other platforms)

Sure, just register it as "<Image>/File/Save/Save All". Or, even better, just use "Save All" where the full menu path was given and add the line

    (script-fu-menu-register "script-fu-save-all-images" "<Image>/File/Save")

Also the script should not be called "script-fu-save-all-images". Please use a different namespace when you are writing scripts to avoid name clashes with other people's scripts.

Sven

_______________________________________________
Gimp-user mailing list
Gimp-user@lists.XCF.Berkeley.EDU
https://www.mail-archive.com/gimp-user@lists.xcf.berkeley.edu/msg13896.html
SuperQ is much like the Queue Storage for Windows Azure, except it automatically creates a local db instead of using the cloud. It is also self-polling, so all you need is a callback method and you're ready to go. This allows you to create quick and easy task schedulers and background worker processes.

Step #1 – Ensure you have SQL Compact Edition 4 Installed

To let SuperQ build a database automatically in your App_Data folder, we have opted to use SQL Compact Edition 4. Use the Web Platform Installer to download and install SQL Compact Edition 4. If you've never used Web Platform Installer, I strongly urge you to try it, it makes installing common software a breeze. Otherwise you can download SQL Compact Edition 4 directly from here:

Step #2 – Reference the SuperQ Library

Download and reference the SuperQ library from here:

If you're interested in the inner workings of the queue, or would like to help out, please feel free to jump over to and grab the latest source code for yourself. On a side note, either me or lukencode will probably be doing a blog post on how to implement some of the features that SuperQ makes use of, such as Async Callbacks and SQL Compact DB Creation, soon.

Step #3 – Use the Queue

**UPDATE (8 Sep 2010):** I've now implemented class level generics instead of method level, as suggested by James Curran. You now have access to one of the most abbreviated and super of queues known to man. Here are a few common methods to get you going:

Create or Open a Queue

    var queue = SuperQ.SuperQ<string>.GetQueue("MyQueue");

This will create a queue in your data folder (usually App_Data) with the name provided. If it already exists it just opens it. Since SuperQ can only handle one type per queue, you must specify it when calling GetQueue, as above. Sorry about the namespace, it might be changed in future.
Push a message onto the queue

    queue.PushMessage("P was pressed in message form");

    // OR
    var queue = SuperQ.SuperQ<int>.GetQueue("MyQueueInt");
    queue.PushMessage(45);

    // OR
    var queue = SuperQ.SuperQ<MyClass>.GetQueue("MyQueueClass");
    queue.PushMessage(new MyClass() {
        Name = "Ben",
        Status = "Awesome",
        Amount = 10000.00
    });

Yep, you can push whatever class you like onto the queue and you're guaranteed to get it back.

Get a message off the queue

    var message = queue.GetMessage();
    // Payload will be the same type you specify above
    MyClass result = message.Payload;

    // Instead of using get message, you can call get payload to just get the next payload
    var payload = queue.GetPayload();

Keep in mind that SuperQ is designed to use only one type at a time per queue. Please also remember to delete the message from the queue once you have successfully processed it. See below.

Delete the message from the queue

    queue.DeleteMessage(message);

To make sure that the queue message will survive even if your program crashes, you must delete the message from the queue within 30 seconds (soon to be an option), otherwise the message will become available to other GetMessage calls.

Automatically receive a callback when a message arrives

If you do not want to poll the queue manually, you can set up a callback method that will be called each time there is a message waiting to be processed. First create the callback method like so:

    protected void MessageReceived(QueueMessage<MyClass> message)
    {
        Console.WriteLine("Message Received: " + message.Payload.Name);
        queue.DeleteMessage(message);
    }

Now you can start and stop the callback like so:

    // Start receiving messages
    queue.StartReceiving(MessageReceived);

    // Run for as long as you like here

    // Stop receiving messages
    queue.StopReceiving();

Can I have an example of all of the above?

No problem imaginary heading friend, SuperQ can help.
Here’s a quick Console App combining all the above methods: class Program { private static SuperQ.SuperQ<string> queue; static void Main(string[] args) { queue = SuperQ.SuperQ<string>.GetQueue("MyQueue"); queue.StartReceiving(MessageReceived); bool running = true; while (running) { var key = Console.ReadKey(true); if (key.Key == ConsoleKey.Enter) running = false; else if (key.Key == ConsoleKey.P) { Console.WriteLine("Message Pushed"); queue.PushMessage(new QueueMessage<string>("P was pressed in message form")); } else if (key.Key == ConsoleKey.G) { var message = queue.GetMessage(); if (message != null) Console.WriteLine("Message: " + message.Payload); else Console.WriteLine("No Message Received..."); } else if (key.Key == ConsoleKey.L) { Console.WriteLine("Payload Pushed"); queue.PushMessage("P was pressed in Payload form"); } else if (key.Key == ConsoleKey.V) { var payload = queue.GetPayload(); if (payload != null) Console.WriteLine("Payload: " + payload); else Console.WriteLine("No Payload Received..."); } } queue.StopReceiving(); } static void MessageReceived(QueueMessage<string> message) { Console.WriteLine("Message Received: " + message.Payload); queue.DeleteMessage(message); } }
http://benjii.me/2010/09/superq-a-portable-persistent-net-queue/
One of the new code hints provided by NetBeans 7.1 is the Unused Assignment hint. A simple code sample that will cause this hint to be displayed in NetBeans 7.1 is shown next.

Demonstrating Unused Assignment

/**
 * Demonstrate NetBeans 7.1 code hint for situation in which variable
 * assignment is made but never used.
 *
 * @return Integer.
 */
private static int demoUnusedAssignment()
{
   int i = 2;
   // ... do some good stuff, but without changing i's assignment or
   //     accessing i's value ...
   i = 3;
   return i;
}

In the code above, the local variable "i" is initialized to 2, but that value is never used before "i" is assigned again, making the first initialization unnecessary. The next image is a screen snapshot that shows NetBeans 7.1 displaying a warning code hint for the unused assignment.

As the above image indicates, NetBeans 7.1 warns "The assigned value is never used." The New and Noteworthy in NetBeans IDE 7.1 page mentions this hint among others and states:

   Unused Assignment
   A new pair of hints, Unused Assignment and Dead Branch, was introduced. The Unused Assignment finds values that are computed and assigned to a variable, but never used by reading the variable. The Dead Branch hint searches for branches of code that can never be executed.

The Unused Assignment hint seems to work as suggested based on the example shown above. However, I have not been able to generate code that demonstrates the "Dead Branch" hint. I wonder if the Dead Branch hint is not yet supported and the text related to it is not supposed to be under the "Unused Assignment" heading. The following code contains a listing with several methods that I would expect might potentially lead to a warning about a dead code branch. None of these cause this code hint to appear in any form (warning or error) in my installation of NetBeans 7.1.

Methods With Compilable Dead Code Branches

package dustin.examples;

import static java.lang.System.out;

/**
 * Class demonstrating NetBeans 7.1's hint for dead code.
 *
 * @author Dustin
 */
public class DeadCode
{
   /**
    * Nonsense method that includes an "else" clause that can never be executed.
    *
    * @param someInt Any primitive int.
    */
   private static void neverExecutedElse(final int someInt)
   {
      if (someInt < 0)
      {
         out.println("The integer you provided is less than zero.");
      }
      else if (someInt > 0)
      {
         out.println("The integer you provided is greater than zero.");
      }
      else if (someInt == 0)
      {
         out.println("The integer is equal to zero.");
      }
      else
      {
         out.println("Unable to categorize provided integer.");
      }
   }

   /**
    * Nonsense method that has executable code following conditions that are
    * always true and always lead to premature termination of the method.
    */
   private static void codeAfterAlwaysTrueConditional()
   {
      if (true)
      {
         throw new RuntimeException("Ouch!");
      }
      final String nothingHere = "Nothing here.";
      out.println(nothingHere);
      if (true)
      {
         return;
      }
      final String nothingHereEither = "Nothing here either.";
      out.println(nothingHereEither);
   }

   /**
    * Nonsense method that prints text to standard output only if the provided
    * boolean can be both {@code true} and {@code false} at the same time.
    *
    * @param boolValue Boolean value of no consequence.
    */
   private static void cannotHaveItBothWays(final boolean boolValue)
   {
      if (boolValue)
      {
         if (!boolValue)
         {
            out.println("Make up your mind (true or false)!");
         }
      }
      else
      {
         if (boolValue)
         {
            out.println("Make up your mind (false or true)!");
         }
      }
   }

   /**
    * Runs else-if that is same as previous if such that one conditional
    * prevents the second conditional's implementation from ever being executed.
    *
    * @param boolValue A boolean value of no consequence.
    */
   private static void pesterUntilItWorks(boolean boolValue)
   {
      boolValue = true;
      if (boolValue)
      {
         out.println("It matches once.");
      }
      else if (boolValue)
      {
         out.println("It matches twice.");
      }
      else if (!boolValue)
      {
         out.println("No match once.");
      }
      else if (!boolValue)
      {
         out.println("No match twice!");
      }
   }

   /**
    * Cover all conditions in first conditional and include a never-can-happen
    * else portion of conditional.
    */
   private static void completeConditionalCoverage()
   {
      boolean boolValue = true;
      if (boolValue || !boolValue)
      {
         out.println("I'm always going to happen.");
      }
      else
      {
         out.println("I'm never going to happen.");
      }
      if (boolValue && !boolValue)
      {
         out.println("I'll never happen either.");
      }
      else
      {
         out.println("I'll also always happen.");
      }
   }

   /**
    * Main executable function.
    *
    * @param arguments Command-line arguments: none expected.
    */
   public static void main(final String[] arguments)
   {
      if (false)
      {
         neverExecutedElse(5);
         codeAfterAlwaysTrueConditional();
         cannotHaveItBothWays(true);
         pesterUntilItWorks(true);
         demoUnusedAssignment();
         completeConditionalCoverage();
      }
   }
}

Although none of the methods in the directly previous code listing lead to the Dead Branch code hint, NetBeans 7.1 does include configuration options for the Dead Branch hint. This is shown in the next screen snapshot (selecting Tools -> Options, followed by the "Editor" tab, and then selecting "Hints"). The NetBeans 7.1 New and Noteworthy page shows examples of other new hints, but does not show an example of the Dead Branch hint. Also, the text describing Dead Branch is mixed with the section on Unused Assignment and sits under a heading that only mentions Unused Assignment. As my previous code listing demonstrates, I attempted to come up with a code sample to demonstrate the Dead Branch hint, but have not been able to do so.
The purpose of this hint ("search[ing] for branches of code that can never be executed") sounds like a nice complement to compiler errors such as "unreachable statement" and "exception already caught," and to other NetBeans "green" warnings such as "variable such-and-such is not used." I have blogged about NetBeans hints before, and the addition of new hints in NetBeans 7.1 is welcome. If anyone knows of a code sample that will demonstrate the Dead Branch hint in NetBeans 7.1, please share!

1 comment:

Apparently Bug 207514 has addressed the dead branch hint not working. Incidentally, NetBeans 7.2 was formally released today.
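As a possible explanation of why an IDE hint is needed here at all rather than a compiler error: the Java Language Specification's reachability rules (JLS 14.21) deliberately exempt if statements with constant conditions, so that flag-based conditional compilation keeps working. The following snippet is my own illustration, not from the original post; it compiles, even though its second return can never execute, which is exactly the kind of branch only an IDE-level dead-branch analysis could flag.

```java
// Illustration: why code like codeAfterAlwaysTrueConditional() compiles.
// javac treats the body of "if (true)" as if the condition might be false,
// so the code after it is "reachable" for compilation purposes even though
// it can never run at runtime.
public class ReachabilityDemo
{
   static int afterIfTrue()
   {
      if (true)
      {
         return 1;
      }
      // Reachable as far as javac is concerned, though it never runs.
      return 2;
   }

   public static void main(final String[] arguments)
   {
      // By contrast, a statement after "while (true) { }" or after an
      // unconditional "return" WOULD be a compile-time "unreachable
      // statement" error, e.g.:
      // while (true) { }
      // System.out.println("never");   // <- would not compile
      System.out.println(afterIfTrue());
   }
}
```

Because the compiler is forbidden from rejecting such code, detecting it is left to static-analysis hints like the one the NetBeans page describes.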
http://marxsoftware.blogspot.com/2012/01/netbeans-71s-unused-assignment-and-dead.html
Migration in Afghanistan

Migration from Afghanistan declined with the fall of the government in 1992, but began to increase again in 1996 with the rise of the Taliban. In 2002, with the fall of the Taliban and the US-led invasion, record numbers of Afghan refugees returned to Afghanistan. An international reconstruction and development initiative began to aid Afghans in rebuilding their country from decades of war. Reports indicate that change is occurring in Afghanistan, but the progress is slow. The Taliban have regained strength in the second half of this decade, and insurgency and instability are rising. Afghanistan continues to be challenged by underdevelopment, lack of infrastructure, few employment opportunities, and widespread poverty. The slow pace of change has led Afghans to continue migrating in order to meet the needs of their families. Today refugee movements no longer constitute the primary source of Afghan migration; labour migration, particularly to neighbouring countries, increasingly does. The highly skilled in Afghanistan often seek to migrate to Western countries, as the opportunities in Afghanistan are limited. Afghans' transnational movements have led to the development of the Afghan Diaspora, which has been essential in providing remittances to families in Afghanistan to meet their daily needs. The Afghan Diaspora has been involved in the reconstruction effort and is a key contributor to development in Afghanistan. The continued engagement of the Diaspora is important to the building of Afghanistan's future.

This paper seeks to provide an overview of migration and development in Afghanistan. It begins with a country profile of Afghanistan (Chapter 2), followed by a review of historical migration patterns in Afghanistan (Chapter 3) and a synthesis of current migration patterns in Afghanistan (Chapter 4).
The paper will then move to discuss migration and development in Afghanistan (Chapter 5), the Afghan Diaspora (Chapter 6), policies regarding migration in Afghanistan (Chapter 7), and the migration relationship between the Netherlands and Afghanistan (Chapter 8). The paper will conclude with an examination of future migration prospects for Afghanistan (Chapter 9) and a conclusion (Chapter 10).

2. General Country Profile

Afghanistan is one of the poorest countries in the world and has been inundated by decades of war, civil strife and poverty. Today, Afghanistan is central in media attention due to the US-led invasion post 9/11; however, the country has been in turmoil for much longer. This section will provide a brief overview of the recent history of Afghanistan, the current economic situation, the current political situation, a cultural overview, and the current status of women in the country.

Historical Overview

The modern history of Afghanistan can be divided into four essential periods: pre-1978, 1978-1992, 1992-2001, and post-2001.

Pre 1978

Afghanistan was founded in 1747 by Ahmad Shah Durrani, who unified the Pashtun tribes in the region and created the state (CIA, 2009). The country was ruled by a monarchy and acted as a buffer between the British and Russian empires until it won independence from notional British control in 1919 (CIA, 2009). The last king, Zahir Shah, reigned from 1933 to 1973, when he was overthrown in a coup d'etat led by his cousin and ex-premier President Mohammed Daoud (Jazayery, 2002). Opposition to Daoud's government led to a coup in 1978 by the People's Democratic Party of Afghanistan (PDPA) (Jazayery, 2002).

1978-1992 - Soviet Invasion

The PDPA was a Marxist regime and from 1979 was supported by the Soviet Union, whose invasion that year triggered the first major flow of refugees from Afghanistan. The occupation by the Soviets was viewed in the West as an escalation of the Cold War.
The West began funnelling millions, and eventually billions, of dollars to the resistance forces known as the Mujahideen (Jazayery, 2002). The resistance forces operated primarily from Pakistan. In 1985, when Mikhail Gorbachev came to power in the Soviet Union, the Soviets began the process of extricating themselves from Afghanistan, and by 1989 they had left the country.

1992-2001 - Taliban Rule

In 1992 the Mujahideen forces overthrew Najibullah's government. A failure to reach consensus on the new government led to a civil war from 1992 to 1996 (Jazayery, 2002). Afghanistan became divided into tribal fiefdoms controlled by armed commanders and warlords (Poppelwell, 2007). The country was in a state of anarchy, and Afghans lived in constant fear of physical and sexual assault (Poppelwell, 2007). During this time the Taliban emerged, in 1994, claiming that Afghanistan should be ruled by Shari'a (Islamic law) (Jazayery, 2002). The Taliban received support and funding from Saudi Arabia and Arab individuals in the quest to establish a pure Islamic model state (Poppelwell, 2007). The Taliban swept through Afghanistan encountering no resistance from the Mujahideen and were welcomed in many areas, as they established relative security in the areas they controlled (Jazayery, 2002). By 1998, the Taliban had captured the majority of the country and established the "Islamic Emirate of Afghanistan" (Jazayery, 2002). A Northern Alliance that arose in opposition to the Taliban maintained a government of the "Islamic State of Afghanistan" with Burhanuddin Rabbani as president (Jazayery, 2002). The Taliban government was only recognized by Pakistan, Saudi Arabia, and the United Arab Emirates, while the government of Rabbani maintained an officially represented seat at the UN (Jazayery, 2002). After the bombings of the US embassies in Kenya and Tanzania, the Taliban were asked to stop harboring Osama bin Laden (Poppelwell, 2007).
At their refusal, the UN imposed sanctions against the Taliban and Afghanistan in 1999 (Poppelwell, 2007). By this time the Taliban were known for disregarding international law and human rights (Poppelwell, 2007). During this period, killing, pillaging, raping, and ethnic cleansing occurred across Afghanistan under the Taliban regime (Jazayery, 2002).

Post 2001

The events of 11 September 2001 led the US to head Coalition Forces in an invasion of Afghanistan on 7 October 2001. Within months the military forces had taken control of Afghanistan and declared the fall of the Taliban. The International Security Assistance Force (ISAF) in Afghanistan began with 5,000 troops. In 2003, NATO took over the ISAF, which now, due to increased security concerns, comprises approximately 50,000 troops from all 28 NATO members (NATO, 2009). In December 2001 a UN-led interim administration was established under the Bonn Agreement. The Bonn Agreement established a new constitution and the first democratic elections in 2004 (Poppelwell, 2007). Hamid Karzai became the leader of a broad-based thirty-member ethnic council that aimed to be multi-ethnic and representative of Afghan society (Poppelwell, 2007). The new administration faced many challenges, and in 2005 the Taliban began to regain strength in Afghanistan. The increased security challenges led to the London Conference in January 2006 to address the end of the Bonn Agreement and the current challenges in Afghanistan. The result of the London Conference was the Afghanistan Compact, which identified a five-year plan for Afghanistan. The Afghanistan Compact is based on three key pillars: "security, governance, the rule of law and human rights; economic and social development; and the cross-cutting issue of counter-narcotics" (Poppelwell, 2007, p. 8). Western governments have each taken the lead on specific areas on which they will focus. The reconstruction process in Afghanistan has been extensive.
A total of US$14.775 billion has been contributed to the reconstruction process since 2001 (Livingston, Messera, and Shapiro, 2009). Despite the development efforts, insecurity has increased since 2005 with the Taliban regaining strength. The overall situation in Afghanistan continues to be characterized by conflict and poverty.

Demographics

A census has not been conducted in Afghanistan since prior to the Soviet invasion in 1978; thus, all demographic figures are estimates. In 2009, the CIA World Factbook estimated the population of Afghanistan to be 28.3 million, a significant decrease from the previous estimate of 33.6 million. An Afghan census is scheduled for 2010. The population growth rate in Afghanistan was estimated by the United Nations to be 3.9 percent for 2005-2010 (UN Data, 2009).

Economic and Poverty Overview

Economic progress in Afghanistan is occurring through the reconstruction effort; however, Afghanistan continues to be one of the least developed and poorest countries in the world. Table 1 provides an overview of key economic and poverty indicators for Afghanistan in 2007. Real GDP growth for 2008-09 decelerated to 2.3 percent, from 16.2 percent in 2007-08 (World Bank, 2009). This is the lowest GDP growth has been in the post-Taliban period and was due to poor agricultural production (World Bank, 2009). In 2009, however, growth is expected to increase due to a good agricultural harvest (World Bank, 2009).
Table 1: Key Indicators

GDP per capita (PPP US$): 1,054
Life expectancy: 43.6
Adult literacy rate (% aged 15 and above): 28.0
Combined gross enrolment ratio in education: 50.1
Human Poverty Index rank: 135
Probability at birth of not surviving to age 40 (% of cohort): 40.7
Population not using an improved water source (%): 78.0
Children underweight for age (% under age 5): 39.0
Overseas development assistance per capita (US$): 146.0

Source: UNDP, 2009

The latest poverty assessment in Afghanistan was conducted in 2005 through the National Risk and Vulnerability Assessment (NRVA). The findings indicate that the poverty rate was 42 percent, corresponding to 12 million people living below the poverty line (Islamic Republic of Afghanistan, 2009, p. 14). In addition, 20 percent of the population was slightly above the poverty line, suggesting that a small economic shock could place them below it (Islamic Republic of Afghanistan, 2009, p. 14). It is evident that widespread poverty continues to be a challenge in Afghanistan.

Political Situation

In August 2009, Afghanistan held its second democratic elections (World Bank, 2009). The incumbent president, Hamid Karzai, was initially declared re-elected with the necessary 50 percent of the votes; however, since the election more than 2,000 fraud allegations have been lodged with the Electoral Complaints Commission (ECC). The Independent Election Commission announced in October 2009 that its final results indicated less than 50 percent of the votes for Karzai. Thus, a run-off election was scheduled for November between Karzai and the lead opponent. Before the election, however, the opponent withdrew from the race, leaving Karzai as president (World Bank, 2009). The United Nations Mission to Afghanistan has continued to coordinate international assistance and support the Afghan government in developing good governance (UN Mission, 2009). The political situation in Afghanistan continues to be complex.
In 2009, Transparency International rated Afghanistan 1.3 on the Corruption Perceptions Index (Transparency International, 2009). This was the second lowest ranking, with only Somalia receiving a lower score, suggesting a profound lack of trust in the Government of Afghanistan.

Culture / Ethnic Groups

Afghanistan is a traditional and conservative society with large ethnic divisions. Table 2 shows the percentage of the population that belongs to the different ethnic groups.

Table 2: Ethnic Groups in Afghanistan (% of population)

Group: 1970s / 2006
Pashtun: 39.4 / 40.9
Tajik: 33.7 / 37.1
Uzbek: 8.0 / 9.2
Hazara: 8.0 / 9.2
Turkmen: 3.3 / 1.7
Aimak: 4.1 / 0.1
Baloch: 1.6 / 0.5
Other: 1.9 / 1.4

Source: The Asia Foundation, 2006; Encyclopaedia Iranica, 2009

The Pashtuns have generally been the majority in Afghanistan. They occupy land in the south and the east and are divided along tribal lines. The Tajiks are primarily Sunni Muslims of Persian origin who occupy the northeast and west of Afghanistan. The Tajiks are often well educated and landowners. The Uzbeks are descendants of the Turks and are primarily involved in agriculture. The Hazaras are primarily Shi'a Muslims who occupy the infertile highlands in central Afghanistan. The Hazaras are subsistence farmers who have used migration routes for survival for centuries (Robinson and Lipson, 2002). The vast majority of the population in Afghanistan is Sunni Muslim (87.9 percent). Shi'a Muslims account for 10.4 percent of the population, and the remaining religious groups are negligible in numbers. Shi'a Muslims are thus a minority and have faced persecution in Afghanistan.

Status of Women

Afghanistan's GDI (Gender Development Index) value is 0.310, which is 88.1 percent of its Human Development Index (HDI) value (UNDP, 2009). The HDI does not account for gender inequality; the GDI adds this component to the HDI. Afghanistan ranks 155 out of the 155 countries measured in the world for its GDI.
Indicators such as literacy illustrate this: 43.1 percent of adult males are literate, compared to 12.6 percent of adult females (UNDP, 2009). The culture of Afghanistan is based on traditional gender roles. Traditionally, women are seen as embodying the honour of the family (World Bank, 2005). As such, women are given as brides to create peace or to honour a relationship. The role of a wife is to maintain the household and support the husband, which includes domestic and sexual services. In general, a wife meets the husband's needs, and if she does not, she has dishonoured her family and community (World Bank, 2005). The legal rights of women in Afghanistan have changed with the political structure. Prior to Taliban rule, the Constitution of Afghanistan guaranteed women equal rights under the law, although local tribes may have had different customs. Under Taliban rule women's rights were severely restricted: women were not permitted to leave their homes unless accompanied by a close male relative, could not receive education, and had restricted access to health care and employment. Women were frequently raped and abused during this time. With the fall of the Taliban the situation has improved for women; however, there are great differences between the rural and urban situation (World Bank, 2005). The Ministry of Women's Affairs (MOWA) was established under the Bonn Agreement to promote the advancement of women in Afghanistan. MOWA works in an advocacy role to ensure that policies are implemented for both men and women. In addition, MOWA works with NGOs to ensure programs for women are implemented. Women's rights remain a primary concern in Afghanistan. At present, approximately 60 percent of women are married before the age of 16 (IRIN, 2005). At 44 years, women in Afghanistan have one of the lowest life expectancies in the world (UNDP, 2009).
Women who are widowed are ostracized in rural communities, but are often able to make a living in the cities to support themselves and their families. However, female-headed households tend to be concentrated in the poorest quintiles of Afghan society (World Bank, 2005). The situation for women in urban centres such as Kabul is becoming more liberal. Education rates for girls in the urban centres are higher than in rural areas, and these indicators suggest changes are occurring for women in urban areas. Women's rights are high on the international policy agenda for Afghanistan and a key goal of development aid.

3. Historic Overview of Migration

Migration in Afghanistan has a long history and has significantly shaped the country's social and cultural landscape (Monsutti, 2007). Historically, Afghanistan was a country of trade between East and West and a key location on the Silk Road trade route. Thus, migration is part of the historical identity of the country. The following chapter presents an overview of the complex migration patterns from a historical perspective.

Migration Patterns from Afghanistan to Pakistan and Iran Prior to 1978

Migration between Afghanistan and Pakistan and Iran has a long history. The migration relationships are rooted in the ethnic ties that span the borders between the countries. For instance, Pashtuns make up 20 percent of the population in Pakistan and 30 percent in Afghanistan. The Pashtuns are separated by the Pakistan-Afghanistan border, which is referred to as the Durand Line. The Durand Line was established during British colonialism to demarcate British India from Afghanistan, and has been acknowledged to be an arbitrary divide of Pashtun land (Monsutti, 2005). Thus, cross-border migration of the Pashtuns between Afghanistan and Pakistan has been a way of life. Similarly, the Hazaras of Afghanistan are Shi'a Muslims, the majority religion in Iran (Monsutti, 2005).
Hazaras regularly engaged in migration to and from Iran via religious ties. These ethnic and cultural ties led to cross-border migration for decades prior to the Soviet invasion of Afghanistan. The poor economic position of Afghanistan prior to 1978 led to further economic migration to the better-off states of Pakistan and Iran. Stigter states, "The economic differences between Afghanistan and Pakistan and Iran have long led Afghans to migrate to these countries to find employment and, for Iran, enjoy the benefits of a higher income" (2006, p. 117). In the 1960s and 1970s industrialization in Afghanistan was minimal, and there were limited opportunities for the newly educated and growing rural population (Stigter, 2006). A widespread drought in the 1970s led to large-scale crop failure and further migration of many Afghans from northern and north-western Afghanistan into Iran (Monsutti, 2006). In addition, the oil boom of 1973 drew increasing numbers of Afghans into Iran and other Middle Eastern countries to capitalize on labour opportunities (Stigter, 2006). Studies have also confirmed that prior to the war, migrants from northern Afghanistan travelled to Pakistan during the winter, illustrating that seasonal migration occurred between the two countries (Stigter, 2006, from CSSR, 2005). These pre-established migration movements reveal that social networks existed between Afghanistan and Pakistan and Iran prior to the Soviet invasion and the ensuing wars. Monsutti states that "Channels of pre-established transnational networks exist between Afghanistan, Pakistan and Iran - the movement of individuals to seek work, to escape drought or to flee war has been a common experience in Afghanistan" (Monsutti, 2006, p. 6-7). Thus, it can be deduced that migration to Pakistan and Iran was a natural option for many Afghans.
International Migration Post 1978

International migration from Afghanistan since 1978 has primarily been composed of refugee flows. The vast majority of refugees fled to Pakistan and Iran in one of the largest refugee crises of the late 20th century. Figure 1 shows the number of Afghan refugees in Pakistan and Iran from 1979 to 2001. It illustrates that refugee outflows from Afghanistan began in 1979 with the Soviet invasion. The outflows continued to increase during the Soviet occupation, when there was civil war between the US-funded Mujahideen and the Soviet-backed Najibullah. Flows during this time spanned social classes and ethnic groups, as the initial reason for migration was primarily protection. However, a lack of economic opportunities, devastation of infrastructure and trade networks, limited access to social services such as healthcare and education, and political and social reasons also contributed to migration flows (Stigter, 2006). Migration was thus driven not only by the need for refugee protection, but also by the need to make a livelihood (Stigter, 2006). The peak of the refugee flows occurred in 1990, with 6.2 million Afghan refugees. This was after the Soviet withdrawal, when Najibullah remained in power (Jazayery, 2002, p. 240). In the 1990s drought contributed to continuing refugee flows from Afghanistan (Stigter, 2006). The fall of Najibullah in 1992 led to large-scale repatriation. However, with the Taliban gaining power in 1996, the number of refugees began to increase again, to approximately 3.8 million in 2001. During the initial refugee outflows in 1979, both Pakistan and Iran warmly welcomed the refugees under a banner of Muslim solidarity (Monsutti, 2006). Iran is a signatory to the 1951 Convention Relating to the Status of Refugees and its 1967 Protocol, while Pakistan is not; nevertheless, both countries welcomed the refugees.
In Iran the refugees were given identification cards, allowed access to work, health care, food, and free primary and secondary education, and were free to settle where they chose (Monsutti, 2006). Pakistan created an agreement with the United Nations to provide services to the Afghan refugees and received financial support from the international community (Monsutti, 2006). The era of welcoming Afghan refugees began to change in 1989. In Pakistan refugees were still welcomed from 1989 to 2001, but were not provided with the same level of services and facilitation (Monsutti, 2006). In Iran support also decreased, and by the 1990s refugees no longer received identity cards and assistance (Monsutti, 2006). The position of the host countries became increasingly unfriendly post-2001, which will be discussed in the next chapter of this paper.

Return Migration

The Mujahideen took over the government in 1992, and as a result nearly 2 million refugees returned to Afghanistan. By 1997 an estimated 4 million refugees had returned from Pakistan and Iran (Stigter, 2006). Simultaneously, however, conflicts between rival Mujahideen groups dissuaded many refugees from returning, and created new refugees and IDPs.

Internal Migration

The primary source of internal migration in Afghanistan was internally displaced persons (IDPs).

Internally Displaced Persons

Internal displacement flows have followed a similar trajectory to refugee flows. The exact number of IDPs is not known; Figure 3 shows the estimated number of IDPs in Afghanistan from 1985 to 2001. Generally, those who are internally displaced do not have the means to cross an international border. IDPs in Afghanistan had access to very few services during this period. The UNHCR's capacity in Afghanistan began to increase after 1992, as illustrated in Figure 3 by the red line. From 1995 the two lines start to converge as the number of IDPs assisted by UNHCR increases and the total number of IDPs decreases.
By 2001 the number of IDPs had significantly increased, to 1.2 million. The number of IDPs in Afghanistan will be further examined in the next chapter.

4. Current Migration Patterns: 2001-Present

Current migration patterns in Afghanistan are complex and multifaceted. Since 2001 Afghanistan has witnessed the largest movement of refugee return in UNHCR's history (Monsutti, 2008). These flows have been a mixture of voluntary and forced return of refugees who had been outside of Afghanistan for varying periods. The majority of returnees are from Pakistan. Afghan refugees have maintained ties with Pakistan, and cross-border labour migration between Afghanistan and Pakistan is now increasing. In addition to international flows, the number of IDPs has decreased in Afghanistan since 2001 as IDPs return to their regions of origin. Finally, within this picture there are large flows of rural-urban migration, as returnees and non-returnees find limited opportunities in rural areas and move to the cities in search of work. All of these flows are occurring simultaneously and present a complex picture of current migration patterns. Each of these areas will be addressed in the following section.

Internal Migration

Internal migration flows in Afghanistan have been increasing in the post-Taliban period. As refugees and migrants return to Afghanistan, they do not necessarily end their migration cycle. Returnees may continue to migrate internally in search of livelihoods and opportunities. The internal migration flows in Afghanistan comprise IDPs, rural-to-urban migration, and trafficking.

Internally Displaced Persons

Internal displacement in Afghanistan has been understudied, and information is limited to what is available from the UNHCR. In 2004 the UNHCR conducted a data profiling of IDPs in UNHCR-assisted camps, and in 2008 the UNHCR created a national profile of IDPs in Afghanistan. Statistics regarding IDPs are estimates[1].
Table 3 shows the number of IDPs and IDP returnees from 2001 to 2008. At the fall of the Taliban in 2001 there were approximately 1.2 million IDPs in Afghanistan, of whom many returned spontaneously in 2002 (UNHCR, 2008, p. 6). In 2008, IDP returns were negligible due to continued insecurity, inter-tribal and personal conflict, landlessness and drought, and lack of job opportunities and basic services in rural areas (UNHCR, 2008).

Table 3: IDPs Total and Returns, 2001-2008 (totals only; per-year rows not shown)

IDPs (Total): 2,865,700
IDPs (Assisted): 513,700
IDP Returnees (Total): 822,600
IDP Returnees (Assisted): 31,000

Source: UNHCR Global Reports, 2001-2008

Of the current IDPs (235,000), the UNHCR identifies 132,000 as a protracted caseload (2008). Table 4 shows the reasons for displacement of the current IDP population. These numbers do not include invisible IDPs or unidentified urban IDPs. UNHCR estimates that the actual number of IDPs in Afghanistan is substantially larger than these figures suggest (2008, p. 18).

Table 4: Reason for Displacement of Current IDPs (2008)

Reason for Displacement | No. of Families | No. of Individuals
Protracted | 31,501 | 166,153
New, drought-affected | 1,083 | 6,598
New, conflict-affected | 1,749 | 9,901
Returnees in displacement | 8,737 | 52,422
Battle-affected | 127 | 759
Total | 43,197 | 235,833

Source: UNHCR, 2008

Since 2007 the return of IDPs has continued to decrease due to increased instability in the country, drought, landlessness, and the spread of conflict and insurgency (IDMC, 2008). Disputes are arising between IDPs and locals, as in Afghan culture, if you are not born in a region you do not belong there (IDMC, 2008). Options for IDPs appear to be limited, as they are not welcomed in the regions where they seek protection.

Rural to Urban Migration

Urbanization is occurring rapidly in Afghanistan as returnees settle in the cities and people migrate from rural communities to urban centres. Approximately 30 percent of returnees settle in Kabul (Stigter, 2006).
The population of Kabul was roughly 500,000 in 2001 and had grown to over 3 million by 2007 (IRIN, 2007). The urban centres do not have the infrastructure or resources to meet the needs of the large inflows of migrants; however, research suggests that conditions in the cities, though difficult, are better than those in rural areas. In 2005 the Afghanistan Research and Evaluation Unit conducted a study on rural to urban migration (Opel, 2005). A total of 500 migrants were interviewed in the cities of Kabul, Herat, and Jalalabad. The majority of migrants were male (89 percent) and the average age was 31 years (p. 4). Males tend to migrate to support their families, while females migrate when they have lost their husbands or have been ostracized by their community and have no means of supporting themselves in rural areas. The majority did not own productive assets in their village (71.2 percent), although 43 percent owned a house there (p. 8). The primary reason for migration was the lack of work in the village combined with better opportunities in town (42%), followed by lack of work in the village alone (38.2%) and insecurity (16.3%) (p. 11). The majority of migrants made the journey on their own (70.7%) and paid for it from their savings (p. 14). Migration to urban areas is expensive, and the poorest of the poor cannot afford the journey. Once in the cities, the majority were employed in low-skilled day labour, and on average respondents reported working 16 out of the past 30 days (p. 20). Social networks were essential in finding work: 89 percent of skilled workers and 60 percent of unskilled workers reported receiving assistance from a relative, friend or neighbour (p. 20). Incomes in the cities were low, but higher than what individuals could earn in rural areas. The majority of urban migrants remitted money to their families in rural areas, carrying it with them when they returned or sending it through family or friends.
None of the urban migrants used the Hawala system (see Chapter 6), which was reported to be too expensive for them. The majority of migrants reported planning to settle in the city (55%) (p. 26). Overall, the majority did improve their economic situation through migration (61.9% of males and 80.9% of females) (p. 27). Large-scale migration to urban centres appears to be a trend that will continue. It is estimated that urban centres now account for 30 percent of the population of Afghanistan (Opel, 2005). The rapid urbanization has shifted rural poverty to urban poverty (Stigter, 2006), and many challenges remain for the cities in managing this rapid growth.

National Trafficking

In 2003 the IOM in Afghanistan conducted a study on trafficking of Afghan women and children. Research on trafficking in Afghanistan is difficult due to the lack of data inherent in all areas of Afghanistan, and more so due to the fear of reporting trafficking-related crimes and the shame associated with them. The IOM reports that trafficking occurs in four ways in Afghanistan. The first is prostitution, which is believed to be occurring in Kabul but is not reported, as prostitution is not supposed to occur in an Islamic state. The second is forced labour services, which occur in the form of forced marriages of women and girls. The third is servitude, either sexual or domestic, which occurs with both boys and girls as young as 4 years old who are taken and sexually abused. Finally, trafficking for the purpose of organ removal has been reported, but there is no evidence to substantiate these claims. It can thus be inferred that trafficking of persons is occurring in Afghanistan, but the degree and forms of trafficking are unknown (IOM, 2003).

International Migration

Afghanistan has had large international migration flows since the fall of the Taliban, primarily from refugee returns.
This section will discuss refugee return since 2001 and the emergence of circular migration systems between Afghanistan and Pakistan.

Refugees

The number of Afghan refugees in Pakistan and Iran has continually decreased since 2001. Figure 4 shows the number of Afghan refugees in Pakistan and Iran from 2001 to 2008. Migration flows in Afghanistan since 2001 have been comprised primarily of refugee return flows. Statistics regarding return flows vary by source. Table 5 shows estimated return flows from the UNHCR. Kronenfeld estimates that in 2002 there were 2,153,382 refugee returns, which differs substantially from the table (2008, p. 48). It is widely recognized, however, that capturing flows of people at such high volumes presents logistical challenges.

Table 5: Estimated Refugee Returns
        Pakistan               Iran                  Various           Total
Year    Total      Assisted    Total      Assisted  Total   Assisted  Returnees   Assisted
Total   3,462,900  1,940,900   1,599,800  600,000   11,060  2,000     5,073,760   2,542,900
Source: UNHCR Global Reports, 2001-2008

The refugee returns recorded from 2002 were higher than the numbers of refugees thought to be residing in Pakistan and Iran. Turton and Marsden (2002) state: “In January 2002, UNHCR issues a draft planning document for the “Return and Reintegration of Afghan Refugees and Internally Displaced People” over a three-year period, in which it estimated that there were 2.2 million Afghan refugees living in Pakistan and 1.5 million in Iran. It was envisaged that, during the course of 2002 and with the assistance of UNHCR, 400,000 refugees would return from Pakistan, and the same number would return from Iran. Approximately the same numbers were expected to return in 2003 and 2004” (p. 19). It is evident from Table 5 that the number of returnees in 2002 alone (1.8 million) was more than double the initial UNHCR estimates. One reason cited for the large numbers of returnees was the issue of ‘recyclers' from Pakistan.
A ‘recycler' is a refugee who registers with the Voluntary Repatriation Centre in Pakistan, crosses the border to Afghanistan to receive the cash grant, food, and other items, then returns to Pakistan via an alternative route and engages in the process again (Turton and Marsden, 2002, p. 20). Recyclers were far less common in Iran: the return journey is much longer, the cash grant and return package were far less substantial, and in Iran it took on average a month to obtain a Voluntary Repatriation form, whereas in Pakistan it was issued the same day (Turton and Marsden, 2002). The issue of ‘recyclers' was virtually resolved by the fall of 2002, when UNHCR received iris-scanning technology that made recyclers identifiable (Kronenfeld, 2008). Thus recycling may have contributed to the high statistics, but it is not the only explanation. Kronenfeld states that one reason for the discrepancy is a gross underestimation of the refugee population in Pakistan: “UNHCR estimated in the middle of 2001 that there were two million Afghans living in Pakistan (and one million in Iran). But three years later, after the return of nearly three million Afghans, the census shows that over three million still remain in Pakistan- well over the initial 2001 estimate” (Kronenfeld, 2008, p. 49-50). It appears that the growth rate of the refugee population that fled in the late 1970s had not been factored into the statistics. The growth rate of the Afghan refugee population in 2005 was estimated at 3 percent. Furthermore, 19.4 percent of the refugee population was under the age of five, and 55 percent were under the age of 18 (p. 49). Thus, half the population of Afghan refugees in Pakistan was born in exile. The individuals who did return were not from the UNHCR camps.
Returnees from Pakistan were those living in urban areas, not the camps, and thus not necessarily included in the general refugee statistics (Kronenfeld, 2008). Turton and Marsden (2002) hypothesize that the majority of returnees in 2002 were those who were having difficulty making ends meet: “from urban areas of Pakistan, where they had been surviving on low and erratic incomes from daily labour” (p. 2). Returnees from Iran had been there for less than five years (Stigter, 2006), and thus were less socially integrated than refugees who had stayed longer. It is generally recognized that refugees who have been outside their country of origin for longer have stronger economic and social ties in the host country, and weaker ties to their country of origin, making return more difficult (Stigter, 2006). Return after 2002 from Pakistan and Iran was influenced by “asylum fatigue” in the host countries. Both had now been dealing with a protracted refugee situation and were hosting approximately 20 percent of the world's refugee population. The political climate for refugees in the host countries became increasingly difficult after the fall of the Taliban. Both Iran and Pakistan, albeit Iran more aggressively, began to forcibly deport refugees: “From 2002 to the end of December, 2005, a total of 271,508 individuals were deported from Iran in comparison to 5,347 individuals from Pakistan” (Stigter, 2006, p. 113, from UNHCR, 2006). People without papers in Iran were taken by the police and forcibly deported, and in 2001 Iran passed legislation imposing heavy fines on employers who hired undocumented workers. In 2008, the Government of Pakistan began to officially close refugee camps and deport refugees; the camps were destroyed for urban land use. These measures are expected to contribute to the continued flow of returnees.
Since 2005, however, the flow of returnees has tapered off, despite the fact that millions of refugees still reside in Pakistan and Iran. According to the 2005 census of Afghan refugees in Pakistan, 51 percent of refugees remaining were long-stayers who had arrived in Pakistan before 1981 (Government of Pakistan, 2005, p. 19, in Kronenfeld, 2008, p. 51-52). Figure 5 illustrates the primary reasons cited by refugees for not returning to Afghanistan. In addition to these reasons, the political situation in Afghanistan has deteriorated since 2005, which deters people from returning. Return to Afghanistan has been geographically uneven. Approximately 30 percent of returnees have settled in Kabul (Stigter, 2006, p. 114). There has been minimal return to the south and southeast due to insecurity in those areas (Stigter, 2006). In 2008 returnees accounted for 16 percent of the total population, rising to 32 percent in the East and 20 percent in the Central region (UNHCR, 2008). The issue of refugee return in Afghanistan still presents many challenges. The underdevelopment of the country, especially in rural areas, has led to a severe lack of basic infrastructure, and poverty is widespread. In retrospect, analysts have suggested that the high levels of return were too fast for Afghanistan's absorption capacity and that further return should not be the priority at this time (Stigter and Monsutti, 2005). The country needs to be able to meet the needs of its current population before it can absorb further returnees. On the other hand, refugees are increasingly unwelcome in Pakistan and Iran, despite the fact that they fill low-level jobs and contribute to the economy in both countries. The protracted refugee situation is thus continuing in an environment of competing geo-political interests.
Circular Migration

At present, circular migration is arguably the primary form of migration occurring between Afghanistan and Pakistan. It is important to note from the previous section that return migration of refugees does not necessarily mean the end of the migration cycle (Monsutti, 2008). Furthermore, in Monsutti's research with refugees in Pakistan and Iran, it became evident that the majority of refugees had returned to Afghanistan to see the conditions for themselves before deciding not to return (2004). This suggests the occurrence of informal circular migration processes. In 2008 the UNHCR commissioned a study on cross-border movements between Pakistan and Afghanistan (Altai Consulting, 2009). The study revealed that cross-border migration is occurring at substantially higher levels than anticipated, in movements of circular migration based on employment, social relations, and access to essential services such as health care and education. The study was based on interviews with 1,007 migrants crossing to Pakistan, 1,016 migrants crossing to Afghanistan, and a counting exercise of people crossing the border. The counting exercise, conducted over a week, revealed that in a single morning or afternoon an average of 11,297 people entered and 16,257 exited Afghanistan. On a given day official records would show 138 exits while the counting showed 23,934, illustrating that official records substantially under-represent cross-border flows (Altai Consulting, 2009). The survey portion of the study provided a strong picture of the types of, and reasons for, cross-border migration. The vast majority of the migrants were males travelling alone (75.3 percent) (p. 35). The results indicate that 81.3 percent of interviewees reported going back and forth on a regular basis (p. 30); 85.9 percent had lived in Pakistan for over a year (p. 31); 89.5 percent were planning to spend less than one year in Pakistan; and 19.7 percent had permanent residence in Pakistan (p. 33).
Figure 6 shows the primary reasons cited for travel to or from Pakistan. The majority of migrants (64.7 percent) were planning to work in low-skilled professions (construction and retail) in Pakistan. A lack of employment opportunities in Afghanistan was driving them to find temporary work in Pakistan that would allow them to meet their own and their families' basic needs. This study supports the research of Monsutti in that Afghans currently migrate to Pakistan on a temporary basis as a livelihood strategy or to maintain social networks. Monsutti states that, “After many years the migratory movement are highly organized, and the transnational networks become a major, even constitutive, element in the social, cultural, and economic life of Afghans” (Monsutti, 2008, p. 62). It also supports Monsutti's argument that return migration does not mean the end of the migration cycle, as the results indicate that a large number of these migrants were returnees. The current migration processes between Pakistan and Afghanistan are a way of life for Afghans, as their social networks and economic opportunities span the border.

5. The Diaspora

Three decades of conflict and displacement have led to the emergence of the Afghan Diaspora, which grew out of two waves of emigration. The first wave, from 1980 to 1996, was primarily comprised of the upper classes and individuals opposed to the communist regime (Wescott, 1996). The second wave occurred from 1996 to 2001 with the rise of the Taliban and was primarily comprised of minority groups, such as Shia Muslims, Sikhs, and Hindus, with a large representation of women and children (Wescott, 1996). Today, the Afghan Diaspora is a highly transnational group, with money, goods, information and people circulating between Afghans on different continents around the world (Braakman, 2005).
Size of the Afghan Diaspora

It is important to distinguish between the “near Diaspora” and the “wider Diaspora” in the case of Afghanistan. Many Afghan refugees are located in the neighbouring countries of Iran and Pakistan, while others have moved further abroad. The former are termed the “near Diaspora” and the latter the “wider Diaspora” (Koser and Van Hear, 2003). This section refers to the wider Diaspora. It is evident from Figure 7 that the United States and Germany have been the primary destinations for Afghan migrants. The total estimated size of the Afghan Diaspora is 2,031,678, including both the near and wider Diasporas (World Bank, 2007). Figure 8 shows the stock of the Afghan-born population in the top ten receiving countries of the wider Diaspora. The largest concentration of Afghans is in Germany, followed by the United States. In the 1970s Germany had the most liberal asylum policies, which attracted large numbers of refugees, and it continues to be a country of preference because of its large Afghan population (Braakman, 2005). Among the Afghan community in Germany, Hamburg is home to 22,000 Afghans and is known as ‘The Kabul of Europe' (Braakman, 2005). The Afghan Diaspora in Germany is well organized and has a number of associations (Zunzer, 2004). A key means of exchange for the community is the website “Afghan German Online” (Zunzer, 2004). Furthermore, the German Afghan community was highly engaged in the Bonn process in 2002. According to the 2006 US Census, there were 65,972 Afghanistan-born Afghans in the US. The Afghan Embassy in the US, however, estimates that there are 300,000 Afghans in the country (The Embassy of Afghanistan, 2009). According to the US Census, 53 percent of the Afghan population entered before 1990, 28.3 percent between 1990 and 2000, and 18.5 percent since 2000 (US Census Bureau, 2006). Thus, the majority of Afghans in the US are from the first wave of Afghan migrants.
The median Afghan household income is US$34,973, which is $9,423 below the national average, and 27.7 percent of Afghan households live below the national poverty line, compared to the national average of 9.8 percent (US Census Bureau, 2006). The reasons for the high rates of poverty are unknown. The Afghan population in the US is heavily concentrated in the San Francisco area, Northern Virginia, and Los Angeles (Robson and Lipson, 2002). It is diverse, reflecting the various ethnic backgrounds of Afghanistan: the majority are of Pashtun and Tajik origin, with an Uzbek minority community in New York and small Hazara communities scattered around the country (Robson and Lipson, 2002). As such, there are Sunni Muslims, Shi'ite Muslims, and Ismailis in the United States (Robson and Lipson, 2002).

Involvement of the Afghan Diaspora in the Reconstruction Effort

The Afghan Diaspora has been highly involved in the reconstruction effort. Zunzer (2004) states: “The diaspora played a significant political role in organizing a peaceful transition after the Nato military intervention in 2001/2002. Diaspora members played an important role during the Petersberg Talks, in the ongoing Bonn process of political transition, and as connectors between the international community, the national administrations, international civil society and the private sector” (p. 40). Members of the Diaspora received ministerial positions in the interim government established under the Bonn Agreement, and President Hamid Karzai himself had spent significant time in Pakistan and the US (Van Hear, 2002). At the end of the Bonn Agreement, however, the Diaspora was split into four main groups (Van Hear, 2002). The first was the Northern Alliance, or United Front, which represented Kabul (Van Hear, 2002). The second was the Rome-based delegation representing the former King, Zaher Shah (Van Hear, 2002).
The third was a Cyprus-based group of intellectuals, supported by Iran, that had been meeting in Cyprus for the previous four years to discuss future options for Afghanistan (Van Hear, 2002). The final group, based in Peshawar, was primarily comprised of Pashtun refugees (Van Hear, 2002). These factions illustrate that divisions continued to exist among the Diaspora. Despite these factions, Afghans have come together to assist in the reconstruction effort. Four key programmes were established to engage the Afghan Diaspora. First, the World Bank allocated 1.5 million US dollars for a fund to hire qualified Afghans to return to Afghanistan and assist in the reconstruction efforts. Second, the World Bank established the World Bank Afghanistan Directory of Expertise, a database of skilled Afghans and non-Afghans with experience in Afghanistan; this database has connected many qualified individuals with projects in Afghanistan. Third, the IOM established a Temporary Return of Qualified Nationals programme to engage the Afghan Diaspora in returning temporarily to work on training and capacity-building projects. Finally, the Swiss Peace Foundation has established an Internet forum to create dialogue between civil society, the Diaspora, and government regarding peace in Afghanistan (Zunzer, 2004). In addition to these programmes, Afghan Diaspora groups are uniting on their own to build networks among themselves in an effort to get involved in the reconstruction effort. For instance, the Afghan Diaspora in the US has made significant contributions to the education sector in Afghanistan: “With investments in school construction and teaching, 6 million Afghan children were able to register for school, 34% of them being female” (The Embassy of Afghanistan, 2009). Both the financial and intellectual investments of the Diaspora appear to be an integral piece of the reconstruction effort.

6.
Migration and Development

Transnational migration networks provide an essential contribution to development in Afghanistan. Skill flow out of Afghanistan has occurred for decades, but since the fall of the Taliban modest numbers of skilled Afghans appear to be returning to the country. The Diaspora in the West provides essential remittances that give families the funds needed to meet daily needs. Migration has been employed as a livelihood strategy in Afghanistan for decades, and through transnational networks Afghanistan receives much-needed support for development and reconstruction.

Brain Drain / Skill Flow

The skill flow out of Afghanistan presents a challenge for the reconstruction process. In the 1980s and 1990s the majority of Afghans who migrated to Europe, North America, or Australia were the country's elite from the upper and middle urban classes (Monsutti, 2008, p. 68). This group had the skills to seek better opportunities in the West, resulting in a brain drain from Afghanistan. In 2000, the World Bank put the skilled emigration rate at 13.2 percent and the emigration rate of physicians at 9.1 percent (2009). This data, however, tells little of the current situation. In 2005, the World Health Organization reported a total of 5,970 physicians and 14,930 nurses and midwives in Afghanistan (2009) — roughly one physician per 5,000 people. An opinion piece in the New York Times in 2006 stated that physicians in Afghanistan made roughly $100 a month and university professors earned less than $2 per month (Younossi, 9 Feb 2006). The same piece stated: “When I asked university students whether they want to stay in Afghanistan or go to another country, an overwhelming majority said they want to emigrate”. The underdevelopment of Afghanistan is resulting in a continued skill flow out of the country.
Simultaneously, however, Afghans are returning to the country to assist in the reconstruction process. One example is as follows: “In Afghanistan, the transitional government identified a number of qualified persons in the justice sector, because under the Taliban rule the country had lost most of its judges. IOM was called by the transitional government to rebuild the educational and justice sectors. Some 4,000 qualified nationals enrolled in the database, giving their availability, and 400 persons went to Afghanistan. Thus, there was a need to develop modalities that would use these skills in the best possible way and through projects that made sure they do not lose the possibility to return in the host country” (Dall'Oglio in Roison, 2004). Data does not exist on the scale of skill flows in and out of the country, but anecdotal information suggests the flow occurs in both directions. Given the large skills deficit created over the last two decades, however, it appears that without a significant inflow of skills to counteract past and current emigration, the deficit will persist. A lack of skilled personnel contributes to continued underdevelopment, particularly in key areas such as the health and education sectors.

Remittances and Development

Remittances provide a key livelihood for migrants' families remaining in Afghanistan. The International Fund for Agricultural Development (IFAD)[2] estimated in 2006 that annual remittance flows to Afghanistan were valued at 2,485 million US dollars, or 29.6 percent of GDP. This includes both formal and informal remittance flows. Alessandro Monsutti (2004) concludes from his research with Afghan migrants that annual remittances to Afghanistan in 1995 would have been an estimated 50 million dollars (p. 220). Even if this estimate is imprecise, Monsutti argues that the overall amount remitted would certainly exceed international humanitarian assistance (2004).
In 2003 the World Bank conducted a National Risk and Vulnerability Assessment of 11,200 households in Afghanistan. Table 6 shows the percentage of households receiving remittances and, for remittance-receiving households, the source country of the flows and the average per capita value received. The data is divided into five quintiles based on the economic status of the household, with Q1 representing the poorest households and Q5 the richest. Table 6 highlights that the majority of remittances are received from family members outside of Pakistan and Iran, which aligns with previous data in this report, as individuals in Pakistan and Iran often cannot afford to send remittances. Table 6 also highlights a large discrepancy in the per capita value of remittances between the richest and poorest households, with the richest households receiving nearly two and a half times as much.

Table 6: Remittances Received
                                                      Q1 (poorest)  Q2    Q3    Q4    Q5 (richest)  Total
Percentage of households receiving remittances        10.5          10.1  12.8  18.3  23.2          15.2
From Pakistan/Iran (%)                                35            32    32    28    27            31
From other (%)                                        65            68    68    72    73            69
Remittances per capita (receiving households, US$)    19            28    33    38    47            34
Source: World Bank, 2005 (p. 174)

For families receiving remittances, remittances account on average for 20 percent of expenditure; for the lowest quintile the figure is 30 percent (World Bank, 2005, p. 25). Remittances are a vital source of livelihood for the families receiving them. In the majority of cases they are used to meet basic needs such as food, clothing, and medicine (Monsutti, 2006). The benefits are generally short-term, creating a dependency cycle (Monsutti, 2006). Few remittance receivers are able to accumulate assets such as a house, land, or savings for the mahr and weddings (Stigter, 2006), or to purchase luxury goods such as a car, camera, or television (Monsutti, 2006).
Remittances, for the most part, are essential to maintaining livelihoods for those who have returned or have not migrated. Remittances in Afghanistan have not only a crucial economic component but also a significant social component. The remittance-sending mechanism in Afghanistan is the Hawala system, which is based on social networks. A hawaladar is a half-merchant, half-banker whose expertise is the transfer of money and goods (Monsutti, 2008). If an individual, a kargar, knows a hawaladar (they must belong to the same lineage or come from the same valley), the kargar can go directly to the hawaladar; if not, he or she must go through a middleman. Once the hawaladar has the request, a letter stating the details of the transaction is sent to his partners and another to the kargar's family. The hawaladar either sends the money directly, making a profit on the currency exchange, or uses the money to purchase goods that are sent instead. The goods are sold by a partner on the other end, who uses the proceeds to pay the kargar's family. This process can involve several partners and steps before the money reaches the final partner near the family (Monsutti, 2008; Monsutti, 2006; Monsutti, 2004). The important aspect of the hawala system is that it is based on social relationships. It operates around the world and provides a functioning remittance system in the absence of formal banking institutions in Afghanistan. Within the hawala system there is tremendous trust, sustained through regular interactions over long periods of time. The hawala system has established a transnational network and cooperation system among Afghans around the world. Its remittance-sending structure has been essential to maintaining livelihoods in Afghanistan, and it remains the primary remittance mechanism in the country today (Monsutti, 2008; Monsutti, 2006; Monsutti, 2004).
Migration and remittances are essential to the development of Afghanistan. Remittances account for greater flows than humanitarian assistance to Afghanistan and are critical to sustaining families and preventing further poverty. As humanitarian assistance to Afghanistan continues to decrease, remittances will gain further importance for development.

7. Migration Policies and Programmes

Migration policies in Afghanistan are organized and implemented through partnerships between the Islamic Republic of Afghanistan and international organizations. The key policy unit is the Ministry of Refugees and Repatriation (MoRR). The key international organizations involved in migration in Afghanistan are the International Organization for Migration (IOM) and the United Nations High Commissioner for Refugees (UNHCR).

The Ministry of Refugees and Repatriation

The Ministry of Refugees and Repatriation (MoRR) is responsible for implementing migration policies and programmes in Afghanistan. The MoRR operates under the Afghanistan National Development Strategy, which serves as the country's Poverty Reduction Strategy Paper (Islamic Republic of Afghanistan, 2008). The MoRR has initiated a number of migration management projects in cooperation with international organizations such as the UNHCR and IOM. The primary objectives of the MoRR are to ensure integration and resettlement, safe livelihoods, employment opportunities, vocational training, and legal support during repatriation for returnees. The MoRR's national policy priorities are based on the following five principles:
“• Voluntary, gradual, safe and dignified return of refugees and their reintegration in their places of origin.
• Ensuring reintegration and resettlement
• Protecting their rights and privileges
• Building the capacity of the households
• Ensuring employment opportunities” (MoRR, 2007, p. 7)
These principles guide the programme planning of the MoRR (MoRR, 2007).
MoRR has 34 branches in different provinces in Afghanistan and additional special branches outside the country to implement the strategy for solving the problems of refugees and returnees (MoRR, 2007). MoRR has established permanent residential facilities in 50 townships located in 29 provinces to provide legal assistance and employment and educational opportunities (MoRR, 2007).

International Organization for Migration

The International Organization for Migration (IOM) in Afghanistan is based in Kabul and provides technical cooperation and capacity building to Afghan government institutions in managing migration (IOM, 2009). The IOM provides emergency relief to vulnerable displaced families, facilitates long-term return and reintegration to and within Afghanistan, and stabilizes migrant communities. The IOM runs several programmes providing emergency and post-conflict migration management services in Afghanistan. These programmes include (IOM, 2009):

* Rapid Response Humanitarian Assistance - Assists refugees and migrants returning to Afghanistan in recent years, “many of them vulnerable without adequate shelter, food, water or mean to travel to their final destinations.”
* Afghan Civilian Assistance Programme - Addresses temporary and medium-term displacement by providing assistance packages to those displaced by military activities.
* Construction of Health and Education Facilities - Works with the Afghan Ministries of Public Health and Education to construct hospitals, midwifery training schools and teacher training colleges.
* Support to Voter Registration - Provides support in capacity building for trained staff for the Independent Election Commission.
* Return of Qualified Afghans - Coordinates the return of qualified Afghans to participate in the reconstruction process. According to IOM, “846 Afghan experts living abroad have returned to Afghanistan from 32 countries with IOM's assistance in order to participate in the rebuilding of their nation”.
* Assisted Voluntary Return and Reintegration - Coordinates the voluntary return of failed asylum seekers from developing countries; "IOM has assisted over 7,600 Afghans with their returns, approximately 2,500 of whom received individually tailored reintegration assistance packages. Assistance includes training, self-employment, business start-ups and employment referral".
* Counter-Trafficking Initiative - Seeks to provide awareness and protection to victims of trafficking.
* Passport and Visa Issuance Capacity Building - Supports the capacity building of the Afghan Government in passport and visa issuance.
* Border Management - Seeks to support the management of Afghanistan's borders with Pakistan and Iran.
* Support to Provincial Governance - Provides grants for sub-projects run by the provincial governments and their partners.

These programmes form the core of the IOM's activities in Afghanistan and are essential to providing services to Afghans. The IOM, in cooperation with the European Union, established the Return, Reception and Reintegration of Afghan Nationals to Afghanistan (RANA) programme in 2004. This programme provides additional assistance to Afghan nationals returning to Afghanistan from EU member states (IOM, 2007). The objective of the programme was to encourage and provide for sustainable return to Afghanistan. The programme provided training, employment, on-the-job training, and self-employment assistance to returnees (IOM, 2007). As of 2007, a total of 2,097 Afghan refugees had used the programme to return, of which 35.2 percent were from the Netherlands and 35 percent were from Germany (IOM, 2007, p. 6).

United Nations High Commissioner for Refugees

The United Nations High Commissioner for Refugees (UNHCR) has played a key role in the protection and reintegration of refugees in Afghanistan. The UNHCR provides immediate and emergency services, as well as long-term services, to returnees.
Since 2002 the UNHCR has "supported the construction of more than 181,000 shelters in rural areas benefiting over 1 million homeless returnees" (UNHCR, 2009). The UNHCR also provides services such as developing water points: "9,415 water points have been completed under UNHCR's water programme in high or potential return areas, as well as those hit by drought" (UNHCR, 2009). In addition, the UNHCR has provided a limited number of income-generating projects in areas of high return to assist returnees in building livelihoods (UNHCR, 2009). The UNHCR also provides services to Afghan refugees in Pakistan and Iran, including running the refugee camps in Pakistan and providing voluntary return centres in both countries.

8. Migration Relationship with the Netherlands

The Netherlands and Afghanistan

The Netherlands has supported Afghanistan in the reconstruction effort since the fall of the Taliban in 2001 through humanitarian aid, development assistance, and the deployment of Dutch troops. The Dutch effort is targeted at fighting poverty in Afghanistan and helping to establish stability in the region. The Netherlands is a member of the Afghanistan Compact and pledged 10 million USD to support Afghanistan in 2006. The Netherlands is the country lead in the area of good governance and provides assistance for elections and for developing a democratic state (Buitenlandse Zaken, 2006). In 2006, 1,400-2,000 Dutch troops entered Afghanistan and took responsibility for the province of Uruzgan in Southern Afghanistan (Buitenlandse Zaken, 2009). At that time the Dutch committed troops to the NATO International Security and Assistance Force in Afghanistan until 2008. The mission has been extended to 1 August 2010, with a reduction in troops to 1,100 (The Netherlands UK Embassy). The Dutch mission has been active in providing security and development aid to Uruzgan.
Dutch development assistance to Uruzgan has included the building of infrastructure, support for education, women and girls, and health care. At this time, 15 schools and seven large health centres have opened in the region, and there are plans to establish a total of 78 new schools and provide further health support and services. Saffron corms have been distributed in the region, along with cultivation lessons, to help farmers establish a crop that commands a high price on global markets. The Dutch have supported the establishment of a radio service to connect the people of Uruzgan with the Afghan government. Finally, microcredit programmes have been established to provide economic opportunities to men and women in the region. Further initiatives are underway in Uruzgan, but it is evident that a holistic approach has been taken to providing development assistance to the area (Ministry of Foreign Affairs, 2009).

Migration Flows between the Netherlands and Afghanistan

Immigration from Afghanistan to the Netherlands was virtually nonexistent prior to 1985. Figure 9 illustrates the number of asylum applications from Afghanistan to the Netherlands from 1980-2008. It is evident that the majority of asylum applications were received during the Taliban rule of 1992-2001. Since the fall of the Taliban, the number of asylum applications has fallen significantly. Figure 10 shows the migration motivations of individuals coming to the Netherlands. It is evident from Figure 10 that the vast majority of migrants are asylum seekers. The numbers for family reunification have followed a trajectory similar to that of asylum applications, with a lag of a few years, which plausibly indicates that asylum seekers apply for family reunification once they have received their status. Figure 10 also illustrates that the numbers of students and labour migrants to the Netherlands are negligible.
It can be deduced from the data that the majority of Afghans in the Netherlands are fairly recent migrants, arriving within the last 20 years, who have received residence in the Netherlands on the basis of refugee status. Emigration numbers of Afghans from the Netherlands have been small. Figure 11 shows the number of Afghan emigrations from the Netherlands from 1995-2008. In 2003, the Netherlands, Afghanistan, and the UNHCR signed a tripartite Memorandum of Understanding on voluntary return migration from the Netherlands to Afghanistan (UNHCR, 2009). This agreement provided further assistance to Afghans returning to Afghanistan. It is possible that this programme contributed to the increase in return numbers from 2003 illustrated in Figure 11. The data indicate that Afghan immigration to the Netherlands has been more pronounced than Afghan emigration.

The Afghan Community in the Netherlands

In 2009 there were 37,709 Afghans living in the Netherlands (CBS, 2009). Figure 12 shows the age and gender distribution of the Afghan population in the Netherlands in 2008. Figure 12 illustrates that there are slightly more males (54 percent) than females (46 percent) living in the Netherlands. It is also evident that the Afghan population in the Netherlands is young, with 90 percent being under the age of 50. Remittance sending from the Netherlands to Afghanistan is estimated at €79,664 (Siegel et al., 2009). In a remittance corridor analysis conducted by Siegel et al. (2009) on 180 individuals in the Netherlands, 40 percent had sent remittances to Afghanistan in the last 12 months. The hawala system is widely used for sending remittances from the Netherlands to Afghanistan. In general the amounts remitted are between €100 and €300 (Siegel et al., 2009). The remittances are primarily used in Afghanistan to meet daily needs. The Afghan community in the Netherlands has grown rapidly since the early 1990s.
The Afghan community is young and the vast majority of Afghans have come to the Netherlands as refugees. It is thus not surprising that unemployment among Afghans in the Netherlands is over 60 percent and that over 50 percent of the population is on social assistance (Siegel et al., 2009).

9. Future Perspectives of Migration

High levels of multifaceted migration flows have been prevalent in Afghanistan for the last 30 years and the evidence indicates that these flows will continue in the future. Three key reasons for continued migration can be noted. First, migration in and from Afghanistan has been motivated by insecurity, underdevelopment, severe poverty, and lack of opportunities. Unfortunately, all these conditions remain in Afghanistan at present. This alone suggests that migration will continue. Second, the UNHCR predicts that the population of Afghanistan will be 97,324,000 in 2050 (Stigter and Monsutti, 2005, p. 3). Afghanistan's economy is based on agriculture, and the rural landscape is already overpopulated. The high level of population growth indicates that rural communities will not be able to support the population, which will lead to increasing migration flows. Third, the evidence illustrates that Afghans are a highly mobile and resilient people. The World Bank states: "Afghans are a resourceful, resilient, creative, opportunity-seeking, and entrepreneurial people (as witnessed by the high incidence of labor migration, entrepreneurial activity wherever they are located, trading networks, and remittances)" (2005, p. 147). Afghan culture and the historical migration patterns of the Afghan people provide a strong indication that migration will continue from Afghanistan. The continuation of high levels of migration flows will pose many challenges for Afghanistan. Primarily, the skill drain will become a more acute issue as the population increases and skilled individuals are needed to meet the needs of the population.
Retaining skilled workers will be a great challenge for Afghanistan if the situation in the country does not improve. In addition to emigration, there are also questions of future return migration to Afghanistan. The current refugee situation between Afghanistan and Pakistan and Iran has prompted much debate regarding the competing geo-political interests. Stigter and Monsutti (2005) argue that repatriation of all Afghan refugees from Pakistan and Iran is not feasible and would have a negative impact on the reconstruction efforts in Afghanistan. Stigter and Monsutti (2005) propose the following recommendations:

"Establish a bilateral labour migration framework that provides a clear legal identity and rights for Afghan labourers in Iran; Provide easier access to passports for Afghans; Increase awareness of the contribution, both in labour and otherwise, of Afghans to the Iranian and Pakistani economy; and In line with international conventions, continue to uphold the refugee status and protection of the most vulnerable" (p. 2).

This approach clearly provides for the increased rights and protection of Afghan refugees. It also suggests that Afghan refugees be permitted the legal right to remain in Pakistan and Iran in the future. Alternatively, however, Pakistan and Iran are seeking to decrease the rights of Afghan refugees, although these refugees contribute positively to the economy of both countries. This situation poses future challenges as to how the refugee situation in Pakistan and Iran will be addressed: that is, whether the refugees will be forcibly repatriated or allowed to reside in Pakistan and Iran. Overall, it is evident that migration flows from and to Afghanistan will continue in the near future.
Stigter and Monsutti (2005) state: "For many migration has become a way of life: it is now highly organized and the transnational networks that have developed to support it are a major, even constitutive, element in the social, cultural and economic life of Afghans" (Stigter and Monsutti, 2005, p. 3). Migration is embedded in the Afghan way of life and will continue to be a key element of the cultural, social and economic fabric.

10. Conclusion

Afghanistan has experienced one of the largest migration flows of any country in the world over the last three decades. These flows have been multifaceted, but have been driven primarily by conflict and insecurity and by the vast underdevelopment of the country. Through the periods of war and now in a time of reconstruction, migration continues to be a key livelihood strategy of Afghan families. Afghans have complex webs of migration that are based on historical, ethnic, cultural and social networks. In particular, Afghanistan has strong migration relationships with its neighbors Pakistan and Iran. Flows from Afghanistan to Pakistan and Iran have occurred throughout the last century. Monsutti states: "Channels of pre-established transnational networks exist between Afghanistan, Pakistan and Iran, as the movement of individuals to seek work, to escape drought or to flee war has been a common experience in the whole region…Many Afghans have been shifting from one place to the next for years- some never returning to their place of origin, others only a temporary basis before deciding to return into Iran, Pakistan, or further afield" (Monsutti, 2008, p. 61). These transnational networks have aided the transmission of money, capital, goods, and ideas between Afghans around the world. The hawala system is based on social networks and spans virtually all corners of the globe, connecting Afghans. It is the most widely used method for Afghans to send remittances and goods back to their country.
The engagement of the Afghan diaspora, both financially and socially, has contributed to the reconstruction of the country. Skilled Afghans have returned to their country both temporarily and permanently to design policies and programmes or to work in their country to assist in the rebuilding effort. Financial remittances sent to Afghanistan are used primarily to meet families' daily needs and comprise a major source of income for remittance-receiving families. Afghanistan continues to face many challenges in the reconstruction effort, including the management of returned refugees, managing migration relationships with Pakistan and Iran, the return of IDPs, and rapid urbanization. The country has experienced difficulty in absorbing the large rates of return, and poverty is high. Retaining the highly skilled poses a great challenge to the country at a time when skilled workers are in demand. However, until significant change occurs in the form of political stability, peace, development, infrastructure, and poverty alleviation, it can be assumed that high levels of migration will continue out of Afghanistan. Migration has been a way of life for Afghans for decades and it will continue to be a key survival strategy.

References

Altai Consulting. (2009). Study on Cross Border Population Movements Between Afghanistan and Pakistan. United Nations High Commission for Refugees. Retrieved 23 November 2009.
Braakmann, M. (2005). Roots and Routes. Questions of Home, Belonging and Return in an Afghan Diaspora.
Buitenlandse Zaken. (2006). The Netherlands in Afghanistan. The Ministry of Foreign Affairs, The Netherlands.
Centraal Bureau voor de Statistiek. (2009). Afghanistan.
Central Intelligence Agency. The World Factbook: Afghanistan. Retrieved 15 November 2009.
Internal Displacement Monitoring Centre. (2008). Afghanistan: Increasing Hardship and Limited Support for Growing Displaced Population.
Retrieved 22 November 2008.
International Organization for Migration (IOM). (2003). Trafficking in Persons: An Analysis of Afghanistan. Kabul, Afghanistan.
International Organization for Migration (IOM). (2007). Return, Reception and Reintegration of Afghan Nationals to Afghanistan.
International Organization for Migration (IOM). (2009). Country Profile: Afghanistan. Retrieved 2 December 2009.
IRIN. (2005). Afghanistan: Child Marriage Still Widespread. Retrieved 23 November 2009.
IRIN. (2007). Afghanistan: Kabul Facing Unregulated Urbanization. Retrieved 3 December 2009.
Islamic Republic of Afghanistan. (2008). Refugees, Returnees and IDPs Sector Strategy. Retrieved 2 December 2009.
Islamic Republic of Afghanistan. (2009). Afghanistan National Development Strategy First Annual Report. Retrieved 30 November 2009 from siteresources.worldbank.org/AFGHANISTANEXTN/.../ANDSreport.pdf
Jazayery, L. (2002). The Migration-Development Nexus: Afghanistan Case Study. International Migration, 40(5), 231-254.
Koser, K., and Van Hear, N. (2003). Asylum Migration and Implications for Countries of Origin. World Institute for Development Economics Research. Discussion Paper No. 2003/20. Retrieved 2 December 2009.
Kronenfeld, D. (2008). Afghan Refugees in Pakistan: Not All Refugees, Not Always in Pakistan, Not Necessarily Afghan? Journal of Refugee Studies, 21(1), 43-63.
Livingston, I., Messera, H., and Shapiro, J. (2009). Afghanistan Index. Brookings Institute. Retrieved 23 November 2009.
Ministry of Foreign Affairs. (2009). Afghanistan. Retrieved 15 November 2009.
Ministry of Refugees and Repatriation (MoRR). (2007). Ministry Strategy for the Afghanistan National Development Strategy. Retrieved 2 December 2009.
Monsutti, A. (2004). Cooperation, Remittances, and Kinship among the Hazaras. Iranian Studies, 37(2), 219-240.
Monsutti, A. (2006). Afghan Transnational Networks: Looking Beyond Repatriation. Afghanistan Research and Evaluation Unit.
Kabul, Afghanistan.
Monsutti, A. (2008). Afghan Migratory Strategies and the Three Solutions to the Refugee Problem. Refugee Survey Quarterly, 27(1), 58-73.
North Atlantic Treaty Organization (NATO). (2009). NATO's Role in Afghanistan. Retrieved 23 November 2009.
Poppelwell, T. (2007). Afghanistan. Forced Migration Online. Retrieved 15 November 2009.
Robson, B. and Lipson, J. (2002). The Afghans: Their History and Culture. Retrieved 2 December 2009.
Roison, C. (2004). The Brain Drain: Challenges and Opportunities for Development. The United Nations Chronicle. Retrieved 22 November 2009.
Siegel, M., Vanore, M., Lucas, R., and de Neubourg, C. (2009). The Netherlands-Afghanistan Remittance Corridor Study. Ministry of Foreign Affairs, The Netherlands.
Stigter, E. and Monsutti, A. (2005). Transnational Networks: Recognising a Regional Reality. AREU. Retrieved 2 December 2009.
Stigter, E. (2006). Afghan Migratory Strategies: An Assessment of Repatriation and Sustainable Return in Response to the Convention Plus. Refugee Survey Quarterly, 25(2), 109-122.
The Asia Foundation. (2006). Afghanistan in 2006: A Survey of the Afghan People. Retrieved 25 November 2009.
The Embassy of Afghanistan. (2009). Afghan Diaspora. Retrieved 2 December 2009.
Transparency International. (2009). Corruption Perceptions Index. Retrieved 30 November 2009.
Turton, D. and Marsden, P. (2002). Taking Refugees for a Ride? The Politics of Refugee Return to Afghanistan. Afghanistan Research and Evaluation Unit. Kabul, Afghanistan.
United Nations Assistance Mission in Afghanistan. (2009). Political Affairs. Retrieved 30 November 2009.
United Nations Development Programme. (2009). Human Development Report 2009 Statistics. Retrieved 30 November 2009.
UNHCR. Global Report 2001. Retrieved 18 November 2009.
UNHCR. Global Report 2002. Retrieved 18 November 2009.
UNHCR. Global Report 2003. Retrieved 18 November 2009.
UNHCR. Global Report 2004.
Retrieved 18 November 2009.
UNHCR. Global Report 2005. Retrieved 18 November 2009.
UNHCR. Global Report 2006. Retrieved 18 November 2009.
UNHCR. Global Report 2007. Retrieved 18 November 2009.
UNHCR. Global Report 2008. Retrieved 18 November 2009.
UNHCR. (2008). National Profile of Internally Displaced Persons (IDPs) in Afghanistan. Retrieved 23 November 2009.
UNHCR. (2008). Pakistan: Violence Marks Closure of Afghan Refugee Camps. Retrieved 25 November 2009.
UNHCR. (2009). Afghanistan Country Operations Profile. Retrieved 2 December 2009.
UNHCR. (2009). Afghanistan Situation Operational Update. Retrieved 2 December 2009.
UNHCR. (2009). Tripartite Memorandum of Understanding Between the Government of the Netherlands, the Transitional Islamic State of Afghanistan, and the United Nations High Commissioner for Refugees (UNHCR). Retrieved 25 November 2009.
United States Census Bureau. (2006). Selected Population Profile in the United States: Afghan. American Community Survey. Retrieved 2 December 2009.
Van Hear, N. (2002). From 'Durable Solutions' to 'Transnational' Relations: Home and Exile Among Refugee Diasporas. Centre for Development Research Working Paper 02.9.
Wescott, C. (2006). Harnessing Knowledge Exchange Among Overseas Professionals of Afghanistan, People's Republic of China, and Philippines. Prepared for the Labour Migration Workshop, New York, 15- March 2006.
World Bank. (2009). 2009 Afghanistan Economic Update. Retrieved 20 November 2009.
World Bank. (2009). Migration and Remittances Factbook: Afghanistan. Retrieved 22 November 2009.
World Health Organization. (2009). Detailed Database Search. Retrieved 22 November 2009.
Zunzer, W. (2004). Diaspora Communities and Civil Conflict Transformation. Berghof Occasional Paper No. 24.

[1] The UNHCR identifies an inconsistency in the statistics, as some statistics measure the number of families and individuals and some only measure the number of families.
In the latter case, the number of families was multiplied by six to get the number of individuals, although it is known that many families are much larger than six (2008, p. 6). [2] The methodology for this estimate was based on tabulations from numerous surveys previously conducted by other organizations. For details on the methodology see:
Contents

- Where would I put static files local to a particular wiki installation?
- What is a superuser?
- Access Control
- What are ACLs?
- What are Administrators?
- What are Groups?
- I get: "You can't change ACLs on this page since you have no admin rights on it!" What does that mean?
- Is it possible to enable the ''delete page'' right only for some users?
- How to hide certain pages from all users but some users?
- Setting a wiki for closed community
- How to allow only registered users to edit pages?
- How to disable new page creation
- How do I add an Edit-review-publish workflow?
- How do I add a new users group with limited rights but access to system/underlay pages?
- How to control who is able to create a user account?
- How to use HTTPS with Twisted server
- How to setup mail subscription?
- Excluding System Pages from Search Results
- How to add new interwiki sites?
- How to run a wiki without CamelCase
- How to migrate from MediaWiki
- Advantages of running a farm
- Can apache REMOTE_USER replace moinmoin login?
- Why is MoinMoin dialling home?
- How do I dis/enable FullSearchCached? (version 1.5.2)
- Is it possible to compress my wiki-instance or reduce the version history?
- Having two content areas not parallel but as set and subset
- disable spell checking
- Backing Up
- Adding Users
- Is it possible to merge 2 wikis?
- Moving And Merging MoinMoin Wikis
- Using relative URLs rather than absolute ones?
- Can we specify different startpages for different groups of people?
- Adding new pages via batch script
- How to limit login time / set automatic logout?
- Easy migration
- HowTo for SyncJob?
- Is it possible to use dynamic mail configuration?
- How to get rid of the thank you message after editing
- How to add additional icon to icon toolbar when editing by GUI
- Adding search engine to an external html page
- Trivial change flag not visible
- Page name translation for multiple languages
- Managing users in groups
- Is 'url_mapping' the right way to handle changing drive letters (G:, H:, I:) of a standalone wiki on a USB stick?

Where would I put static files local to a particular wiki installation?

I have a custom logo that I would like to use in place of the default one. I don't want to put this in the site-packages web\static\htdocs directory since this could be overwritten on upgrade.

Anywhere on your server; just add a directory directive for access rights and an Alias.

I could attach this to a page, but the attach: shorthand will not work across the site. So where would I put it under the wiki instance directory?

If you use moin's builtin server for static files, they must be below MoinMoin/web/static. But you can also use an additional Apache Alias.

What is a superuser?

A superuser has some special "super" rights like installing packages. To define who is a superuser (there can be more than one), set this variable to the main user(s) in the wikiconfig.py file:

superuser = [u"ExampleGirl", ]

Access Control

Access Control defines who may do what in a wiki. Moin does it with ACLs.

What are ACLs?

Access Control Lists. For more please read HelpOnAccessControlLists

What are Administrators?

An administrator is someone with full or limited admin rights. These can be defined either by changing the configuration files, or by defining a group like AdminGroup and listing its members. You have to be an administrator to work with ACLs!

What are Groups?

A group in a MoinMoin wiki defines a group of users. A group can consist of: users, other groups, all registered users (Known) or all possible users (All). We will call all possible entries "members".
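Because groups can contain other groups, resolving "who is in this group" is a recursive flattening. The following is a minimal sketch of that idea in plain Python; the function name, the dict-based group store, and the simplified handling of the Known/All magic entries are assumptions for illustration, not MoinMoin's actual implementation:

```python
def expand_group(name, groups, seen=None):
    """Recursively flatten a group definition into a set of member names.

    A group entry that is itself a group gets expanded; anything else
    (a user name, or a magic entry like Known/All) is kept verbatim.
    The `seen` set guards against cyclic group definitions.
    """
    if seen is None:
        seen = set()
    members = set()
    for entry in groups.get(name, ()):
        if entry in groups and entry not in seen:
            seen.add(entry)
            members |= expand_group(entry, groups, seen)
        else:
            members.add(entry)
    return members

# hypothetical group pages, already parsed into name -> member lists
groups = {
    "AdminGroup": ["ExampleGirl"],
    "EditorsGroup": ["AdminGroup", "JohnDoe", "Known"],
}
print(sorted(expand_group("EditorsGroup", groups)))
# ['ExampleGirl', 'JohnDoe', 'Known']
```

The magic entries are left unexpanded on purpose: in a real ACL check they stand for dynamic sets of users, so they can only be resolved against the current request, not a static list.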
A group page name should end with the term Group, but this can be localized and customized by the admin. UserGroup should work for a default install. A group page contains the member names in an unnumbered list. Anything on the page which is not a list will be ignored by the group parser.

These are the members of this group:

* UserName
* [[User Name With Spaces]]

Note: the group name must be in CamelCase.

I get: "You can't change ACLs on this page since you have no admin rights on it!" What does that mean?

It means that the wiki thinks you tried to change the access rights of a page by editing a line with "#acl" (either you tried to edit, remove or add it). Only users with admin rights can do that. Please contact somebody like the wiki admin.

Is it possible to enable the ''delete page'' right only for some users?

Yes. Create a group page like "DeleteGroup" and add the members that should be able to delete pages. Then add this to the acl_rights_before setting in the wiki config:

+DeleteGroup:delete

This adds the right "delete" to all members of the DeleteGroup. There are other ways to do it, but this is the most elegant way.

How to hide certain pages from all users but some users?

Create KeyUsersGroup (see How To Create Groups). On the pages that need to be hidden, use this ACL line:

#acl KeyUsersGroup:read,write,revert,delete All:

To unhide the pages, add the read right to the All magic group. Read also HelpOnAccessControlLists

Setting a wiki for closed community

Giving MembersGroup its rights through a plain (non-additive) entry in acl_rights_before will result in problems with page-specific acls: as everyone is a member of MembersGroup, acl processing stops at that entry, so page-specific acls are ignored.
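The "processing stops at the first matching entry" behaviour can be modelled with a toy evaluator. This is a deliberately simplified sketch, not MoinMoin's real ACL engine: the function name is made up, magic entries other than All are ignored, and acl_rights_after is omitted. It only illustrates the ordering (acl_rights_before, then the page's own #acl line if present, otherwise acl_rights_default) and the difference between plain and "+" entries:

```python
def acl_allows(user, right, groups, acl_before, page_acl, acl_default):
    """Toy model of MoinMoin ACL evaluation.

    A plain entry that matches the user terminates processing; a '+'
    entry only grants its listed rights and lets processing continue.
    The page acl replaces acl_rights_default when it exists.
    """
    def matches(subject):
        return subject == user or user in groups.get(subject, ()) or subject == "All"

    acl_line = page_acl if page_acl is not None else acl_default
    for chunk in (acl_before + " " + acl_line).split():
        subject, _, rights = chunk.partition(":")
        additive = subject.startswith("+")
        if additive:
            subject = subject[1:]
        if not matches(subject):
            continue
        granted = rights.split(",") if rights else []
        if additive:
            if right in granted:
                return True
            continue  # additive entries do not stop processing
        return right in granted  # plain entries decide and stop
    return False

groups = {"MembersGroup": {"JaneFrank"}}
page = "JohnDoe:read,write,delete,revert,admin All:"

# naive config: the plain MembersGroup entry stops processing,
# so the page acl above is never consulted
print(acl_allows("JaneFrank", "write", groups,
                 "MembersGroup:read,write", page, "All:"))   # True (bad!)

# additive config: read is granted, but write falls through to the page acl
before = "JohnDoe:read,write,delete,revert,admin +MembersGroup:read"
print(acl_allows("JaneFrank", "write", groups, before, page, "All:"))  # False
```

This makes the failure mode concrete: with the naive config the first line of the loop already answers "write" for every member, which is exactly why the additive "+MembersGroup:read" form is recommended below.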
As an alternative, define the acl rights the following way:

acl_rights_before = u"JohnDoe:read,write,delete,revert,admin +MembersGroup:read"
acl_rights_default = u"MembersGroup:write All:"

ACLs are then evaluated in this order:

1. acl_rights_before is evaluated (so MembersGroup can read)
2. the page's acl rights, if they exist - your chance to stop MembersGroup from writing
3. acl_rights_default

So, pages without acls will still allow someone from MembersGroup to read and write the page. And page-specific acls work again... such as this page acl I used on the MembersGroup page to only allow me to edit the list of members (for a closed community):

#acl JohnDoe:read,write,delete,revert,admin
#format wiki

These are people that can access the wiki:

* JohnDoe
* JaneFrank
* ...

signed John Doe, happy Moin user

How to allow only registered users to edit pages?

Add these ACL rules to the acl_rights_before option in your wiki configuration:

Known:read,write,revert,delete All:read

Note that this does not give you much security, as anyone, even a robot, can register an account just to spam your wiki.

How to disable new page creation

There is no explicit page creation right (see also HelpOnAccessControlLists for permissions and CategorySpam). You need to modify acl_rights_default (which is only used when no other ACLs are given on the page being accessed) to something like

acl_rights_default = \
    "TrustedGroup:read,write,delete,revert \
    Known:read,delete,revert \
    All:read"

so that only the TrustedGroup can write to new, blank pages (which of course have no ACL yet), and then try to make sure that all pages have an ACL that overrides that and allows the proper people to write to that page.

How do I add an Edit-review-publish workflow?

There is no builtin workflow like this, but you can use ACLs to create such a workflow.
- First prevent editing on all pages for anyone but a group of people you would like to create pages, for example EditorsGroup.
- Put this acl on new pages to be edited by your editors:

#acl EditorsGroup:read,write All:

- Now your editors can edit those pages, but the pages are hidden from all others.
- Review the edits.
- To publish, change the All: acl to All:read

How do I add a new users group with limited rights but access to system/underlay pages?

Hi, I'm currently running MM 1.8 in a corporate environment with a limited number of users. My ACLs are set like this:

acl_rights_before = u"AdminsGroup:read,write,revert,delete,admin"
acl_rights_default = u"ReadersGroup:read All:"
acl_hierarchic = True

AdminsGroup is the user group which uses the wiki on a daily basis and administers it (it's a limited number of people). ReadersGroup gives simple access to a limited number of people who need read access to all the content (my boss and the like). Now I want to add another group of users (call it AnotherGroup) with limited rights: I don't want it to be able to view all the content, but only specific pages, and to write only to specific pages too. It works using #acl on pages, and using hierarchic acl I can give them the possibility to create pages under one. My problem is with underlay/system pages, for example SandBox and help pages: they can't even read them. How can I do this? -- EricVeirasGalisson 2009-06-29 15:22:00

Looks like you have a problem. The acl on the underlay pages is just

#acl -All:write Default

(that means it just takes away write rights from the default acl). Your default acl does not allow reading for them, so they can't read. One possible solution: modify the acls on the underlay pages manually for the most important pages.

ok,... that's not good! do you think of another solution to solve my problem? maybe even changing the actual acls? (keeping the same behaviour). I'm a little reluctant to change underlay pages ACLs...
-- EricVeirasGalisson 2009-06-30 07:50:25

How to control who is able to create a user account?

There is no official, supported solution with the current MoinMoin versions (1.5.8); anyone can create an account. Also, you can't protect the UserPreferences page, as anyone can add this macro to any page and use it to create a user account.

Unofficial solutions:

- If the goal is to control who can edit pages, don't worry about user accounts and see Setting a wiki for closed community.
- You could replace the UserPreferences macro with your own private wiki macro that applies your user account policy. See also FeatureRequests/NewUserCreationACL
- You can change file permissions on the data/user directory of your wiki instance and forbid file creation (on Linux with chmod a-w .), because each account is a different file.
- Additionally you can apply the patch moin-1.5-allow-create-form-disabling.patch to add a wiki instance option that allows per-instance disabling of the wiki "create user" interface. An example instance configuration (wikiconfig.py) follows:

user_account_creation_enabled = False

def __init__(self, siteid):
    DefaultConfig.__init__(self, siteid)
    self.user_form_fields[0] = ('name', _('Name'), "text", "36", '')

The first line disables the create interface. The constructor sets the name field trail to an empty string. The patch does not disable creation, only the interface to it, so also chmod the data/user directory. You can use the script moin_useradd.py to create accounts. It would be cool if someone improved the tweak to actually disable account creation, not only the interface to it, so that the chmod tweak would not be required.

- HTTP Authentication: an alternative can be to use HTTP authentication. Accounts are created automatically when the user connects to the page.
See HelpOnAuthentication for details.

With MoinMoin 1.6.3 you can also simply add one line to wikiconfig.py like so:

    password_checker = lambda req, un, pw: 'Sorry, no account creation or password change possible'

This will disable account creation and password changing (mainly useful for non-moin authentication). FeatureRequests/DisableUserCreation also has some tips.

How to use HTTPS with the Twisted server

This is not possible with the current Twisted server. It may be easy to do with a customized Twisted server based on MoinMoin.server.twistedmoin.

How to set up mail subscription?

Enable mail in your farm or wiki configuration:

    mail_smarthost = 'mail.mydomain.com'              # your smtp host
    mail_from = 'My Wiki <noreply@mydomain.com>'

If you need to log in with a name and password, also set up mail_login:

    mail_login = "username password"

Hint: to test your mail subscription, edit a page while you are NOT logged in as yourself; the wiki will not notify you about your own changes.

Excluding system pages from search results

There is no easy solution like a "don't search in system pages" switch. There are several things you can do:

- Use the minus modifier in searches. For example, the search -t:help will not search any page with 'help' in the name. But it will still search help pages in all other languages.
- Remove all the system pages in languages you don't need - which is usually everything except the local language spoken by your users.
- Install two wikis: a Help wiki with all the help pages and NO user pages, as a help system, and a User wiki with NO help pages. Add links between the two wikis in the navigation bar.
- Write some code in search.py to filter out system pages in search. Note that search uses a page filter function that filters pages by the search term. You could override this and add a system-page filter by page name. The problem is that you often DO want to search the help pages, so you need an easy way to search help when you need it.
How to add new interwiki sites?

Add the site to data/intermap.txt.

How to run a wiki without CamelCase

I don't want CamelCase to automatically turn into links in my wiki. How can I disable this behavior?

Install the ParserMarket/NoCamelCase plugin. You can set it as the default parser, use it for certain pages, or use it for sections of a page.

How to migrate from MediaWiki

I want to migrate my wiki from MediaWiki to MoinMoin (I won't miss the missing ACLs). I want to import existing pages including the change history. Does a migration script exist?

Solution: There is a quick-and-dirty one on MediaWikiConverter, and there is a parser on ParserMarket (which lacks support for advanced formatting options, though) which can display the pages if you do not want to convert the markup. Unfortunately, I do not know of any advanced converter or full-featured solution.

Advantages of running a farm

You need just one server, you have to care for only one installation (which can be upgraded easily), you can easily set up multiple wikis which differ just slightly, you can separate content across the wikis of one farm, etc.

Can apache REMOTE_USER replace moinmoin login?

I understand that this is essentially the same question as above. I'm running MoinMoin version 1.5.2, and would like to rely entirely on server authentication to control access. Is it possible to identify users who are not logged in (and have no user profile) using the REMOTE_USER environment variable? Specifically, I'd like the value of this variable to appear in the editor field in the "info" view for each page. Based on various posts on this site, I've included the following in wikiconfig.py:

If this is working, I am unable to see an effect. Can anyone help or refer me to the appropriate documentation?

What you've done looks correct; it should work. Are you sure your web server sets REMOTE_USER etc. for moin?
Note that sslclientcert is not needed for what you described, so you can remove it unless you want to use SSL client certificates for authentication. If you can't fix it, maybe upgrade to the latest release, and if it still doesn't work, file a bug report.

Update from the OP: I identified the problem, and the solution may be useful for others. If the above does not work, check the value of the AUTH_TYPE environment variable. I added the following to wikiconfig.py:

    from MoinMoin.auth import http
    from MoinMoin import user

    def uw_auth(request, **kw):
        env = request.env
        if env.get('AUTH_TYPE', '') == 'UWNetID':  # your institution's authentication method here
            username = env.get('REMOTE_USER', '')
            u = user.User(request, auth_username=username,
                          auth_method='http', auth_attribs=('name', 'password'))
            u.create_or_update()
            return u, False
        else:
            # authentication failed, try the next method
            return None, True

    class Config(DefaultConfig):
        user_autocreate = True
        auth = [uw_auth, http]

A question for the community: is this an acceptable/safe workaround for a nonstandard server authentication method?

If REMOTE_USER being set still means that the user has been correctly authenticated, yes. That AUTH_TYPE is rather unusual, though. 2008-12-10

It seems that the above methods are way out of date, and as this is the first Google hit I thought I'd add what I found was necessary: add to your wikiconfig.py

    from MoinMoin.auth.http import HTTPAuth
    auth = [HTTPAuth(autocreate=True)]

paw900 @ sf . no . spam . anu . edu . au

Why is MoinMoin dialling home?

I found that each time I save a page, MoinMoin tries to connect to moinmaster.wikiwikiweb.de. If my firewall blocks this, the save is delayed.

This is the AntiSpam system trying to get an updated BadContent page. No information is sent from the server in the process.
If you don't need AntiSpam, for example in an intranet installation, you can disable it by editing your configuration file and removing or commenting out the line that says

    from MoinMoin.util.antispam import SecurityPolicy

How do I dis/enable FullSearchCached? (version 1.5.2)

Or is it enabled if I see it in SystemInfo? In MoinMoinRelease1.5/CHANGES it says "Use it if you do not depend on fresh search results but prefer raw speed."

Just write FullSearchCached instead of FullSearch on your page.

Is it possible to compress my wiki instance or reduce the version history?

Is there any possibility of either compressing the wiki instance by eliminating parts of the /revisions directory, or decreasing the length of the revision history (the default seems to be 100!), or specifying a certain date and removing all history files which are older than that date?

The limit of 100 revisions is only for display; revision storage is not limited. You can manually remove old revision files if you need to, and there is an automatic feature that removes ALL revisions except the last one: moin ... maint reducewiki will flatten the wiki to the single latest revision. You need to add some options; run moin with no arguments to see them.

Having two content areas not parallel but as set and subset

Hello, I'm trying to find a wiki for our institute. I have already done some research on the internet, but there is one issue which is not described in as much detail as I need to decide which wiki to choose (and I don't really like the idea of installing and trying 3 or 4 wikis). We will need two areas: one which will contain basic information that is also used by students, and one which also contains current research topics. As you surely guessed, one thing we need is that students can't read the research internals. I know that this can be managed, but I would like researchers to use the basic topics as well and not have these topics twice.
So they are not two parallel areas; one is a subset of the other. And here some questions pop up:

- If a researcher names a BasicTopic in an article of the research area, will he create a parallel topic in the research area, although the topic already exists in the basic area (if he is not using any special link, because he would not know whether it exists, or would not think about it)?
- I guess it should be no problem to set different background colours (for example) in the templates for basic and research. Can the user rights for these pages also be part of the templates? Or how would one define, for a new topic, whether it is part of the basic or the research area and therefore who can read it?
- Can all this be solved by categories?

It would be nice if you could give me a short explanation of how this could be solved in your wiki engine (please, as I'm not a programmer, be economical with technical terms). Thank you very much in advance. Best regards, Oliver

Make a wiki farm: one wiki for all, one for the researchers.

Disable spell checking

What is the cleanest way to disable spell checking?

Backing up

What is the recommended way to back up a MoinMoin wiki? I have read about WikiBackup but have found this action doesn't work, and found references saying it is not functional in v1.5. This way seemed the easiest... I have read the general info on MoinMoinBackup so I know I need to back up the data dir. I have also read about the WikiBackupScript but haven't tried it yet. I am currently backing up by ftp'ing the entire data dir to my computer, but this is slow. Recommendations? Info: my wiki is v1.5.7 and running on a host with only ftp access. Thanks.

The WikiBackup stuff (action=backup) should work now; if not, please file a bug. The reason for the CHANGES hint about "this is experimental stuff, don't rely on it" is that it didn't get tested much, and especially disaster recovery and restore might have problems.
But if you make a backup with the backup action and check for yourself whether it worked and is complete, this is better than not making any backup just because it takes too long using ftp.

Adding users

Is there a way to programmatically add users? I would like to integrate MoinMoin into an existing web site that already has a registration facility, and I can already hear comments from users who would effectively have to register twice. The registration is form based, i.e. not tied to HTTP authentication or REMOTE_USER. Thanks.

Two ways to solve this: either you write your own auth method (reusing some user data from your other user base) or you use a user creation script (see MoinMoin/script/...).

Is it possible to merge 2 wikis?

Is there any way to grab the contents of one wiki and add it to another wiki? See Moving And Merging MoinMoin Wikis.

Using relative URLs rather than absolute ones?

By default, MoinMoin uses absolute URLs. Is it possible to use relative ones instead? I looked a little bit everywhere in the documentation but didn't find anything about this (I may have missed it). Anyway, I think it's worth a short entry here.

Do you mean like [[/mysubpage]] and [[../mysiblingpage]] instead of [[root/my/pages/mysubpage]]? (See HelpOnMoinWikiSyntax#InternalLinks.)

Can we specify different start pages for different groups of people?

We have been running a MoinMoin instance for a year and a half on an intranet. We are a group of tech-savvy people; each login is in the AdminsGroup. Our MoinMoin instance is now full of interesting knowledge we want to share with other people, so I have created logins and a group OtherGroup. The problem is: the current start page is organized with links to pages useful for AdminsGroup and is a place for coordination. We don't want OtherGroup people to use this start page (it's not interesting for them; they can't read the linked pages...). How can I change the start page for some group or some user? Is this even possible?
-- EricVeirasGalisson 2008-02-05 15:18:21

I have a similar situation on some of our wikis. I solved it by creating a start page with content readable by all users. This page includes other pages which have more restrictive rights. Because included pages also have to respect ACLs, only users with access rights get to see that information.

Adding new pages via batch script

I have a lot of content in the form of text or HTML that I want to pump into my wiki. But this is nothing I can or will do through a web browser - I am looking for some way to add pages with a script or something like that. Any ideas?

Look at the wiki xmlrpc interface, see MoinMoin/xmlrpc/*, especially the putPage method. This method assumes that you have content in the correct format, e.g. utf-8 encoded wiki markup. If you only have HTML or other text formats, you have to convert it somehow. Maybe search this wiki for "html import"; IIRC there was some way, but you have to find it yourself. In general, if you expect short response times, maybe just hang out on our IRC channel and do some research on your own before asking.

How to limit login time / set automatic logout?

To increase the safety of the wiki I want to make sure that the wiki logs users out after a certain time. What's the simplest way to do that?

Try cookie_lifetime = 1 in your wiki config. Make sure "remember login information" is off in the user preferences, OR set cookie_lifetime = -1 to ignore this preference setting.

Easy migration

Hi, just today I realized that my recently installed 1.6.2 needs to be upgraded to 1.6.3 to fix some serious security issues. The normal solution is to download MM 1.6.3 and do an installation/migration, but I wonder if a simpler solution exists, like applying a patch to my current installation, or tracking my own code with Mercurial or something else... Is there a way to do this? Can I track my own MM instances with Mercurial? Does someone have a solution to this problem? Thanks in advance.
-- EricVeirasGalisson 2008-04-21 12:53:59

You can run the moin code from a Mercurial workdir. If you also point your web server into the workdir for the static files, minor upgrades should be rather easy. You will still have to read docs/CHANGES about configuration changes and run the migration scripts.

HowTo for SyncJob?

In moin version 1.6.2/3 I have tried to set up a SyncJob in order to back up my wiki. I have read the page HelpOnSynchronisation. I have also created an intermap.txt file, first in the folder moin-1.6.2, later in the folder wiki/data. In both cases I got an IOError: unsupported XML-RPC protocol. Unfortunately the information for syncing a wiki is spread over different files. I would very much appreciate it if someone (Alexander Schremmer would be great) could please give me a hint on how to debug the SyncJob action, or write a HowTo. I already inserted "import pdb / pdb.set_trace()" in file wikisync.py line 168 (class MoinRemoteWiki), but it is very difficult for me to figure out (by stepping and printing) where the link to the remote wiki is searched for. The variable wikiurl just showed a '/'. Greetings, Rudi 2008-04-22 17:30

HelpOnSynchronisation was updated some days ago. Please check the remote wiki's config for actions_excluded - its built-in default (actions_excluded = ['xmlrpc']) disables xmlrpc; set e.g. actions_excluded = [] if you don't want to exclude any actions. You may want to protect your wiki by using ACL rules.

Thank you very much for the quick answer. I already read that, and looked in wikiconfig.py and moin.py and did not find any actions_excluded statement. Greetings, Rudi 2008-04-23 18:07

The xmlrpc (Remote Procedure Call) interface for the SyncJob can be enabled in wikiconfig.py:

    actions_excluded = multiconfig.DefaultConfig.actions_excluded[:]
    actions_excluded.remove('xmlrpc')

You should also define a good acl_rights_default on your wiki.
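The two configuration lines above deliberately copy the default list with [:] before mutating it, so the shared class default stays untouched. The same pattern in plain Python (DefaultConfig here is just a stand-in for MoinMoin's multiconfig.DefaultConfig):

```python
class DefaultConfig(object):
    # stand-in for MoinMoin's multiconfig.DefaultConfig
    actions_excluded = ['xmlrpc']

# copy first ([:]) so we do not mutate the shared class attribute,
# then remove 'xmlrpc' to re-enable the RPC interface for this wiki
actions_excluded = DefaultConfig.actions_excluded[:]
actions_excluded.remove('xmlrpc')

assert actions_excluded == []
assert DefaultConfig.actions_excluded == ['xmlrpc']  # default untouched
```

Without the [:] copy, remove() would alter the class attribute and thereby every wiki in a farm that shares the default.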
-- ReimarBauer 2010-12-18 07:22:24

Then you have to restart the moin wiki (current version 1.9.2). Moin version 1.9.3 unfortunately has a sync problem as of now. -- RudolfReuter 2010-12-17 22:17:56

Is it possible to use dynamic mail configuration?

I would like to configure email support to use the current user's login and password for the mail_from and mail_login entries in the wikiconfig. The goal is that when a user changes a page, the subscription email is sent from that user's account (which is identical to the username in my environment). Is that possible? Supplying a default user/pw is not possible (unsafe).

What do you mean by "unsafe"? The sender email address can be a no-reply address (with replies written directly to /dev/null, or rejected if there are any). Wouldn't using the personal email address create multiple new problems?

- Privacy: maybe the editor does not want to disclose his email address to every user that is subscribed to some page he edits. He doesn't even know who is subscribed until he saves.
- Not the wiki way: if a direct reply via email is possible, discussions are pushed from the wiki page into email inboxes. That gives you back a problem you maybe tried to solve with a wiki in the first place.

"Unsafe" means I do not want to write my personal login into the config file, and the mail administrator in my big company (big meaning very big and very complicated) will definitely not provide me with a dummy address as you proposed - he will probably kill me for asking. The privacy issue is not a problem, as the wiki is an internal one, so everyone knows (or could at least look up) the mail addresses anyway. "Not the wiki way" is probably correct, but I want to do it anyway - so, is there a possibility?

You could change the code rather easily to use the editor's mail address as the From: address, but you can't easily get his password (we only store a password hash, no cleartext password).
But as long as the destination address doesn't require relaying (because the target mail server is the destination domain's mail server), the mail server would likely accept the mail even from an unauthenticated connection.

How to get rid of the thank-you message after editing

Whenever you edit something, you get a thank-you message for editing the page (or creating it). Now, if you navigate to a new page and press back, you get an error. I figured the error is caused by the thank-you message (because #preview is appended to the end of the page's name). Is there a way to get rid of this error? If not, is there a way to get rid of the thank-you message? Thanks -- 12.109.151.100 2008-07-31 18:44:59

Don't go back via the browser's button; use the navigation of the theme.

How to add an additional icon to the icon toolbar when editing in the GUI

I want to put the @_SIG_@ variable as a clickable icon on the icon toolbar when editing in the GUI editor. Is there a way to do so? Thanks -- 12.109.151.100 2008-07-31 18:49:00

That needs a modification to the fckeditor JavaScript code.

Adding a search engine to an external HTML page

Hi, I was wondering if there's a way to include the MoinMoin search engine for my wiki on an external webpage that's written in HTML. I want to be able to put the search engine on the external page, which is not part of the wiki, and have the engine search my wiki only. Is this possible? Thanks -- 12.109.151.100 2008-07-31 20:24:20

You could add the search dialog form to your external webpage.

Trivial change flag not visible

We are just starting to use Moin Ver. 1.8.2, but the "Trivial change" flag does not appear when we edit. I can't find anything that indicates that this is a configurable option. How do we make it available?

I assume you don't have email enabled. Do you see the Subscribe/Unsubscribe link in the edit bar? This also appears only if email from the wiki is enabled.

Correct - email is not configured yet.
Thanks, Gerry

Page name translation for multiple languages

The documentation says: "A page name is translated only if the wiki knows about the language, the wiki translation contains a translation for the name, and a page with that translation exists. If any of this is false, the original name will be displayed."

So, using 1.8.2, I have installed Italian, created an ItalianDict page, and created two pages, TestPage and PaginaProva. I can see that Italian is installed because all the standard messages are in Italian, as are the navigation bar tabs. Further, <<GetVal(ItalianDict, TestPage)>> returns PaginaProva, so the dictionary is working fine. Therefore, after changing my language to Italian, I presume that my TestPage link should now be shown as PaginaProva, and that when I navigate to it, I should navigate to PaginaProva, which should be the displayed name on the navigation bar tab. Unfortunately it all remains as it does in the English version. I suppose this is some misunderstanding on my part. Any suggestions? -- GerardODriscoll 2009-05-21 15:43:56

No response to my query above. Has anyone implemented a multiple-language site using MoinMoin? -- GerardODriscoll 2009-08-24 11:41:10

It won't magically change links in your content. There is special code that does this for the navibar and the front page. -- ThomasWaldmann 2009-08-24 11:50:51

Managing users in groups

I'm writing an authentication (auth) backend against an external MySQL database. I need to manage the ACL rights of wiki users and I have decided to manage this through groups. The group(s) that users belong to are maintained in the external database. Is there a way of automatically maintaining groups in MoinMoin? I don't want to maintain groups by hand, as there are too many users and users move between groups.

In MoinMoin 1.9 the groups code was refactored, and it is now possible to write backends for groups. See Groups2009 for more details. ConfigLazyGroups is an example of how such a backend can be implemented.
You should subclass LazyGroupsBackend and override the needed methods (see ConfigLazyGroups). Also, there is a patch for an LDAP backend. -- DmitrijsMilajevs

Is 'url_mapping' the right way to handle changing drive letters (G:, H:, I:) of a standalone wiki on a USB stick?

Hello, and thanks for the perfect job. I have been using and loving MoinMoin as a desktop version for years now under Kubuntu 10.04. Now that we're beginning to use Confluence at work, some people are astonished at how powerful MoinMoin is, even in comparison to Confluence. I have MoinMoin running with a mod_wsgi interface on Apache 2 at home, and have built a standalone wiki completely based on portable apps running on a 16GB U3 stick (U3 launch, ASuite, Portable Python). The only problem is that all external links are dead links when I use the wiki on another laptop/PC where the drive letter for the USB stick changes. Is there an easy way to solve that? Thanks in advance.

I guess this solves it:

    MoinMoin/script/moin.py ... maint cleancache
    MoinMoin/script/moin.py ... maint makecache

see HelpOnMoinCommand -- ReimarBauer 2011-09-21 14:09:17

-- EMuede (Ewald Müller) 2011-09-21 14:00:00

Thanks for the reply. I think the problem is a bit more complex and is not solved by renewing the cache: in many places in the wiki I used external links, e.g. to 'G:\Offline_kopie\blah\dingens.txt' as an absolute address. I don't use relative references because they are too 'far away' from the wiki subdirectories. If the stick lands on drive letter 'I:' I have to manually change all the links; therefore I wanted to use a system variable like ?home_drive? or a {url_mapping} instead.

Read about the InterWikiMap. You can use that for any kind of link. Then you have only one place where you need to change things. The page can also be updated by a CLI script.

There is also a second advantage: the wiki can be kept small (for backup and maintenance) and serves only as a comfortable index to the larger amount of externally stored data.
The whole external data structure is - for security - stored a second time on a share, so I could switch to that structure by using the {url_mapping}?

Maybe you are also interested in this: very interesting tool! I solved this problem by copying the whole wiki content from the stick to the hard drive and used FileSync to synchronize them. I could switch the wikiserver.py (from G: or D:, for example) in PortablePython depending on the wiki (or copy) I want to use. I'm not used to programming in Python (though I have more than 10 years of PL/I experience); maybe I would only need one changed system variable (like mentioned above) to redirect the links? -- EMuede (Ewald Müller) 2011-09-21 20:00:00

As I mentioned already, read about InterWiki. If you define your links as interwiki links, then for those links you only need to change the InterWikiMap page. But this only partly solves your problem. The cache files depend on the Python version used, so on a different Python you need to invalidate them.
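The InterWikiMap entries recommended above follow the data/intermap.txt convention: one mapping per line, an interwiki name, whitespace, then the URL prefix. A small parser sketch for that assumed format (blank lines and # comments skipped):

```python
def parse_intermap(text):
    """Parse intermap.txt-style text into a {name: url_prefix} dict.
    Each non-empty, non-comment line is 'Name URL-prefix'."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, url = line.split(None, 1)
        mapping[name] = url.strip()
    return mapping

sample = """\
# local interwiki map
MoinMoin http://moinmo.in/
Stick file:///G:/Offline_kopie/
"""
m = parse_intermap(sample)
assert m["MoinMoin"] == "http://moinmo.in/"
```

With such a map, changing the stick's drive letter means editing one prefix (e.g. the hypothetical Stick entry) instead of every link in the wiki.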
http://www.moinmo.in/MoinMoinQuestions/Administration
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

Delete (previously cancelled) invoices

Hi, I am trying to write a module to delete invoices. I have seen this post which explains what to do. I have therefore created a new module, delete_invoices, with the following code:

    /usr/lib/python2.7/dist-packages/openerp/addons/custom/delete_invoices# cat delete_invoices.py
    # -*- coding: utf-8 -*-
    from openerp import models, fields, api

    # DESCRIPTION:
    #   Class DeleteInvoices allows invoice deletion
    # NOTES:
    #   account_cancel / journal "allow cancelling entries" required

    class DeleteInvoices(models.Model):
        _inherit = 'account.invoice'

        def unlink(self):
            return super(account_invoice, self).unlink()

Unfortunately, when I try to delete a cancelled invoice I get the following message:

    File "/usr/lib/python2.7/dist-packages/openerp/addons/custom/delete_invoices/delete_invoices.py", line 20, in unlink
        return super(account_invoice, self).unlink()
    NameError: global name 'account_invoice' is not defined

Any tips or ideas? Thanks a lot. For completeness I also include the __openerp__ and __init__ files:

    # cat __init__.py
    from . import delete_invoices

    # cat __openerp__.py
    {
        'name': 'Allow delete invoice',
        'description': 'This module allows to delete invoices once they have been cancelled',
        'author': 'E.M.',
        'depends': ['account_cancel'],
        'data': [],
    }

Thanks.

The account_invoice error refers to the class name used in the super call. Just change it to DeleteInvoices and you will be OK.

Still the same error:

    File "/usr/lib/python2.7/dist-packages/openerp/addons/custom/delete_invoices/delete_invoices.py", line 20, in unlink
        return super(DeleteInvoices, self).unlink()
    NameError: global name 'account_invoice' is not defined

That was one error you had; you should have more. Post all the code, because on that line I only saw one use of the account_invoice variable, in the super call. It must be something that you are not showing. In __openerp__.py you also have to add 'account' to the depends.
account_cancel on its own is not enough.

Thanks Mirco. 'account_cancel' depends on 'account', so in the end you will need both modules installed anyway, which makes 'account' redundant here. In any case, the issue is in how I am overriding the method, not in the dependencies.
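The NameError above is plain Python mechanics rather than anything Odoo-specific: the two-argument form super(Cls, self) needs Cls to be a name that actually exists in the module's scope, and account_invoice (an OpenERP 7 old-API class name) is never defined or imported in the new-API module. A minimal, framework-free demonstration:

```python
class Base(object):
    def unlink(self):
        return "unlinked"

class DeleteInvoices(Base):
    def unlink(self):
        # Buggy version: super(account_invoice, self).unlink()
        # raises NameError, because 'account_invoice' is not a name
        # defined or imported in this module.
        # Fixed version: refer to the class by its real name
        # (or use bare super() on Python 3):
        return super(DeleteInvoices, self).unlink()

assert DeleteInvoices().unlink() == "unlinked"
```

After renaming the class in the super call, the server also has to reload the module's code before the traceback stops showing the old line, which may explain the "still same error" exchange above.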
https://www.odoo.com/forum/help-1/question/delete-previously-canceled-invoices-94177
Today's post is going to be a short one about how to import a file through a Visual Studio LightSwitch application (Desktop or Web) and store the data on the backend (e.g. SQL Server). Sometimes we want to upload files in binary format and store them in the database, like a Word document or PDF file or any other file containing unstructured data. LightSwitch already has the built-in ability to store and retrieve images, but if we want to store other types of files we have to write a small bit of code.

Backstory

This post is really just me picking out various pieces of code from my other posts and cramming it together to make a new post. Which is a nice break for me, since writing a post often takes about a week (after I write the code and write up a draft blog, then get feedback, and double check the code several times). But with this one I should be done in a day, leaving more time to enjoy the very brief summers we have up here in Fargo, North Dakota.

Create the project and a table

Alright, let's get started making a simple LightSwitch Web Application. Let's create the project, and a table that will hold our file data.

- Launch Visual Studio LightSwitch and create a new C# Visual Studio LightSwitch project
- Call the project: ImportFiles
- In the Solution Explorer, right click "Properties" and select Open.
- Change the Application Type from "Desktop" to "Web"
  - Note - this sample will work as a Desktop application as well
- Add a table called "FileInformation" and add the following fields to the table:
  - Name | String |
  - Miscellaneous | String |
  - Data | Binary |
- You should have a table now that looks like this:

Create the screen
- In the screen designer, under Rows Layout –> Data Grid –> Data Grid Row delete the “Data” control so that it won’t display when we run the application - It doesn’t make much sense for us to display this field on the screen since it just the file’s binary data. - In the screen designer, under Rows Layout –> Screen Command Bar, add a button called “ImportAFile” - You should now have a screen that looks something like this: - Right click the ImportAFile button and select “Edit Execute Code” - We do this to create the “User” folder where we will be placing some custom user code. We’ll come back and add our button code later Add a custom Silverlight dialog - In the Solution Explorer we need to switch to the “File View” mode so we can add some custom code. - After switching to the File View mode, navigate to the Client project and open up the User Code folder - We are going to add our custom Silverlight dialog just like we did in the last blog post. We need our own custom Silverlight dialog because we want to display an OpenFileDialog to the user and we are running inside a LightSwitch web application. (For the longer explanation please read my previous blog post). - Right click UserCode folder and select “Add-> New Item” - Select “Text File” in the “Add New Item” dialog. 
- Call the file "SelectFileWindow.xaml"
- Now copy and paste the below code into the "SelectFileWindow.xaml" file:

    <controls:ChildWindow x:Class="LightSwitchApplication.UserCode.SelectFileWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls">
        <Grid x:Name="LayoutRoot" Margin="2">
            <Grid.RowDefinitions>
                <RowDefinition />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>
            <Button x:Name="CancelButton" Content="Cancel" Click="CancelButton_Click"
                    Width="75" Height="23" HorizontalAlignment="Right" Margin="0,12,0,0" Grid.Row="1" />
            <Button x:Name="OKButton" Content="OK" Click="OKButton_Click"
                    Width="75" Height="23" HorizontalAlignment="Right" Margin="0,12,79,0" Grid.Row="1" />
            <Button Content="Browse" Height="23" HorizontalAlignment="Left" Margin="291,92,0,0"
                    Name="BrowseButton" VerticalAlignment="Top" Width="75" Click="BrowseButton_Click" />
            <TextBox Height="23" HorizontalAlignment="Left" Margin="66,92,0,0"
                     Name="FileTextBox" VerticalAlignment="Top" Width="219" IsEnabled="True"/>
        </Grid>
    </controls:ChildWindow>

- This Silverlight dialog code contains 4 controls: an OK button, a Cancel button, a text field, and a Browse button. The Browse button launches an OpenFileDialog which lets the user select a file to import, and the text field displays the name of the file about to be imported.
- We need to add some code for our Silverlight dialog, so right click the "UserCode" folder and select "Add -> Class".
- Name the class "SelectFileWindow.cs" and copy the below code into the class:

    //' This code released under the terms of the
    //' Microsoft Public License (MS-PL,)
    using System;
    using System.IO;
    using System.Windows;
    using System.Windows.Controls;

    namespace LightSwitchApplication.UserCode
    {
        public partial class SelectFileWindow : ChildWindow
        {
            public SelectFileWindow()
            {
                InitializeComponent();
            }

            private FileStream documentStream;
            public FileStream DocumentStream
            {
                get { return documentStream; }
                set { documentStream = value; }
            }

            private String safeFileName;
            public String SafeFileName
            {
                get { return safeFileName; }
                set { safeFileName = value; }
            }

            /// <summary>
            /// OK Button
            /// </summary>
            private void OKButton_Click(object sender, RoutedEventArgs e)
            {
                this.DialogResult = true;
            }

            /// <summary>
            /// Cancel button
            /// </summary>
            private void CancelButton_Click(object sender, RoutedEventArgs e)
            {
                this.DialogResult = false;
            }

            /// <summary>
            /// Browse button
            /// </summary>
            private void BrowseButton_Click(object sender, RoutedEventArgs e)
            {
                OpenFileDialog openFileDialog = new OpenFileDialog();
                if (openFileDialog.ShowDialog() == true)
                {
                    this.FileTextBox.Text = openFileDialog.File.Name;
                    this.safeFileName = openFileDialog.File.Name;
                    this.FileTextBox.IsReadOnly = true;
                    FileStream myStream = openFileDialog.File.OpenRead();
                    this.documentStream = myStream;
                }
            }
        }
    }

- The SelectFileWindow class contains the following code:
  - The methods for our button controls.
  - The Browse button has code in it to create a System.Windows.Controls.OpenFileDialog object, which we use to open an Open File dialog and allow the user to pick any arbitrary file to import.
  - A public FileStream property which will contain the data for the file we want to import.
  - A public String property which will contain the name of the file we want to import.
- We need to add a reference to the Silverlight dll we are using, so in Solution Explorer navigate to "Client -> References", right click, and select "Add Reference..."
- Add a .NET reference to the System.Windows.Controls assembly.

Add our screen's button code

- Let's switch back to the "Logical View".
- Open up the screen designer for the EditableFileInformationsGrid.
- Right click "ImportAFile" and select "Edit Execute Code".
- Copy and paste the below code into EditableFileInformationsGrid.cs.
- There are two methods in this class:
  - ImportAFile_Execute() - since we are inside button code here, we are NOT on the main UI thread anymore. We need to switch back to the main UI thread because we want to display our own UI dialogs (our custom Silverlight dialog and an OpenFileDialog). This method switches to the main UI thread, invokes our Silverlight dialog, and adds an EventHandler to the "Closed" event, so that when the dialog is closed we call our own method to do some additional work.
- selectFileWindow_Closed() – this method is invoked once our Silverlight dialog closes. It reads in the file from the SelectFileWindow.DocumentStream public property we mentioned earlier, and stores the data from this FileStream into a byte array. We then create a new record for our FileInformation table, and set the Name field to the name of the file, the Miscellaneous field to the size of the file, and the Data field to the value of the byte array. The Data field is what actually contains our imported file.

//' Copyright © Microsoft Corporation. All Rights Reserved.
//' This code released under the terms of the
//' Microsoft Public License (MS-PL)
using System;
using System.Collections.Generic;
using System.IO;
using System.IO.IsolatedStorage;
using System.Linq;
using LightSwitchApplication.UserCode;
using Microsoft.LightSwitch;
using Microsoft.LightSwitch.Framework.Client;
using Microsoft.LightSwitch.Presentation;
using Microsoft.LightSwitch.Presentation.Extensions;
using Microsoft.LightSwitch.Threading;

namespace LightSwitchApplication
{
    public partial class EditableFileInformationsGrid
    {
        partial void ImportAFile_Execute()
        {
            // To invoke our own dialog, we have to do this inside of the "Main" Dispatcher.
            // And, since this is a web application, we can't directly invoke the Silverlight
            // OpenFileDialog class; we have to first invoke our own Silverlight custom control
            // (i.e. SelectFileWindow) and that control will be able to invoke the
            // OpenFileDialog class (via the Browse button)
            Dispatchers.Main.BeginInvoke(() =>
            {
                SelectFileWindow selectFileWindow = new SelectFileWindow();
                selectFileWindow.Closed += new EventHandler(selectFileWindow_Closed);
                selectFileWindow.Show();
            });
        }

        /// <summary>
        /// Invoked when our custom Silverlight window closes
        /// </summary>
        void selectFileWindow_Closed(object sender, EventArgs e)
        {
            SelectFileWindow selectFileWindow = (SelectFileWindow)sender;
            // Continue if they hit the OK button AND they selected a file
            if (selectFileWindow.DialogResult == true && selectFileWindow.DocumentStream != null)
            {
                // Read the file's contents into a byte array
                byte[] fileData = new byte[selectFileWindow.DocumentStream.Length];
                selectFileWindow.DocumentStream.Read(fileData, 0, fileData.Length);

                // Create a new record for this file, and store the data, name and length
                FileInformation fileInformation = this.DataWorkspace.ApplicationData.FileInformations.AddNew();
                fileInformation.Name = selectFileWindow.SafeFileName;
                fileInformation.Miscellaneous = "Size of file in bytes: " + fileData.Length;
                fileInformation.Data = fileData;

                selectFileWindow.DocumentStream.Close();
                selectFileWindow.DocumentStream.Dispose();
            }
        }
    }
}

Run it and import some files
- In Solution Explorer, right click the ImportFiles project, and select “Build”.
- After the build, press F5 to run your project
- You should see inside the command bar on the screen a button called “Import A File”
- Click the “Import A File” button to display our custom Silverlight dialog
- Select the “Browse” button and select any file you wish to import
- Click “OK”
- Our code to import the data will now run and we will get a new record created on our screen
- As you can see, we display the file’s name and its size in bytes
- You can now save the data to persist it to the database by clicking the “Save” button on the command bar

Additionally, there is an extension that can be added to Visual Studio LightSwitch called Document Toolkit for LightSwitch, which handles importing and viewing Word documents. It will only work for desktop applications, and it isn’t free, but other than that it looks like a slick extension. That’s it for this brief post.
I've included a zip file below of the C# code. Again, love to hear if there are any questions, and if you think something is wrong with this code (or the title) then you are probably right so please let me know. -Matt Sampson

Nice post, it would also be nice to get some examples of uploading files into a SharePoint library, or saving them to a UNC path on the network.

I used some of this code to create a sample that uploads files to the file system: lightswitchhelpwebsite.com/…/Saving-Files-To-File-System-With-LightSwitch-Uploading-Files.aspx

Thanks, Michael. Nice work on the sample. Flattered to hear you used some of this code 🙂 -Matt Sampson

Hi Matt, Great post, very helpful and NO bugs… Saves me a great amount of time. This is working on LightSwitch 2011 RTM. Maybe for beginners, simply tell them to rename "ApplicationData" to their own data source name. Francis

Hi Matt, I also posted this question on the LightSwitch forum: I'm developing a project in which on several occasions I have to add a document to a table. I read the tutorial by Matt Sampson (blogs.msdn.com/…/how-to-import-and-store-a-data-file.aspx) and was able to create the custom control and add files to a table. But, when this table is a detail table in a master/detail screen and I add this code to the AddAndEditNew_Execute method on the grid this does not work properly. Here is the code fragment:

void selectFileWindow_Closed(object sender, EventArgs e)
{
    SelectFileWindow selectFileWindow = (SelectFileWindow)sender;
    if (selectFileWindow.DialogResult == true && selectFileWindow.DocumentStream != null)
    {
        // ... read the stream into fileData ...
        Document document = this.Documents.AddNew();
        document.Name = selectFileWindow.SafeFileName;
        document.Misc = "Grootte in bytes: " + fileData.Length;
        document.Data = fileData;
        selectFileWindow.DocumentStream.Close();
        selectFileWindow.DocumentStream.Dispose();
    }
}

I get the error message: IVisualCollection<T>.AddNew() shouldn't be called from the UI Thread.
I'm not really smart when it comes to threading so I don't know what this exactly means (I Googled it but didn't find any useful results). I know the custom control needs to run in a separate thread for some reason, but I don't understand why the code in the tutorial was able to store the file successfully in a single, separate table through "this.DataWorkspace.ApplicationData.Documents.AddNew()", but when it's in a table linked to a master table and I use "this.Documents.AddNew()" this error occurs. Does anyone know what I'm doing wrong? Or does anyone have an alternative method of up- and downloading binaries to and from the database? Thanks, H.

And how to allow the user to again download files and view them on his computer?

@Willem lightswitchhelpwebsite.com/…/Saving-Files-To-File-System-With-LightSwitch-Uploading-Files.aspx He's modified the application from this blog, and made it better. It might fit your needs and solve your bug issue.

Excellent work. I added a bit of code to download the bytes from the database as a file:

partial void DownloadCV_Execute()
{
    // Must invoke on UI thread
    Dispatchers.Main.BeginInvoke(() =>
    {
        // Get the object which has the bytes
        Candiadte candidate = this.Candiadtes.SelectedItem;
        byte[] data = candidate.CVFile;
        SaveFileDialog dlg = new SaveFileDialog();
        // Set the default extension if you have also stored the file name
        dlg.DefaultExt = System.IO.Path.GetExtension(candidate.CVName);
        dlg.Filter = string.Format("{0} files(*.{0})|*.{0}", dlg.DefaultExt);
        // Ideally you should set the file name, but can't do it in this version of Silverlight. Wait for 5.0
        dlg.FilterIndex = 1;
        if ((bool)dlg.ShowDialog())
        {
            Stream stream = dlg.OpenFile();
            foreach (byte datum in data)
            {
                stream.WriteByte(datum);
            }
            stream.Close();
            stream.Dispose();
        }
    });
}

Why are we using "dispatcher" here? Isn't the call to download the file running on the UI thread?
Thanks @Bilal – When you click on a button, the "button's" code is running in the background thread and NOT the UI thread. So we need to switch back to the UI thread in this instance to get our dialog to display.

It would be really helpful to have an example of how to get the file out of the database. The solution below by Rahul Kumar does not work. Apparently, dialog boxes need to be executed via a button click? However, I have not been successful with anything that I have tried.

Hi Matt, do you have any article about downloading a saved file from a LightSwitch application to an external drive? If yes, then please refer me to it. Thanks

Is it possible to put a progress bar (upload) on the select file dialog?

@Willy – Using a progress bar during import would be possible, but you would need a custom Silverlight control for it. If you just search on "LightSwitch custom silverlight controls" you will find lots of reference material.

@RASHMI – Check out Michael Washington's blogs: lightswitchhelpwebsite.com/…/Saving-Files-To-File-System-With-LightSwitch-Uploading-Files.aspx; he talks about saving a file to a hard drive (not a database).

Hi, great work. I want to know if it is possible to upload files to the hard drive with a WCF RIA service?

Hi Matt: thanks for this useful example. I've managed to adapt it for VB; however, I haven't had success writing the code for the inverse path: to retrieve the binary file previously stored in the database and save it in a local folder (in web mode). Do you have any example to accomplish this?

Is it possible to have this code in VB?

@Neil Sitko – Neil, there is a C# to VB converter out there that I use for all my needs –…/csharp-to-vb

Thanks Matt

Hi Matt, that's a nice example. I tried to expand your "SelectFileDialog" with an additional button which works like the "add" button in a LightSwitch business application. Unfortunately I failed. Could you show how I can add such a button please.
thx

I used your nice example for the Northwind database which is included in LightSwitch, but at this point of your code (see below) I can't choose another table that is not created in LightSwitch :(

// Editable Grid
// Create a new record for this file, and store the data, name and length
Product Product = DataWorkspace.ApplicationData.**Product**.AddNew();

In this case Product is not available in the dropdown box. Any idea how I can access a table which is not created in, but included in, LightSwitch?

@John_C Hey John- I think the problem may be that you need to do something like DataWorkspace.<YourDataSourceName>.<YourDataSourceEntityName> So like DataWorkspace.NorthwindData.Products.AddNew() Let me know if that helps. -Matt S

Hi Matt, thanks it works. 🙂

How can we view this data we imported? View and export?

Now that you have shown how to save files, how about exporting the files back to a desired location, by selection, so that if someone has dozens to export it would be easy to do, not just one file at a time, for example.

hi, I posted a comment here about exporting files back to the file system. Maybe you haven't checked the comments yet, but when you do, let me know: Bmuzammil@gmail.com (the comment about the relationship with person etc. was also mine). Thank you. I only know basics of C#, hardly even basics.

@Marvyn Cosep – It's possible to do basic file I/O on the file itself to read the data out and write it to the desktop. Though you may not have access to save it to the desktop unless you are running as a Silverlight desktop application (not a web app). If you want to read the contents of the file, I'd encourage you to look over the source code. There should be at least one place in there where we open up the file and read in the data. If that file were XML, for example, you could use an XmlReader to read in the data, parse it out, and store it in the necessary LightSwitch entities.

This is awesome, but is there a VB version of this code out there?
hello, when I try to import a file of more than 5 MB I get this error: "An unexpected error occurred during communication with the server". I have this:

partial void Application_Initialize()
{
    this.Details.ClientTimeout = 100000;
}

Have you had this problem? Could you help me? Thanks in advance!

Would you update the code to download the imported file?

Pittrecon08 – I know – everyone keeps asking for this. I just need to set aside some time to do this. In the meantime please peruse the comments on this blog from people who have posted code snippets on how to make this work, as well as the link I gave to Michael Washington's blog on how he accomplished this.

I separated the binary data from its metadata, for instance for streaming purposes, by using two separate tables (sound_infoes and sound_data) with a 1-1 relationship between them. In this table structure I am unable to "Save Changes" after adding records to both tables. I get a Data Validation error "sound_info(filename): The referenced sound_info is either not set or no longer exists". When I click on the error I can manually select the missing SoundKey, after which the Save operation succeeds, but this is clearly not acceptable.
Here is the code I use to create both records and to fill them (closely tracking your selectFileWindow_Closed method):

private void selectFileControl_Closed(object sender, EventArgs e)
{
    SelectFileControl sfc = (SelectFileControl)sender;
    // Continue if the OK button is clicked AND a file was selected
    if (sfc.DialogResult == true && sfc.DocumentStream != null)
    {
        byte[] fileData = new byte[sfc.DocumentStream.Length];
        using (StreamReader streamReader = new StreamReader(sfc.DocumentStream))
        {
            for (int i = 0; i < sfc.DocumentStream.Length; i++)
            {
                fileData[i] = (byte)sfc.DocumentStream.ReadByte();
            }
        }
        sound_info soundInfo = this.DataWorkspace.InterfaceSounds.sound_infoes.AddNew();
        soundInfo.SoundKey = sfc.SaveFileName;
        soundInfo.OriginalFileName = sfc.SaveFileName;
        soundInfo.FileSize = fileData.Length;
        sound_datum soundData = this.DataWorkspace.InterfaceSounds.sound_data.AddNew();
        soundData.SoundKey = sfc.SaveFileName;
        soundData.SoundDataStream = fileData;
        sfc.DocumentStream.Close();
        sfc.DocumentStream.Dispose();
    }
}

Any idea how I can go about fixing this?

Matt, I've used some of your code as well as Michael Washington's and Yann's to get my upload, download and save functions to work. However, I need to connect my "FileInformations" table to a parent table for a parent-child, 1 to many. I have a parent named Resolutions which can have many child "FileInformations" documents attached to it. Since I created a custom screen as per your code, I'm finding it hard to save a file when it's missing the parent data on the "Save" function. How can I keep the parent-child relationship when saving and retrieving the files?

@ABrinson – Maybe just have a dummy table called TempFileTable. Save files there. And then during the saving operation of the entity (the TempFileTable_Inserting operation), you could write some logic to grab the correct "Resolutions" table record, and then create a new "FileInformations" child table entry off of it.
And basically "copy/cut" the TempFileTable record over to the FileInformations table. HTH – Matt S

Hello, I get the same message as Willem: IVisualCollection<T>.AddNew() shouldn't be called from the UI Thread. What can I do? My goal is to save the file in a database.

OK, I did the file upload and it works perfectly! Thank you!! But what about an open/download button? Gr, Marius

Very good article. I absolutely love this site. Continue the good work!
https://blogs.msdn.microsoft.com/rmattsampson/2011/05/23/how-do-i-import-and-store-a-data-file/
# WWDC22 hidden gems

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/d7b/302/d14/d7b302d1490a00d117f9645318488553.png)

For iOS developers, **WWDC** is always something of a New Year. We are presented with so many new things that it is easy to get lost in them. Most of my colleagues try to keep up by watching the **“Platform State of the Union”** and all the **“What’s new”** sessions; the event essentially gives developers a glimpse of the features to expect on the software side. If you have a small project, squeezing out more may not matter much, but once you move into larger projects with more people involved, every second saved can be worth several dollars in money saved or profit made by the company. The platform tightly integrates programming languages, frameworks, and tools, and everyone gains when these three complement one another: customers receive a consistent experience, such as scrolling that feels right every time, and developers may devote more time and attention to what distinguishes their app. And it's fine if you don't spend much time on the rest of the less popular material. However, among these not-so-popular videos, a couple deserve to be called “hidden gems” because of their content or a beautiful presentation and structure. Let me show you a couple of them!

Meet WeatherKit
---------------

[https://developer.apple.com/videos/play/wwdc2022/10003/](https://medium.com/r/?url=https%3A%2F%2Fdeveloper.apple.com%2Fvideos%2Fplay%2Fwwdc2022%2F10003%2F)

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/905/6c5/bbe/9056c5bbe9ecb0a4e54b5f463a78f42f.jpeg)

I think it's always interesting to work with frameworks that people will actually use.
This year Apple has added another one of these. Now your weather apps can be even better, more accurate, and faster. Technically, WeatherKit supports async/await, allowing it to be used comfortably alongside SwiftUI. There's a small price tag, but so far it seems cheaper than the competitors, and the interface itself looks comfortable too. As for the session, I liked the narrator: everything is very dynamic, with [jokes and some easter eggs.](https://twitter.com/NovallSwift/status/1534597678297804801)

WeatherKit delivers current conditions, 10-day hourly forecasts, daily predictions, and historical weather using high-resolution meteorological models integrated with machine learning and prediction algorithms. While the new iOS 16/iPadOS 16/macOS Ventura Weather app is powered by the Apple Weather Service, developers on those platforms may now access Apple's weather data in their own apps using WeatherKit. Adding the data to an app in Swift is very simple, with API calls based on location; the data is also accessible via a REST API for other languages or use cases.

Apple has always been about innovation, and that's never been more apparent than on the iOS platform. They've always been able to make the most of their resources, and now, with the release of WeatherKit, they've made it easier than ever for developers to take advantage of all of the data available through Apple's servers. Developers don't have to worry about building their own weather API or paying for access to another platform's API; they can simply use WeatherKit and get all of the functionality that comes along with it. The possibilities are endless! Developers will be able to build apps with built-in weather features right out of the gate when Apple's software updates ship in the fall.
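To give a flavour of how small the call site is, here is a sketch of my own (not from the session): it assumes iOS 16+, the WeatherKit capability enabled on your app identifier, and a `CLLocation` you already have.

```swift
import WeatherKit
import CoreLocation

// Sketch only: WeatherKit requires the capability to be enabled for the
// app ID and an active Apple Developer membership.
struct ForecastFetcher {
    private let service = WeatherService.shared

    // Fetch the full weather bundle and return just the current temperature.
    func currentTemperature(at location: CLLocation) async throws -> Measurement<UnitTemperature> {
        let weather = try await service.weather(for: location)
        return weather.currentWeather.temperature
    }
}
```

Because the call is `async`, it slots straight into a SwiftUI `.task` modifier without blocking the UI.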
**The big news is that WeatherKit can also be used in web apps and Android apps thanks to the REST API,** so you can use it to provide weather information in your web app, helping you keep your users on the platform longer and making sure they aren't getting bounced around between browsers.

What’s new in HealthKit
-----------------------

[https://developer.apple.com/videos/play/wwdc2022/10005/](https://medium.com/r/?url=https%3A%2F%2Fdeveloper.apple.com%2Fvideos%2Fplay%2Fwwdc2022%2F10005%2F)

I have already reported on this framework before and was amazed at how rich its functionality is. You can see my review here, but it's in Russian. This year, Apple added even more features. Sounds cool to me! I will tell you why.

Firstly, the support for triathlon workouts is fantastic, with breaks between activities. It seems that Apple involved people who actually do this crazy sport in the development. Secondly, a vital feature has emerged for many people with glasses. I asked around among my friends who have vision problems, and they shared that it's very convenient to have it all on the phone. Thirdly, the framework gives a clear and current overview of health status. It consolidates health data from iPhone, Apple Watch, and third-party apps that are already in use, thus allowing anyone to track their health over time. It recommends other helpful apps to round out the collection and makes it easy to learn about health matters. As indicated, it puts data at your fingertips, so you can see your trends over time and spot potential issues early. Equally, it allows users to manage their own health data, as well as family members' data. Surprisingly, with the new Health Records feature, medical data from multiple providers is consolidated in one place. It is important to note that all of this is designed to work with the new HealthKit framework.
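For anyone new to the framework, the entry point has not changed: you create an `HKHealthStore` and ask for authorization. A minimal sketch of my own (the two quantity-type identifiers are just examples):

```swift
import HealthKit

// Sketch: request read access to two sample quantity types.
let store = HKHealthStore()

let readTypes: Set<HKObjectType> = [
    HKObjectType.quantityType(forIdentifier: .heartRate)!,
    HKObjectType.quantityType(forIdentifier: .stepCount)!
]

store.requestAuthorization(toShare: nil, read: readTypes) { success, error in
    // `success` means the request was processed, not that read access was
    // granted — HealthKit deliberately hides read denials for privacy.
    print("Authorization request finished:", success, error ?? "no error")
}
```

Everything else (queries, workout builders, the new prescription types) hangs off the same store object.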
The powerful API can be intimidating at first, but for any complex undertaking the framework is pretty helpful. I believe Apple has emphasized optimization, and I'm happy that the company is taking developer feedback seriously. Data sharing is a great feature for securely storing fitness data, since HealthKit offers a centralized data store. A standardized and unified store for fitness data makes it easier to create applications and gadgets that can exchange fitness-related data. Despite being slightly more complicated than Apple's other native frameworks, HealthKit allows users to compare all of their data in a single place, which is useful if they want to see how various aspects interact.

The session itself is engaging and easy to follow, and I recommend watching it. Cheers!

The SwiftUI cookbook for navigation
-----------------------------------

[https://developer.apple.com/videos/play/wwdc2022/10054/](https://medium.com/r/?url=https%3A%2F%2Fdeveloper.apple.com%2Fvideos%2Fplay%2Fwwdc2022%2F10054%2F)

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/c13/806/dd2/c13806dd2803931a088d1f52545d1105.jpeg)

> Disclaimer: don’t watch this session hungry!
>

Getting navigation right has always been difficult. But with the new updates in SwiftUI, they've tried to show us how to do it properly, clearly, neatly, and simply. I'm afraid I have to disagree with parts of the approach, but the session lets us sort out the problems and find the right solution, which makes it an ideal candidate to watch. The API makes it simple to describe the navigation style that best suits the demands of the app. You may easily save and restore selections, as well as replace the whole contents of a navigation stack, with strong programmatic control over the presentation of the app's views.
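To make that programmatic control concrete, here is a small sketch of my own of the new data-driven API from the session: a `NavigationStack` bound to a plain array, so a "deep link" is just an assignment (the topic strings are invented for the example).

```swift
import SwiftUI

struct SessionsView: View {
    // The entire navigation state is this serializable value.
    @State private var path: [String] = []

    var body: some View {
        NavigationStack(path: $path) {
            List(["WeatherKit", "HealthKit", "Xcode builds"], id: \.self) { topic in
                // Value-based links push data, not destination views.
                NavigationLink(topic, value: topic)
            }
            .navigationDestination(for: String.self) { topic in
                Text("Notes on \(topic)")
            }
            .toolbar {
                // Deep link: pushing two screens is just mutating the array.
                Button("Jump") { path = ["WeatherKit", "HealthKit"] }
            }
        }
    }
}
```

Saving and restoring the stack reduces to persisting `path`, which is exactly the pattern the session builds up to.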
This is extremely handy for dealing with critical activities such as configuring your app's start state, managing transitions across size classes, and responding to deep links. Apps are all about translating concepts, code, and APIs into user experiences, and the best applications are those that can meet users right where they are. Since async/await shipped in Swift 5.5, it has been possible to make network calls without freezing the UI thread. When I first heard about this feature, I was inspired by how creatively the navigation API can be extended in app development. I'm excited about this new API, but it requires more than just an update to our developer tools: adopting it means modifying not only the app but also the way users navigate through it. Since all of this is done at runtime, it can take time and effort to catch everything up before releasing a polished experience that's ready for adoption. When working with NavigationLink, it can be tempting to convert an optional-containing binding to a boolean one rather than keeping it optional. The reason is simple: a navigation action that carries an optional component in its payload will pop the component upon completing a transition in which the component is present in the navigation link. While this is not necessarily a bad thing, forcing such behaviour can cause unintended side effects.

Demystify parallelization in Xcode builds
-----------------------------------------

[https://developer.apple.com/videos/play/wwdc2022/110364/](https://medium.com/r/?url=https%3A%2F%2Fdeveloper.apple.com%2Fvideos%2Fplay%2Fwwdc2022%2F110364%2F)

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/8f5/4f4/c0e/8f54f4c0eafe0b44840aadee89907fb8.jpeg)

I like that recently they have started adding sessions with the prefix "demystify." There was a cool one about SwiftUI last year.
All the details of the technical implementation are captured and focused on. Sometimes when I watch other sessions, I miss this hardcore, under-the-hood material. The session lets you understand a critical development mechanism: the process of building your app. While you have a small project, it may not be that important, but once you move to a large company, every second you save can be worth several dollars to the business. So it is crucial to understand the parallelization process. And of course, the new build timeline tool is worth mentioning; everything is excellent and convenient, as always. Xcode 14 parallelizes more of the build process for even faster build times on multi-core Macs.

Apple has also been paying a lot of attention lately to the `xcodebuild` tool: keeping your build clean and organized means faster build times for your products. Xcode 14 includes a number of improvements, but one that's definitely worth noting is the parallelization of some parts of the build process. Xcode starts the compilation of a target as soon as its build and run-script phases are complete. This means that the next target to be built can start a little early compared to Xcode 13, since linking and other operations can now be done in parallel. Xcode's build timeline shows you the steps that make up your build process, from when you change code to when the app is ready for distribution; it's a great way to see how your code changes affect the whole process. The visualizer (in the assistant editor) also lets you see how builds are parallelized: different colours indicate how much time is spent on each target, and empty space shows blocked tasks. Sandboxed shell scripts declare their inputs and outputs and so help the build system manage dependencies.
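If you drive builds from the command line, the same timing data is available there too; a quick sketch (the project and scheme names are placeholders):

```shell
# Print a per-task build timing summary once the build finishes.
# The -showBuildTimingSummary flag predates Xcode 14; the new build
# timeline presents the same information graphically in the IDE.
xcodebuild build \
    -project MyApp.xcodeproj \
    -scheme MyApp \
    -showBuildTimingSummary
```

Comparing the summary before and after restructuring targets is a cheap way to check whether a change actually improved parallelism.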
Eager linking applies to pure Swift targets: such targets can be linked without waiting for every other part of the project to finish building first.
https://habr.com/ru/post/700074/
Hey there, I'm having a problem that I've tried several times to solve; maybe you could help. I need to modify the program below. Here's what I have so far, but I'm getting stuck:

#include <stdio.h>

int main()
{
    int number, x = 0, counter = 0;

    printf("Enter a number: ");
    scanf("%d", &number);
    printf("Please enter a number other than %d\n", number);

    while (number != x)
    {
        scanf("%d", &x);
        while (x != counter)
        {
            printf("Enter a number other than %d\n", x);
            scanf("%d", &counter);
            if (counter == x)
            {
                printf("wrong\n");
                break;
            }
        }
        if (number == x)
        {
            printf("wrong\n");
            break;
        }
    }
    return 0;
}

any help?
https://www.daniweb.com/programming/software-development/threads/471511/while-loops
Bruce Walker wrote:
> 4. After #3 seems to work, install the ssi-tools and ssi-cmds packages
> (not all the modified base packages yet).

cluster-tools and openssi-tools

I'll put something together for you. It would be great to have OpenSSI via a CD cluster.

Bruce Walker
OpenSSI project lead

> -----Original Message-----
> From: ssic-linux-devel-admin@...
> [mailto:ssic-linux-devel-admin@...] On Behalf Of J S
> Sent: Friday, October 15, 2004 7:35 PM
> To: ssic-linux-devel@...
> Subject: [SSI-devel] Difference between openSSI and openMosix?
>
> Hi All,
>
> I'm the main developer of Clusterix (), which is a Morphix-based
> livecd distro which currently has both openMosix and LAM/MPI
> installed. I'm currently looking for some other clustering software
> to include in a new release. I have come across openSSI today and
> I'm trying to find the differences between it and openMosix. I have
> been reading some of the docs, but it would be great to have a
> concise description of the differences.
>
> Thanks,
> J Silverman

Aneesh Kumar K.V wrote:
> Hi,
>
> I guess the below two members of task_struct are process specific and
> not thread specific.
>
> struct move_data *execnode;
>
> /* node with which CDSL should be expanded */
> clusternode_t node_context;
>
> Can we move them to the signal_struct? All process specific entities
> belong there, I guess.

We could move execnode; it is intrinsically one per thread-group. Moving it will entail some extra locking in the migrate code. node_context is not so clear; it is not likely anyone would want to set them differently on different threads, but it is possible.
John > > -aneesh > Hello Brian, Following up on the "built" ipc patch that I sent a few minutes ago: +; I have fixed this in the ipc patch on Oct 18 I sent a few minutes); I had intended to declare this towards the beginning of the block for "case SHM_INFO:", and it ended up in the above due to oversight. I have fixed this in the ipc patch on Oct 18 I sent a few minutes ago. + > +. I ran out of time and hence could not do this. I'll do this first thing on Monday. Regards, - Kishore Hi, I have hereby attached the first complete "built" version of the patch for kernel IPC merge of SSI with 2.6.8.1 kernel. This patch needs to be applied as level 1 patch against Oct 14 Master sandbox. This version build successfully, using the config file that Brian sent. I built it as per the build instructions that Brian sent. The following are some warnings that I still see. I have written my notes on the below warnings after I spent some time to resolve them. I also had an important note towards the end, that is useful for somebody who is going to closely look at the changes in ipc/sem.c: Warning #1 ========== CC ipc/msg.o ipc/msg.c: In function `sys_msgrcv': ipc/msg.c:1142: warning: `__pu_err' might be used uninitialized in this function ipc/msg.c:1215: warning: `__pu_err' might be used uninitialized in this function I tried to resolve this, but, I was unable to find a solution. I think this warning should be coming even on the Base 2.6.8.1 kernel since the same thing exists in the Base kernel code as well (I haven't tried compiling the Base kernel without SSI). Warning #2 ========== CC ipc/mqueue.o ipc/mqueue.c: In function `sys_mq_timedreceive': ipc/mqueue.c:938: warning: `__pu_err' might be used uninitialized in this function Refer to details under Warning #1 above. 
Warning #3 ========== CC ipc/shm.o ipc/shm.c:275: warning: initialization from incompatible pointer type The above corresponds to the following in shm.c in the patch I have sent: #ifdef CONFIG_SSI .nopage = cfs_shared_nopage, #else /* CONFIG_SSI */ .nopage = shmem_nopage, #endif /* CONFIG_SSI */ I think this is a pointer to the CFS merge changes that David may have to make for 2.6.8.1: In linux/mm.h, shmem_nopage is defined as: struct page *shmem_nopage(struct vm_area_struct * vma, unsigned long address, int *type); whereas, in cluster/ssi/ipc/shm.h, cfs_shared_nopage is defined as: extern struct page * cfs_shared_nopage(struct vm_area_struct *, unsigned long, int); The last argument in our SSI declaration is "int" whereas in 2.6.8.1 Base kernel, it is an "int *". Important Note ============== In 2.4, the task structure (struct task_struct in linux/sched.h), the following fields existed for ipc: /* ipc stuff */ struct sem_undo *semundo; struct sem_queue *semsleeping; In 2.6, this has been changed to the following: /* ipc stuff */ struct sysv_sem sysvsem; with the above structure being defined as follows in linux/sem.h: /* sem_undo_list controls shared access to the list of sem_undo structures * that may be shared among all a CLONE_SYSVSEM task group. */ struct sem_undo_list { atomic_t refcnt; spinlock_t lock; struct sem_undo *proc_list; }; struct sysv_sem { struct sem_undo_list *undo_list; }; As part of making SSI merge changes, I have tried to incorporate the necessary changes. With that I have built sem.o successfully. However, I'm not confident that I have done a good job on this one. I'm specifically unsure about the fact that I have now got rid of the usage of current->semsleeping related code. I'll revisit this again, for correctness. Brian, as an aside, I noted that "make distclean" does not cleanup cluster/rpcgen/openssirpcgen and a host of other files. Thanks, - Kishore Hello Bruce, Attached is a patch for changes to 'top'. 
If it looks fine, I shall check it in.

Thanks,
Roopa

Hi all,

The new master sandbox is available from in two forms:

linux-ssi-2.6.master.oct18.patch.bz2
	full patch against vanilla 2.6.8.1 kernel
linux-ssi-2.6.master.oct15-to-oct18.patch.bz2
	incremental patch against the previous Oct. 15 master sandbox

It contains fixes from John for VPROC and the build system, a merge from David for fs/nfsd/vfs.c, and the remainder of Kishore's IPC merge. The only .REJ file remaining is net/unix/af_unix.c.REJ!! Krishnakumar sent me a patch containing his merge of it, which I'll review tomorrow.

Either tomorrow or Wednesday, I'll be checking the latest state of the master sandbox into the repository, then we can continue to work on it using CVS.

Thanks for everyone's contributions,
Brian

Kishore Sampathkumar wrote:
> Now, the above does _not_ include CONFIG_SSI, VPROC etc. So I sent
> email to Aneesh and asked him about this. He said he just does a
> "make menuconfig" on the kernel source tree and then tries to
> build. He said "what is the use in building without CONFIG_SSI"?

Yes, Aneesh is correct. In my build instructions, I forgot to say that you needed to enable CONFIG_SSI. My config file can be found at

It's probably not much different than what you have.

Another error in my build instructions is that after you create the include/asm symlink, you need to run `make prepare-all' to run icsgen and rpcgen on the necessary .svc and .x files. If you need to modify any of these files, you would need to run `make prepare-all' again.

> However, when I try building the ipc module using the above, I get
> errors of the type:
>
>   CC      ipc/util.o
> In file included from include/linux/mm.h:4,
>                  from ipc/util.c:16:
> include/linux/sched.h:600: field `mosix' has incomplete type
> include/linux/sched.h:606: confused by earlier errors, bailing out
> make[1]: *** [ipc/util.o] Error 1
> make: *** [_module_ipc] Error 2

John fixed this recently. His fixes are in today's master sandbox (Oct. 18).
> Even if I try to turn off CONFIG_MOSIX_LL and try to build, I get
> lots of other errors that point to various "cluster" related include
> files (SSI-related) not found etc. I think it is NOT possible to
> build along these lines. I think everybody working on the SSI merge
> with 2.6.8.1 SHOULD get these kinds of errors.

With John's fixes and the corrected instructions, I'm getting the following error, which looks like it's all you. ;)

> [bwatson@... bld]$ make M=ipc
>   CC      ipc/util.o
> ipc/util.c: In function `ipc_findkey':
> ipc/util.c:120: warning: implicit declaration of function `shm_svr_get'
> ipc/util.c:120: warning: comparison between pointer and integer
> ipc/util.c: In function `ipc_drop_locks':
> ipc/util.c:223: warning: passing arg 1 of `ipc_unlock' from incompatible pointer type
> ipc/util.c:223: too many arguments to function `ipc_unlock'
> make[1]: *** [ipc/util.o] Error 1
> make: *** [_module_ipc] Error 2

Thanks, Kishore.

Brian

SAMPATHKUMAR KISHORE KANIYAR wrote:
> All the .REJ files are taken care of. The one in ./ipc/sem.c.REJ was
> a reminder that the exit_sem() merge changes need to be "fine-grained".
> Right now, the SSI changes are under a giant #ifdef CONFIG_SSI.
> However, making it "fine-grained" involves some effort since this
> code has changed in 2.6, and is best handled as part of the 2nd stage work
> where we need to concentrate on refining the kernel hooks.

Keep in mind that the 2nd stage of cleaning up the hooks is a longer-term project. Whatever you do right now during the 1st phase is probably what we'll stabilize as OpenSSI 2.0. Make sure that it both works and that it will be reasonably maintainable as we merge it with newer base kernel versions.

> As I was replying to your review comments, I realized I have _missed_
> the above two by oversight in the patch I sent. I'll fix this tomorrow.

Okay. Everything else looks good.
Thanks,
Brian

SAMPATHKUMAR KISHORE KANIYAR wrote:
> Hi,
>
> I have hereby attached the first complete version of the patch for
> the kernel IPC merge of SSI with the 2.6.8.1 kernel. This patch needs to
> be applied as a level 1 patch against the Oct 14 master sandbox.
>
> In this patch, I have incorporated all the review comments by Brian.
> This patch also contains changes to eliminate all the .REJ files
> for kernel ipc.

Everything looks good so far. I'll be sending responses to your other mails.

Thanks and regards,
Brian

En Chiang Lee wrote:
> I did an install from the tarball and that went through fine.
>
> But on bootup, X doesn't start.

I reproduced this. What I'm seeing is the screen blanking as if X is going to start, a pause for a few seconds, then I see the text console again. This process repeats itself ad infinitum until I stop it with Ctrl-C. I'm not sure why it's happening, and I need to finish reviewing Kishore's stuff before I'm done for the night. Can you investigate this issue some more and let me know what you find out?

I did notice the following tidbits. /var/log/Xorg.0.log contains:

> (**)"
> (II) Keyboard "Keyboard0" handled by legacy driver
> (WW) No core pointer registered
> No core pointer
>
> Fatal server error:
> failed to initialize core devices
>
> Please consult the The X.Org Foundation support
> at
> for help.

/var/log/messages contains:

> Oct 19 01:43:21 nsca34 gdm[67963]: gdm_slave_xioerror_handler: Fatal X error - Restarting :0
> Oct 19 01:43:26 nsca34 gdm[67968]: gdm_slave_xioerror_handler: Fatal X error - Restarting :0
> Oct 19 01:43:33 nsca34 gdm[67973]: gdm_slave_xioerror_handler: Fatal X error - Restarting :0

After repeating this a few times, it seems that /var/log/messages has one of the above lines for every time the screen blanks then comes back to the text console.

> And during boot, I get something like:
>
> rm: Unable to remove /dev/mapper/control: Operation not permitted

I saw this, too.
Also, during the upgrade I saw something about a version mismatch for devmapper, which might be related. Did this behavior occur in 1.1.0 and we missed it? If so, this might have something to do with running FC1 with devfs. Which rc script is attempting to remove this file?

> and 'df -h' displays:
>
> [root@... root]# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/1/ide/host0/bus0/target0/lun0/part2
>                       9.9G  5.4G  4.0G  58% /
> /dev/1/ide/host0/bus0/target0/lun0/part1
>                       2.0G   39M  1.9G   2% /boot
> /dev/1/ide/host0/bus0/target0/lun0/part5
>                        24G  813M   22G   4% /home
>
> instead of the usual /dev/1/hda* names. Similarly with mount.

In and of itself, there's nothing wrong with the output above. It's an equivalent (and more descriptive) way of naming the same device. Did this also happen on 1.1.0? I'd be surprised if it didn't, unless John changed something in devfs recently.

> On reboot, the /dev fails to unmount saying /dev is busy.

I think I've seen this before. Did this happen on 1.1.0?

Let me know what you find out from investigating these things. I'll try to help you more tomorrow.

Thanks,
Brian
http://sourceforge.net/p/ssic-linux/mailman/ssic-linux-devel/?viewmonth=200410&viewday=19
How to view a PDF stored in S3 using Angular 5

I am using the following project from GitHub to display a PDF file in my S3 bucket. I have created an IAM account and got the key and secret, and I have applied the S3 read-only permission to the account. I set the object URL as the src, but I get a permission denied error. How do I use the key and secret to view/get the PDF and then display it? I see plenty of material about file upload, but I can't figure out how to use the key and secret to grant permission to the URL.
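One common pattern for this, shown as a sketch rather than a definitive answer: keep the secret on a server, have it produce a short-lived signed URL, and hand that URL to the Angular viewer as its src, so the key and secret never reach the browser. With the AWS SDK this is what boto3's generate_presigned_url (or the equivalent call in the JavaScript SDK) does for you. The toy code below only illustrates the idea of an expiring HMAC signature; the key values, query parameters, and signing scheme are simplified stand-ins, not AWS's real Signature Version 4.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

ACCESS_KEY = "AKIA-DEMO"    # stand-in for the IAM key id (hypothetical)
SECRET_KEY = "demo-secret"  # stand-in for the IAM secret (hypothetical)

def sign_url(bucket, key, expires_in=3600, now=None):
    """Toy signed-URL generator: NOT AWS SigV4, just the shape of the idea.
    The server computes an HMAC the storage service could verify later,
    so the browser only ever sees the signature, not the secret."""
    expires = (int(time.time()) if now is None else now) + expires_in
    path = f"/{bucket}/{key}"
    msg = f"GET\n{path}\n{expires}".encode()
    sig = hmac.new(SECRET_KEY.encode(), msg, hashlib.sha256).hexdigest()
    qs = urlencode({"AccessKeyId": ACCESS_KEY, "Expires": expires, "Signature": sig})
    return f"https://{bucket}.s3.amazonaws.com/{key}?{qs}"

print(sign_url("my-bucket", "docs/report.pdf", now=0))
```

The Angular component then loads the returned URL directly; once the expiry passes, the service would reject the signature and a fresh URL must be requested.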
https://www.edureka.co/community/29478/how-to-view-pdf-stored-in-s3-using-angular-5
I want to use Java to implement three-way merge software for slx files. slx is a compressed file format containing mainly XML documents. The sizes of the XML files are pretty small: several of the files are around 1-2 kB, and among the larger ones I have yet to come across any files larger than 200 kB (assume 1 MB to be safe). I want the software to be able to show the differences between the files and let the user choose parts from both documents, while automatically merging non-conflicting changes.

What I can't figure out is what to use. As I understand it, I could use DOM, SAX, XSLT, or XPath + XQuery to parse the documents. Trying to read up on these, I have gotten this far:

DOM - creates a tree structure in memory (feels intuitively correct) and allows you to manipulate this; the big drawback is memory consumption, since the tree can be a lot bigger than the original XML file. Even though it is intuitive, people from XML conventions seem to agree that DOM is not recommended to anyone.

SAX - Reads the document once, is fast and fairly simple. Requires a lot of code(?). Is used in other three-way merge tools such as 3dm. I don't find this intuitive at all, since I haven't understood what I actually have to work with afterwards.

I realize that XSLT and XPath are query languages whereas DOM is a model, though I haven't really figured out what difference that makes to me yet. Most of my work will be designing the algorithms to merge the documents; what I really need is a reference to the elements and attributes so I can compare them.

XSLT - Is what was recently recommended to me, and it has its own merge function. This however will not support three-way merge, which would have to be written. Other than this I have no insight into XSLT.

XPath + XQuery - Seems simple enough and fairly intuitive. Sort of like DOM in creating a tree to work with(?), but seems to be better with memory, and there are quite a lot of easy tutorials.
However, it seems like a lot of work to use these in producing a three-way merge tool. As you can see, I have never worked with any of these, and a month ago I had no idea what an XML document was, nor what the difference between two-way and three-way merge was, so along the way it is likely I have misunderstood one or two things. Any recommendations on what I should choose, and why, would be much appreciated.

DOM: reads the entire XML into memory. You can then query the DOM via XPath. The advantage is that you can go back and forth as many times as you want; the disadvantage is that it needs enough memory for the entire DOM to be loaded.

SAX: reads the XML linearly and informs/calls back into the reader each time it encounters a processable element.

XSLT: is a technology based on DOM and XPath which allows you to "transform" a DOM into a document the way you want it. XQuery is something similar to XSLT, but it uses a different approach; I'm not really familiar with it.

Three-way merge means you either:
- process the inputs into partial output, then merge the partial output into the end result, or
- merge the inputs into an intermediate, then process the intermediate into the end result.

There are advantages and disadvantages to both; which is going to be easier depends on the technology path (SAX or DOM based), how different the inputs are, and how the output is structured. Too many variables to give you any kind of clear guidance.

You don't have to use XSLT or XQuery; instead you can write your own purpose-specific code to extract the data you need for the output. It's a question of whether you want to focus on using existing technologies, or whether you want to get something done as fast as possible (just programming it all might be easier if you don't have to first learn how to use something like XSLT).

You could even forgo DOM, SAX and XPath entirely and write your own XML parser (it's a lot harder than it seems, especially if you need to handle namespaces).
There are also other lightweight APIs to load XML with simple means to query the contents without having to learn XPath (which is necessary for the DOM-based route at least).

I guess you are right; not the answer one was hoping for, since it all seems so massive to get into at times. I don't want to get into writing my own parser; that seems to be way out of my league. Not being proficient in programming (yet), and not having any experience in handling different file formats, just reading about XML makes me very happy that someone has done the grunt work for me.

I found some really good material on XML that discusses almost everything I asked. It is an entire book, but if anyone gets stuck with something similar I would recommend getting into it. Hopefully the next person has more programming knowledge, XML knowledge and maybe even some parsing experience, and then won't have to read the whole thing. Anyway, I figured I'd share this:

I hope it will get me somewhere.
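To make the DOM-style option discussed above concrete, here is a minimal Python sketch using the standard library's xml.etree.ElementTree (chosen for brevity; in Java the analogous step would use a DOM parser obtained via javax.xml.parsers). It loads a document into an in-memory tree and indexes elements by attribute, which is exactly the kind of handle on elements and attributes a three-way merge would then compare across the base and the two edited versions. The element names and the id-based matching are illustrative assumptions, not part of the slx format.

```python
import xml.etree.ElementTree as ET

doc = """<model>
  <block id="a" gain="2"/>
  <block id="b" gain="3"/>
</model>"""

# The whole document now lives in memory as a tree (the DOM trade-off).
root = ET.fromstring(doc)

# Index elements by their id attribute: a typical first step before
# comparing base/left/right versions element by element.
by_id = {el.get("id"): el for el in root.iter("block")}

print(sorted(by_id))           # ['a', 'b']
print(by_id["a"].get("gain"))  # '2'
```

Running the same indexing over all three versions and diffing the resulting maps gives you the non-conflicting changes to merge automatically and the conflicts to surface to the user.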
http://forums.codeguru.com/showthread.php?546977-three-way-merging-of-xml-documents&p=2163097
The C library function char *strstr(const char *haystack, const char *needle) finds the first occurrence of the substring needle in the string haystack. The terminating '\0' characters are not compared.

Following is the declaration for the strstr() function.

char *strstr(const char *haystack, const char *needle)

haystack − This is the main C string to be scanned.
needle − This is the small string to be searched for within the haystack string.

This function returns a pointer to the first occurrence in haystack of the entire sequence of characters specified in needle, or a null pointer if the sequence is not present in haystack.

The following example shows the usage of the strstr() function.

#include <stdio.h>
#include <string.h>

int main () {
   const char haystack[20] = "TutorialsPoint";
   const char needle[10] = "Point";
   char *ret;

   ret = strstr(haystack, needle);

   printf("The substring is: %s\n", ret);

   return(0);
}

Let us compile and run the above program, which will produce the following result −

The substring is: Point
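For comparison, the same lookup semantics can be sketched in Python, where str.find does the scan and a slice stands in for the pointer that C returns into the original buffer:

```python
from typing import Optional

def strstr(haystack: str, needle: str) -> Optional[str]:
    """Mirror C's strstr(): return the tail of haystack starting at the
    first occurrence of needle, or None (C would return NULL) if absent."""
    idx = haystack.find(needle)  # -1 when not found
    return None if idx == -1 else haystack[idx:]

print(strstr("TutorialsPoint", "Point"))  # Point
```

As with C's strstr, an empty needle matches at the start, so the whole haystack comes back.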
https://www.tutorialspoint.com/c_standard_library/c_function_strstr.htm
Globalization is one of the concepts that comes to mind when we create applications that might run in different geographical locations. Based on the culture code, we need to adapt our application. This is a very common requirement for many developers, so I thought I would discuss what I found to be the simplest way to deal with it in your WPF application.

Globalization is a concern common to almost every application. Many of us have searched at some time or other for the easiest way to build a multilingual application. Believe me, I did the same thing as you, and in doing so I found lots of articles on the internet. For instance, you can see one from MSDN:

If you have already read the article, you might have found that there is no actual implementation that clearly demonstrates the concept. That is why I thought of writing a concrete article for you, to make it easy to implement a truly multilingual application.

If you have downloaded the sample application, you can see that I have created a login screen, just to show how it works. To try it out, run the application and you will find a screen just like the one shown above. Enter the same value for Username and Password, and press the login button. You will see the screen below:

Next, go to Control Panel -> Regional & Language Options and change the language to French (Canada), and re-run the application. You will find a different screen, as below:

And if you put in credentials and press the "connexion" (Login) button, you will see "Échec de l'authentification" (Authentication Failed).

Now I will discuss how you can implement this type of application yourself. To start implementing this application, I added one window. I have also given the window some look and feel. You can see this in the download, but it has nothing to do with our application, so I have left out its implementation. After creating the initial look and feel, which suits me, I added a folder named "Resources" (the name of which can be anything).
I added two resource dictionaries to define the resource keys which I will use for my application. The resource files are named StringResources.xaml, StringResource.fr-CA.xaml, etc. You can add as many resource files as you want, each of which corresponds to its own culture. Inside the resource files, you must declare the resource keys. I have used system:String to define the resources.

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:system="clr-namespace:System;assembly=mscorlib">
    <system:String x:Key="close">Close</system:String>
    <system:String x:Key="login">Login</system:String>
    <!-- All StringResources Goes Here -->
</ResourceDictionary>

Thus you can see that, in addition to adding the ResourceDictionary to the Resources folder, I have added one namespace which points to mscorlib, and named it system. I have then added the string resources like close, login, etc., which are to be substituted in the UI.

Similar to this, I have added another file for fr-CA, and named it StringResource.fr-CA.xaml. This will hold all the values that correspond to the resource keys for a machine set up with French (Canadian):

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:system="clr-namespace:System;assembly=mscorlib">
    <system:String x:Key="close">Fermer</system:String>
    <system:String x:Key="login">connexion</system:String>
</ResourceDictionary>

Thus you can see that I kept the same names for the keys, to ensure everything works perfectly. If you have used ASP.NET globalization, this is almost similar to it. After you create the resource files, it's time to merge the dictionary for the current culture into the window's resources (for example, by adding it to this.Resources.MergedDictionaries from the code-behind).

Finally, it's now time to point the resource keys in your XAML, to ensure each control picks up the appropriate key from the ResourceDictionary. Let us add some controls:
You should note that I have always put the Content of the Button using DynamicResource. This is important to define, because we want the content to be replaced with the appropriate key defined to the Resource will be added later. DynamicResource Hence, your application is ready. You can download the sample application from here.. Thus, it is really fun to play with WPF, and as Multilingual application is most likely a common issue, I hope this article will help you in the long run. Try the sample application and see the actual.
http://www.codeproject.com/script/Articles/View.aspx?aid=123460
So, I've been trying to find a means of fixing this issue, and I'm finally resorting to asking here for some of you more knowledgeable folks.

So, basically I'm working on a 2D game. I've tried several different means of ground detection, and the only thing I can find that works is a bit conflicting. I've tried Physics2D.OverlapCircle, raycasting, linecasting, and even setting up a trigger to tell the PlayerController script when it touches the ground. However, if the player reaches a certain speed, they will sink some way into the ground before stopping. This generally happens when jumping off a platform to a lower surface.

To fix this problem, I've turned "Is Kinematic" off on my player object. However, I'm simulating my own gravity through scripting, and would like to ignore physics for this, and most other, objects. Simply setting the gravity scale to 0 isn't exactly my ideal means of resolving this issue.

Any advice at all is much appreciated. Also, my jump script is much longer than this, but what follows shows all the movement-related parts. If you do need any more information at all, please feel free to ask; I figured a shorter version would be more efficient for whoever is reading ^_^.

Edit: I wrote this up when I was rather tired. I noticed a couple of errors in my script, so I'm just updating that.
using UnityEngine;
using System.Collections;

public class TestScript : MonoBehaviour {

    private float decreaseTimer;
    public float decrease;
    public GameObject bottom;
    private bool jumping;
    private Vector3 bottomVec;
    public float jumpHeight;

    // Update is called once per frame
    void FixedUpdate () {
        if (Input.GetKey(KeyCode.Space)) {
            jumping = true; // If the jump input is held, start jumping
        }
        if (IsGrounded() == false) {
            decreaseTimer += Time.deltaTime; // Time length of jump - multiple subtraction by this
        }
        if (IsGrounded()) {
            decreaseTimer = 0f; // Reset timer
            jumping = false;
        }
        if (jumping) {
            // Will eventually return negative, pulling the player back down
            transform.Translate(0f, jumpHeight - (decrease * decreaseTimer), 0f);
        }
    }

    private bool IsGrounded() {
        bottomVec = bottom.transform.position;
        // Linecast, check for ground
        return Physics2D.Linecast(transform.position, new Vector3(bottomVec.x, bottomVec.y),
                                  1 << LayerMask.NameToLayer("Ground"));
    }
}

Answer by Aggrojag · Jun 28, 2014 at 06:40 AM

So, after digging through Unity, I believe I found the answer to my question. When Is Kinematic is set to false, Unity will check to see if the object is inside another object. If it is, it will offset the value of the transform until it is outside of the object. I would provide sample script to get around this if I knew how, so my apologies that this is not provided! Cheers!

Answer by grendayzer77 · Feb 08, 2016 at 11:31 AM

I had the same problem, and I set Collision Detection to Continuous instead of Discrete, and it solved it.
https://answers.unity.com/questions/733921/ground-detection-lagging-2d.html?sort=oldest
Agenda See also: IRC log <scribe> chair: Today's meeting will take non-zero time <scribe> chair: 23 April minutes accepted without objection <scribe> chair: New issues -- <scribe> chair: 1) Need for new namespace; we had held namespaces steady from CR to end. In this case we bounced from LC to WD, so new document should get new namespace. <scribe> chair: Philippe suggested new namespace on CR. Is it necessary to change on LC/CR transition if there is no substantive change? Philippe: We have control over our namespace. No need to change. Should hold steady until CR. Last time we changed a lot until CR. <scribe> Chair: Any objection to using dated namespace of next LC for this document? <Ram> Rama: Suggest short form (in IRC) <scribe> chair: requires director review philippe: easily done Anish: Chances of change after CR much lower. Would rather not assign this (permanent) NS to a WD. If we assign it now and there are changes, then we have to change the NS to something new. permanent NS should at least have a version Philippe: Now is our chance to use the short form (as WSDL and others?) Anish: Did they do that at CR? Having a stable NS is a good goal. Wary of doing it now. Plh: So we should adopt short form at CR? Anish: yes Ram: Want to freeze NS for interop testing. Stable NS helps that. <scribe> Chair: (jumping ahead) both IBM and MSFT intend to do WSP interop testing soon. Anish: Nice from interop and impl standpoint, but comparing against risk changes will occur. Less risky to change a dated NS to a shorter version. ... Would rather assign such a name at CR. Is this draft LC or CR? Plh: LC <scribe> Chair: Team rep -- what are the rules for the short NS? How many degrees of freedom do we have on keeping it throughout doc lifetime? plh: Don't need director approval for dated NS. Changes automatically. But then have to remember exact date to use right namespace. Who remembers NS for WSA? So director approved /NS with group deciding anything after that. 
Approval is lightweight now. <anish> what happens if the short NS needs to change? plh: WSP decided to use short version at ns. There's always a risk, even at CR. Here we are doing a new LC so likelihood of change should be small. Companies most concerned are those doing interop anyway. TRutt: We kept assertion names. Wasn't that to avoid NS change? Do we need NS change for non-syntactic, semantic changes? Anish: Yes TRutt: maybe we should change the names then <scribe> chair: Don't want to change names just to change names Trutt: Wanted to clarify whether semantic change requires NS change Plh: yes Anish: Want to maximize chance short NS survives. LC isn't for interop anyway, that's CR, so that's when we should freeze. <scribe> chair: May I ask IBM and MSFT, who will do interop, if new dated NS for next LC draft, and then short NS on CR (if WSP is stable), be acceptable? Ram: The question is what NS to use for interop. This is why we want to freeze. ... Good chance we're going to CR in three weeks. <scribe> Chair: No problem personally with dated NS <TRutt__> +1 with anish - keep dated namepace for now <scribe> Chair: And there is no shortage of them +1 with Anish, Tom <scribe> Chair: Don't think optics of short NS is important. Fine with picking new dated NS, and even sticking with it if there are no substantive changes. Thoughts? Ram: That's a fine position, approach. I would prefer shorter NS but other option is fine as well. <scribe> Chair: Is it kosher to define a NS alias <anish> +1 <Ram>: Effective next publication as LC, we will use a May 2007 namespace and hold it constant absent breaking changes <plh> Ram: Useful to add change policy to namespace section? Makes expectations clear to reader. +1 to making expectations clear in general Anish: This would go in document you get by dereferencing NS? Ram: Yes plh: Skeptical of examples of breaking changes. These are all schema changes. 
Adding complex types, e.g., would not break <scribe> Chair: Amend proposal to use only text between URI: and "accordingly." Ram: Just examples, not exhaustive set. <anish> dhull: we might want to tone down 'uri will not change with each subsequent revision' <scribe> Chair: would prefer that WG retain control over what is a breaking change <anish> dhull: suggestion that we accept the principle and then on the ML work on wordings Katy: How about remove "arbitrarily"? <scribe> Chair: Seconded plh: Need to change a bit more. Need to make clearer we don't intend to change from the next LC document. <bob> URI will not change with each subsequent revision of the corresponding XML Schema documents as the specifications transition through Candidate Recommendation, Proposed Recommendation and Recommendation status. However, should the specifications revert to Working Draft status, and a subsequent revision, published as a CR or PR draft, results in non-backwardly compatible changes from a previously published CR or PR draft of the specification, the namespace URI will <Ram> +1 <bob> URI will not change arbitrarily with each subsequent revision of the corresponding XML Schema documents Why not just strike "with each subsequent revision of the corresponding XML Schema documents", leaving URI will not change as the specifications transition through... <anish> dhull: delete stuff about xml schema <scribe> Chair: So how about ... <bob>. Katy: Do we even need this? No one wants to make changes? <Ram> +1 Plh: This is for the world at large. Very useful to make guarantees. RESOLUTION: Text above (modulo grammar) accepted as useful addition to our NS document. <scribe> Chair: Ram, have your issues been adequately addressed? <Ram> Ram: Yes, but there is one more <anish> +1 <scribe> Chair: The issue is the referenced version of WSP, now that it is in CR. They have a nice-n-shiny new NS. Shall we update to refer to it? 
<scribe> Chair: No objection

RESOLUTION: Update doc to reference current WS-Policy short namespace

<scribe> Chair: Any other new issues? Hearing none ...

<scribe> Chair: LC36 use cases with Tom and Dave

<TRutt__> First example shows intersection of two policies, each with two alternatives, one in common.
<TRutt__> Pa
<TRutt__>   Addressing non-anon with Jabber
<TRutt__>   Addressing non-anon with http
<TRutt__> Pb
<TRutt__>   Addressing non-anon with mail
<TRutt__>   Addressing non-anon with Jabber
<TRutt__> Intersection yields
<anish> i have an issue that I have not sent in. but it is an ed. issue and should not block us from making progress
<TRutt__>   Addressing non-anon with Jabber (a)
<TRutt__>   Addressing non-anon with Jabber (b)

TRutt: Some discussion of whether these are client and server or something else. Shouldn't matter.
... Intersection works in this case. Significant that you're pulling in separate parameters (maybe significant)

<TRutt__> Example 2 tries to introduce other response transport options than jabber, http or mail
<TRutt__> Pc
<TRutt__>   Addressing non-anon (no restriction)
<TRutt__>   Addressing non-anon with Jabber
<TRutt__>   Addressing non-anon with http
Know from result that HTTP non-anon will work <bob> ach dhull Dhull: How do I know HTTP is OK <bob> Trutt: You know http works, jabber will work, other things may work <bob> dhull: Do you agree that we have lost information? <bob> dhull: Policy is not suited for making intelligent domain dependent decisions <bob> ... what the intersection alg. can do is to compare assertions that exist on both sides <bob> ... All intersection is doing is pulling together two sources of information <bob> ... if we can use a better division of labor between policy and addr, that would be preferable. <bob> trutt: dhull is reading too much into what policy can do <bob> .. the example is trivial, but is intended to be illustrative <bob> ... all the intersection does is demonstrate agreement between parties even though other things may work <bob> dhull: I think that the policy alg. does fine, I think that what it does is compare two sets of assertions and thats all <bob> ram: I think that there is a lot of agreement, and therefore that we can get to closure <anish> even folks in wsp wg want clarity. clarity is lacking in the ws-p specs <Ram> 3.1.6 Finding Compatible Policies <Ram> <bob> dhull: It is hard to figure what we should to different RESOLUTION: LC136 Closed with no action <scribe> Chair: Section 4.5 of WSP deals with intersection. Full semantics of assertions domain-defined. Can define totally domain-specific alg. or use default. Which one used is differentiated by QName. <scribe> Chair: Believe we have used default for purpose of comparing policies. <anish> is domain-specific algorithm pulled in only if there are parameters defined? TRutt: We have not provided any parameters. IMO don't need domain-specific rules now. <scribe> Chair: (Anish) forced to use domain-specific if you have parameters. TRutt: Even with params can use default ... Can if you want. We haven't defined params, so don't need domain-specific rules Anish: So domain specific is pulled in only if params defined? 
Monica: There are other cases w/o parameters. E.g. domain has top-level assertion with empty nested policy expression and you want those to be compatible. By default not compatible. ... (example needs second policy with non-empty) <scribe> Chair: Testing Ram: We have reported back on interop scenarios. Have submitted document for review. Hope we have covered all cases we wanted to test. Hope to do testing on this and report progress. <scribe> Chair: Have folks had a chance to look? Please review if you can. <David_Illsley> phone died... will look for another battery but don't hold out much hope Ram: Hope interop testing will show whether real implementations can use what we've done. <scribe> Chair: Do you believe we have a sound basis for moving ahead with testing? Ram: Yes, absolutely. Our product teams worked on it quite a bit. We believe this is exactly it and we have covered all the useful cases. Katy: We totally agree with Ram. We have a good list of cases with good coverage and expect to show good interop. <scribe> Chair: From chairs of WSP, participants should send contact info to Abbie Barber (sp?) point-to-point so he can provide a pass to get into event. Ram: Thanks for pointing this out. We will do so. Katy: Will do. <scribe> Chair: Given that there are no open issues and that the changes we have made have fulfilled WSP issues, no reason not to move to LC. <anish> is the document that will be taken to LC: <anish>;%20charset=utf-8 <scribe> Chair: There being no objections, we shall proceed to LC with the version currently pointed to as the editors' draft on our web site. RESOLUTION: version;%20charset=utf-8 will be LC draft. LC period statutory minimum of 3 weeks. <scribe> Chair: Please take close look at interop scenarios for identification of potential features at risk <scribe> Chair: AOB? Ram: Editors will update NS? <scribe> Chair: Yes, along with status section. Ram: Need NS for interop <scribe> Chair: You know what it will be? 
Ram: Yes <scribe> Chair: With luck, it will be in the document by tomorrow, subject to Philippe's bandwidth constraints. <scribe> Chair: AOB? <scribe> Chair: Adjourned
http://www.w3.org/2002/ws/addr/7/05/14-ws-addressing-minutes.html
Pandas: Check if a day is a business day (weekday) or not

Pandas Time Series: Exercise-14 with Solution

Write a Pandas program to check if a day is a business day (weekday) or not.

Sample Solution:

Python Code:

import pandas as pd

def is_business_day(date):
    return bool(len(pd.bdate_range(date, date)))

print("Check business day or not?")
print('2020-12-01: ', is_business_day('2020-12-01'))
print('2020-12-06: ', is_business_day('2020-12-06'))
print('2020-12-07: ', is_business_day('2020-12-07'))
print('2020-12-08: ', is_business_day('2020-12-08'))

Sample Output:

Check business day or not?
2020-12-01: True
2020-12-06: False
2020-12-07: True
2020-12-08: True
https://www.w3resource.com/python-exercises/pandas/time-series/pandas-time-series-exercise-14.php
org.eclipse.swt.widgets.CoolBar

public class CoolBar

Instances of this class provide an area for dynamically positioning the items they contain. The item children that may be added to instances of this class must be of type CoolItem. Note that although this class is a subclass of Composite, it does not make sense to add Control children to it, or set a layout on it.

Note: Only one of the styles HORIZONTAL and VERTICAL may be specified.

IMPORTANT: This class is not intended to be subclassed.

public CoolBar
See also: SWT, SWT.FLAT, SWT.HORIZONTAL, SWT.VERTICAL, Widget.checkSubclass(), Widget.getStyle()

protected void checkSubclass()
Related: Widget, Composite

public Point computeSize(int wHint, int hHint, boolean changed)
Related: Control, Composite

Guidelines for using Eclipse APIs. Copyright (c) Eclipse contributors and others 2000, 2012. All rights reserved.
http://help.eclipse.org/juno/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/widgets/CoolBar.html
MS-Edge audio.playbackRate regression By design Issue #5928647 Steps to reproduce URL: Repro Steps: There is a [Run] button at the bottom of this demo. It launches a dialog from which an audio snippet can be played. By intent, the audio.playbackRate (speed) is decreased each time the snippet is played. Next to that [Run] button, there is an unmarked radio button. It is a bugfix button. If you turn it off, MS-edge will fail to play the audio after a couple of stepdowns. That’s the bug. The button is suppressed (does not appear) with IE9-IE11. The bugfix is not enabled for them. It is not needed for them. In fact, this bugfix will break them. Ironically, the code in this demo was conceived after considerable study to accommodate IE9-IE11 problems, to make them work and seem to shine, in a way that interops with gecko and webkit without ua-sniffing. What a waste of time that was. Expected Results: Expectation. msdn admin mark my posts “answer” and set this following thread as a sticky. Actual Results: Dev Channel specific: No Microsoft Edge Team Changed Assigned To to “Venkat K.” Changed Assigned To from “Venkat K.” to “Steve B.” Changed Assigned To from “Steve B.” to “IE S.” Changed Status to “By design” Hello, Thank you for providing this information about the issue. Unlike IE, Edge has the same minimum audible playbackRate as other browsers like Chrome and FireFox, which is 0.5. The repro page uses different code to calculate the playbackRate depending on the browser: if ((String(evt.timeStamp).length-String(+(new Date())).length)==3) newPlayRate= function(){ return playbackRate } else if (!ms) newPlayRate= function(){ return ((calcrate()>=0.5)?playbackRate:(playbackRate=0.5)) } The audio will play as long as Edge uses the same code as FireFox and Chrome. Special logic for Edge should no longer be required. Best Wishes, The MS Edge Team You need to sign in to your Microsoft account to add a comment.
https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/5928647/
Due date Wednesday, Oct 13 at 11:59PM

Take the shell code that you wrote for assignment 2 and add pipes, so that a user can chain two or more processes together. The user has the option of redirecting standard input (for the first process) to read from a file rather than from the keyboard, and/or redirecting standard output (for the last process) to a file instead of to the terminal. The user can chain any number of processes together with pipes. In all cases the output of a process is redirected as the input of the next process. Here is an example.

Proc1 | Proc2 | Proc3

Your shell would call fork three times. The first child would exec Proc1, the second child would exec Proc2, and the third child would exec Proc3. The output of Proc1 would be piped as the input of Proc2; and the output of Proc2 would be piped as the input of Proc3.

In order to find the executables, your program should search an environment variable called SEARCHPATH. This will consist of a list of directories, delimited by dollar signs. Your program should search the first directory, then the second, and so on, looking for an executable file. To set an environment variable from the shell use the shell command export. Here is an example: export SEARCHPATH=/usr/bin$/usr/local/bin$. Note that there are no spaces on either side of the = operator. You should execute this command prior to invoking your shell.

To access an environment variable from inside a program on Unix, use the library function getenv. Here is the function prototype:

#include <stdlib.h>
char *getenv(const char *name);

Type man getenv to get more information.

Each process can take up to ten arguments. Here is a more complex command line:

proc1 arg1 arg2 < infile | proc2 arg1 arg2 arg3 | proc3 arg1 > outfile

As before, I will do the hard part for you. The function int ParseCommandLine(char line[], struct CommandData *data) takes a command line as an argument and populates a CommandData structure.
The function returns 1 if it was able to successfully parse the line, and zero if there was some kind of error in the line. The structure of CommandData is more complex this time. Here it is.

struct Command {
    char *command;
    char *args[11];
    int numargs;
};

struct CommandData {
    struct Command TheCommands[20]; /* the commands to be executed. TheCommands[0] is
                                       the first command to be executed. Its output is
                                       piped to TheCommands[1], etc. */
    int numcommands;  /* the number of commands in the above array */
    char *infile;     /* the file for input redirection, NULL if none */
    char *outfile;    /* the file for output redirection, NULL if none */
    int background;   /* 0 if process is to run in foreground, 1 if in background */
};

Here is a link to the header file which defines all of this code. Here is a link to the file that I used to test this function. It may help you to understand how this works (or you can choose to ignore it). At the prompt, enter a command string, and it will parse it for you, displaying the values. This runs in an infinite loop; to get out, enter the command quit.
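The SEARCHPATH lookup described above can be sketched as follows. This is an illustrative helper, not part of the assignment's provided code: the SEARCHPATH name and the dollar-sign delimiter come from the assignment, while the function names and the use of access() are my own assumptions.

```cpp
#include <cstdlib>    // getenv
#include <string>
#include <vector>
#include <unistd.h>   // access, X_OK

// Split a '$'-delimited directory list such as "/usr/bin$/usr/local/bin$."
// into its component directories.
std::vector<std::string> SplitSearchPath(const std::string &paths) {
    std::vector<std::string> dirs;
    std::string::size_type start = 0, pos;
    while ((pos = paths.find('$', start)) != std::string::npos) {
        if (pos > start) dirs.push_back(paths.substr(start, pos - start));
        start = pos + 1;
    }
    if (start < paths.size()) dirs.push_back(paths.substr(start));
    return dirs;
}

// Return "dir/command" for the first directory that holds an executable
// with the given name, or an empty string if none does.
std::string LocateExecutable(const std::string &command) {
    const char *env = std::getenv("SEARCHPATH");
    if (env == nullptr) return "";
    for (const std::string &dir : SplitSearchPath(env)) {
        std::string candidate = dir + "/" + command;
        if (access(candidate.c_str(), X_OK) == 0) return candidate;
    }
    return "";
}
```

Each forked child would resolve its command this way before calling an exec-family function.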
http://www.cs.rpi.edu/academics/courses/fall04/os/c10/Shell2.html
There are several ways to create PDFs. The hardest of them all is perhaps to create one on your own using C#. However, if you want to learn how to do so, you have to climb a steep learning curve. You can either read the 1300+ page specification document available free from Adobe's PDF Technology Center or use an open source library called iTextSharp. iTextSharp eases the learning curve a fair amount. But learning to use iTextSharp is itself non-trivial. The people behind iTextSharp have done a very nice job of putting together a set of tutorials. If you get through the tutorials, creating a PDF becomes somewhat easier. The tutorials, however, are based on .NET 1.x and cannot be used "out of the box" with .NET 2.0 without a fair amount of code rework.

The only way to learn is to mess around with the code. If you do, KEEP IN MIND, it is a very good idea to always get past the document.Close(); before you terminate a Debug session.

The Help menu contains links to two important sources of information. Try them. You will see why they are very important, especially if you are determined to create your own PDFs.

This solution was created using Visual Studio 2005 and before building the solution you need to use the VS2005 Project > Add Reference menu command to provide a Reference to iTextSharp.DLL version 4.x. If you don't have this, you can download just the DLL or the source [and build it yourself] using one of these links to SourceForge.net:

If you have Visual Studio 2005 then you should be able to use the project source code "out of the box" - simply build and run. The code itself is not rocket science. It is reasonably documented. More important, you have access to a wealth of information on what the code does via valuable links in the Help menu. If you don't have Visual Studio 2005, you will have to ask a more experienced friend.
(Don't ask me, as I:

The port involves names such as VVX.iTextSharp_Tutorials, addCell/AddCell, RIGHT/ALIGN_RIGHT, DoGetImageFile(...), DoLocateImageFile(...), using System.Drawing;, and #if ... #else ... #endif blocks.

VVX.About is a simple class that provides a "cheap", zero maintenance, "Help | About" message box. If you don't know how to access information in an assembly, it can show you one way of extracting some information from it. See VVX.About.Info(...) and VVX.About.Show().

VVX.MsgBox is a class to simplify access to MessageBoxes of different kinds. It helped me learn how to use message boxes more efficiently. For example, it provides a MsgBox.Confirm(...) method.

VVX.File is a small helper built on System.IO.FileInfo whose Exists method lets you write:

if(VVX.File.Exists(filename))
{
    //... do something
}

If you want to learn more about iText, on which iTextSharp is based, click here. If iTextSharp is too difficult you might consider using one of several commercial PDF solutions, some of them are quite inexpensive. However, I haven't had much success with them. (The only place I have found creating PDFs to be a breeze is on my iMac; creating PDFs on it from any application is a no-brainer! And you get beautiful PDFs, too.)

I have learned a lot from people like you who have anonymously and freely shared your experiences with the world. Perhaps, you gave a little, but I learned a lot. If this utility helps even one person, you've made my day and given me a chance to give back to the community. So, thank you! Please consider sending in a donation to one of my favorite charities: Year Up or to any similar NGO (non-governmental organization) in the world that is selflessly doing good work and helping people in need. Give a little, get a lot!
This article, along with any associated source code and files, is licensed under A Public Domain dedication

Tutorials on creating PDF files using C#
http://www.codeproject.com/Articles/18040/Tutorials-on-creating-PDF-files-using-C?msg=4296982
> {-# LANGUAGE DeriveDataTypeable #-}
> -- |
> -- Module      : Control.Concurrent.MSem
> -- Copyright   : (c) Chris Kuklewicz 2011
> -- License     : 3 clause BSD-style (see the file LICENSE)
> --
> -- Maintainer  : haskell@list.mightyreason.com
> -- Stability   : experimental
> -- Portability : non-portable (concurrency)
> --
> -- This is a literate haskell version of Control.Concurrent.MSem for increased clarity.
> --
> -- A semaphore in which operations may 'wait' for or 'signal' single units of value. This module
> -- is intended to improve on "Control.Concurrent.QSem".
> --
> -- This semaphore gracefully handles threads which die while blocked waiting. The fairness
> -- guarantee is that blocked threads are serviced in a FIFO order.
> --
> -- If 'with' is used to guard a critical section then no quantity of the semaphore will be lost if
> -- the activity throws an exception or if this thread is killed by the rest of the program.
> --
> -- 'new' can initialize the semaphore to negative, zero, or positive quantity.
> -- 'wait' always leaves the 'MSem' with non-negative quantity.
> -- 'signal' always adds one to the quantity.
> --
> -- The functions below are generic in (Integral i) with specialization to Int, Word, and Integer.
> --
> -- Overflow warning: These operations do not check for overflow errors. If the Integral type is too
> -- small to accept the new total then the behavior of 'signal' is undefined. Using (MSem
> -- Integer) prevents the possibility of an overflow error. [ A version of 'signal' that checks the upper
> -- bound could be added, but how would it report failure and how would you use this sanely?
] > -- > > module Control.Concurrent.MSem > (MSem -- do not export the constructor, kept abstract > , new -- :: Integral i => i -> IO (MSem i) > , with -- :: Integral i => MSem i -> IO a -> IO a > , wait -- :: Integral i => MSem i -> IO () > , signal -- :: Integral i => MSem i -> IO () > , peekAvail -- :: Integral i => MSem i -> IO i > ) whereThe above export list shows the API. The amount of value in the orignal QSem is always of type Int. This module generalizes the type to any Integral, where comparison (<) to 'fromIntegral 0' and 'pred' and 'succ' are employed. The 'new', 'wait', and 'signal' operations mimic the QSem API. The peekAvail query is also provided, primarily for monitoring or debugging purposes. The with combinator is used to safely and conveniently bracket operations. > import Prelude( Integral,Eq,IO,Int,Integer,Maybe(Just,Nothing) > , seq,pred,succ,return > , (.),(<),($),($!) ) > import Control.Concurrent.MVar( MVar > , withMVar,modifyMVar,modifyMVar_,tryPutMVar > , newMVar,newEmptyMVar,putMVar,takeMVar,tryTakeMVar) > import Control.Exception(bracket_,uninterruptibleMask_,mask_) > import Control.Monad(join) > import Data.Typeable(Typeable) > import Data.Word(Word)The import list shows that most of the power of MVar's will be exploited, and that the rather dangerous uninterruptibleMask_ will be employed (in 'signal'). A new semaphore is created with a specified avaiable quantity. The mutable available quantity will be called the value of the semaphore for brevity's sake. The use of a semaphore involves multiple threads executing 'wait' and 'signal' commands. This stream of wait and 'signal' commands will be executed as if they arrive in some sequential, non-overlapping, order which is an interleaving of the commands from each thread. From the local perspective of a single thread the semantics are simple to specify. The 'signal' command will find the MSem to have a value and mutate this to add one to the value. 
The 'wait' command will find the MSem to have a value and if this is greater than zero it will mutate this to be one less and finish, otherwise the value is negative or zero and the execution of the 'wait' thread will block. Eventually another thread executes 'signal' and raises the value to be positive, at this point the blocked 'wait' thread will reduce the value by one and finish executing the 'wait' command. From a broader perspective there is a question of precedence and starvation. If there is a blocked wait thread and a second 'wait' command starts to execute then will the second thread "find the MSem to have a value" before or after the orignal blocked thread has finished? If there are several blocked 'wait' threads and a 'signal' arrives then which blocked thread has priority to take the quatity and finish waiting? Are there any fairness guarantees or might a blocked thread never get priority over its bretheren leading to starvation? I have designed this module to provide a fair semaphore: multiple 'wait' threads are serviced in FIFO order. All 'signal' operations, while they may block, are individually quick. There are precisely three components, all MVars alloced by 'new': queueWait, quantityStore, and headWait. 1) The 'wait' operations are forced into a FIFO queue by taking an (MVar ()) called queueWait during their operation. The thread holding this token is the "head" waiter. 2) The 'signal' operations are forced into a FIFO queue by taking the MVar called quantityStore which holds an integral value. 3) The logical value stored in the semaphore might be represented by one of two different states of the semaphore data structure, depending on whether 'headWait :: MVar ()' is empty or full. In this module a full headWait reprents a single unit of value stored in the semaphore. > -- | A 'MSem' is a semaphore in which the available quantity can be added and removed in single > -- units, and which can start with positive, zero, or negative value. 
> data MSem i = MSem { quantityStore :: !(MVar i)
>                    , queueWait :: !(MVar ())
>                    , headWait :: !(MVar ())
>                    }
>   deriving (Eq, Typeable)

> -- | 'new' allows positive, zero, and negative initial values. The only way to
> -- achieve a negative value with MSem is to start negative with 'new'. Once a negative quantity becomes non-negative
> -- by use of 'signal' it will never later be negative.
> new :: Integral i => i -> IO (MSem i)
> {-# SPECIALIZE new :: Int -> IO (MSem Int) #-}
> {-# SPECIALIZE new :: Word -> IO (MSem Word) #-}
> {-# SPECIALIZE new :: Integer -> IO (MSem Integer) #-}
> new initial = do
>   newQuantityStore <- newMVar $! initial
>   newQueueWait <- newMVar ()
>   newHeadWait <- newEmptyMVar
>   return (MSem { quantityStore = newQuantityStore
>                , queueWait = newQueueWait
>                , headWait = newHeadWait })

Note that the only MVars that get allocated are all by these three commands in 'new'. The other commands change the stored values but do not allocate new mutable storage. None of these three MVars can be simply replaced by an IORef because the possibility of blocking on each of them is used in the design. A design with two MVars is possible but I think it would have more contention between threads and be more complex to ensure thread safety.

There are four operations on the semaphore leading to two possible states for headWait:

1) If the most recent operation to finish was 'new' then headWait is definitely empty and the value of the MSem is the quantity in quantityStore.

2) If the most recent operation to finish was 'wait' then headWait is definitely empty and the value of the MSem is the quantity in quantityStore.

3) If the most recent operation to finish was a 'signal' and the new value is positive then headWait is definitely full and the value of the MSem is the quantity in quantityStore PLUS ONE.

4) If the most recent operation to finish was a 'signal' and the new value is non-positive then headWait is definitely empty and the value of the MSem is the quantity in quantityStore.

If the "head" 'wait' thread finds a non-positive value then it will need to sleep until being awakened by a future 'signal'.
This sleeping is accomplished by the head waiter taking an empty headWait. All uses of the semaphore API to guard execution of an action should use 'with' to simplify ensuring exceptions are safely handled. Other uses should use still try and use combinators in Control.Exception to ensure that no 'signal' commands get lost so that no quantity of the semaphore leaks when exceptions occur. > -- | 'with' takes a unit of value from the semaphore to hold while performing the provided > -- operation. 'with' ensures the quantity of the sempahore cannot be lost if there are exceptions or > -- if killThread is used. > -- > -- 'with' uses 'bracket_' to ensure 'wait' and 'signal' get called correctly. > with :: Integral i => MSem i -> IO a -> IO a > {-# SPECIALIZE with :: MSem Int -> IO a -> IO a #-} > {-# SPECIALIZE with :: MSem Word -> IO a -> IO a #-} > {-# SPECIALIZE with :: MSem Integer -> IO a -> IO a #-} > with m = bracket_ (wait m) (signal m) > -- |'wait' will take one unit of value from the sempahore, but will block if the quantity available > -- is not positive. > -- > -- If 'wait' returns normally (not interrupted) (the FIFO guarantee). > wait :: Integral i => MSem i -> IO () > {-# SPECIALIZE wait :: MSem Int -> IO () #-} > {-# SPECIALIZE wait :: MSem Word -> IO () #-} > {-# SPECIALIZE wait :: MSem Integer -> IO () #-} > wait m = mask_ . withMVar (queueWait m) $ \ () -> do > join . modifyMVar (quantityStore m) $ \ quantity -> do > mayGrab <- tryTakeMVar (headWait m) -- First try optimistic grab on (headWait w) > case mayGrab of > Just () -> return (quantity,return ()) -- Took unit of value, done > Nothing -> if 0 < quantity -- Did not take unit of value, check quantity > then let quantity' = pred quantity -- quantity' is never negative > in seq quantity' $ return (quantity', return ()) > else return (quantity, takeMVar (headWait m)) -- go to sleepThe needed invariant is that 'wait' takes a unit of value iff it returns normally (i.e. it is not interrupted). 
The 'mask_' is needed above because we may decrement 'headWait' with 'tryTakeMVar' and must then finish the 'withMVar' without being interrupted. Under the 'mask_' the 'wait' might block and then be interruptible at one or more of

1) 'withMVar (queueWait m)' : the 'wait' dies before becoming head waiter while blocked by a previous 'wait'.

2) 'modifyMVar (quantityStore m)' : the 'wait' dies as head waiter while blocked by a previous 'signal'.

3) 'takeMVar (headWait m)' from 'join' : the 'wait' dies as head waiter while sleeping on 'headWait'.

All three of those are safe places to die. The unsafe possibilities would be to die after a 'tryTakeMVar (headWait m)' returns 'Just ()' or after 'modifyMVar' puts the decremented quantity into (quantityStore m). These are prevented by the 'mask_'.

Note that the head waiter must also get to the front of the FIFO queue of signals to get the value of 'quantityStore'. Only the head waiter competes with the 'signal' & peek threads for obtaining 'quantityStore'.

> -- | 'signal' adds one unit to the semaphore. Overflow is not checked.
> --
> signal :: Integral i => MSem i -> IO ()
> {-# SPECIALIZE signal :: MSem Int -> IO () #-}
> {-# SPECIALIZE signal :: MSem Word -> IO () #-}
> {-# SPECIALIZE signal :: MSem Integer -> IO () #-}
> signal m = uninterruptibleMask_ . modifyMVar_ (quantityStore m) $ \ quantity -> do
>   if quantity < 0
>     then return $! succ quantity
>     else do
>       didPlace <- tryPutMVar (headWait m) ()  -- quantity is never negative
>       if didPlace
>         then return quantity
>         else return $! succ quantity

The 'signal' operation first has the FIFO grab of (quantityStore m). If 'tryPutMVar' returns True then a currently sleeping head waiter will be woken up. The 'modifyMVar_' will block until prior 'signal' and 'peek' threads and perhaps a prior head 'wait' finish. This is the only point that may block. Thus 'uninterruptibleMask_' only differs from 'mask_' in that once 'signal' starts executing it cannot be interrupted before returning the unit of value to the MSem.
All the operations 'signal' would be waiting for are quick and are themselves non-blocking, so the uninterruptible operation here should finish without arbitrary delay. Consider 'with m act = bracket_ (wait m) (signal m) act'; refer to the definition of 'bracket_' for the details. Specifically a killThread arrives at one of these points:

1) during (wait m) the exception is masked by both 'bracket' and 'wait' so this occurs at one of the blocking points mentioned above. This does not affect the MSem, and aborts the 'bracket_' without calling act or (signal m).

2) during (restore act) the `onException` in the definition of 'bracket' will shift control to (signal m).

3) during (signal m) regardless of how act exited. Here we know (wait m) exited normally and thus took a unit of value from the MSem. The mask_ of 'bracket' ensures that the uninterruptibleMask_ in 'signal' ensures that the unit of value is returned to MSem even if 'signal' blocks on 'modifyMVar_ (quantityStore m)'.

4) Outside of any of the above the mask_ in 'bracket' prevents the killThread from being recognized until one of the above or until the 'bracket' finishes.

If 'signal' did not use 'uninterruptibleMask_' then point (3) could be interrupted without returning the value to the MSem. Avoiding losing quantity is the primary design criterion for this semaphore library, and I think it requires this apparently safe use of uninterruptibleMask_ to ensure that 'signal' can and will succeed.

> -- | 'peekAvail' reports the quantity currently available in the semaphore.
> peekAvail :: Integral i => MSem i -> IO i
> {-# SPECIALIZE peekAvail :: MSem Int -> IO Int #-}
> {-# SPECIALIZE peekAvail :: MSem Word -> IO Word #-}
> {-# SPECIALIZE peekAvail :: MSem Integer -> IO Integer #-}
> peekAvail m = mask_ $ withMVar (quantityStore m) $ \ quantity -> do
>   extraFlag <- tryTakeMVar (headWait m)
>   case extraFlag of
>     Nothing -> return quantity
>     Just () -> do putMVar (headWait m) ()  -- cannot block
>                   return $! succ quantity

The implementation of peekAvail is slightly complicated by the interplay of tryTakeMVar and putMVar. Only this thread will be holding the lock on quantityStore and the putMVar only runs to put a () just taken from headWait.
Thus the putMVar will never block. The 'mask_' ensures that there can be no external interruption between a tryTakeMVar and putMVar.
http://hackage.haskell.org/package/SafeSemaphore-0.9.0/docs/src/Control-Concurrent-MSem.html
Associates the given data with the specified beacon. Attachment data must contain two parts:

- A namespaced type.
- The actual attachment data itself.

The namespace must be one of the values returned by the namespaces endpoint, while the type can be a string of any characters except for the forward slash (/) up to 100 characters in length. Attachment data can be up to 1024 bytes long.

Authenticate using an OAuth access token from a signed-in user with Is owner or Can edit permissions in the Google Developers Console project.

HTTP request

POST{beaconName=beacons/*}/attachments

The URL uses Google API HTTP annotation syntax.

Path parameters

Query parameters

Request body

The request body contains an instance of BeaconAttachment.

Response body

If successful, the response body contains a newly created instance of BeaconAttachment.

Authorization

Requires the following OAuth scope:

For more information, see the Auth Guide.
https://developers.google.cn/beacons/proximity/reference/rest/v1beta1/beacons.attachments/create
For the longest time, I've had a difficult time accepting compliments when they are given to me. Whether it's work or my personal life related, I'll almost always tell you it was "just nothing". I tend to overanalyze things! Recently, I've been making an effort to say "thank you" regardless of how I feel about the compliment, and truly be open to their feedback. Do you find it difficult to accept compliments when they are given? 18 Replies HD IT Solutions is an IT service provider. I accept compliments (back-handed or otherwise) and insults equally. Brand Representative for Cylance, Inc. I definitely used to feel uncomfortable accepting compliments. Now I just deal with it by immediately complimenting them in return (genuinely!!) to spread the good vibes. .gif) When I worked at a beauty supply industry (distribution), I used to get invited to shows and events. I would get lots of compliments of "You're a handsome young man!". Or in other words I got hit on a lot. That took a while for me getting used to. Then eventually it no longer bothered me and my response became, "Hopefully that'll help me with my job huh?" or "I'll let my wife know". And my compliments had nothing to do with IT majority of the time. Compliments have always made me uncomfortable until very recently. Probably stems from the fact that most of the "compliments" I received as a kid were either a way to sarcastically make fun of me or were in the form of "Great job, but here's all the things you did wrong." I really didn't develop any real confidence or self worth till a couple years ago, and really within the past year or so am finally getting to the point where compliments don't make me feel completely uncomfortable anymore. Sure, here's my advice.Sure, here's my advice.Wow jonemac, you're so good with computers! While you're here, can I ask you a question about my aunt's cousin's dog's VCR?Wow jonemac, you're so good with computers! 
While you're here, can I ask you a question about my aunt's cousin's dog's VCR? I have no problem with them as long as they're genuine and the person doesn't want anything in return.. I'd tell him to bury it in the back yard and fertilize often. Who knows, maybe another family member will grow from it.. ;-)
https://community.spiceworks.com/topic/1773027-do-you-have-a-hard-time-accepting-compliments
Coding Cheat Sheet

For general techniques, see Algorithm Techniques. For explanations of problems see Algorithm Problems.

Strings

Length of a string:

C++: stringVar.length()

Adding a character to a string:

C++:
// Appending to a std::string.
init.append(add);
// Concatenating C-style strings.
strcat(init, add);
// Appending with operator+.
init = init + add;

To convert anything into a string use to_string(). To erase characters, we use string.erase(pos, len). To convert to upper case or lower case you have to iterate over every character and use tolower(int c) or toupper(int c). They return the case converted character.

To check if a character is alphanumeric (letters and numbers) or just alphabetic:

isalnum(char c) // alphanumeric
isalpha(char c) // alphabetic
isdigit(char c) // is a number

Returns false (0) if not.

Split

How to split a string into an array of words.

// Utility function to split the string using a delim.
vector<string> split(const string &s, char delim) {
    vector<string> elems;
    stringstream ss(s);
    string item;
    while (getline(ss, item, delim))
        elems.push_back(item);
    return elems;
}

Maps

Going to the middle element

Use std::advance() to advance the iterator from begin to size()/2:

#include <iterator>
auto it = mySet.begin();
std::advance(it, mySet.size()/2);

Iterating over a map and deleting an element

This is tricky! The following does NOT work!!

for(auto& entry : dm) {
    if( dm.find(entry.second) == dm.end() ) {
        dm.erase(entry.first);
        changed = true;
    }
}

Use a regular for loop with const iterators.

for (auto it = m.cbegin(); it != m.cend() /* not hoisted */; /* no increment */) {
    if (must_delete) {
        m.erase(it++);  // or "it = m.erase(it)" since C++11
    } else {
        ++it;
    }
}
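A self-contained version of the erase-while-iterating pattern above; the map contents and the "erase even values" rule are made up for illustration.

```cpp
#include <map>

// Walk the map exactly once, erasing every entry whose value is even.
// erase(iterator) returns the next valid iterator (C++11), so the loop
// advances either through erase or through ++it, never both.
std::map<int, int> EraseEvens(std::map<int, int> m) {
    for (auto it = m.cbegin(); it != m.cend(); /* no increment */) {
        if (it->second % 2 == 0) {
            it = m.erase(it);
        } else {
            ++it;
        }
    }
    return m;
}
```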
There are version for all flavours of numbers: long stol(string), float stof(string), double stod(string),… see Ordered Container Order In C++ associative containers such as sets and maps store values in ascending order. You can easily change this by passing a comparative function that takes in two parameters a and b and returns true if a goes before b. STL provides basic versions of these functions for you! // Here if greater<int> is used to make // sure that elements are stored in // descending order of keys. map<int, string, greater <int> > mymap; Vectors Sorting a 2D Vector To make a sorting function do as follows. Remember, it means what has to be true for v1 to go before v2. bool sortFunc(const vector<int> &v1, const vector<int> &v2) { return v1[0] < v2[0]; } Or with a lambda: sort(in.begin(), in.end(), [] (vector<int> &a, vector<int> &b) {return a[0] < b[0];}); Initialize 2D Vector Use the std::vector::vector(count, value) constructor that accepts an initial size and a default value. vector<vector<int>> nums(NUMBER_OF_ROWS, vector<int>(NUMBER_OF_COLS, 0) ); Unordered Set An ordered set of unique numbers that are stored by a key, which is equal to the number in each element. They are not ordered and allow for the fast retrieval and insertion of each element (constant time O(1)). To make them you use the following syntax: #include <unordered_set> std::unordered_set<int> my_set; my_set.insert(12); std::unordered_set<int>::iterator it = my_set.find(13); if(it == my_set.end()) { std::cout << "13 not found!" << std::endl; } else { std::cout << "Found: " << it* << std::endl; // Erase it: my_set.erase(it); // my_set.erase(13); // you can erase by iterator or by key, the function will return 0 or 1 depending on if erased. } Unordered Sets of Pairs Pairs need a special hashing function for them to work. Regular ordered sets work just fine too. 
```cpp
struct pair_hash {
    template <class T1, class T2>
    std::size_t operator () (std::pair<T1, T2> const &pair) const {
        std::size_t h1 = std::hash<T1>()(pair.first);
        std::size_t h2 = std::hash<T2>()(pair.second);
        return h1 ^ h2;
    }
};

unordered_set<pair<int,int>, pair_hash> mySet;
```

Queue

A queue is just a line, man. First in, first out. You can only remove the element at the front of the line of a queue. Remember that!

- pop() - pops the front of the queue; it does NOT return the element!
- push() - pushes an element to the back of the queue.
- front() - reference to the element at the front of the queue.
- back() - reference to the element at the back of the queue.
- size()
- empty() - returns whether the queue is empty or not.

Priority Queue

A C++ priority_queue is a heap, which is by default a max heap as it uses the standard less comparator. It is easy to get confused here, because the top() function isn't getting the begin element of the container. Just remember: by default a priority queue is a max heap.

```cpp
priority_queue<int> max_heap;
max_heap.push(10);
max_heap.push(1);
cout << "Top of max_heap: " << max_heap.top() << endl; // prints 10

priority_queue<int, vector<int>, greater<int>> min_heap;
min_heap.push(10);
min_heap.push(1);
cout << "Top of min_heap: " << min_heap.top() << endl; // prints 1
```

Using Custom Data With Priority Queues

There are numerous ways of doing this but here is the least verbose way:

```cpp
struct node {
    node(int _id, int _value) : id(_id), value(_value) {}
    int id;
    int value;
};

bool operator<(const node& A, const node& B) {
    /*--- FOR A MAX HEAP ---*/
    // B will go after A and have a higher priority than A.
    return A.value < B.value;

    /*--- FOR A MIN HEAP ---*/
    // A GREATER THAN B means B will go before A; the smaller
    // element A will have a higher priority than B.
    // return A.value > B.value;
}

// Now you initialize your priority queue normally, as it will call the
// overloaded LESS THAN comparator.
priority_queue<node> pQ;
```

Another option:

```cpp
typedef pair<string, int> p_d;

class Compare {
public:
    bool operator() (const p_d& a, const p_d& b) {
        if(a.second == b.second) {
            return a.first > b.first;
        } else {
            return a.second < b.second;
        }
    }
};

// This is a max heap based on the int in the pair; if the numbers are the
// same, the word that is lower alphabetically wins.
priority_queue<p_d, vector<p_d>, Compare> heap;
```

Range Based Loops

Range based loops return copies unless a reference is requested. For maps they return the pair, so no pointer member access is required.

```cpp
unordered_map<int, int> counts;
multimap<int,int, greater<int>> sorted;
for(auto& i : counts) {
    sorted.insert(make_pair(i.second, i.first));
}

vector<int> out;
for(auto& i : sorted) {
    out.push_back(i.second);
    if(--k == 0) break;
}
```

Lists

Lists are C++ implementations of doubly linked lists. They allow for manipulation of data from both ends of the list. The following functions allow you to control them:

- pop_back()
- push_back()
- pop_front()
- push_front()
- front()
- back()

List iterators are bidirectional! --myList.end() is a valid iterator that points to the last element of the list (assuming the list has size greater than zero). We can also splice elements into a position. For example, say we had a list like the following:

```cpp
myList.push_back(1);
myList.push_back(2);
myList.push_back(3);
myList.push_back(4);
```

1 ↔ 2 ↔ 3 ↔ 4

And we want to place 4 in the front. We can do the following:

```cpp
myList.splice(myList.begin(), myList, --myList.end());
```
https://paulsammut.com/doku.php/coding_cheat_sheet
Introducing HostSwitcher

There are also independently developed libraries created by industrious developers that allow you to create strictly a WPF solution. As I believe it is easier to learn with real code rather than study academic exercises, I present in this article a complete, open-source tray application built on my framework. While it is possible that only a fraction of the readership needs this particular utility, you should still find it valuable as a real-world example.

Introducing HostSwitcher: A Tray App for Some of You

My tray application, HostSwitcher, lets you re-route entries in your hosts file with a single click on the context menu attached to the icon in the system tray. I needed this myself because my work involved a network application that communicated with several servers at any given time. Frequently, sometimes several times a day, I need to reroute entries in my hosts file among several sets of servers: one set of servers is for production, one for the current development effort, one for the legacy development effort, and one for scratch work. Thus, several times a day I edit the hosts file, commenting out one group of lines and uncommenting a second group. Each developer on the project has to do this same silly procedure. I wanted a simple, easy, and fast way to do that automatically.

Requirements

I needed two things to make this work: first, a tray application framework (that I could tailor to access servers from its context menu), and second, a mechanism to drive the generation of the contents of the context menu from the hosts file. I did not find any plug-and-play library components for the tray application framework so I needed to craft my own. My requirements were:

- First and foremost, the app must exist as a tray application. Many of the solutions, as I hinted earlier, are really WinForms applications coerced into being tray applications (akin to forcing a square peg into a round hole!).
- Only one instance of the app may run.
Attempting to start a second instance immediately terminates.
- The app, from the icon in the system tray, must be able to open a WinForms or WPF window as needed.
- Closing the tray application also closes any open windows.
- The system tray icon must easily support dynamic context menus.

The last requirement in the list directly supports the second facet of this application, converting hosts file data to context menu choices. I quickly settled on adding meta-comments to overlay on existing hosts file entries. Since these are comments, they are completely transparent to other consumers of the hosts file. Because there is markup on IP addresses right in the hosts file, this reduces maintenance by avoiding duplication with some external resource, and makes it very simple to modify hosts entries and groupings at any time.

Instrumenting Your Hosts File

HostSwitcher is simple to operate; almost everything you need to know is on the one-page introduction (Figure 1) that appears automatically upon first launch if you have not yet decorated your hosts file. (You can also open that introduction page from the context menu.) As shown in the figure, you start with the servers listed in your hosts file. Organize these into server groups, where all servers in a group should operate in unison; that is, since each server group is bound to a single context menu entry, they are all enabled or all disabled together. Next you can cluster your server groups with other related server groups to form a project, where each server group in the project should be mutually exclusive of its sibling groups. (They are mutually exclusive within the perspective of HostSwitcher, but there is, of course, nothing preventing you from editing your hosts file manually and corrupting it however you please.) Projects appear as top-level entries in the context menu; server groups appear as children of their parent project. In the example in Figure 1, only a small portion of the hosts file is shown.
(I happen to have thousands of lines in this hosts file as a protective measure on my computer.) HostSwitcher is only aware of these 16 lines, though, because they are the ones I have decorated with meta-comments (shown in green). Each meta-comment indicates a project/server group pair in brackets with a virgule separating them. You may use any other characters you please for both the project name and the server group name. Each time you open HostSwitcher's context menu it rereads the hosts file. That way, if you do edit it externally it will pick up on your most current changes. Thus, as soon as you save the newly decorated hosts file you can open the context menu and it will correctly reflect your projects; no need to restart the application.

Figure 1: HostSwitcher Information Page. This shows how to decorate your hosts file and how that manifests in the context menu attached to the icon in the system tray.

Usage

HostSwitcher, as mentioned earlier, automates the process of commenting and uncommenting specific lines in your hosts file simply by selecting a server group from the context menu of the system tray icon. It operates on all the relevant hosts file lines of a project at once, resolved down to its constituent server groups. When it enables one server group it just uncomments the lines of the hosts file containing the servers in that group. Similarly, when it disables a server group it comments out the respective lines in the hosts file. At the user level, selecting a server group in the context menu enables that server group and disables all other server groups in the project. The example in Figure 1 defines two projects: Project-1 and Other Project. The screen shot of the context menu at the bottom shows the Project-1 submenu opened and the three server groups contained therein. Observe how these come directly from the hosts file. Each server group has a colored icon indicating its state: green for enabled, red for disabled, and yellow for mixed.
Technically speaking, mixed is bad. You do not want a server group to have both enabled and disabled servers because that violates the cardinal rule of servers in a server group—they should always operate in unison. By the way, when you hover over a server group name without selecting it, its tooltip indicates how many individual servers are enabled and how many are disabled. So you may determine from the tooltips how many servers you have in any particular group. Another bad condition is having more than one server group enabled within a single project; i.e. you should never have more than one green icon in a project. Remember the cardinal rule of server groups within a project: they are mutually exclusive. Fortunately, it is trivial to clean up any bad states—just select a server group in a "corrupt" project on HostSwitcher's context menu. Remember: selecting a server group enables that server group and disables all other server groups in the project. It follows, therefore, if you need to have "corrupt" entries (as far as HostSwitcher is concerned) for performing other tasks on your computer, when you want to come back to HostSwitcher just select your target server group and you are ready to go.

Features

Besides activating a server group, HostSwitcher's context menu also allows you to:

- Open the hosts file itself (in Notepad).
- Open the folder containing the hosts file (in Explorer).
- View a server details page showing the results of parsing your hosts file (Figure 2)—this is handy to debug your hosts file setup.
- View the introductory page (Figure 1) to remind yourself how to instrument your hosts file.

The HostSwitcher icon itself has these features:

- Double-clicking the icon in the system tray re-opens the introduction/help page.
- Hovering over the icon identifies the selected server group for your first few projects. (A .NET limitation restricts this tooltip to just 63 characters.)
Host Details View

Particularly when you first attempt to decorate your hosts file, the host details window comes in very handy to check the results of your work. As Figure 2 shows, you see both the actual hosts file entries and the meta-comments attached to them. Note that this includes all instrumented lines, whether or not the entire line is commented out. In fact, it tells you which lines are completely commented out in the Status column. Active (i.e. uncommented) lines are marked as enabled; commented lines are marked as disabled. This window into your hosts file provides one additional feature that occasionally comes in quite handy: you can sort the table by clicking on any of the column headers. In Figure 1 notice that the entries are written sorted by server group. In Figure 2 I have clicked on the project name column header to resort the entries for a different perspective. Here you see for each host name the different IPs it will point to depending on your choice of server group.

Figure 2: Host Details View. This window lets you check whether you have instrumented your hosts file correctly, showing you what HostSwitcher's parsing ends up with.

Running on Windows 7

Just for completeness, here is a tip for running under Windows 7. A new tray icon is set by default to only appear when it "has something to say", i.e. when it issues a notification. It is still in the tray, but you have to click the leftmost icon to reveal the hidden tray icons, then—in the case of HostSwitcher—right-click its icon to open its context menu (bottom left, Figure 3). All terribly cumbersome. You can adjust the properties easily, though. Click the Customize link at the bottom of the hidden icon panel to open the Notification Area Icons control panel. Scroll until you find HostSwitcher and change its setting from Only show notifications to Show icon and notifications (top, Figure 3) to migrate the icon down to the main tray (bottom right, Figure 3).
Figure 3: Adjusting the Tray Icon Properties. The new icon for HostSwitcher is hidden in the tray by default, but you can adjust it to sit in the main tray.

Execution Has Its Privileges

Because the hosts file in Windows is in a protected, system directory you must have administrative privileges to run HostSwitcher. Even with administrator privileges, on Windows 7 you can still choose to run your applications normally or with administrative rights. HostSwitcher must be run as the latter. If you use the installer accompanying this article, or even if you compile the project in Visual Studio yourself, it automatically attempts to run with administrative rights. If for some reason you like to explicitly invoke Run as Administrator when you launch a program requiring it, you can turn off the automatic setting in the app.manifest file in the HostSwitcher project directory: comment out the line for requestedExecutionLevel then recompile the solution.

The Tray Application Framework

The Secret of the Tray

The secret to tray applications is… Well, it is obvious as soon as you read it. I found assorted articles on the web that were ignorant of it, and others that were not, but none realized the gem of knowledge they had to convey to their readers.

"I thought at first that you had done something clever, but I see that there was nothing in it, after all." —The Red-Headed League, A. Conan Doyle

Many of the articles you find on tray applications use a form-centric focus: you start with your form, then minimize it to the system tray and hide it from the taskbar. That approach mostly works but it is backwards. The key to applying best practices to designing a tray application is shifting your perspective to a tray-centric focus: instead of thinking of your form (be it WinForms or WPF) as the master controller of your application, think of the little tiny tray icon as the master controller. It runs right in the system tray, spawning child windows only when needed.
As I was experimenting with the form-centric approach I found one significant deficiency: it is challenging to start an application in the tray. You could click a button on the form to send it to the tray, but actually starting it there—with the form completely hidden—is difficult. Either way you need to make sure to remove your app from the taskbar when you go to the tray and rematerialize it when you restore your app. This bookkeeping hassle is not even relevant in the tray-centric approach: you start in the tray, which has no taskbar presence, and only open a form at an appropriate point. Opening the form puts it in the taskbar; closing it again removes it, all automatically.

The Master Controller of a Tray App: The NotifyIcon

With either a form-centric or a tray-centric approach, you still use the same mechanism—the NotifyIcon. Instantiating a NotifyIcon creates your tray icon. You do not have to hook up the NotifyIcon to anything; simply instantiating it is all you need. This method, then, is the entire code to create a system tray icon with a custom icon, tooltip, (empty) context menu, and a couple event handlers:

```csharp
private void InitializeContext()
{
    components = new System.ComponentModel.Container();
    notifyIcon = new NotifyIcon(components)
    {
        ContextMenuStrip = new ContextMenuStrip(),
        Icon = new Icon(IconFileName),
        Text = DefaultTooltip,
        Visible = true
    };
    notifyIcon.ContextMenuStrip.Opening += ContextMenuStrip_Opening;
    notifyIcon.DoubleClick += notifyIcon_DoubleClick;
}
```

Tailoring Your Program Entry Point: The ApplicationContext

When you create a normal WinForms application, Visual Studio auto-generates part of the code for the form itself (stored in YourForm.designer.cs) but it also generates an even more crucial portion of code that is less exposed, stored in Program.cs. This file provides an entry point for your entire application—the Application object.
For a form-centric application, the key line of code that launches your form as the mainstay of the application is:

```csharp
Application.Run(form1);
```

Jessica Fosler, in her article Creating Applications with NotifyIcon in Windows Forms, breaks down that single statement into this equivalent code to provide a better understanding of what happens:

```csharp
ApplicationContext applicationContext = new ApplicationContext();
applicationContext.MainForm = form1;
Application.Run(applicationContext);
```

This sequence reveals the hidden intermediary component between your Form and the central Application.Run method—the ApplicationContext object. Attaching the form to the MainForm property of the ApplicationContext does two things. First, calling Application.Run invokes the form's Show method, opening your form to begin processing events. Second, it ensures the entire application terminates when the form closes. Those two actions describe just how a form-centric application behaves. For a tray-centric application, on the other hand, the entry point code is just as simple but significantly different:

```csharp
var applicationContext = new CustomApplicationContext();
Application.Run(applicationContext);
```

This code creates a custom ApplicationContext, which you will see next. Notably there is no mention of any Form here. It (or they) will be instantiated as needed by the custom ApplicationContext. When you pass the custom ApplicationContext to the Run method, rather than open a form, here you are just passing control to the ApplicationContext. The final bit of code you need to get from there to the InitializeContext method shown earlier is just the constructor for the custom ApplicationContext:

```csharp
public CustomApplicationContext()
{
    InitializeContext();
    hostManager = new HostManager(notifyIcon);
    hostManager.BuildServerAssociations();
    if (!hostManager.IsDecorated) { ShowForm(); }
}
```

The first statement materializes the user interface—in this case, the icon in the system tray. The remaining lines of the constructor are specific to the HostSwitcher application; yours will be tailored to your own application.
For this application, the code creates a HostManager (the workhorse of HostSwitcher) and does an initial pass of parsing the hosts file. Finally, if the parsing step indicated that the file has no decoration—probably indicating this is the first time the application has been launched—it displays the form showing basic program operation (see Figure 1).

Rounding Out the ApplicationContext

The web of connections is starting to materialize: the ApplicationContext constructor is called from the Main method in Program.cs. The constructor in turn invokes InitializeContext, which creates the NotifyIcon. InitializeContext also adds a couple event handlers to the NotifyIcon, notably the one that fires when the user opens the context menu. This event handler code, shown here, is specific to HostSwitcher, except for the important first line. Whenever you are dynamically generating a context menu you must set the e.Cancel flag to false: that allows the context menu to continue opening even if it is empty at the moment of clicking. (By the time it opens, the code in the event handler should have populated it, of course.)

```csharp
private void ContextMenuStrip_Opening(
    object sender, System.ComponentModel.CancelEventArgs e)
{
    e.Cancel = false;
    hostManager.BuildServerAssociations();
    hostManager.BuildContextMenu(notifyIcon.ContextMenuStrip);
    notifyIcon.ContextMenuStrip.Items.Add(new ToolStripSeparator());
    notifyIcon.ContextMenuStrip.Items.Add(hostManager
        .ToolStripMenuItemWithHandler("Show &Details", showDetailsItem_Click));
    notifyIcon.ContextMenuStrip.Items.Add(hostManager
        .ToolStripMenuItemWithHandler("&Help/About", showHelpItem_Click));
    notifyIcon.ContextMenuStrip.Items.Add(new ToolStripSeparator());
    notifyIcon.ContextMenuStrip.Items.Add(hostManager
        .ToolStripMenuItemWithHandler("&Exit", exitItem_Click));
}
```

For HostSwitcher, I reread the hosts file each time the user opens the context menu (BuildServerAssociations) to keep it current.
The hostManager then builds the custom portion of the context menu and the remaining lines here finish up the menu with commands to open different child windows. These next methods comprise the rest of the generic portion of the custom ApplicationContext, mostly verbatim from Fosler's Creating Applications with NotifyIcon in Windows Forms. They take care of proper cleanup in the class:

```csharp
private void exitItem_Click(object sender, EventArgs e)
{
    ExitThread();
}

protected override void Dispose(bool disposing)
{
    if (disposing && components != null) { components.Dispose(); }
}

protected override void ExitThreadCore()
{
    if (mainForm != null) { mainForm.Close(); }
    notifyIcon.Visible = false; // should remove lingering tray icon!
    base.ExitThreadCore();
}
```

Customizing WinForm Connections

The only remaining portions of the custom ApplicationContext involve hooking up any child forms you may need. First you need a method to display the form if it already exists, or create one if it does not. You should use the code shown here as a template for your own child forms, changing the one line that indicates the Form name and optionally, its arguments. You may also want to change the variable name; I use detailsForm here because it shows the details view for HostSwitcher.

```csharp
private void ShowDetailsForm()
{
    if (detailsForm == null)
    {
        detailsForm = new DetailsForm { HostManager = hostManager };
        detailsForm.Closed += detailsForm_Closed;
        detailsForm.Show();
    }
    else { detailsForm.Activate(); }
}
```

With this method in hand you need to invoke it from your tray icon's context menu and clean it up when closed:

```csharp
// attach to context menu items
void showDetailsItem_Click(object sender, EventArgs e)
{
    ShowDetailsForm();
}

// null out the forms so we know to create a new one.
void detailsForm_Closed(object sender, EventArgs e)
{
    detailsForm = null;
}
```

WPF Can Play, Too!

The NotifyIcon, as neutral as it may seem, exists in the WinForms namespace. I was wondering if it could play well with WPF too since WPF does not have its own.
In my previous article on WPF and WinForms interoperability (Mixing WPF and WinForms) I came to appreciate the enormous size and complexity of making these two technologies play well together and kudos to Microsoft for making it a reality. Because of the support from the Framework, it is trivial to use a WPF window instead of a WinForms window. First, create a WPF window. (Do this in a new, separate project because Visual Studio does not let you put a WPF window in a WinForms project. The project should be a WPF Custom Control Library. See Johan Danforth's blog entry Open a WPF Window from WinForms that shows you the step by step procedure in a very clear and concise article!) With that in hand you then use this slightly different method to display the form:

```csharp
private void ShowIntroForm()
{
    if (introForm == null)
    {
        introForm = new WpfFormLibrary.IntroForm();
        introForm.Closed += mainForm_Closed;
        ElementHost.EnableModelessKeyboardInterop(introForm);
        introForm.Show();
    }
    else { introForm.Activate(); }
}
```

The only difference from the WinForms version, discounting names, is the additional line invoking the EnableModelessKeyboardInterop method. Danforth explains that this line is important to properly handle keyboard input; further details are available in the MSDN forum post Windows Form opening WPF window as well. Then, just as with the WinForms version, you need to hook up the child form with event handlers. The first line is unique here only because I chose this child form to respond to double-clicking the tray icon; you could just as easily let it display the WinForm child.

```csharp
// attach to context menu items
void showHelpItem_Click(object sender, EventArgs e)
{
    ShowIntroForm();
}

// null out the forms so we know to create a new one.
void mainForm_Closed(object sender, EventArgs e)
{
    introForm = null;
}
```

As you can see, it is just as easy to use a WPF window as it is a WinForms window, even though the NotifyIcon is a WinForms class.
While WPF does not have a NotifyIcon native to the .NET framework, thanks to Philipp Sumi (the same industrious developer who created Sketchables for SketchFlow), there is a NotifyIcon available in pure WPF, enhancing the display capabilities of your tray icon with rich tool tips instead of just text, WPF context menus, flexible data binding, and rich balloon messages. It looks so enticing I am eager to convert HostSwitcher over to it at some point! Read Philipp's article, simply titled WPF NotifyIcon, and download his code to try it out.

Ensuring Only One Instance Executes: Mutual Exclusion

There are three commonly used techniques for enforcing that only one instance of your application may execute. Regardless of the technique, though, the algorithm is the same: very early in its startup each instance checks to see if any other instances are running and terminates if there are. The first instance, finding none, continues initialization while any subsequent ones obediently disappear.

Technique 1: Native .NET Support

The .NET framework includes a class (WindowsFormsApplicationBase) that you subclass to create a single instance manager. It is relatively simple to hook up. The only odd thing about it is that it exists in a Visual Basic namespace even though you are free to use it in other .NET languages just as readily. Thus, you must include a reference to Microsoft.VisualBasic.dll. Michael Kuehl has a good article on this approach which may be used for WinForms or WPF despite the title: WPF - Writing a Single Instance Application.
This is his code sample; I refer you to his article for details on what it does:

```csharp
public class SingleInstanceManager : WindowsFormsApplicationBase
{
    [STAThread]
    public static void Main(string[] args)
    {
        (new SingleInstanceManager()).Run(args);
    }

    public SingleInstanceManager()
    {
        IsSingleInstance = true;
    }

    public ExampleApplication App { get; private set; }

    protected override bool OnStartup(StartupEventArgs e)
    {
        App = new ExampleApplication();
        App.Run();
        return false;
    }

    protected override void OnStartupNextInstance(
        StartupNextInstanceEventArgs eventArgs)
    {
        base.OnStartupNextInstance(eventArgs);
        App.MyWindow.Activate();
        App.ProcessArgs(eventArgs.CommandLine.ToArray(), false);
    }
}
```

Technique 2: The Process Table

With this technique you check if your application appears in the list of running processes on your machine. This takes only a few lines of code in your Main() method. Bob Powell describes this technique succinctly in The Single Instance Application. This is his code sample:

```csharp
static void Main()
{
    string appProcessName =
        Path.GetFileNameWithoutExtension(Application.ExecutablePath);
    Process[] RunningProcesses = Process.GetProcessesByName(appProcessName);
    if (RunningProcesses.Length == 1) // just me, so run!
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new CustomApplicationContext());
    }
    else // switch to the first instance and exit
    {
        ShowWindowAsync(RunningProcesses[0].MainWindowHandle,
            (int)ShowWindowConstants.SW_SHOWMINIMIZED);
        ShowWindowAsync(RunningProcesses[0].MainWindowHandle,
            (int)ShowWindowConstants.SW_RESTORE);
    }
}
```

Technique 3: The Mutex Primitive

This technique uses the .NET Mutex (short for mutual exclusion) synchronization primitive. A mutex object allows you to obtain a resource and then lock it so that others may not obtain it until you release the lock. The application is obvious: here you want to allow the first instance of your application to run, and then create a lock so that other instances may not run.
In this code sample the Mutex object is embedded in the SingleInstance class: the Start method requests the mutex lock while the Stop method releases it. If the lock is not obtainable, i.e. this is not the first instance, the program terminates. Otherwise it continues to run and upon conclusion releases the lock.

```csharp
static void Main()
{
    if (!SingleInstance.Start()) { return; } // mutex not obtained so exit
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    try
    {
        Application.Run(new CustomApplicationContext());
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Program Terminated Unexpectedly",
            MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
    SingleInstance.Stop(); // all finished so release the mutex
}
```

This technique—the one I chose to apply to HostSwitcher—is thoroughly detailed in C# Single Instance App With the Ability To Restore From System Tray, written by a developer with the moniker of "devzoo". This has the clear benefit of only a couple lines of code added to your application once you reference the MutexManager library included in the HostSwitcher solution as a separate DLL. (My MutexManager library is a simple repackaging of devzoo's nicely encapsulated classes.) Note that the all-encompassing error trapping is important here—you want to ensure that you release the mutex even if your program throws an exception.

Strengths and Weaknesses of Mutex Techniques

There are a variety of factors to consider in deciding which technique to use to enforce the single instance principle. The WindowsFormsApplicationBase seems a bit more complex in instrumenting your code. The process list approach requires very simple coding and no additional libraries, but it does have some weaknesses, as K. Scott Allen describes in the blog entry The Misunderstood Mutex. Because of those weaknesses I prefer the Mutex approach.
On the other hand, notice in the code samples that the Mutex technique is the only one of the three where you cannot activate the first instance from subsequent instances. For a form-centric application, re-activating the first instance when the user attempts to launch a second instance is clearly a good usability goal. But for a tray-centric application like HostSwitcher there is little benefit—the "application" is in the system tray so there is not even a concept of bringing it to the foreground. I will admit, though, that some tray-centric applications would find it a benefit—those where the user operates primarily in child windows (e.g., a firewall or antivirus program). These typically live in your system tray but at any time you can open up a main window as long lasting as any normal desktop application. I have a solution in mind to strengthen the Mutex approach (though I have not yet implemented it): use the standard Mutex for enforcing a single instance, but then use the process table technique to find the first instance and activate it as demonstrated in the code sample above.

About the only thing I could not come to a good resolution on when designing this application is its name. HostSwitcher seems like a reasonable choice but I am not convinced it is the best choice. Thus, I would like to encourage you, dear reader, to post a comment at the bottom to state whether you think HostSwitcher succinctly conveys the intent of the program or to offer an alternative if you have something else in mind. SoftSwitch? HostRouter? HostOnToast? IpHop? NetSpinner? Let me hear from you!

Update: 18th November 2010

After this article was published I continued to improve the HostSwitcher code a bit--version 1.1 of both the installer package and the source package are now attached to the top of this article. Improvements include:

- (Internal) Enhanced LINQ code.
- (User-facing) Left-click on the tray icon now opens the context menu just like a right-click.
  (Thanks to Hans Passant's StackOverflow post for the technique to do this.)
- (User-facing) Selecting a new server group from the context menu now produces positive feedback in the form of a balloon tip in the tray.
https://www.simple-talk.com/dotnet/.net-framework/creating-tray-applications-in-.net-a-practical-guide/
This is a brief overview of the main elements of a React application’s data flow with Redux. This article assumes you are familiar with at least the basics of React.

Store

The Store holds the application’s entire State as a single JavaScript object, so the individual slices of State must be combined into one large object using combineReducers().

File: ~/reducers/index.js

    import { combineReducers } from 'redux';
    import posts from './posts';
    import comments from './comments';

    const rootReducer = combineReducers({ posts, comments });

    export default rootReducer;

In this example we are importing the posts and comments reducers and combining them into a single rootReducer, which is exported to our application ready to be picked up by the Provider.

Provider

A Provider receives the application’s data from the Store and makes it available to all the Containers.

    import { render } from 'react-dom';
    import { Provider } from 'react-redux';
    import { createStore } from 'redux';
    import rootReducer from './reducers/index';
    import Main from './components/Main';

    const store = createStore(rootReducer);

    const application = (
      <Provider store={store}>
        <Main/>
      </Provider>
    );

    render(application, document.getElementById('root'));

By wrapping the <Main /> Container in a Provider, all of the application’s data (the Store) is now available to all the children of the Provider.

Container

Containers are a gateway between State and Components. They take a piece of State from the Store and pass it into a Component as props using the mapStateToProps() method.

File: /components/App.js

    import { connect } from 'react-redux';
    import Main from './Main';

    function mapStateToProps(state) {
      return {
        posts: state.posts,
        comments: state.comments
      }
    }

    const App = connect(mapStateToProps)(Main);

    export default App;

The mapStateToProps() method accepts the state and returns only the relevant bits of state we need. The connect() method then attaches this new state object as props to the (imported) Main component.
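Since mapStateToProps() is just a pure function, its behaviour can be checked in plain JavaScript with no React at all. A minimal sketch (the state shape below is invented for illustration):

```javascript
// A pretend Store snapshot; only posts and comments matter to this Container.
const state = {
  posts: [{ id: 1, title: 'Hello Redux' }],
  comments: [{ postId: 1, author: 'Ann', comment: 'Nice!' }],
  ui: { theme: 'dark' } // present in the Store, but not needed here
};

// Same idea as in App.js: pick out only the relevant slices of state.
function mapStateToProps(state) {
  return {
    posts: state.posts,
    comments: state.comments
  };
}

console.log(Object.keys(mapStateToProps(state))); // [ 'posts', 'comments' ]
```

The Component receiving these props never sees state.ui, which is exactly the point: Containers narrow the Store down to what each Component needs.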
Components

These are simply the UI components which are rendered to the DOM. I’m not going to go into the specifics of a Component here, as this is an assumed prerequisite.

Action / Action Creator

An Action Creator is simply a function which returns an Action: a plain object describing something that happened, such as submitting a form, clicking a link, or adjusting a slider. The returned Action has at least two parts, the type and the payload.

Note: The type property must use the key ‘type’, whereas the payload and any other properties can be named as you wish.

File: actions.js

    export function addComment(postId, author, comment) {
      return {
        type: 'ADD_COMMENT',
        payload: { postId, author, comment }
      }
    }

Here the addComment() Action Creator returns the ADD_COMMENT Action. In order to use the Action, it must be passed in as a prop to our Component, similar to how a Container passes State to the Component. This is done using the mapDispatchToProps() method.

File: /components/App.js

    import { bindActionCreators } from 'redux';
    import * as actionCreators from '../actions';

    function mapDispatchToProps(dispatch) {
      return bindActionCreators(actionCreators, dispatch);
    }

    const App = connect(mapStateToProps, mapDispatchToProps)(Main);

Here the mapDispatchToProps() method returns all of the Action Creators wrapped in a dispatch call via the bindActionCreators() method, so they can be invoked directly. These are also passed as props to the Main component via the connect() method.

Reducers

Reducers are functions which update the application’s state in response to Actions. Actions announce that something has been triggered, and Reducers respond by describing how the state changes. When an Action is dispatched, it is sent to all Reducers, so it is each Reducer’s job to determine whether it needs to do anything with the dispatched action. A simple switch statement is used to filter the required Actions.
File: /reducers/comments.js

    function postComments(state = [], action) {
      switch (action.type) {
        case 'ADD_COMMENT':
          // handle the ADD_COMMENT payload and return the new state
          return state;
        case 'REMOVE_COMMENT':
          // handle the REMOVE_COMMENT payload and return the new state
          return state;
        default:
          return state;
      }
    }

In this example the postComments() Reducer handles only the dispatched Actions it is concerned with and modifies the state accordingly before returning it to the Store.

Rinse and Repeat

Our application’s State (the Store) has now been updated based on the Actions which were dispatched to the Reducers, and the Provider can now pass this state on to all our Containers, which will in turn update our Components and render these changes to the DOM.

References

React / Redux Tutorial by The New Boston
Code samples are paraphrased from ‘React for Beginners’ by Wes Bos
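As a closing appendix, the whole dispatch → reducer → new state loop described above can be sketched as runnable plain JavaScript. The createStore() here is a deliberately tiny stand-in written for illustration, not the real Redux implementation:

```javascript
// Tiny stand-in for Redux's createStore (illustration only, not the library).
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); }
  };
}

// Action Creator: returns an Action with a type and a payload.
function addComment(postId, author, comment) {
  return { type: 'ADD_COMMENT', payload: { postId, author, comment } };
}

// Reducer: returns the next state; never mutates the old one.
function comments(state = [], action) {
  switch (action.type) {
    case 'ADD_COMMENT':
      return [...state, action.payload];
    default:
      return state;
  }
}

const store = createStore(comments);
store.dispatch(addComment(1, 'Ann', 'Nice post!'));
console.log(store.getState().length); // 1
```

One dispatch produced one new state object; a Provider sitting on top of this store would now re-render any Containers that depend on the comments slice.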
https://ajaykarwal.com/react-application-data-flow-with-redux/
Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine

This article is a brief introduction to software RAID, which is really md (Multiple Device Driver) for Linux. As with the article on LVM, this article is just a quick introduction and not a deep tutorial. The intent is to quickly demonstrate Linux software RAID using md and mdadm. Perhaps this article will show you how easy it is to add software RAID to your repertoire, to either help improve performance or provide extra protection. In essence, this article will introduce you to Linux software RAID becoming the “suspenders” to the “belt” of backups.

Quick Introduction

The original intent of RAID was to improve IO performance as well as to use smaller disks to create larger virtual disks (although the phrase “virtual” disk was not originally used, in this age of “virtual-everything” it seems appropriate). The basic concept was then embraced and developed from its 1987 inception to today. RAID has evolved into a technology that is as ubiquitous as storage drives themselves. It allows system designers to add performance while also providing some additional data protection (don’t forget to wear your “belt”).

There are many choices with RAID, such as various RAID levels and software and/or hardware RAID. Software RAID means the RAID functionality is provided in software by the OS. Hardware RAID means the RAID functionality is provided by a card, usually in a PCI or PCIe slot. There are a couple of articles that present the pros and cons of the various RAID options, here and here. But this article will focus on software RAID with Linux using the md capability of Linux.

It is beyond the scope of this article to discuss the various RAID level options. There are better articles for this (it may be Wikipedia, but it’s a good introduction to the various RAID levels). Instead this article will go through the creation of a simple RAID-1 setup. RAID-1 mirrors disks (actually disk partitions), so if you write to one, the data is copied to the other disk(s).
This is a simple way to provide some data protection, because you can lose a single disk without losing any data (but it is not a substitute for real backups). So what is a good way to create and manage RAID arrays on Linux?

Madam – I’m mdadm

Handling md groups by hand can be very complex and difficult. It can require hand-editing files, where a mistake can cause the loss of RAID groups. If you are careful, it works very well. But to help you maintain your RAID groups, Neil Brown started a project for an administrative tool for md called mdadm. The mdadm tool is very comprehensive and has a variety of functions. The man pages are quite good and you can find them on-line here.

This article will present a simple example with two drives. For this article, a CentOS 5.3 distribution was used on a system with two spare drives, /dev/sdb and /dev/sdc. Using this configuration, a simple RAID-1 array is created between /dev/sdb and /dev/sdc.

Step 1 – Set the ID of the drives

The first step in the creation of a RAID-1 group is to set the ID of the drives that are to be part of the RAID group. The type is “fd” (Linux raid autodetect) and needs to be set for all partitions and/or drives used in the RAID group. You can check the partition types fairly easily:

    # fdisk -l /dev/sdb
    ...
    fd  Linux raid autodetect

    # fdisk -l /dev/sdc
    ...
    fd  Linux raid autodetect

Step 2 – Create the RAID set using mdadm

The tool mdadm allows the easy creation of a RAID group. In this article a simple two-disk RAID-1 group is created.

    [root@test64 ~]# mdadm --create --verbose /dev/md0 --level raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mdadm: /dev/sdb1 appears to contain an ext2fs file system
        size=244187136K  mtime=Sun Aug 16 13:06:51 2009
    mdadm: /dev/sdc1 appears to contain an ext2fs file system
        size=244187136K  mtime=Sun Aug 16 13:06:51 2009
    mdadm: size set to 488383936K
    Continue creating array? y
    mdadm: array /dev/md0 started.

The options are fairly easy to understand.
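One small aside before dissecting those options: to have the array assemble automatically at boot, its definition is usually persisted to /etc/mdadm.conf. A hedged sketch of what that file might contain for this two-drive example (the UUID below is a placeholder, not from this system; generate the real line with `mdadm --detail --scan`):

```
# /etc/mdadm.conf (example only -- the values below are illustrative)
# Partitions that mdadm may scan when assembling arrays
DEVICE /dev/sdb1 /dev/sdc1
# The RAID-1 array from this article; replace the UUID with the output of:
#   mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

On many distributions the init scripts read this file at boot and assemble /dev/md0 before file systems are mounted.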
The first option, “--create”, creates a RAID group (naturally). After the “--verbose” option is the md device, in this case /dev/md0. After that is the RAID level (“--level”), in this case raid1. Finally the RAID devices are specified using the “--raid-devices” option. Also notice that mdadm prompts the user if there is a file system on the drives (partitions).

RAID works on a block level. That is, the RAID controller, be it software RAID or hardware RAID, works on the blocks of the devices in the RAID group. This means it is independent of the file system. Consequently, immediately after the RAID-1 group is created, the drives are “synchronized”. That is, the contents of the blocks from the first partition (drive) are copied to the second partition (drive). Below is the output of that synchronization process at various stages of completion (just to give you an idea of speed and time).

    # cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0]
          488383936 blocks [2/2] [UU]
          [>....................]  resync =  0.2% (1444224/488383936) finish=112.3min speed=72211K/sec
    unused devices: <none>

Notice that the status of the synchronization process is found by “cat-ing” the contents of the file /proc/mdstat.

    # cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0]
          488383936 blocks [2/2] [UU]
          [==========>..........]  resync = 50.1% (245077952/488383936) finish=57.4min speed=70554K/sec
    unused devices: <none>

    [root@test64 ~]# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0]
          488383936 blocks [2/2] [UU]
          [================>....]  resync = 80.1% (391254144/488383936) finish=25.9min speed=62269K/sec
    unused devices: <none>

    [root@test64 ~]# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0]
          488383936 blocks [2/2] [UU]
          [===================>.]  resync = 99.6% (486830720/488383936) finish=0.5min speed=47731K/sec
    unused devices: <none>

After the synchronization process is finished, the output should look like the following:

    # cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0]
          488383936 blocks [2/2] [UU]
    unused devices: <none>

Comments

Other interesting/useful GNU/Linux replication ideas:
- DRBD (over TCP/IP; easy to set up High Availability if required)
- Unison (user-space, cross-platform)

Typo: “These are /dev/sdb and /dev/sdb” s/b “These are /dev/sdb and /dev/sdc”

@j3: Yep – good catch. I’ll fix that typo later. Jeff

I have an existing fakeraid 5 created from within Windows XP Home. Is it possible to access this from Ubuntu 9.04 using dm? My goal would be to dual boot & access it from both Win & Linux. thanks!

GREAT ARTICLE – I read the docs but a worked example made it sink in.

The two articles, about LVM and about software RAID, require a third talking about the two in conjunction. To RAID LVs or to PV RAID groups? Which is better, from performance, functionality and stability points of view? The articles are good introductions. I’d like to see you go deeper.

@dbbd: I’m working on that article :) Determining which one is “best” is becoming somewhat subjective. In general, the best approach is to use RAID (md) on the lowest level and then use LVM on top of that. The simple reason is that you can expand the file system much more easily using LVM than md. The questions I’ve been examining become things such as... So there are a bunch of considerations, which makes the article much more difficult to write – I need to examine lots of options. Ultimately what I would like to produce is something of a “contrast” list. It will list the various approaches or ideas and then list the pros and cons, because I think choosing the “best” is subjective (I haven’t seen an article like this – have you?). Thanks for the feedback!

@wodenickel: I don’t know if you can do that.
I’m guessing that it will be very difficult. md would have to understand how Windows builds the RAID. Then Linux would have to understand the file system (if it’s NTFS then read-only is fairly straightforward, and you can use NTFS-3G for read/write). Did a Google search turn up anything?

Thanks for this informative article. However, you say that “you can now put a filesystem onto /dev/md0”. Actually, it has been my experience that you MUST put the filesystem onto /dev/md0 and NOT on any drive used in the RAID. If you have two drives that you want in your RAID and you mkfs.ext3 each, then when they are included in the RAID, the size of the filesystem will be larger than what the RAID can handle and you’ll get “attempt to write beyond end of device”. Doing mkfs.ext3 on /dev/md0 effectively puts a filesystem onto the whole RAID group, but the number of available blocks is slightly smaller than the individual members could accommodate.

@lesatairvana: You are correct, sort of. If you want to stay with a simple RAID-1 with two disks, then yes, you have to put the file system on /dev/md0. But you can also use /dev/md0 as a building block for something else. For example, you can create /dev/md0 and /dev/md1 each from two pairs of disks, and then create a RAID-0 on top of that. Disclaimer – I’ve never done this, but I’ve been told you can do it. (If I can get a couple more disks into my case, I will try it.)

More title than topic – in the UK, we’d say “a belt and braces approach”, “suspenders” being the things which ladies used to hold up stockings, before the invention of tights. So what do you call those? I’d like to know, my wife would like to know, but we don’t want to get into porno hell trying to find out. She tells me.

Hi, thanks for the article. One thing I do not understand – why synchronize the disks before any data was put on them (even mkfs was done after the sync)?
Please keep us informed like this. Thank you for sharing. cheap oakley jupiters Deliver increased pleasures for customers. Drive increased traffic aimed at your web. Rewrite the actual content articles in addition to distribute to internet websites for your own personal websites. Prada Factory Maybe you enjoy to study and possess hardly any requirement for marketing and advertising a small business,Burberry Factory. No matter, no matter what Private label rights Ebooks may benefit you. Prada Outlet Store Hello There. I found your blog using msn. This is a really well written article. I?l be sure to bookmark it and return to read more of your useful info. Thanks for the post. I?l certainly return. Ray Ban Aviators I like the valuable information you provide in your articles. I will bookmark your weblog and check again here frequently. I am quite sure I?l learn a lot of new stuff right here! Good luck for the next! Ray Ban Wayfarer When making buys for their urgent food resources, numerous a variety of men and women opt for the already organized food supply plans. These bundles come in unique sizes, based on the amount of members in your family, or the number of people that you have got to feed. michael kors silver handbags hello!,I love your writing very a lot! share we be in contact more about your post on AOL? I require an expert on this area to resolve my problem. Maybe that is you! Having a look forward to look you. Cheap Oakley Sunglasses Hi there, I log on to your blogs like every week. Your story-telling style is awesome, keep up the good work! cheap oakley wholesale This can be one among by far one of the best write-up. The Secrets and techniques of Money Gifting! Discover How I Rake in 1000s With Cashgifting oakley goggles At this time I am going away to do my breakfast, later than having my breakfast coming yet again to read further news about Men’s Air Jordan 3[url=]Air jordan 3 retro[/url]. 
Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine pink studded shoes Wow! Thank you! I continuously wanted to write on my site something like that. Can I take a portion of your post to my blog? louis vuitton outlet online about previews including “the very Red Bottoms Depending on how much within the universe and how many had been affected by your ideas or even measures, the accrued return in the universe could be the size a small influx; or even the size of a good browsing wave; or even the size a massive tsunami!- Heating system Essential oil and Universal Laws -Now, I realize why the above “heating oil” event unfolded the actual way it did.In the start, the actual negative thoughts as well as self-pity We stored focusing on actually drawn more of the exact same (Loa). Through Carl “J.D.Inch Pantejo, Copyright laws May 2008Author “My Buddy Yu – The actual Wealth Coach,” Copyright laws July 07. Pantejo – B.D. Vurce Publishing.*The following tale is actually incorporated in “My Buddy Yu – the Wealth Coach: Guide 2,Inch Pantejo — B.D. Vurce Publishing. Launch Date: 2008.“[Life] Incredible! Isn’t it?…”- Volunteering for added Spend -I had been usually fairly “open-minded” regarding extra spend. What the actual Heck, I got’ta work anyway, right? 
Why not really get a small additional, for just a little extra misery.One time We volunteered with regard to Experimental Spend which included me personally doing a cold-weather mission “while putting on the primary body temperature information selection gadget.”The data was required to engineer better anti-exposure equipment for tasks where hypothermia would be a actual threat; also to design nutritionally seem, cold-weather MRE’s (foods, prepared to consume) individualized towards the size as well as activity of every owner.In actuality, the “…with a core body’s temperature information collection device” was the official method of saying that We as well as my entire team used to do the jobs in a very cold area WITH RECTAL THERMOMETERS FIRMLY Stuck Upward OUR BUTTS AND ANCHORED Presently there Through AN INFLATABLE Light bulb AT THE END OF EACH PROBE!Needless to say, it was an inconvenience to take a dump – as well as instead unpleasant if you didn’t remember in order to flatten the bulb!Another time, birkenstock sale Help you psychologically step out of a hazardous/time-sensitive scenario in order to help quick — frequently life-saving – decisions fairly (being an observer, not really a participant).3.Unwind a person (even cause you to chuckle) while you wonder at the absurdity associated with existence!Again, We said, “Imagine That.”Above my mind, instead of a incredible full, spherical cover, I noticed exactly what resembled an enormous, used condom! Either a line-over or static electrical power was preventing atmosphere from blowing up my chute.I had been oscillating extremely.All my tries to fill the primary chute proved not successful. I spread the main make risers — nothing. I do a pull-up as well as rose on 1 riser as well as let go – wishing the taking, springtime action associated with my bodyweight might allow a few air enter the cover. Absolutely no joy. 
I sought out the typical 4-line launch system (a way of controlling/steering a parachute by delivering four outlines powering the cover), but then appreciated outlet birkenstock Blog owners say, a lot are in agreement with the content says, consistent with present situation.. cheap oakley womens Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine dr dre beats on sale Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine louis vuitton sac de voyage homme You made several good points there. I did a search on the subject and found a good number of folks will consent with your blog. oakley prescription glasses Great remarkable issues here. I am very happy to see your post. Thank you a lot and i’m taking a look forward to contact you. Will you kindly drop me a e-mail? Michael Kors handbags around the previews at “a Christian Louboutin Outlet I had been plastered around the upside down deck of the simulator.“Imagine Which.Inch Kevlar drifts!The body armour was so buoyant which i was stuck, inverted on the outdoor patio from the 9D5. A whole lot worse, all of those other equipment I’d on was getting snagged on everything in my personal egress path. Cargo hooks, helo frame, and seats turned out to be only one much more thing in order to disentangle personally from before I possibly could depart the actual simulator.I’m not sure how long I’d already been holding my personal inhale. 
Activity and emotional state can severely reduce your breath keeping time.Outside the trainer, the security scuba diver, a buddy associated with mine, motioned the actual “need assistance” transmission.I smiled as well as waived him or her off.Finally, We said “f*ck this,” grabbed my HEEDs (heli-copter emergency egress device – a small Scuba diving container how big a sizable café-latte at Local cafe), purged the actual mini-regulator water, as well as required the inhale of compressed air.This was always a final vacation resort simply because incline to the surface area and inhaling and exhaling needed to be controlled afterwards. On the breath maintain, one could rule out the dangers associated with DCS (decompression sickness) as well as AGE (arterial gasoline embolism – the greater serious condition when a bubble travels through the blood vessels and lodges in certain instead inconvenient places; specifically the heart or even brain).Oh nicely, it had been likely to be an extended day time than birkenstock pas cher There exists a good deal on the market such as Air jordan 2 for sale[url=]Air jordan 2 retro for sale[/url], possibly be they will mundane along with commonly used and also raucous in addition to trenddriven. wholesale outlet cheap shoes?. Next, apply blue eyeliner to the bottom, and light blue to the inside crease of the bottom.. Looking for a disciplined approach to managing stock market risk on a daily basis? Check Out My “Daily Decision” System.D:\ARTICLESEA\1\11-53-11-53-105492.txt Cheap Ray Bans And Oakleys Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine nike air jordan 1 provide access to the best scientific knowledge for international environmental governance and the mainstreaming of environmental concerns into social and economic sectors, and in support of the internationally agreed development goals; Perfect! Wonderful item and seller. 5 stars. 
Great gift here Air Jordan 6 Retro “Infrared 23″[url=]]air jordan 6 size 6.5[/url] for you. outletdiscountcheapshoes.com The tint is built right into the lenses when they are created.. The square ish Quanta design is too much like my square face.D:\ARTICLESEA\1\11-53-11-53-102978.txt New Wayfarer Ray Bans What’s up, I just wanted to say, I disagree. Your article doesn’t make any sense. cheap oakley gearbox In this video, we learn how to get a spunky hairdo with a fishtail braid. Also, pick out clothes that are going to accentuate your beautiful figure and draw the eyes away from the breasts.D:\ARTICLESEA\1\11-53-11-53-100253.txt Ray Ban Optical This really is nice create, I’m going shaire the item for my frinds. I want to create my own website but I have no experience. A classmate recommended me to instead create a blog so that I can get experience. . What free blog site should I use?. Any tips?. They never been shown to be effective. Fashion designer Cate Adair demonstrates dressing with a sarong for Modern Mom.D:\ARTICLESEA\1\11-53-11-53-102916.txt Where Can I Buy Ray Bans As emergency help starts to dwindle, people have become even more desperate facing uncertainty and even exploitation..D:\ARTICLESEA\1\11-53-11-53-104756.txt Ray Ray Ban with regard to previews of all “currently the Replica Christian Louboutin It didn’t take long and I found one. The frames come in a hot neon pink with zebra stripes and glow bright fuchsia once all of the lights have gone out.D:\ARTICLESEA\1\11-53-11-53-106428.txt Ray Ban Rb It is one among certainly my personal favorite post. in previews about “my michael kors outlet Next, shape the mask to the desired form. Sunglasses are a must have fashion accessory. That we had margin improvements on both divisions, more importantly on the Wholesale than retail.D:\ARTICLESEA\1\11-53-11-53-10395.txt Ray Ban Folding Wayfarer excellent info keep up your good work thankx cheap oakley sunglasses Hi my loved one! 
I want to say that this post is awesome, nice written and include approximately all important infos. I¡¦d like to see more posts like this . cheap discount sunglasses oakley Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine Christian Louboutin Brandaplato Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine Christian Louboutin You You Sling Traders adorned in rose colored Ray Ban’s suggested that the actions taken as well as those threatened by various governments and central bankers in the emerging market countries had produced a reprieve and that things ought to be hunky dory from here..D:\ARTICLESEA\1\11-53-11-53-103080.txt Ray Ban Sunglasses Wholesale Used motherboard, K6 chip, an e prom chip, heat sink, cooling fan, front control panel (with a turbo button!!!), LEDs, power sockets, miscelaneous plug ins, power wiring bits, and an IDE cable.D:\ARTICLESEA\1\11-53-11-53-105090.txt Where To Get Ray Ban Glasses If its militarily acts now, it may face international isolation.. Be it any kind of eyeglass frame, it has to be stylish if it’s from .D:\ARTICLESEA\1\11-53-11-53-104273.txt Ray Ban Sunglasses For Women The highly unusual hearing began on Tuesday, with more than two dozen patients telling their personal stories, to scattered applause, urging the panel to keep Avastin available.D:\ARTICLESEA\1\11-53-11-53-101393.txt Popular Ray Bans Finally, she uses a large square scarf folded in half from corner to corner to tie a “sarong” style splash of color over pants, jeans or a skirt..D:\ARTICLESEA\1\11-53-11-53-105921.txt Ray Ban Junior Uk Blog writers say, completely believe this content claims, consistent with present situation. Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine couette louis vuitton Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine silver jimmy choo sandals Excellent!!!!. Xoroshee kachestvo, no eto ne koja, Ne vonyaet, ochen myagkaya i preyatnaya na oshup. 
Vse shvi idealnie . Ya dovolna pokupkoy. cheap tiffany bracelets? I wanted to compose you one little word to help give thanks again about the exceptional tips you’ve shared on this page. It has been really shockingly generous with people like you to convey unhampered all that numerous people would have supplied as an ebook in making some cash for themselves, particularly considering the fact that you could possibly have tried it in the event you considered necessary. Those inspiring ideas additionally worked to be a great way to realize that many people have similar keenness just like mine to find out a great deal more when it comes to this issue. I am certain there are several more enjoyable sessions in the future for individuals who looked at your blog. cheap oakley half jackets Such a type of blog post will definitely hit to many followers. A good blog post and valuable for its information. Many thanks for sharing it up! cheap oakleys I was plastered on the upside down outdoor patio of the simulation.“Imagine Which.Inch Kevlar drifts!The body armor am buoyant which i was caught, inverted on the deck from the 9D5. Even worse, all of those other equipment I’d on had been obtaining snagged upon everything in my egress path. Cargo barbs, helo frame, and chairs proved to be just one more thing to disentangle myself from before I could depart the actual simulator.I’m unsure just how long I would already been holding my personal inhale. 
Activity and psychological condition can severely reduce your breath keeping period.Outside the actual trainer, the security diver, a buddy of mine, motioned the actual “need assistance” signal.I smiled as well as waived him off.Finally, I said “f*ck it,Inch snapped up my personal HEEDs (helicopter crisis egress device — a little Scuba diving bottle how big a large café-latte from Starbucks), purged the actual mini-regulator water, as well as took a breath of compressed atmosphere.This was always a final vacation resort simply because ascent towards the surface area as well as inhaling and exhaling had to be controlled afterwards. On the inhale hold, you could eliminate the dangers associated with DCS (decompression illness) and AGE (arterial gasoline embolism – the greater serious situation whenever a percolate travels through the blood vessels and lodges in certain instead bothersome places; namely one’s heart or brain).Oh nicely, it had been likely to be an extended day time than stone island vest There are millions of people who would love to be able to watch live soccer on the Internet. This is because sometimes it is hard to catch up on the games due to late working hours or a business trip. Wouldn’t it be great to have an alternative to the conventional satellite or cable TV services?And you actually can watch live soccer on the Internet! In fact, you have some options here, which can be categorized into three general groups.1. Direct StreamingThere are websites which stream live soccer games. Many of them are free cost – what luck, you might say. However, it is quite an annoying experience to watch soccer games on these websites, because the streaming is not smooth, and both picture and sound get interrupted several times a minute.To ensure good streaming similar to what you see on a TV screen, your Internet connection should be very speedy (dial-ups stand no chance here) and the server of the website should be able to handle many viewers at the same time. 
Free sites normally cannot afford powerful servers to deliver great quality irrespective of the number of visitors. stone island the house.Winters within this higher component (almost Atlanta) of Florida had been cold.- One Thing after Another -I would be a reduced ranked recruited man and newly married. We continuously battled to create payments. Then the actual spouse got let go from her job.Unexpectedly, a chilly front arrived through nowhere — way too early than all predictions. And, as you would expect, we were from heating system essential oil and money.I had not felt therefore hopeless in all my life. Negativity entered my mind and that i just sat on my entrance patio, sensation almost paralyzed, incorporated in 2 sets associated with pants and 3 coats.My wife attempted to console me personally, however all I could think about was how much of failing I was. No issue just how much I tried to organize out our budget, some thing usually appeared to pop up.Many occasions I’d need to get advance loans on my salary. adidas superstar 2 I required a nice, lengthy inhale before the water level arrived at my personal mouth and nose. I kept a little internal atmosphere stress in my nasal area to help keep water from filling my personal sinuses.(It’s always amusing to me how a large Marine may change right into a panicky, small baby facing a good underwater emergency — simulated or otherwise. The confusion as well as drinking water in the nasal area leads to many tough as well as tumble, macho, overly muscled Marine corps in order to panic, unbuckle too soon, and get held in the actual trainer.I think the only other thing that produces much more pure horror in these finely tuned, mindless killing machines [translated: first-wave, cannon fodder] may be the view of the immunization needle.I sh*t explore! 
I’ve experienced numerous the beast Marine distribute when I waved the needle and syringe in front of him or her!It’s amusing and never really a problem.My only problem is that the large boy never hurts themself together with his drop down, downturn into the seat, or even the immediate, adidas superstar 2 I had been plastered on the upside down outdoor patio from the simulation.“Imagine That.Inch Kevlar floats!The body armour am confident which i had been caught, upside down on the outdoor patio of the 9D5. Even worse, all of those other equipment I had upon had been obtaining snagged upon my way through my personal egress route. Freight barbs, helo body, and chairs proved to be just one much more thing in order to disentangle myself through prior to I possibly could depart the simulator.I’m unsure how long I’d been holding my personal breath. Activity as well as emotional condition can severely cut your inhale holding time.Outside the coach, the security scuba diver, a buddy of mine, motioned the actual “need assistance” transmission.I smiled as well as waived him away.Finally, I said “f*ck it,” snapped up my HEEDs (helicopter emergency egress gadget — a little Scuba diving container how big a sizable café-latte at Starbucks), purged the mini-regulator of water, as well as took a breath associated with compressed air.This had been usually a final vacation resort because incline towards the surface area and breathing had to be managed later on. 
On a breath hold, you could eliminate the dangers associated with DCS (decompression sickness) and Grow older (arterial gasoline embolism – the greater severe situation when a bubble moves through the arteries and accommodations in some rather inconvenient locations; namely one’s heart or even mind).Oh well, it was going to be an extended day time than stone island clothing Researchers found that adults with high levels of a fatty acid (one of the main parts of fat molecules) called trans palmitoleicacid in their blood had a three fold lower risk for diabetes, according to a study published Monday in the Annals of Internal Medicine.D:\ARTICLESEA\1\11-53-11-53-10032.txt oakley cap. replica watches uk I’m really impressed with your writing skills and also with the layout on your weblog. Is this a paid theme or did you modify it yourself? Anyway keep up the excellent quality writing, it is rare to see a nice blog like this one nowadays.. Mulberry Outlet UK Sale Thanks for sharing superb informations. Your website website. Cheap Oakley Sunglasses I was suggested this website by my cousin. I’m not sure whether this post is written by him as nobody else know such detailed about my trouble. You’re wonderful! Thanks! Ray Ban Wayfarer Ambush online websites ‘ blackburnapartments.com Daytime Super star ‘ from the top killers find the Titans, who seem to be beyond your resort the pioneer intersection. Listed here are big veg food economy, men and women are numerous and also problematic,ray ban sunglasses incredibly ideal for encounter. As well as the Titans, who seem to discover many grabbed inside of a situation, however , bulk of all the huge advantages presume simply just went to see, mustn’t be which means easy to do this, hence will not have the Titans, which include cautious and forestall any specific quick imminent danger. 
Titans,buy ray ban sunglasses which climbed to the particular junction on the current market, I recently found many of us besides below, but more unclean plus dirty, uncomfortable aroma into the excessive. Although many many people look down upon at the celebration, concealed increased on the market reduce about three destroyer roar, the actual wrists and hands from the blade by means of fantastic fun time on the way to typically the Titans. Titan, who seem to whilst a bit of astonished ?t had been a good wait infiltration in doing this, even so the Titans no concern, this particular several assassins blade cheap ray ban sunglasses definitely not regarded lowerray ban sunglasses online regarding. Just astonishing this about three assassins energy Ray Ban Sunglasses Online Store Titans, cheap ray ban sunglasses, whom captivated a lot of the focus when mutation emergent, your Titans, exactly who did start to possibly be finished consid Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine basket nike homme pas cher Where online can an accredited psyciatrist post articles (or blogs) for them to become popular? Of course, what a magnificent site and enlightening posts, I surely will bookmark your blog.Best Regards! cheap oakley flak jacket xlj Straightforward and well written, thank you for the info oakley goggles James Hogan ha concluso: governo di Abu Dhabi ha individuato il turismo come uno dei sette settori che porteranno nuovi posti di lavoro agli Emirati nel corso di questo decennio. 
In Etihad, siamo orgogliosi di fare la nostra parte e di sostenere il lavoro svolto dall Dhabi Tawteen Council.?rich skyzinski Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine jordanie actualité Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine nike air max 2010 precio Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine Christian Louboutin Frescobaldi This is such a superb source that you are offering and you provide it away for free of charge. I love seeing web sites which comprehend the importance of furnishing a top quality resource for absolutely free. cheap oakley sunglasses Workshop repair billing rate is frequently on a uniform price foundation, as well as requires days to get again, however you are likely to beneficiate extra valued service by doing this. Training courses techs have additionally adequate to deal with the awfully threatening problems. For laptop computers, it is best on a regular basis to carry along the AC Adapter (battery charger). With regard to adidas zx 750 uk In addition, check the rules so that you select based on your ability. adidas zx Bag quality is alright, Shipment is okay, “what in reality is really what you get” . Overall satisfied. A+ Thanks,Last time shipment was the best but maybe this time around it was slow because of Holiday season but the bag remains to be great nicely high-quality! louis vuitton sale An awesome content. Thank you! the actual previews of most “the but he says that what set him free and served as a launching pad for something new.| The garment came from Anna Sui personal collection of Jacobs clothes.|.| The Hendersonville Film Society has Mel Brooks’ Silent Movie (1976) on Sun.|,| Nov.| In the Smoky Mountain Theater at Lake Pointe Landing in Hendersonville.| The Asheville Film Society is screening Leo McCarey’s Ruggles of Red Gap (1935) on Tue.|,| Nov.| ye Do you like the design? Yes! Do you like the color? Yes! 
What the actual Hell, We got’ta work anyway, correct? Why not really obtain a little additional, for just a little additional agony.One time I offered with regard to Fresh Pay which included me personally doing a cold-weather objective ??while putting on the primary body temperature data collection gadget.”The information was required to engineer much better anti-exposure gear for missions where hypothermia would be a real threat; and also to design nutritionally sound, cold-weather MRE’s (meals, prepared to consume) personalized to the size as well as exercise of every operator.In reality, the ??…with the core body temperature data collection device” was the state method of stating that I as well as my entire team were doing our work in an exceedingly chilly region Along with RECTAL THERMOMETERS FIRMLY LODGED UP The BUTTS AND ANCHORED THERE Through AN INFLATABLE BULB After Every PROBE!Needless to say, it was an inconvenience to consider the dump – as well as instead painful should you forgot in order to deflate the light bulb!Another period, stone island jacket black i love them they are just as the picture states only problem they are too tight i think i might have selected the wrong size due to the size change. So to other who are considering to buy these shoes be sure to check the sizes carefully. 
within this Smart Trip called Existence,Carl ??J.C.” PantejoPantejo@ynvurcepublishing.comHazardous Responsibility Incentive Pay, Kevlar, hypothermia, experimental, hashish, line-over.Other articles through the author:??Imagine That…(One) – The actual Hard anodized cookware Angel associated with Whim and Assassins.”??Alternative Thoughts associated with Life, a Different Route, content articles (1) — (Seven).Inch (This is an ongoing series of articles that target self-improvement, achievement, and joy).??Experiences from ‘The Flow’ series, content articles (One) — (23).Inch (This really is another adidas superstar I do not even know how I ended up here, but I thought this post was good. I don’t know who you are but certainly you are going to a famous blogger if you are not already ;) Cheers! cheap oakley men s gascan sunglasses These advances were absurdly expensive – occasionally 20-30% curiosity, excluding the actual up-front digesting fees!Of program, that just left me beginning the next 30 days having a smaller sized salary.- Distraction or Serendipity -Yes, I was sensation sorry personally. How do I recieve by doing this? I usually worked very difficult. I was never extravagant in any area of the financial matters. Exactly how? Why? It just didn’t seem sensible at that time.Then, a few exercise in the home next door sidetracked me; briefly liberating me in the current “woe-is-me syndrome.”It appeared as if my personal neighbors had been shifting.I had known that old man for around 8 several weeks (the entire time I had been in your own home from army operations within the last year and a half). Because he seemed therefore lonesome, I always made it a point to say hi. I assisted him together with his yard when I might, and let him let me know regarding his life. 
birkenstock 2014 Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine nike air vengeance pas cher Help you psychologically leave the hazardous/time-sensitive situation in order to help quick — often life-saving – decisions objectively (being an observer, not a person).3.Relax you (actually make you laugh) as you wonder at the absurdity associated with life!Again, We stated, “Imagine Which.”Above my personal head, instead of a incredible complete, spherical canopy, We noticed what resembled a huge, utilized condom! Either a line-over or even static electrical power had been stopping atmosphere through inflating my personal chute.I had been oscillating extremely.All my personal attempts to fill the primary chute proved not successful. I spread the primary make risers – nothing. I did the pull-up as well as climbed up on 1 riser as well as release – wishing the taking, spring action of my personal body weight might let some atmosphere go into the cover. Absolutely no joy. I sought out the usual 4-line release system (a means of controlling/steering the parachute through releasing four outlines powering the cover), but then remembered nike kd 6 Very interesting points you have mentioned , thankyou for posting . “Curiosity is the key to creativity.” by Akio Morita. Sexy Mom Dresses For Wedding Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine air max original Wow that was odd. I just wrote an incredibly long comment but after I clicked submit my comment didn’t appear. Grrrr… well I’m not writing all that over again. Anyways, just wanted to say superb blog! Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine Hi, I just wanted to mention, you’re dead wrong. Your article doesn’t make any sense. 
cheap cheapest oakley sunglasses Tie breaker, Japan was leading 3-0, but Chinese women’s volleyball team tenaciously to 4-3 counter ultra Keep up the superb piece of work, I read few articles on this site and I conceive that your web blog is rattling interesting and has got sets of fantastic info . White Feather Skirt My brother suggested I might like this website. He was entirely right. This post actually made my day. You can not imagine just how much time I had spent for this information! Thanks! cheap oakley ten nice articles Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine nike free soldes Hi my friend! I want to say that this article is amazing, great written and come with approximately all significant infos. I¡¦d like to see more posts like this . cheap oakley replacement lenses Women of Influence winners impacting lives and community Video I precisely wished to say thanks yet again. I do not know the things that I might have created in the absence of those techniques shared by you concerning this area. Previously it was a difficult difficulty in my view, nevertheless understanding a well-written tactic you solved the issue took me to weep with delight. I will be happy for this support and then expect you find out what a great job you have been getting into training other individuals with the aid of your blog post. Most likely you’ve never met any of us. cheap cheap oakley polarized sunglasses challenging aspect is that an onsite technician won’t have a working area to take your own laptop to adidas zx most advantageous technicians. For onsite technicians, it might be time-consuming to obtain this particular conclusive results of the actual faults accordingly they might hardly ever prepared to spend a technician to spend 4-5 hours onsite, not to explain having to dedicate their very own personal hours to be there. 
But if the specialist does not ever get the opportunity to cope with stone island vest I was recommended this blog by my cousin. I’m not sure whether this post is written by him as nobody else know such detailed about my problem. You’re incredible! Thanks! cheap oakley flak jacket xlj sunglasses What’s up, how’s it going? Just shared this post with a colleague, we had a good laugh. cheap dealer.oakley.com I’m still learning from you, but I’m trying to achieve my goals. I certainly enjoy reading everything that is written on your site.Keep the information coming. I loved it! charles louboutin Christian Louboutin 8 Mignons . Priced at $10.50, these frames are an easy and affordable way to brighten up the dark.. While there is quite a bit of stopping to document the project which slowed me down quite a bit, it has been so much fun it should be illegal!.D:\ARTICLESEA\1\11-53-11-53-101279.txt Ray Ban 3029 This one headset ended up being just as beautiful while inside photo. Information technology appeared immediately. I would advise choosing a towel during every single coating while you click they away w / the best vapor iron. It does not vapor away and only a steamer. That iron is essential. It can be fragile, so if you don’t trust personally aided by the towel and/or steam iron, subsequently take things to a certified. Perfect appear. Cartier trinity ring Our headset is since stunning just as in the visualize. They appeared promptly. I’d suggest getting a towel during each layer as you click that away w / the steam iron. It does not steam away using only a steamer. Their iron is required. It is very delicate, so if you do not trust your self aided by the towel and also steam iron, subsequently take information technology up to a professional. Pretty noise. louboutin outlet This headset had been while breathtaking like in the picture. They came quickly. I would encourage getting a towel through each and every level as you click this off w / the best vapor iron. 
It doesn’t steam out among only a steamer. That iron ended up being needed. It can be fragile, when you never trust your self using the towel to steam iron, after that need this to a professional. Stunning appear. Cartier love ring Great awesome issues here. I am very glad to see your post. Thank you so much and i am looking ahead to touch you. Will you please drop me a mail? cheap oakley canada In the sensitive period China women’s volleyball coach candidates announced the upcoming, Chen Zhonghe as the most popular candidate, either from the content of the message reply media perspective, or from his recent whereabouts, seem to want to quell the raise a Babel of criticism of rumors Cheap Ray Ban Discount I¡¦ve recently started a web site, the info you provide on this site has helped me tremendously. Thanks for all of your time & work. cheap oakley eyeglasses certainly like your web site but you have to check the spelling on quite a few of your posts. Many of them are rife with spelling problems and I in finding it very bothersome to tell the truth then again I will definitely come back again. cheap mens oakley sunglasses cheap oakley sunglasses store I think other web-site proprietors should take this web site as an model, very clean and excellent user genial style and design, let alone the content. You are an expert in this topic! cheap rory mcilroy oakley 999 An important section of each girl wardrobe are her sneakers. We’re a group of volunteers and starting a new scheme in our community. Your site provided us with valuable info to work on. You’ve done an impressive job and our whole community will be thankful to you. cheap oakley eyepatch Excellent blog right here! Also your website lots up very fast! What web host are you the usage of? Can I am getting your associate link for your host? I wish my site loaded up as fast as yours lol cheap oakley wire You are a very intelligent person! 
cheap oakley golf shirts Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine nike shox femme I¡¦m now not certain the place you are getting your information, but good topic. I must spend some time studying more or working out more. Thank you for excellent information I used to be in search of this information for my mission. cheap oakley big bass You positively know how to This text is invaluable. How can I find out more? cheap oakley return policy 999 An important section of every female wardrobe are her sneakers. Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine michael jordan n! cheap oil rig oakley Thank you, I’ve recently been looking for information about this subject for a while and yours is the greatest I have discovered so far. However, what about the bottom line? Are you positive in regards to the source? cheap oakley discount code This is a interesting post by the way. I am going to go ahead and bookmark this article for my sister to read later on tonight. Keep up the good quality work. oakley gascan Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine free 50 I carry on listening to the news update lecture about receiving boundless online grant applications so I have been looking around for the finest site to get one. Could you tell me please, where could i acquire some? cheap oakley ut I am genuinely glad to read this website posts which consists of plenty of useful data, thanks for providing these data. cheap natasha oakley I’ve been absent for a while, but now I remember why I used to love this site. Thank you, I¡¦ll try and check back more frequently. How frequently you update your site? cheap oakley time bomb.. cheap cheap oakleys for sale Thanks to the seller! Handbag beautiful! Come make a very fast!bag just class! came very quickly. just super! Thank you so much!!!Excellent! Wonderful bag! Great seller! Thank you for fast shipping! 
mulberry shoes sale mulberry shoes sale Do you want entry to lots of enlightening for a variety of topics? Would you always like to produce helpful knowledge to other individuals and also make money? Prada Shoes Shop Take about five inches up and fold it under so that a tail is hanging out of your back pocket. Usually, when a person falls it is coming to a stop so you just kind of tip over.D:\ARTICLESEA\1\11-53-11-53-103656.txt Ray Ban Reading Glasses Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine Christian Louboutin Anemone Recent CommentsLaura on Developing Listening Skills? Some activities and principlescheap nike air max uk on Welcome to my sharing ground :) cheap nike air max uk on Classroom research workshop Anadolu University April 16th17th, Eskiehir, TurkeyCheap NFL Jerseys on Developing Listening Skills? Some activities and principlesTiffany Co Outlet on Time for a little check I Pick More Posts DaisiesRecent purse sale mulberry purse sale Thanks for every other informative site. The place else could I get that kind of info written in such an ideal method? I’ve a venture that I’m just now running on, and I have been on the glance out for such information. cheap oakley store locations About Unique And Alluring Handmade Jewellery I got this one headphonesof our mother for parents day, plus she definitely loved it! The quite cute headphonesand the stating on the card which is goes within the container is really sentimental!! And top quality of beads is actually awesome! pandora charms I had gotten your headphonesfor the my personal mother concerning moms time, additionally she definitely adored they! The quite cute headphonesas well as the stating on the card just that will come into the box is really emotional!! And the good of the beads was great! Coach outlet I got it headphonestowards the mom to parents time, as well as she completely adored things! 
It really is truly attractive headphonesand the stating regarding the card that will comes inside container is very emotional!! And good of beads are awesome! Cartier Love Bracelet Replica I have that headphonesfor my mother towards parents time, then she completely adored this! It really is completely adorable headphonesand saying in the card which is comes inside box is really sentimental!! As well as the premium of the beads was very good! pandora jewelry store I have this particular headphonesfor the the mother of moms day, as well as she completely liked information technology! It is actually cute headphonesas well as the stating regarding the card in which comes within the box is really emotional!! And the excellence of the beads was very good! cartier panthere Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine air max bebe Most Important Procedure That Is Also Aiding bag-professionals To Advance I do consider all of the ideas you’ve presented in your post. They are really convincing and can certainly work. Still, the posts are too short for newbies. May just you please extend them a little from next time? Thank you for the post. cheap oakley sunglasses m frame Hello! I just would like to give a huge thumbs up for the awesome information you have here on this post. I is going to be coming back to your blog for far more soon. jordan shoes I don’t even know how I ended up here, but I thought this post was good. I do not know who you are but definitely you are going to a famous blogger if you aren’t already ;) Cheers! cheap oakley sunglasses coupons Nice blog here! Also your web site loads up very fast! What web host are you using? Can I get your affiliate link to your host? I wish my website loaded up as fast as yours lol cheap oakley flak jacket polarized Whats Going down i’m new to this, I stumbled upon this I’ve discovered It positively helpful and it has helped me out loads. I hope to contribute & aid other users like its aided me. 
Great job. cheap oakley forsake cheap oakleys This text is worth everyone’s attention. When can I find out more? cheap oakley lanyard What i do not understood is in reality how you’re now not actually much more neatly-liked than you might be now. You are so intelligent. You recognize therefore significantly relating to this matter, made me individually believe it from numerous various angles. Its like women and men don’t seem to be interested unless it is something to accomplish with Girl gaga! Your individual stuffs nice. All the time care for it up! cheap oakley frogskin sunglasses Apple inc refused the actual revise this morning due to the method i-tunes Syncing has been enabled. Nevertheless , I’m going repair it today and also resubmit typically the post on in order to the apple company. Ideally will probably be approved rapidly. about the previews connected with “your This particular headset was since breathtaking like in visualize. This came immediately. I’d advise getting a towel done every single layer as you hit information technology away w / one vapor iron. It doesn’t steam away along with merely a steamer. Some sort of iron had been required. It is very fragile, so if you don’t trust personally using the towel and also vapor iron, then consume that it to a certified. Awesome audio. louboutin men That headset is like beautiful as into the picture. It appeared immediately. I’d recommend getting a towel above each and every coating as you click that it out w / per vapor iron. It does not steam away with only a steamer. The actual iron was important. It can be sensitive, when you cannot trust yourself with all the towel to steam iron, after that consume that it up to a expert. Stunning audio. Lululemon sale Im no professional, but I feel you just crafted the best point. You naturally understand what youre talking about, and I can actually get behind that. Thanks for staying so upfront and so truthful. 
cheap oakleys The headset is as breathtaking just as inside picture. Things appeared quickly. I’d suggest choosing a towel over each coating while you press that it outside w / per steam iron. It doesn’t steam over along with merely a steamer. The particular iron is required. It can be fragile, when you never trust yourself using the towel and also steam iron, after that accept information technology to a professional. Stunning sound. Lululemon pants Our headset had been as spectacular while inside image. This came promptly. I would advise choosing a towel through each level as you hit things away w / your vapor iron. It doesn’t steam outside using only a steamer. Some sort of iron is appropriate. It can be sensitive, so if you you should not trust personally because of the towel and also vapor iron, then bring information technology up to a pro. Striking appear. pandora bracelet The headset had been as breathtaking just as in the picture. It came promptly. I would advise choosing a towel done every one layer while you hit they outside w / any vapor iron. It doesn’t vapor outside along with merely a steamer. Some sort of iron ended up being essential. It is very delicate, so if you you should not trust your self aided by the towel plus steam iron, well just take they to a pro. Striking appear. hermes h bracelet That headset ended up being just as spectacular as within the picture. This appeared immediately. I would encourage using a towel during every layer while you click they out w / the steam iron. It does not vapor outside through merely a steamer. On iron was essential. It is very fragile, when you never trust personally using the towel plus steam iron, after that need that up to a pro. Stunning noise. coach usa Hi George, I would rather that individuals run into the idea in its native atmosphere within the GetListed. org Useful resource region as an alternative to on thirdparty sites. 
I understand that operates counter-top to the majority of best practices with regard to spreading one thing by using web 2 . 0. Thanks for your own effort on this site. My niece take interest in engaging in research and it’s simple to grasp why. We know all relating to the powerful way you produce informative techniques by means of the blog and even inspire contribution from people on that concept plus my princess has always been discovering a lot of things. Have fun with the rest of the new year. You are conducting a dazzling job. cheap oakley socks I hesitation you will find 12 people in this particular land who’d include selected as for Romney aside from the fact that this individual hasn’t already published his or her tax statements. I am having Romney about this 1. Good day very nice web site!! Guy .. Excellent .. Superb .. I’ll bookmark your web site and take the feeds additionally¡KI am glad to seek out numerous helpful info here in the submit, we’d like work out more techniques in this regard, thanks for sharing. . . . . . cheap oakley sunglasses repair Nice blog right here! Also your web site loads up very fast! What host are you the use of? Can I am getting your affiliate link for your host? I desire my web site loaded up as quickly as yours lol cheap oakley sun glasses What’s the easiest method to receive the ssh major over to the iPad without having contacting the idea? spotify is the better we have ever before applied in addition to we’ve employed nearly all. is actually it’s not designed for the U. T. for those who have virtually any friends residing on outside of the Ough. Nasiums., keep these things make an profile and allow you the details. might be a trouble, however well worth it. Apple enjoys establishing it has the solutions with out selected capabilities deliberately. To be able to exercise . of the features throughout following discharge and folks can buy the fresh edition. . 
-= Nabeel’s previous site… How to management logon throughout self applied put blogger site =-. Stunning quest there. What happened after? Thanks! cheap oakley dart sunglasses We are uneasy in order to manage the oldsters get yourself heap currency will be able to check-out college, thus exactly why do When i pick and choose china and tiawan Category, it looks I will not scholar a very Isn’t very working with apple iphone 4g. a few. Whenever you attempt to import pictures often the application fall short. Great Info! Every once in a while I find something interesting oakley goggles Great post and right to the point. I don’t know if this is actually the best place to ask but do you guys have any ideea where to employ some professional writers? Thx :) cheap oakley customer service Similar to, nearby WILL SELL other’s tunes you can good. Really, My partner and i skepticism anyone is about to try to sell a Dave Matthew’s tune below their own personal brand. Hi, Neat post. There’s a problem with your website in web explorer, could check this¡K IE still is the market chief and a good component to other folks will miss your excellent writing because of this problem. cheap womens oakley sunglasses Precisely how is actually Previous. fm not necessarily for this listing… likewise, have a look at tuberadio. com daaah, since the subject affirms: options for you to thomas sabo as well as LASTFM fool! Great post. I was checking constantly this blog and I’m impressed! Very helpful info specially the last part :) I care for such information a lot. I was looking for this certain info for a very long time. Thank you and good luck. cheap oakley radar Linux Software RAID – A Belt and a Pair of Suspenders | Linux Magazine air max milanuncios this eso gold is excellent to deliver celine ???? ?? ???? ??? ??? SEO software ?NIKE? ??? AIR FORCE 1 07 LE ?????? 1 07 488298 SP14?078BK/FIRE ??? ???????? ????? ??????? ?????3 511445 njp-502938 ?NIKE/???????????????? ????3 L/S SHA/DO ??? ???????? 
Feature #3768: Constant Lookup doesn't work in a subclass of BasicObject

Description

Related issues

History

#1 Updated by Usaku NAKAMURA over 3 years ago

- Status changed from Open to Assigned
- Assignee set to Yukihiro Matsumoto

I think it's spec, but we should hear the opinion of matz.

#2 Updated by Yukihiro Matsumoto over 3 years ago

- Status changed from Assigned to Rejected

#3 Updated by Yukihiro Matsumoto over 3 years ago

Hi,

BasicObject does not inherit from Object, where the constant M is defined. So, if you want to refer to the toplevel constant M, try ::M.

matz.

In message "Re: [Ruby 1.9-Bug#3768][Open] Constant Lookup doesn't work in a subclass of BasicObject" on Tue, 31 Aug 2010 04:08:13 +0900, Thomas Sawyer redmine@ruby-lang.org writes:

    |Bug #3768: Constant Lookup doesn't work in a subclass of BasicObject

#4 Updated by Thomas Sawyer over 3 years ago

I see the technical reason it occurs, but accepting that as proper behavior is going to hobble the usefulness of BasicObject.

First of all, it means one's ability to open a class and modify it will be conditional: one will have to check whether it is a BasicObject up front. That's easy to do if you're working with one class you already know, but consider how it affects meta-programming where code is injected into an arbitrary class.

Worse still, it makes importing code into a namespace very fragile. Consider the simplistic example of having some code in a script to eval into a module:

    module M
      eval(File.read('file.rb'))
    end

If file.rb contains:

    class R
    end

    class Q < BasicObject
      def r; R.new; end
    end

then it will break whether we use R or ::R.

I feel the underlying issue here goes back to some other issues we've discussed some years ago about the top-level. Routing the toplevel to Object is not as flexible or robust as having the toplevel be an independent self-extended module in which constant resolution would terminate.
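Editor's note: the lookup rule matz describes above can be demonstrated in a few lines. This is an illustrative sketch (the class names A and B and the constant value are not from the report), assuming Ruby 1.9 or later:

```ruby
# Toplevel constants are stored in Object's constant table.
M = :toplevel

class A < BasicObject
  def fetch
    ::M  # explicit toplevel path: always resolvable
  end
end

class B < BasicObject
  def fetch
    M    # bare reference: searched in B, then BasicObject -- never Object
  end
end

A.new.fetch  # => :toplevel
begin
  B.new.fetch
rescue NameError => e
  e  # uninitialized constant B::M
end
```

The explicit `::M` succeeds because qualified lookup starts at Object; the bare `M` fails because a BasicObject subclass's ancestor chain never reaches Object.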
#5 Updated by Thomas Sawyer almost 3 years ago

How can this be rejected? The example I gave is a glaring problem.

#6 Updated by Yukihiro Matsumoto almost 3 years ago

Haven't I explained the reason? The M is defined under the Object class. BasicObject does not inherit from Object. So there's no reason M can be accessed from BasicObject, under the current behavior of constant access in Ruby.

If you want to "fix" this problem, how should I? Making constants under Object accessible from everywhere? Or otherwise? In any case, the "fix" would be a huge change to the constant access system, and would introduce a huge risk of incompatibility.

matz.

#7 Updated by Thomas Sawyer almost 3 years ago

I am not sure that a fix is such a huge change. Look-up can be delegated:

    class BasicObject
      def self.const_missing(name)
        ::Object.const_get(name)
      end
    end

But yes, I think the ultimate fix does need a rework for constant lookup to terminate at the toplevel instead of Object, but I can understand that's a "Ruby 2.0" kind of change. The above may suffice in the meantime, if it doesn't present any unintended consequences (I can't think of any myself).

#8 Updated by Jeremy Evans almost 3 years ago

If BasicObject.const_missing calls Object.const_get and the constant does not exist in Object, you've at best got a SystemStackError (I got SIGILL when I tried). I suppose this could work:

    class BasicObject
      def self.const_missing(name)
        ::Object.const_get(name) if ::Object.const_defined?(name)
      end
    end

While safer, I do not advocate such an approach. For one, there's a TOCTOU race condition in threaded code if Object.remove_const is used.

Personally, I don't see this as a major issue. There should be no need for this in BasicObject itself, and overriding const_missing in a BasicObject subclass is easy.

#9 Updated by Thomas Sawyer almost 3 years ago

@Jeremy The very need of it is why I reported the issue. The behavior is clearly broken.
It's a pretty fundamental expectation that a subclass of BasicObject would have working constant lookup. To think otherwise is to assert that no subclass of BasicObject should ever be allowed to use delegation.

All I can say is thank goodness for const_missing, because if it weren't for that, BasicObject would be all but useless and I literally would not have been able to make two of my programs work correctly (well, not without defining my own sub-optimal "BlankSlate" class).

At the VERY least, add the work-around to BasicObject's documentation so others will know what to do when their code doesn't work.

#10 Updated by Jeremy Evans almost 3 years ago

I disagree that the behavior is "clearly broken". Just like methods defined in Object don't apply to BasicObject, you shouldn't expect constants defined in Object to apply to BasicObject.

You assume that normal constant lookup is always desired in BasicObject subclasses. While true in some cases, it is not necessarily true in all. Take this simple case:

    class S < BasicObject
      def method_missing(m)
        m
      end

      def self.const_missing(m)
        m
      end
    end

Basically, the programmer desires that both method calls and constant references return symbols:

    S.new.instance_eval{puts}   # => :puts
    S.new.instance_eval{Object} # => :Object

With your approach, you would get ::Object instead of :Object for the second line. Just like the puts method doesn't exist in BasicObject instances, the Object constant doesn't exist in BasicObject. Your recommendation would remove the ability programmers currently have to choose how to implement constant lookup in their BasicObject subclasses.

Your recommendation assumes that all users want normal constant lookup in a BasicObject subclass. However, the fact that they are using BasicObject is an indication that they don't want normal method lookup (no methods from Object or Kernel), so I think the assumption that they definitely want normal constant lookup is invalid.
I agree that adding documentation to BasicObject related to this would be beneficial; perhaps you should submit a documentation patch?

#11 Updated by Yukihiro Matsumoto almost 3 years ago

- Tracker changed from Bug to Feature

This is not a bug.

#12 Updated by Lazaridis Ilias almost 3 years ago

#13 Updated by Nikolai Weibull almost 3 years ago

On Sun, Jul 3, 2011 at 21:05, Thomas Sawyer transfire@gmail.com wrote:

    ruby-1.9.2-p0 > class X < BasicObject
    ruby-1.9.2-p0 ?>   include M
    ruby-1.9.2-p0 ?> end
    NameError: uninitialized constant X::M

Writing include ::M seems to work. Why not use that instead?

#14 Updated by Thomas Sawyer almost 3 years ago

@nikolai Yes, that will work in some cases. For a case where it will not, see the eval example I gave above.

#15 Updated by Thomas Sawyer almost 3 years ago

@jeremy You make a good case. My general sense of it is YAGNI, but I can't completely rule it out. Who knows, maybe someone will have need of a very clever way to resolve constants for their own classes. But I think we are perilously close here to that "well-chosen line" that separates the dependable static language from the convenient dynamic one. If a workaround can be found for my eval case, as given above, then I am perfectly happy to concede this issue.

I do want to make one point clear, however, as I think your explanation could be interpreted as making a false equivalency. To the point: constant look-up and method look-up should not be confused for analogous features. They are in fact quite different. Method look-up operates through the class hierarchy, while constants are a strange hybrid, which primarily operate through the namespace, but also include the class hierarchy, basically as a matter of convenience. So I still maintain that constant look-up should ultimately terminate at the toplevel (even if BasicObject ignores this), just as I am certain that toplevel method definitions really should not be polluting the Object class.
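Editor's note: the explicit-path workaround from the exchange above is easy to verify. This is an illustrative sketch (the module name Greeter is hypothetical, not from the thread):

```ruby
module Greeter
  def greet
    "hello"
  end
end

class X < BasicObject
  # A bare `include Greeter` would raise NameError here, because
  # Greeter lives in Object's constant table, and Object is not
  # among X's ancestors. The leading :: forces toplevel lookup.
  include ::Greeter
end

X.new.greet  # => "hello"
```

This covers literal references in source you control; as noted in #14, it does not help when third-party code with bare constant references is eval'd into such a class.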
#16 Updated by Jeremy Evans almost 3 years ago

Thomas, your example works on 1.9.2p180:

$ ruby -v
ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-openbsd4.9]
$ cat > q.rb
class R
end
class Q < BasicObject
  def r; R.new; end
end
$ irb
irb(main):001:0> module M; eval(File.read('q.rb')); end
=> nil
irb(main):002:0> M::Q.new.r
=> #<R:0x...>

It doesn't work if you change R.new to ::R.new, but that's to be expected (it would be the same if Q descended from Object). It even works if R and Q are defined in separate files:

$ cat > r.rb
class R
end
$ cat > q.rb
class Q < BasicObject
  def r; R.new; end
end
$ irb
irb(main):001:0> module M; eval(File.read('r.rb')); end
=> nil
irb(main):002:0> module M; eval(File.read('q.rb')); end
=> nil
irb(main):003:0> M::Q.new.r
=> #<R:0x...>

The behavior is still the same on "ruby 1.9.3dev (2011-02-28 trunk 30975) [x86_64-openbsd4.9]", so if it no longer works, it must have changed in the last few months.

I agree that constant lookup and method lookup are not the same thing, and should not necessarily be treated the same way. However, I think the purpose of BasicObject is, to the extent possible, to remove the default behavior that most objects have in order to allow the programmer to define their own behavior. Therefore, I think allowing the programmer control over constant lookup in BasicObject subclasses makes sense.

#17 Updated by Thomas Sawyer almost 3 years ago

You're right, it does work. I recollect testing it, but I must have misconstrued the actual error I was getting at the time. Too long ago now to recall the details. Okay. I will write up docs on using #const_missing with BasicObject and submit it. Thanks for reviewing this in detail.

#18 Updated by Thomas Sawyer almost 3 years ago

I submitted documentation addition.

#19 Updated by Thomas Sawyer over 2 years ago

Can we merge?

#20 Updated by Eric Hodel over 2 years ago

#21 Updated by Thomas Sawyer over 2 years ago

Okay. That's great.
Reading it over, I have a couple of impressions that could help improve upon it:

1) The use of "standard library" is confusing, in contrast to core vs. standard libs.
2) There is no mention of "constant look-up", which would be more technically poignant.
3) The word "like" is a bit over-used.

So when you get a chance maybe you can work these considerations in. Thanks.
https://bugs.ruby-lang.org/issues/3768
sorry if this post looks crappy, this is my first try. ok, I'm having trouble with an assignment in my programming class. here are the assignment instructions:

"The purpose of this program is to find the two largest elements of each of several integer arrays containing 50 elements. The program will use functions with the prototypes:

void two_largest(int[], int, int*, int*);
void read_array(int[], int);
void print_array(int[], int);

The program is supposed to be just one do/while loop, which will be repeated until a number -1 is entered from a keyboard. On every step of this loop:

1) use the function read_array to initialize a new array of integers (this function will initialize array elements to random numbers from 1 to 500 with the help of functions rand() and srand(time(NULL)));
2) use the function print_array to display the current array;
3) call the function two_largest to find the two largest numbers in the array;
4) display these numbers in main."

I have the program pretty much finished except for two problems. I don't understand how to terminate the loop by entering -1, and I'm not exactly sure how to make the rand() and srand() loop I have into a do/while loop, because we haven't covered the two functions in class. We were only given example programs to figure out how these two functions work. Here is my code for what I have. Just wondering if anyone had some suggestions? I'm not very good with this stuff since this is a beginning level class, so please try to keep it simple.
Code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 50

void two_largest(int [], int, int *, int *);
void read_array(int [], int);
void print_array(int [], int);

int main()
{
    int a[N], b, c, d, e;

    read_array(a, b);
    print_array(a, b);
    two_largest(a, b, &c, &d);
    printf("The largest value is: %d\n", c);
    printf("The second largest value is: %d\n", d);
    return 0;
}

void read_array(int a[], int b)
{
    srand(time(NULL));
    for (b = 0; b < N; b++)
        a[b] = rand() % 500;
}

void print_array(int a[], int b)
{
    for (b = 0; b < N; b++)
        printf("%3d%c", a[b], b % 10 == 9 ? '\n' : ' ');
}

void two_largest(int a[], int b, int *x, int *y)
{
    int i;

    *y = a[0];
    *x = a[0];
    for (b = 0; b < N; b++) {
        if (a[b] > *x)
            *x = a[b];
    }
    for (i = 0; i < N; i++) {
        if (a[i] > *y && a[i] < *x)
            *y = a[i];
    }
}
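On the sentinel question — terminating when -1 is entered — one common pattern is to read the user's reply at the bottom of the loop and test it in the while condition. The sketch below is a minimal, self-contained illustration, not the official answer from the thread: the run() helper name and the "+ 1" range adjustment are my own additions, and print_array is omitted for brevity (main() would simply call run()):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 50

/* Fill the array with pseudo-random values from 1 to 500.
   rand() % 500 yields 0..499, so add 1 to match the assignment. */
void read_array(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = rand() % 500 + 1;
}

/* Find the two largest values in the array (n must be at least 2). */
void two_largest(int a[], int n, int *first, int *second)
{
    int i;
    if (a[0] >= a[1]) { *first = a[0]; *second = a[1]; }
    else              { *first = a[1]; *second = a[0]; }
    for (i = 2; i < n; i++) {
        if (a[i] > *first) {
            *second = *first;   /* old maximum becomes runner-up */
            *first  = a[i];
        } else if (a[i] > *second) {
            *second = a[i];
        }
    }
}

/* The single do/while loop the assignment asks for: each pass builds
   and reports on a new array, then repeats until -1 is entered. */
void run(void)
{
    int a[N], big1, big2, reply;

    srand((unsigned) time(NULL));   /* seed once, before the loop */
    do {
        read_array(a, N);
        two_largest(a, N, &big1, &big2);
        printf("The largest value is: %d\n", big1);
        printf("The second largest value is: %d\n", big2);
        printf("Enter -1 to quit, any other number to repeat: ");
    } while (scanf("%d", &reply) == 1 && reply != -1);
}
```

Note that srand() is called once before the loop rather than inside read_array; re-seeding on every pass within the same second would regenerate the same array.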
http://cboard.cprogramming.com/c-programming/69900-rand-srand-question-printable-thread.html
ALopenport(3dm)

ALopenport - (obsolete) open an audio port

#include <dmedia/audio.h>

ALport ALopenport(char *name, char *direction, ALconfig config)

name
    A port name is an ASCII string which summarizes the usage of this port. It is intended for human consumption, similar to a window title. Port names have a maximum length of 20 characters.

direction
    Use this parameter to identify whether the port is an input or an output port. Acceptable values are:
    "r" configures the port for reading (input).
    "w" configures the port for writing (output).

config
    Expects the ALconfig structure returned by ALnewconfig(3dm). This structure contains information that ALopenport(3dm) uses to configure the port. Passing a null (0) structure for this parameter yields a port with the default configuration.

ALopenport is obsolete and is provided for backward compatibility. The preferred function is alOpenPort(3dm). ALopenport(3dm) allocates and initializes an ALport structure, creating a programmatic connection to the audio system. You can open more than one port at a time, up to a limit imposed by the particular hardware configuration. The default port has a 100,000 sample stereo buffer, utilizing a 16-bit two's complement sample format.

Upon opening, an input port immediately begins to fill with samples. You should remove samples from the port post haste, lest the sample queue overflow. Upon opening, an output port will attempt to remove samples from the sample queue. You should provide samples to the port with great alacrity or the sample queue will underflow.

ALopenport can fail for the following reasons:

AL_BAD_CONFIG
    config is invalid.
AL_BAD_DIRECTION
    direction is not valid.

AL_BAD_OUT_OF_MEM
    insufficient memory is available to allocate the ALport structure.

AL_BAD_DEVICE_ACCESS
    audio hardware is inaccessible.

AL_BAD_NO_PORTS
    no audio ports are currently available.

If successful, ALopenport(3dm) returns an ALport structure for the named port. Otherwise, ALopenport(3dm) returns a null (0) valued structure and sets an error number; this error can be retrieved with oserror(3C).

SEE ALSO: ALcloseport(3dm), ALnewconfig(3dm), ALsetconfig(3dm), ALqueryparams(3dm), oserror(3C)

alOpenPort(3dm)

alOpenPort - open an audio port

#include <dmedia/audio.h>

ALport alOpenPort(char *name, char *direction, ALconfig config)

name
    A port name is a character string describing the port. It is intended for human consumption, similar to a window title. Port names have a maximum length of 20 characters.

direction
    Specifies whether the port is for input or output.
    "r" specifies an input port.
    "w" specifies an output port.

config
    Expects an ALconfig, as returned by alNewConfig(3dm) or alGetConfig(3dm). This structure describes the data format and queue size for the port. Passing a null (0) value for config yields a port with the default configuration.

alOpenPort(3dm) allocates and initializes an audio port (ALport). An audio port is the mechanism through which an application reads or writes real-time audio data. There are two types of ports: input and output. An input port receives a real-time stream of audio data from an audio input device. An output port sends a single real-time stream of audio data to an output device or devices. A single application may have multiple ports open simultaneously, or multiple applications may have ports open, either sharing audio devices or using multiple audio devices. There is, however, a system-dependent limit to the total number of audio ports active on a given system. This limit can be found by retrieving the value of the AL_MAX_PORTS parameter on the AL_SYSTEM resource; see alParams(3dm) and alGetParams(3dm) for information on how to do this.

As soon as the call to alOpenPort completes successfully, the port is considered "open."
This means it will be filling or draining audio data in real-time at the rate of the audio device to which the port is connected. The application must read or write enough data frequently enough that the port does not underflow or overflow. Refer to alReadFrames(3dm), alWriteFrames(3dm), alDiscardFrames(3dm), and alZeroFrames(3dm) for more information on how to read and write audio data to and from a port.

Also note that an open audio port consumes CPU and memory resources even if the application is not actively reading or writing audio data. If your application is not using an audio port, it is best to close it.

The default port has a 50,000 sample frame stereo buffer, using a 16-bit two's complement sample format.

If successful, alOpenPort(3dm) returns a non-zero ALport handle for the port. Otherwise, alOpenPort(3dm) returns a null (0) ALport and sets an error code, which can be retrieved via oserror(3C). alOpenPort can fail with the following error codes:

AL_BAD_CONFIG
    config is invalid.

AL_BAD_DIRECTION
    direction is neither "r" nor "w."

AL_BAD_OUT_OF_MEM
    insufficient memory is available to allocate the ALport, or the device has refused the connection for some other reason, including: 1) the microcode is not yet loaded on the Indigo R4000 DSP; 2) another subcode port is currently writing the same subcode format to the device.

AL_BAD_DEVICE_ACCESS
    audio hardware is not available, or is improperly configured.

AL_BAD_DEVICE
    the device given in the config is bad, either because it does not exist, or because it has the wrong direction (input vs. output).

AL_BAD_NO_PORTS
    no audio ports are currently available.

AL_BAD_SAMPFMT
    the device given in the config does not support the sample format given in the config. This should only occur if a device does not support a subcode sample format.

SEE ALSO: alClosePort(3dm), alNewConfig(3dm), alSetConfig(3dm), ALqueryparams(3dm), oserror(3C)
http://nixdoc.net/man-pages/IRIX/man3dm/ALopenport.3dm.html
- Using the public Folder
- Adding Bootstrap
- Adding Flow
- Adding Custom Environment Variables
- Can I Use Decorators?
- Integrating with a Node Backend
- Proxying API Requests in Development
- Using HTTPS in Development
- Generating Dynamic <meta> Tags on the Server

Available Scripts

In the project directory, you can run:

npm start

Runs the app in the development mode.

Using the public Folder

Note: this feature is available with `react-scripts@0.5.0` and higher.

Normally we encourage you to import assets in JavaScript files as described above. This mechanism provides a number of benefits.

To fix this, change your .flowconfig to look like this:

[ignore]
<PROJECT_ROOT>/node_modules/fbjs/.*

Re-run flow, and you shouldn't get any extra issues.

Adding Custom Environment Variables

Note: this feature is available with `react-scripts@0.2.3` and higher.
https://hub.docker.com/r/aabrook/iot-dashboard/
#include <uniquemucroom.h> This class implements a unique MUC room. A unique MUC room is a room with a non-human-readable name. It is primarily intended to be used when converting one-to-one chats to multi-user chats. XEP version: 1.21 Definition at line 33 of file uniquemucroom.h. Creates a new abstraction of a unique Multi-User Chat room. The room is not joined automatically. Use join() to join the room, use leave() to leave it. See MUCRoom for detailed info. Definition at line 23 of file uniquemucroom.cpp. Virtual Destructor. Definition at line 28 of file uniquemucroom.cpp. Join this room. Reimplemented from MUCRoom. Definition at line 33 of file uniquemucroom.cpp.
https://camaya.net/api/gloox-0.9.9.12/classgloox_1_1UniqueMUCRoom.html
Some more overview information on export can be found elsewhere in this documentation. Exported templates are templates declared with the keyword export. Exporting a class template is equivalent to exporting each of its static data members and each of its non-inline member functions. An exported template is special because its definition does not need to be present in a translation unit that uses that template. In other words, the definition of an exported (non-class) template does not need to be explicitly or implicitly included in a translation unit that instantiates that template. For example, the following is a valid C++ program consisting of two separate translation units:

// File 1:
#include <stdio.h>

static void trace() { printf("File 1\n"); }

// declaration only
export template <class T> T const& min(T const&, T const&);

int main()
{
    trace();
    return min(2, 3);
}

// File 2:
#include <stdio.h>

static void trace() { printf("File 2\n"); }

// The definition
export template <class T>
T const& min(T const &a, T const &b)
{
    trace();
    return a < b ? a : b;
}

Support for exported templates can be enabled using the --export command-line option. Export is enabled by default in strict ANSI mode; that is, -A or --strict. With Comeau C++ for MS-Windows you'd use --A. For example, the program above could be built as follows with Comeau C++:

como --export -c file_1.c
como --export -c file_2.c
como file_1.o file_2.o

Of course, other combinations will work too, such as:

como -A file_1.c file_2.c   # Use --A under MS-Windows

When a file containing definitions of exported templates is compiled, a file with a ".et" suffix is created and some extra information is included in the associated ".ti" file. The ".et" files are used later by Comeau C++ to find the translation unit that defines a given exported template. When a file that potentially makes use of exported templates is compiled, Comeau C++ must be told where to look for ".et" files for exported templates used by a given translation unit. By default, the compiler looks in the current directory. Optionally, other directories may be specified with the --template_directory option.

Strictly speaking, the ".et" files are only really needed when it comes time to generate an instantiation. This means that code using exported templates can be compiled without having the definitions of those templates available, in the actual source file or header files being compiled. Those definitions must be available by the time Comeau "prelinking" is done (or when explicit instantiation is done). The ".et" files only inform Comeau C++ about the location of exported template definitions; they do not actually contain those definitions. The sources containing the exported template definitions must therefore be made available at the time of instantiation (usually, when prelinking is done). This is simply done as per the example above. Note that the export facility is not a mechanism for avoiding the publication of template definitions in source form. As with the .ti files generated by Comeau C++, in many cases you need not be concerned about the .et files. In those cases where you do care, details are provided below about their structure.

The simultaneous processing of the primary and secondary translation units enables Comeau C++ to create instantiations of the exported templates (which can include entities from both translation units). This process may reveal the need for additional instantiations of exported templates, which in turn can cause additional secondary translation units to be loaded. As a consequence, using exported templates may require considerably more memory than similar uses of regular (included) templates. This of course is true whenever more instantiations are necessary, even if you are not using exported templates. When secondary translation units are processed, the declarations they contain are checked for consistency.
This process may report errors that would otherwise not be caught. This is an unobvious benefit. Many of these errors are so-called "ODR violations" (ODR stands for "one-definition rule"). For example:

// File 3:
struct X {
    int x;
};

export template <class T> T const& min(T const&, T const&);

int main()
{
    return min(2, 3);
}

// File 4:
struct X {
    unsigned x;  // Error: X::x declared differently in File 3
};

export template <class T>
T const& min(T const &a, T const &b)
{
    return a < b ? a : b;
}

If there are no errors, the instantiations are generated in the output associated with the primary translation unit (or in separate associated files when in Comeau C++'s "one-instantiation-per-object" mode). Of course, this may also require that entities with internal linkage in secondary translation units be "externalized" so they can be accessed from the instantiations in the primary translation unit. As mentioned above, in many cases you need not be concerned about these details.

With exported templates, users of the library must also have access to the source code of the exported templates and the information contained in the associated ".et" files, as discussed above. This information, therefore, should be placed in a directory that is distributed along with the include and lib directories: this is the "export directory". It can be specified using the --template_directory option when compiling client programs. If not, by default the current directory is used.

The recommended procedure for a library author to use to build the export directory is as follows:

The export_info file consists of a series of lines of the form include=x or sys_include=x, where x is a path name to be placed on the include search path. The directories are searched in the order in which they are encountered in the export_info file. The file can also contain blank lines, and comments which begin with a "#". Spaces are ignored but tabs are not currently permitted.
For example:

# The include directories to be used for the xyz library
include = /disk1/xyz/include
sys_include = /disk2/abc/include
include=/disk3/jkl/include

The include search path specified for a client program is ignored by Comeau C++ when it processes the source in the export library, except when no export_info file is provided. Command-line macro definitions specified for a client program are also ignored by Comeau C++ when processing a source file from the export library; the command-line macros specified when the corresponding ".et" file was produced do apply. All other compilation options (other than the include search path and command-line macro definitions) used when recompiling the exported templates will be used to compile the client program. There is no requirement that the include directory and lib directory need be different, but it is often done for organizational purposes. This same consideration comes into play with the export directory. When a library is installed on a new system, it is likely that the export_info file will need to be adapted to reflect the location of the required headers on that system.

// the end
http://www.comeaucomputing.com/4.0/docs/userman/export.html
I have a conda environment where I have installed pypyodbc and now I am trying to install the blpapi package with the following command:

conda install -c dsm blpapi

Solving environment: failed

UnsatisfiableError: The following specifications were found to be in conflict:
  - blpapi
  - pypyodbc

Use "conda info <package>" to see the dependencies for each package.

I have tried running "conda info blpapi" and "conda info pypyodbc", but no dependencies are shown. Why is that? Furthermore, is there another way to find the package dependencies?
https://www.edureka.co/community/24726/conflicting-dependencies-of-pypyodbc-and-blpapi?show=24728
Andrew May
Saul Candib
Microsoft Corporation

Published: June 2004
Updated: July 2005

Applies to: Microsoft® Office OneNote™ 2003 SP1

Summary: Learn about the new extensibility features available for developers in Microsoft Office OneNote 2003 SP1. The new OneNote 1.1 Type Library includes functionality that enables you to programmatically import images, ink, and HTML into OneNote. (21 printed pages)

Contents:
Introduction
Using the CSimpleImporterClass
Conclusion

Introduction

Microsoft® Office OneNote™ 2003 Service Pack 1 (SP1) enables your applications to interoperate with OneNote in an important, fundamental way: you can programmatically add content that includes HTML, images, and ink (such as from a Tablet PC) to OneNote notebooks. You can even create the folder, section, or page onto which you want to place your content.

Note: These extensibility features are available only in OneNote 2003 SP1. You can download a copy of OneNote 2003 Service Pack 1 from Office Online.

Using the CSimpleImporterClass

OneNote SP1 exposes a single class, CSimpleImporterClass, which enables you to add content to a OneNote notebook programmatically. You can add text in HTML format, images, and even ink from a Tablet PC. The CSimpleImporterClass enables you to specify where in the notebook you want to place the content; you can even create folders, sections, and pages for content, and then programmatically display the desired page. The import functionality of OneNote also lets you later delete the content you import.

The CSimpleImporterClass consists of two methods: Import and NavigateToPage.

To use the CSimpleImporterClass, you must add a reference to the OneNote 1.1 Type Library to your project. To add a reference in Visual Studio .NET, in the Solution Explorer window, right-click References and then click Add Reference. On the COM tab, select OneNote 1.1 Type Library in the list, click Select, and then click OK.
While this article focuses on using .NET-based languages to implement the import functionality in OneNote, you can also use the OneNote 1.1 Type Library with unmanaged code, such as Microsoft Visual Basic® 6.0 or the Microsoft Visual C++® development system.

The Import method has the following signature:

Import (bstrXml as String)

The method takes an XML string that describes the content object(s) you want to import as well as the location in the notebook where you want them placed. You can also use the Import method to delete objects you have previously placed in the notebook.

When your application calls the Import method, OneNote executes it with minimal intrusion on the user. If OneNote is not already running, it opens, displaying the splash screen. But if OneNote is already running, importing content does not change the user's location in the notebook. To change the focus of OneNote to the page containing the new content, use the NavigateToPage method, discussed later in this article.

If the Import method fails, OneNote does not display an error to the user. However, the COM interface does return one of the following errors to the application making the call:

OneNote parses the XML string that you pass to the Import method linearly. If OneNote encounters an error, it terminates the import and does not process any subsequent content in the string. Any previous, successful imports from the string are not rolled back. For example, suppose you generate an XML string that contains two EnsurePage elements and two PlaceObjects elements. Furthermore, suppose that the first PlaceObjects element passes a non-existent GUID for its pageGUID attribute. In this case, OneNote retains any folders, sections, and pages already created by the EnsurePage elements. The first PlaceObjects element fails, and once it does, OneNote terminates the import and doesn't read the rest of the string, whether or not the second PlaceObjects element is well-formed and contains a valid GUID.
The following figures outline the XML schema to which the import string must adhere.

Figure 1. XML Schema Structure of the Root <Import> Element

Figure 2. Schema Structure of the <PlaceObjects> Element

The OneNote data import schema can be found at The OneNote 1.1 SimpleImport XML Schema. The OneNote data import schema is included as part of the download, Microsoft Office 2003 XML Reference Schemas.

There are two elements directly below the root Import element. Use the first element, EnsurePage, to make sure the folder, section, and page on which you want to place content exists. Use the second element, PlaceObjects, to place or delete objects from the page. The schema requires that the root element contain at least one of either element.

Before you can import content, the target page(s) for that content must exist in the OneNote notebook. You can either import content to an existing page, or you can create a new page dynamically. You use the EnsurePage element to verify the existence of or to create the target pages for your content. For each page you specify in an EnsurePage element, OneNote checks to determine if the page exists, and if not, creates it. You can even create the folder and section that contain the page, if they do not already exist.

Note: All GUIDs passed to OneNote methods must be in registry format, which includes being surrounded by curly braces ({}).

By default, OneNote inserts each new page at the end of the specified section. If you specify a page GUID for the insertAfter attribute, OneNote inserts the new page as a sub-page of the page for which you specified a GUID.
In such cases, the sub-page shares the title and date with the other pages in the page series. If the page you specify does not exist (for example, if it was never added, or if the user deleted it), OneNote inserts the new page at the end of the specified section along with any specified title and date values.

Consider the following example. This EnsurePage element specifies a page in a OneNote section (the path, GUID, and title values here are illustrative placeholders):

<EnsurePage path="C:\My Notebook\Sample Section.one"
    guid="{4745FC50-1A72-4a74-B856-D7A1F03388A3}"
    title="Imported Page"/>

Note: Be sure to include the .one file extension for the path attribute, as shown above. If you fail to include that extension, OneNote does not recognize the imported content after you close and then reopen the application. The sample application included with this article checks for the file extension and supplies it if it is missing.

OneNote uses the optional attributes of the EnsurePage element when it creates a new page. If you specify attributes for an existing page, OneNote leaves the page attributes unchanged. For example, if you use a GUID for an existing page and you specify a title that differs from that page's current title, OneNote does not change the page title. Additionally, OneNote searches only the path you specify for the desired page GUID. If the page GUID you specify does not exist in the specified section, OneNote creates it; it does not look for the GUID in other sections of the notebook.

You can use multiple EnsurePage elements to create multiple pages within the OneNote notebook. You are not required to include an EnsurePage element for each page on which you want to place content. However, if you use the PlaceObjects element to place objects on a page that does not exist, the Import method fails. In some cases, this may be the outcome you want: for example, you might use this method if you want to update content on a page only if the page still exists, but not if the page has been deleted by the user.
Once you ensure that the pages onto which you want to import data exist in the OneNote notebook, you can start placing objects on them by using the PlaceObjects element. You can import multiple objects to multiple pages by creating a PlaceObjects element for each page on which you want to place content.

The following is a typical XML string that you might pass to the Import method. Note that the string contains all the elements necessary to create a complete, well-formed XML document. This XML string places three new objects onto an existing page and deletes an existing object (the pagePath, pageGuid, namespace, and first object guid values below are placeholders):

<?xml version="1.0"?>
<Import xmlns="...">
  <PlaceObjects pagePath="C:\My Notebook\Sample Section.one" pageGuid="{...}">
    <Object guid="{...}">
      <Position x="72" y="72"/>
      <Ink>
        <File path="c:\ink.isf"/>
      </Ink>
    </Object>
    <Object guid="{7EA551C4-F778-40ce-9181-21A3DB6D33CA}">
      <Position x="72" y="432"/>
      <Outline width="360">
        <Html>
          <Data>
            <![CDATA[ <html><body><p>Sample text here.</p></body></html> ]]>
          </Data>
        </Html>
      </Outline>
    </Object>
    <Object guid="{1A6648BA-D792-48f1-AC6A-43DF6E258851}">
      <Delete/>
    </Object>
  </PlaceObjects>
</Import>

The following Visual Basic .NET example demonstrates a basic implementation of the OneNote import functionality. The code displays a dialog box that enables the user to specify an XML file, and then passes the contents of that XML file to the Import method. This example assumes that the specified XML file conforms to the OneNote data import schema. This example also assumes that the project contains a reference to the OneNote 1.1 Type Library.

To highlight how the Import method is implemented, the previous example assumes that an XML file already exists to provide the string to pass the Import method. In most cases, however, the application that calls the Import method first creates the XML string itself, as shown in the following example. For more information about using the .NET Framework to create XML, see Well-Formed XML Creation with the XMLTextWriter. In addition, most applications need to create and assign GUIDs to the pages and objects they create.
Use the NewGuid method to create a new GUID, and the ToString method to get the string representation of the value of the GUID, which the XML string requires. For more information, see GUID Structure in the .NET Framework Class Library. The following sample application, written in both Visual Basic .NET and C#, shows how you can programmatically generate XML strings and object GUIDs for use with the OneNote import functionality. In this example, the user specifies a section name and, optionally, a file location and a title, for a new page that the application then creates by using the CSimpleImporterClass.Import method. Figure 3 shows the Windows Form user interface that you must create as part of the sample application. Figure 3. Sample Application User Interface When the user clicks the Create Page button, the btnCreate_Click event handler runs the sample code. Once the sample has stored the user input in two variables, it calls the NewGuid method to generate a new GUID for the page to create. Note that because the NewGuid method is a shared method, you do not have to instantiate a Guid object in order to call it. OneNote only accepts GUIDs in registry format, which includes curly braces ({}). However, the NewGuid method does not generate GUIDs in registry format. In order for OneNote to correctly process it, you must first wrap the generated GUID in curly braces. The sample then constructs the simple XML document required for the Import method. The code employs an XMLTextWriter object and its various methods to generate an XML stream. The code creates Import and EnsurePage elements and their required attributes. Then the code creates a PlaceObjects element, along with its required attributes. Within the PlaceObjects element, the code creates an Object element that in turn contains a Position element and nested Outline, Html, and Data elements to import a simple text string and place it at an arbitrary position on the page the user specifies. 
If the user specifies a path to an existing section or the path to a new section, OneNote creates the section in the location specified. Otherwise, if the user specifies no path, OneNote creates a new section in the default location set in the Options dialog box.

If the user fails to enter a value for the section name, or if the value entered is invalid (for example, if it contains illegal characters, such as ">"), the sample program throws an exception, and the exception handler displays a message box notifying the user that the name or path entered is unacceptable. At that point, the user has the option of re-entering the path or closing the sample application. If the user specifies a section name but fails to append the .one file extension, the application adds it.

Because the Import method takes a string, the sample reads the generated XML stream back into a string (which also makes it easy to show what the generated XML looks like) before passing it to Import, and it retains the section path and page GUID, based on user input, in order to navigate to the new page in OneNote.

Imports System.Xml
Imports System.IO
Imports System.Runtime.InteropServices
. . .
Private Sub btnCreate_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnCreate.Click

    ' (The declarations here are truncated in the source; they are
    ' reconstructed to mirror the C# version below.)
    Dim sectionPath As String
    Dim pageTitle As String
    Dim pageGuid As String
    Dim objectGuid As String

    sectionPath = Me.txtSectionPath.Text.ToString
    pageTitle = Me.txtPageTitle.Text.ToString

    'Check for missing section name
    While (sectionPath = "")
        MessageBox.Show("Please include path", "No Path", _
            MessageBoxButtons.OK)
        Return
    End While

    'Check for .one file extension, and add if missing
    If Not (sectionPath.EndsWith(".one")) Then
        sectionPath = sectionPath + ".one"
    End If

    'Generate a new GUID for the page to be created
    pageGuid = "{" & Guid.NewGuid.ToString & "}"

    'Generate a new GUID for the object to be created
    '(this step is missing from the source; it mirrors the C# version)
    objectGuid = "{" & Guid.NewGuid.ToString & "}"

    'Generate the XML as a stream
    With XmlImportWriter
        .WriteStartDocument()

        'Write opening (root) Import element tag
        'and specify OneNote schema namespace
        .WriteStartElement("Import")
        .WriteAttributeString("xmlns", _
            "")

        'Write opening EnsurePage element tag
        .WriteStartElement("EnsurePage")
        'Write required path attribute
        .WriteAttributeString("path", sectionPath)
        'Write required guid attribute
        .WriteAttributeString("guid", pageGuid)
        'Write optional title attribute, if title was specified
        If Not pageTitle = "" Then
            .WriteAttributeString("title", pageTitle)
        End If
        'Write closing EnsurePage element tag
        .WriteEndElement()

        'Write opening PlaceObjects element and attributes
        .WriteStartElement("PlaceObjects")
        .WriteAttributeString("pagePath", sectionPath)
        .WriteAttributeString("pageGuid", pageGuid)

        'Write opening Object element tag and attributes
        .WriteStartElement("Object")
        .WriteAttributeString("guid", objectGuid)

        'Write opening Position element tag and attributes
        .WriteStartElement("Position")
        .WriteAttributeString("x", "72")
        .WriteAttributeString("y", "72")
        'Write closing Position element tag
        .WriteEndElement()

        'Write opening Outline element tag and attributes
        .WriteStartElement("Outline")
        .WriteAttributeString("width", "360")
        'Write opening Html element tag
        .WriteStartElement("Html")
        'Write opening Data element tag
        .WriteStartElement("Data")
        'Write CData
        .WriteCData("<html><body><p>my sample text</p></body></html>")
        'Write closing Data element tag
        .WriteEndElement()
        'Write closing Html element tag
        .WriteEndElement()
        'Write closing Outline element tag
        .WriteEndElement()
        'Write closing Object element tag
        .WriteEndElement()
        'Write closing PlaceObjects element tag
        .WriteEndElement()
        'Write closing Import element tag
        .WriteEndElement()
        'Write closing Document element tag
        .WriteEndDocument()
    End With

    '(The stream handling, the call to Import, and the opening of the
    'Try block are truncated in the source; the C# version below shows
    'the equivalent steps.)

    Catch ex As COMException
        If (ex.ErrorCode = &H80041002) Then
            MessageBox.Show("Invalid character in section name", _
                "Invalid Character", MessageBoxButtons.OK)
            Return
        ElseIf (ex.ErrorCode = &H80041001) Then
            MessageBox.Show("Please enter a section name.", _
                "No Section Name", MessageBoxButtons.OK)
            Return
        Else
            MessageBox.Show("Please enter a valid section name.", _
                "Invalid Name", MessageBoxButtons.OK)
            Return
        End If
    End Try

    'Navigate to new page, based on path and GUID variables
    OneNoteImporter.NavigateToPage(sectionPath, pageGuid)
End Sub

The C# version of the same handler:

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Windows.Forms;
using System.Xml;

private void btnCreate_Click(object sender, System.EventArgs e)
{
    string sectionPath;
    string pageTitle;
    string pageGuid;
    string objectGuid;
    string strEnsurePage;

    MemoryStream XmlImportStream = new MemoryStream();
    XmlTextWriter XmlImportWriter = new XmlTextWriter(XmlImportStream, null);

    // Declare instance of CSimpleImporterClass
    OneNote.CSimpleImporterClass OneNoteImporter;

    // Store user input for section name (required) and path (optional)
    sectionPath = this.txtSectionPath.Text.ToString();

    // Check for missing section name
    if (sectionPath == "")
    {
        MessageBox.Show("Please enter the section name.",
            "No section name", MessageBoxButtons.OK);
        return;
    }

    // Check for .one file extension, and add if missing
    if (!sectionPath.EndsWith(".one"))
    {
        sectionPath = sectionPath + ".one";
    }

    pageTitle = this.txtPageTitle.Text.ToString();

    // Generate a new GUID for the page to be created
    pageGuid = "{" + Guid.NewGuid().ToString() + "}";

    // Generate a new GUID for the object to be created
    objectGuid = "{" + Guid.NewGuid().ToString() + "}";

    // Generate the XML as a stream
    XmlImportWriter.WriteStartDocument();

    // Write opening (root) Import element tag
    // and specify OneNote schema namespace
    XmlImportWriter.WriteStartElement("Import");
    XmlImportWriter.WriteAttributeString("xmlns", "");

    // Write opening EnsurePage element tag
    XmlImportWriter.WriteStartElement("EnsurePage");
    // Write required path attribute
    XmlImportWriter.WriteAttributeString("path", sectionPath);
    // Write required guid attribute
    XmlImportWriter.WriteAttributeString("guid", pageGuid);
    // Write optional pageTitle attribute,
    // if page title was specified by user
    if (pageTitle != "")
    {
        XmlImportWriter.WriteAttributeString("title", pageTitle);
    }
    // Write closing EnsurePage element tag
    XmlImportWriter.WriteEndElement();

    // Write opening PlaceObjects element tag and attributes
    XmlImportWriter.WriteStartElement("PlaceObjects");
    XmlImportWriter.WriteAttributeString("pagePath", sectionPath);
    XmlImportWriter.WriteAttributeString("pageGuid", pageGuid);

    // Write opening Object element tag and attributes
    XmlImportWriter.WriteStartElement("Object");
    XmlImportWriter.WriteAttributeString("guid", objectGuid);

    // Write opening Position element tag and attributes
    XmlImportWriter.WriteStartElement("Position");
    XmlImportWriter.WriteAttributeString("x", "72");
    XmlImportWriter.WriteAttributeString("y", "72");
    // Write closing Position element tag
    XmlImportWriter.WriteEndElement();

    // Write opening Outline element tag and attributes
    XmlImportWriter.WriteStartElement("Outline");
    XmlImportWriter.WriteAttributeString("width", "360");
    // Write opening Html element tag
    XmlImportWriter.WriteStartElement("Html");
    // Write opening Data element tag
    XmlImportWriter.WriteStartElement("Data");
    // Write CData
    XmlImportWriter.WriteCData("<html><body><p>my sample text</p></body></html>");
    // Write closing Data element tag
    XmlImportWriter.WriteEndElement();
    // Write closing Html element tag
    XmlImportWriter.WriteEndElement();
    // Write closing Outline element tag
    XmlImportWriter.WriteEndElement();
    // Write closing Object element tag
    XmlImportWriter.WriteEndElement();
    // Write closing PlaceObjects element tag
    XmlImportWriter.WriteEndElement();
    // Write closing Import element tag
    XmlImportWriter.WriteEndElement();
    // Write closing Document tag
    XmlImportWriter.WriteEndDocument();

    // Flush the XMLTextWriter
    XmlImportWriter.Flush();

    // Move to the start of the XML stream
    XmlImportStream.Position = 0;

    // Create a streamreader for the XML stream
    // (The rest of this statement, the call to Import, and the opening
    // of the try block are truncated in the source.)
    StreamReader XmlImport

    catch (COMException ex)
    {
        switch ((uint) ex.ErrorCode)
        {
            case 0x80041001:
                // Handle missing section name
                MessageBox.Show("Please enter a section name.",
                    "No Section Name", MessageBoxButtons.OK);
                break;
            case 0x80041002:
                // Handle invalid character in section name
                MessageBox.Show("Invalid character in section name.",
                    "Invalid Character", MessageBoxButtons.OK);
                break;
            default:
                // Handle other errors
                MessageBox.Show("Please enter a valid section name.",
                    "Invalid Section Name", MessageBoxButtons.OK);
                break;
        }
        return;
    }

    // Navigate to new page, based on path and GUID variables
    OneNoteImporter.NavigateToPage(sectionPath, pageGuid);
}

By design, when the Import method is executed, users are not distracted by OneNote displaying data they may not want to see or, worse, directed away from the OneNote page currently in use. The new import functionality in OneNote opens up exciting possibilities for interacting with other applications. Any application you create that can save data (either its own or another application's) as HTML text, images, or ISF can now push that content into OneNote and place it wherever you want. And as long as the application retains the GUIDs used, it can update or delete the content it pushed whenever necessary.
http://msdn.microsoft.com/en-us/library/aa168020(office.11).aspx
Explaining variance

We're returning to our portfolio discussion after detours into topics on the put-write index and non-linear correlations. We'll be investigating alternative methods to analyze, quantify, and mitigate risk, including risk-constrained optimization, a topic that figures large in factor research. The main idea is that there are certain risks one wants to bear and others one doesn't. Do you want to be compensated for exposure to common risk factors, or do you want to find and exploit unknown factors? And, perhaps most importantly, can you insulate your portfolio from unexpected risks?

Generally, one will try to build a model that explains the return of an asset in terms of its risk factors. Presumably, this model will help to quantify:

- The influence of a particular risk factor on the asset's return.
- The explanatory power of these risk factors.
- The proportion of the asset's variance due to identified risk factors.

The model generally looks something like the following:

\[r_i = a_i + \sum_{k=1}^{K}\beta_{i,k} F_k + \epsilon_i\]

where:

\(r_i\) = the return for asset \(i\)
\(a_i\) = the intercept
\(\beta_{i,k}\) = asset \(i\)'s exposure to factor \(k\)
\(F_k\) = the return for factor \(k\)
\(\epsilon_i\) = idiosyncratic risk of \(i\), noise term, or fudge factor.1

The model can be extended to the portfolio level too. Risk factors can be as simple or arcane as you like. Common ones include CAPM's \(\beta\) or Fama-French factors; economic variables; and/or technical or statistical metrics like moving averages or cointegration. The problem is that there is no a priori list of factors that describes the majority of returns for any broad class of assets. And even if there were, there's no guarantee those factors would explain returns going forward.
Indeed, factor weightings change all the time and the risk premia associated with some factors may erode or disappear. Just type "Is the value factor dead?" into a Google search and you'll find plenty of debate for and against. While such debates might be fun, fruitful, or frivolous, let's take a step back and think about what we hope our portfolio will accomplish: a satisfactory trade-off between risk and return such that, whatever return goal we have, there's a high probability we accomplish it within the necessary time frame. Warren Buffett's ideal holding period may be forever, but we'll need the cash a lot sooner!

Recall, when we constructed the naive, satisfactory, and mean-variance optimized portfolios in our previous posts, the standard deviation of returns (i.e., volatility) stood in for risk. Volatility as a proxy for risk raises a lot of questions. The most intuitive is that risk in the real world is not a statistic but the actual chance of losing something (capital, for instance). But the beauty of volatility is that it can quantify the probability of such a risk, if one is willing to accept a bunch of simplifying assumptions. We'll leave the question of whether those assumptions are too simple (that is, too unrealistic) for another time.

If one of the biggest risks in portfolio construction is building a portfolio that doesn't achieve the return goal it's meant to achieve, how do we avoid such an event? Volatility can tell us roughly what the probability is that it might occur. But a risk factor model should, presumably, tell us what's driving that risk and what's not, and maybe even help us figure out which risks we should avoid. While it might seem obvious that the first thing to do is identify the risks, we want to build a risk model with common risk factors first, so that we can understand the process before we start getting creative searching for meaningful factors.
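As an aside on that point: under the usual (and admittedly simplistic) normality assumption, volatility converts directly into a probability of falling short of a goal. A back-of-the-envelope sketch with made-up numbers:

```python
from statistics import NormalDist

# Hypothetical portfolio: 7% expected annual return, 15% annual volatility
mu, sigma = 0.07, 0.15
dist = NormalDist(mu, sigma)

# Probability of losing money over one year
p_loss = dist.cdf(0.0)

# Probability of falling short of a 4% annual return goal
p_shortfall = dist.cdf(0.04)
```

Whether returns are close enough to normal for these numbers to mean much is exactly the kind of assumption we flagged above.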
We'll start by bringing back our data series of stocks, bonds, commodities (gold), and real estate and then also call in the classic Fama-French (F-F) three-factor model along with momentum. We're using F-F not because we believe those factors will feature a lot of explanatory power, but because they're expedient and useful. Expedient because the data are readily available and many people are familiar with them, aiding the reproducible research goal of this blog. Useful because they'll be a good way to set the groundwork for the posts that follow.

Our roadmap is the following: graph the F-F factors, show the portfolio simulations for an initial 60-month (five-year) period beginning in 1987, analyze how much the factors explain asset variance, and then look at how much the factors explain portfolio variance. Let's begin.

First, we plot the F-F factors below. Note that we're only covering the first five years of monthly data that match the original portfolio construction. For those unfamiliar with the factors, the risk premium is the return on the stock market less the risk-free rate. SMB is the size factor; i.e., returns to small-cap stocks less large caps. HML is the value factor; i.e., returns to high book-to-price (hence, low price-to-book multiple) stocks (value) less low book-to-price (growth). Momentum is returns to stocks showing positive returns in the last twelve months less those showing negative returns. If you want more details, visit Prof. K. French's data library.

Now we'll simulate 30,000 portfolios that invest in two to four of the four possible assets. Recall this simulation can approximate (hack!) an efficient frontier without going through the convex optimization steps. The red and purple markers are the maximum Sharpe ratio and minimum volatility portfolios. We assume the reader can figure out the maximum (efficient) return portfolio. Kinda pretty.

Now we look at how well these factors explain the returns on each of the assets.
Here, we regress each asset's excess return (return less the risk-free rate) against the four factors and show the \(R^{2}\) for each regression in the graph below. Not surprisingly, stocks enjoy the highest \(R^{2}\) relative to the factors, since those factors are primarily derived from stock portfolios. Note: we're not trying to create the best factor model in this post; rather, we want to establish the intuition behind what we're doing.

Now let's check out the factor sensitivities (or exposures, or betas) for each asset class. We graph the sensitivities below. Predictably, the market risk premium exhibits the highest sensitivity for stocks and the lowest for gold. Surprisingly, momentum sports a modestly negative effect on all the assets. This low sensitivity is not overly mysterious, but the sign of the effect is a bit curious. Whatever the case, the regression output suggests the momentum coefficient is not significantly different from zero. We won't show the p-values here, but the interested reader will see how to extract them within the code presented below.

Now we'll calculate how much the factors explain a portfolio's variance. The result is derived from the following formula based on matrix algebra:

\[Var_{p} = X^{T}(BFB^T + S)X\]

Where:

\(Var_{p}\) = the variance of the portfolio
\(X\) = a column vector of weights
\(B\) = a matrix of factor sensitivities
\(F\) = the covariance matrix of factor returns
\(S\) = the diagonal matrix of residual variance; in other words, the variance of the returns not explained by the factor model.

Having calculated the variances, the question is what this can tell us about the portfolios. Time for some exploratory data analysis! First off, we might be interested to see if there's any relationship between portfolio volatility and explained variance. However, even though a scatterplot of those two metrics creates a wonderfully fantastic graph, it reveals almost no information, as shown below. Who knew finance could be so artistic!
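Before slicing the results, note that the formula above is only a few lines of NumPy. A self-contained sketch with made-up dimensions (four assets, two factors), mirroring the factor_port_var function in the code at the end of the post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up inputs: 4 assets, 2 factors
B = rng.normal(size=(4, 2))              # factor sensitivities (betas)
F_ret = rng.normal(size=(60, 2))         # 60 months of factor returns
F = np.cov(F_ret, rowvar=False)          # factor covariance matrix
S = np.diag(rng.uniform(0.01, 0.02, 4))  # residual (specific) variances
x = np.array([0.4, 0.3, 0.2, 0.1])       # portfolio weights

factor_var = x @ (B @ F @ B.T) @ x       # systematic part
specific_var = x @ S @ x                 # idiosyncratic part
total_var = x @ (B @ F @ B.T + S) @ x    # Var_p = X'(BFB' + S)X
share_explained = factor_var / total_var
```

The "explained variance" numbers discussed below are exactly this share, expressed as a percent.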
What if we group the volatilities into deciles and graph the average explained variance with an annotation for the average volatility of each decile? We show the results below. Note that we've shortened the y-axis to highlight the differences in explained variance. It's not obvious that there's much of a pattern here either. Then again, there needn't be a relationship between the level of portfolio volatility and how much our set of risk factors explains the portfolios' variance.

Now we'll group the portfolios by major asset class weighting as well as include a grouping of relatively equal-weighted portfolios. Using these groupings, we'll calculate the average variance explained by the factors. We select portfolios for the asset groups if the particular asset in that portfolio makes up a greater than 50% weighting. Hence, all portfolios in the stock grouping have a weighting to stocks in excess of 50%. For the relatively equal-weighted portfolios, we include only those portfolios that feature weightings no greater than 30% for any of the assets. This total grouping only amounts to about half the portfolios, so we bucket the rest into a 'Remainder' group. We also calculate the average of the variance explained across all portfolios. We plot the bar chart below.

Predictably, the variance explained by the risk factors is relatively high for stocks, but not as high as for the relatively equal-weighted portfolios. Portfolios with high exposure to the other assets see less than 30% of their variance explained by the risk factors, while the remaining portfolios see almost 40% of their variance explained.

Finally, we'll look at our original four portfolios (Satisfactory, Naive, Max Sharpe, and Max Return) to see how much of their variance is explained by the factor model. Here, the factor model enjoys the highest explanatory power for the Satisfactory and Naive portfolios, much less for the Max Sharpe and Max Return portfolios.
Recall, both the Satisfactory and Naive portfolios had less than a 40% allocation to stocks, so it's interesting that over 40% of their variance is explained by what is primarily an equity risk factor model. This is probably due to the fact that the other \(R^{2}\)s are above 10% and the beta-weighted factor covariance matrix is positive.2 There might also be some additional information captured by the factor returns beyond the stated exposure that drives the higher explanatory power.3 The low stock exposure probably explains why the model accounts for only about a quarter of the variance of the Max Sharpe portfolio. Meanwhile, the model just about hits the bullseye for the Max Return portfolio, as it's almost 100% stocks and the model's \(R^{2}\) for stocks was just about 40%!

Where does this leave us? We've built a factor model that does an OK job explaining asset returns and a modestly better job explaining portfolio variance. Now that we've established the factor model process, we'll look to see if we can identify factors that are actually good predictors of returns and variance. Note that the factor model we used here was entirely coincident with the asset returns. We want risk factors that predict future returns and variance. Until we find them, the code is below.

A few administrative notes. First, we've made changes to the blog behind the scenes, including purchasing a domain name. The DNS configuration might still be a little buggy, but we hope that this will solve the problem we were having with subscription delivery. Thanks for bearing with us on that one. If you wish to subscribe, you may do so above in the right-hand corner. If you do subscribe but notice you're not getting any updates, please email us at content at optionstocksmachines dot com and we'll try to sort it out. Second, as much as we find providing the code to our posts in both R and Python worthwhile, they soak up a lot of time, which we have less and less of.
Going forward, we'll still try to provide both, but may, from time to time, only code in one language or the other. The tagging will indicate which it is. We'll still provide the code below, of course. This post we cheated a bit, since we coded in Python but converted to R using the reticulate package. Third, you'll find a rant here or there in the code below. Reticulate cannot seem to handle some of the flexibility of Python, so we were getting a number of errors for code that we know works perfectly well in jupyter notebooks. If you know what went wrong, please let us know.

# Built using R 4.0.3 and Python 3.8.3

### R
## Load packages
suppressPackageStartupMessages({
  library(tidyquant)  # Not really necessary, but force of habit
  library(tidyverse)  # Not really necessary, but force of habit
  library(reticulate) # development version
})

# Allow variables in one python chunk to be used by other chunks.
knitr::knit_engines$set(python = reticulate::eng_python)

### Python from here on!
# Load libraries
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib
import matplotlib.pyplot as plt
import os
os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = 'C:/Users/usr/Anaconda3/Library/plugins/platforms'
plt.style.use('ggplot')

## Load asset data
df = pd.read_pickle('port_const.pkl')
# Check out for how we pulled in the data.
df.iloc[0,3] = 0.006 # Interpolation

## Load ff data
ff_url = ""
col_names = ['date', 'mkt-rfr', 'smb', 'hml', 'rfr']
ff = pd.read_csv(ff_url, skiprows=6, header=0, names=col_names)
ff = ff.iloc[:364,:]
# (The step that loads the momentum factor and merges it into ff_mo is
# truncated in the source.)
ff_mo = ff_mo[(ff_mo['date'] >= "1987-01-31") & (ff_mo['date'] <= "2019-12-31")].reset_index(drop=True)

## Plot ff
ff_factors = ['Risk premium', 'SMB', 'HML', 'Momentum']
fig, axes = plt.subplots(4, 1, figsize=(10,8))
for idx, ax in enumerate(fig.axes):
    ax.plot(ff_mo.iloc[:60,0], ff_mo.iloc[:60,idx+1], linestyle="dashed", color='blue')
    ax.set_title(ff_factors[idx], fontsize=10, loc='left')
    if idx % 2 != 0:
        ax.set_ylabel("Returns (%)")
fig.tight_layout(pad=0.5)
plt.tight_layout()
plt.show()

## Abbreviated simulation function
class Port_sim:
    import numpy as np
    import pandas as pd

    def calc_sim_lv(df, sims, cols):
        wts = np.zeros(((cols-1)*sims, cols))
        count = 0
        for i in range(1, cols):
            for j in range(sims):
                a = np.random.uniform(0, 1, (cols-i+1))
                b = a/np.sum(a)
                c = np.random.choice(np.concatenate((b, np.zeros(i-1))), cols, replace=False)
                wts[count,:] = c
                count += 1
        mean_ret = df.mean()
        port_cov = df.cov()
        rets = []
        vols = []
        for i in range((cols-1)*sims):
            rets.append(np.sum(wts[i,:]*mean_ret))
            vols.append(np.sqrt(np.dot(np.dot(wts[i,:].T, port_cov), wts[i,:])))
        port = np.c_[rets, vols]
        sharpe = port[:,0]/port[:,1]*np.sqrt(12)
        return port, wts, sharpe

## Simulate portfolios
port1, wts1, sharpe1 = Port_sim.calc_sim_lv(df.iloc[1:60, 0:4], 10000, 4)

## Plot simulated portfolios
max_sharp1 = port1[np.argmax(sharpe1)]
min_vol1 = port1[np.argmin(port1[:,1])]

fig = plt.figure(figsize=(12,6))
ax = fig.add_subplot(1,1,1)
sim = ax.scatter(port1[:,1]*np.sqrt(12)*100, port1[:,0]*1200, marker='.', c=sharpe1, cmap='Blues')
ax.scatter(max_sharp1[1]*np.sqrt(12)*100, max_sharp1[0]*1200, marker=(4,1,0), color='r', s=500)
ax.scatter(min_vol1[1]*np.sqrt(12)*100, min_vol1[0]*1200, marker=(4,1,0), color='purple', s=500)
ax.set_title('Simulated portfolios', fontsize=20)
ax.set_xlabel('Risk (%)')
ax.set_ylabel('Return (%)')
cbaxes = fig.add_axes([0.15, 0.6, 0.01, 0.2])
clb = fig.colorbar(sim, cax=cbaxes)
clb.ax.set_title(label='Sharpe', fontsize=10)
plt.tight_layout()
plt.show()

## Calculate R-squareds for asset classes
X = sm.add_constant(ff_mo.iloc[:60,1:5])
rsq = []
for i in range(4):
    y = df.iloc[:60,i].values - ff_mo.loc[:59, 'rfr'].values
    mod = sm.OLS(y, X).fit().rsquared*100
    rsq.append(mod)

asset_names = ['Stocks', 'Bonds', 'Gold', 'Real estate']
fact_plot = pd.DataFrame(zip(asset_names, rsq), columns=['asset_names', 'rsq'])

## Plot R-squareds
ax = fact_plot['rsq'].plot(kind="bar", color='blue', figsize=(12,6))
ax.set_xticklabels(asset_names, rotation=0)
ax.set_ylabel("$R^{2}$")
ax.set_title("$R^{2}$ for Fama-French Four Factor Model")
ax.set_ylim([0,45])

## Iterate through annotation
for i in range(4):
    plt.annotate(str(round(rsq[i]))+'%', xy=(fact_plot.index[i]-0.05, rsq[i]+1))
plt.tight_layout()
plt.show()

## Note: reticulate does not like plt.annotate() and throws errors left, right, and center if you
## don't ensure that the x ticks are numeric, which means you have to label the xticks separately
## through the axes setting. Very annoying!

# Find factor exposures
assets = df.iloc[:60,:4]
betas = pd.DataFrame(index=assets.columns)
pvalues = pd.DataFrame(index=assets.columns)  # (missing in the source; needed for the p-value lines below)
error = pd.DataFrame(index=assets.index)

# Create betas and error
# Code derived from Quantopian
X = sm.add_constant(ff_mo.iloc[:60,1:5])
for i in assets.columns:
    y = assets.loc[:,i].values - ff_mo.loc[:59,'rfr'].values
    result = sm.OLS(y, X).fit()
    betas.loc[i,"mkt_beta"] = result.params[1]
    betas.loc[i,"smb_beta"] = result.params[2]
    betas.loc[i,"hml_beta"] = result.params[3]
    betas.loc[i,'momo_beta'] = result.params[4]
    # We don't show the p-values in the post, but did promise to show how we coded it.
    pvalues.loc[i,"mkt_p"] = result.pvalues[1]
    pvalues.loc[i,"smb_p"] = result.pvalues[2]
    pvalues.loc[i,"hml_p"] = result.pvalues[3]
    pvalues.loc[i,'momo_p'] = result.pvalues[4]
    error.loc[:,i] = (y - X.dot(result.params)).values

# Plot the betas
(betas*100).plot(kind='bar', width=0.75, color=['darkblue', 'blue', 'grey', 'darkgrey'], figsize=(12,6))
plt.legend(['Risk premium', 'SMB', 'HML', 'Momentum'], loc='upper right')
plt.xticks([0,1,2,3], ['Stock', 'Bond', 'Gold', 'Real estate'], rotation=0)
plt.ylabel(r'Factor $\beta$s ')
plt.title('')
plt.tight_layout()
plt.show()

# Create variance contribution function
def factor_port_var(betas, factors, weights, error):
    B = np.array(betas)
    F = np.array(factors.cov())
    S = np.diag(np.array(error.var()))
    factor_var = weights.dot(B.dot(F).dot(B.T)).dot(weights.T)
    specific_var = weights.dot(S).dot(weights.T)
    return factor_var, specific_var

# Iterate variance calculation through portfolios
facts = ff_mo.iloc[:60, 1:5]
fact_var = []
spec_var = []
for i in range(len(wts1)):
    out = factor_port_var(betas, facts, wts1[i], error)
    fact_var.append(out[0])
    spec_var.append(out[1])

vars = np.array([fact_var, spec_var])
# (This definition is missing in the source; exp_var is reconstructed from
# how it is used below: factor variance as a percent of total variance.)
exp_var = vars[0]/(vars[0] + vars[1])*100

## Find max sharpe and min vol portfolios
max_sharp_var = [exp_var[np.argmax(sharpe1)], port1[np.argmax(sharpe1)][1]]
min_vol_var = [exp_var[np.argmin(port1[:,1])], port1[np.argmin(port1[:,1])][1]]

## Plot variance explained vs. volatility
fig = plt.figure(figsize=(12,6))
ax = fig.add_subplot(1,1,1)
sim = ax.scatter(port1[:,1]*np.sqrt(12)*100, exp_var, marker='.', c=sharpe1, cmap='Blues')
ax.scatter(max_sharp_var[1]*np.sqrt(12)*100, max_sharp_var[0], marker=(4,1,0), color='r', s=500)
ax.scatter(min_vol_var[1]*np.sqrt(12)*100, min_vol_var[0], marker=(4,1,0), color='purple', s=500)
ax.set_title('Portfolio variance due to risk factors vs. portfolio volatility', fontsize=20)
ax.set_xlabel('Portfolio Volatility (%)')
ax.set_ylabel('Risk factor variance contribution (%)')
ax.set_xlim([0,13])
cbaxes = fig.add_axes([0.15, 0.6, 0.01, 0.2])
clb = fig.colorbar(sim, cax=cbaxes)
clb.ax.set_title(label='Sharpe', fontsize=10)
plt.tight_layout()
plt.show()

## Create ranking data frame
rank = pd.DataFrame(zip(port1[:,1], exp_var), columns=['vol', 'exp_var'])
rank = rank.sort_values('vol')
rank['decile'] = pd.qcut(rank['vol'], 10, labels=False)
vol_rank = rank.groupby('decile')[['vol','exp_var']].mean()
vols = (vol_rank['vol'] * np.sqrt(12)*100).values

## Plot explained variance vs. ranking
ax = vol_rank['exp_var'].plot(kind='bar', color='blue', figsize=(12,6))
ax.set_xticklabels([x for x in np.arange(1,11)], rotation=0)
ax.set_xlabel('Decile')
ax.set_ylabel('Risk factor explained variance (%)')
ax.set_title('Variance explained by risk factor grouped by volatility decile\nwith average volatility by bar')
ax.set_ylim([20,40])
for i in range(10):
    plt.annotate(str(round(vols[i],1))+'%', xy=(vol_rank.index[i]-0.2, vol_rank['exp_var'][i]+1))
plt.tight_layout()
plt.show()

## Show grouping of portfolios
## Note we could not get this to work within reticulate, so simply saved the graph as a png.
## This did work in jupyter, however.
wt_df = pd.DataFrame(wts1, columns=assets.columns)

indices = []
for asset in assets.columns:
    idx = np.array(wt_df[wt_df[asset] > 0.5].index)
    indices.append(idx)

eq_wt = []
for i, row in wt_df.iterrows():
    if row.max() < 0.3:
        eq_wt.append(i)

exp_var_asset = []
for i in range(4):
    out = np.mean(exp_var[indices[i]])
    exp_var_asset.append(out)
exp_var_asset.append(np.mean(exp_var[eq_wt]))

# Bucket every portfolio not captured above into the 'Remainder' group.
# (The source's `np.mean(exp_var[~mask])` is a bug: `~` on an array of
# integer indices is bitwise not, not a set complement, so we build a
# boolean mask instead.)
grouped = np.concatenate((np.concatenate(indices), np.array(eq_wt)))
mask = np.ones(len(exp_var), dtype=bool)
mask[grouped] = False
exp_var_asset.append(np.mean(exp_var[mask]))

# Prepend the average across all portfolios for the 'All' bar.
# (This step is missing in the source but is described in the text.)
exp_var_asset = [np.mean(exp_var)] + exp_var_asset

plt.figure(figsize=(12,6))
asset_names = ['Stocks', 'Bonds', 'Gold', 'Real estate']
plt.bar(['All'] + asset_names + ['Equal', 'Remainder'], exp_var_asset, color="blue")
for i in range(len(exp_var_asset)):
    plt.annotate(str(round(exp_var_asset[i])) + '%', xy=(i-0.05, exp_var_asset[i]+1))
plt.title('Portfolio variance explained by factor model for asset and equal-weighted models')
plt.ylabel('Variance explained (%)')
plt.ylim([10,50])
plt.tight_layout()
plt.show()

# This is the error we'd get every time we ran the code in blogdown.
# Error in py_call_impl(callable, dots$args, dots$keywords) :
#   TypeError: only integer scalar arrays can be converted to a scalar index
#
# Detailed traceback:
#   File "<string>", line 2, in <module>
# Calls: local ... py_capture_output -> force -> <Anonymous> -> py_call_impl
# Execution halted
# Error in render_page(f) :
#   Failed to render 'content/post/2020-12-01-port-20/index.Rmd'

## Instantiate original four portfolio weights
satis_wt = np.array([0.32, 0.4, 0.2, 0.08])
equal_wt = np.repeat(0.25, 4)
max_sharp_wt = wts1[np.argmax(sharpe1)]
max_ret_wt = wts1[pd.DataFrame(np.c_[port1, sharpe1], columns=['ret', 'risk', 'sharpe']).sort_values(['ret', 'sharpe'], ascending=False).index[0]]

## Loop through weights to calculate explained variance
wt_list = [satis_wt, equal_wt, max_sharp_wt, max_ret_wt]
port_exp = []
for wt in wt_list:
    out = factor_port_var(betas, facts, wt, error)
    port_exp.append(out[0]/(out[0] + out[1]))
port_exp = np.array(port_exp)

## Graph portfolios
## We didn't even bother trying to make this work in blogdown and just saved direct to a png.
port_names = ['Satisfactory', 'Naive', 'Max Sharpe', 'Max Return']
plt.figure(figsize=(12,6))
plt.bar(port_names, port_exp*100, color='blue')
for i in range(4):
    plt.annotate(str(round(port_exp[i]*100)) + '%', xy=(i-0.05, port_exp[i]*100+0.5))
plt.title('Original four portfolios variance explained by factor models')
plt.ylabel('Variance explained (%)')
plt.ylim([10,50])
plt.show()

- Never thought I would be writing such a dense phrase!↩︎
- Not to criticize Fama-French, but the portfolio sorts that generate the various factors may not perfectly isolate the exposure they're trying to capture. That 'unknown' information could be driving some of the explanatory power. Remember this 'unknown' if and when we tackle principal component analysis in a later post.↩︎
https://www.r-bloggers.com/2020/12/explaining-variance/
Code should execute sequentially if run in a Jupyter notebook

- See the set up page to install Jupyter, Python and all necessary libraries
- Please direct feedback to contact@quantecon.org or the discourse forum

"We may regard the present state of the universe as the effect of its past and the cause of its future" – Marquis de Laplace

In addition to what's in Anaconda, this lecture will need the following libraries:

!pip install --upgrade quantecon

Overview

This lecture introduces the linear state space dynamic system.

The Linear State Space Model

$$
\begin{aligned}
x_{t+1} & = A x_t + C w_{t+1} \\
y_t & = G x_t \\
x_0 & \sim N(\mu_0, \Sigma_0)
\end{aligned} \tag{1}
$$

Given the matrices $ A, C, G $ and draws of $ x_0 $ and $ w_1, w_2, \ldots $, the system (1) pins down the values of the sequences $ \{x_t\} $ and $ \{y_t\} $. Even without these draws, the primitives 1–3 pin down the probability distributions of $ \{x_t\} $ and $ \{y_t\} $. Later we'll see how to compute these distributions and their moments.

We can also work with a weaker assumption on the shocks, namely

$$
\mathbb{E} [w_{t+1} | x_t, x_{t-1}, \ldots ] = 0
$$

This is a weaker condition than that $ \{w_t\} $ is IID with $ w_{t+1} \sim N(0,I) $.

Second-order Difference Equation

Let $ \{y_t\} $ be a deterministic sequence that satisfies

$$
y_{t+1} = \phi_0 + \phi_1 y_t + \phi_2 y_{t-1}
\quad \text{s.t.} \quad y_0, y_{-1} \text{ given} \tag{2}
$$

To map (2) into our state space system (1), we set

$$
x_t =
\begin{bmatrix}
1 \\ y_t \\ y_{t-1}
\end{bmatrix}
\qquad
A = \begin{bmatrix}
1 & 0 & 0 \\
\phi_0 & \phi_1 & \phi_2 \\
0 & 1 & 0
\end{bmatrix}
\qquad
C = \begin{bmatrix}
0 \\ 0 \\ 0
\end{bmatrix}
\qquad
G = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}
$$

You can confirm that under these definitions, (1) and (2) agree.

The next figure shows the dynamics of this process when $ \phi_0 = 1.1, \phi_1=0.8, \phi_2 = -0.8, y_0 = y_{-1} = 1 $. Later you'll be asked to recreate this figure.
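As a quick numerical check of that mapping (a sketch, not the plotting exercise itself), a few lines of Python confirm that iterating the state-space form reproduces the scalar recursion, using the same parameter values:

```python
import numpy as np

phi0, phi1, phi2 = 1.1, 0.8, -0.8
A = np.array([[1.0,  0.0,  0.0],
              [phi0, phi1, phi2],
              [0.0,  1.0,  0.0]])
G = np.array([0.0, 1.0, 0.0])

# x_0 = (1, y_0, y_{-1})' with y_0 = y_{-1} = 1
x = np.array([1.0, 1.0, 1.0])

# Iterate x_{t+1} = A x_t (C = 0 here) and read off y_t = G x_t
y_state = []
for t in range(20):
    y_state.append(G @ x)
    x = A @ x

# Direct recursion y_{t+1} = phi0 + phi1 y_t + phi2 y_{t-1}
y_direct = [1.0]
y_prev, y_curr = 1.0, 1.0
for t in range(19):
    y_prev, y_curr = y_curr, phi0 + phi1 * y_curr + phi2 * y_prev
    y_direct.append(y_curr)
```

The two sequences agree term by term, which is exactly what "(1) and (2) agree" means.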
Univariate Autoregressive Processes¶ We can use (1) to represent the model $$ y_{t+1} = \phi_1 y_{t} + \phi_2 y_{t-1} + \phi_3 y_{t-2} + \phi_4 y_{t-3} + \sigma w_{t+1} \tag{3} $$ where $ \{w_t\} $ is IID and standard normal. To put this in the linear state space format we take $ x_t = \begin{bmatrix} y_t & y_{t-1} & y_{t-2} & y_{t-3} \end{bmatrix}' $ and $$ A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \qquad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} $$ The matrix $ A $ has the form of the companion matrix to the vector $ \begin{bmatrix}\phi_1 & \phi_2 & \phi_3 & \phi_4 \end{bmatrix} $. The next figure shows the dynamics of this process when $$ \phi_1 = 0.5, \phi_2 = -0.2, \phi_3 = 0, \phi_4 = 0.5, \sigma = 0.2, y_0 = y_{-1} = y_{-2} = y_{-3} = 1 $$ Vector Autoregressions¶ Suppose now that in (3): - $ y_t $ is a $ k \times 1 $ vector - $ \phi_j $ is a $ k \times k $ matrix and - $ w_t $ is $ k \times 1 $ Then (3) is termed a vector autoregression. To map this into (1), we set $$ x_t = \begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ y_{t-3} \end{bmatrix} \quad A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \end{bmatrix} \quad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \quad G = \begin{bmatrix} I & 0 & 0 & 0 \end{bmatrix} $$ where $ I $ is the $ k \times k $ identity matrix and $ \sigma $ is a $ k \times k $ matrix. Seasonals¶ We can use (1) to represent - the deterministic seasonal $ y_t = y_{t-4} $ - the indeterministic seasonal $ y_t = \phi_4 y_{t-4} + w_t $ In fact, both are special cases of (3).
With the deterministic seasonal, the transition matrix becomes$$ A = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} $$ It is easy to check that $ A^4 = I $, which implies that $ x_t $ is strictly periodic with period 4:[1]$$ x_{t+4} = x_t $$ Such an $ x_t $ process can be used to model deterministic seasonals in quarterly time series. The indeterministic seasonal produces recurrent, but aperiodic, seasonal fluctuations. Time Trends¶ The model $ y_t = a t + b $ is known as a linear time trend. We can represent this model in the linear state space form by taking $$ A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} a & b \end{bmatrix} \tag{4} $$ and starting at initial condition $ x_0 = \begin{bmatrix} 0 & 1\end{bmatrix}' $. In fact, it’s possible to use the state-space system to represent polynomial trends of any order. For instance, let$$ x_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \qquad A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} $$ It follows that$$ A^t = \begin{bmatrix} 1 & t & t(t-1)/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} $$ Then $ x_t^\prime = \begin{bmatrix} t(t-1)/2 &t & 1 \end{bmatrix} $, so that $ x_t $ contains linear and quadratic time trends. Moving Average Representations¶ A nonrecursive expression for $ x_t $ as a function of $ x_0, w_1, w_2, \ldots, w_t $ can be found by using (1) repeatedly to obtain $$ \begin{aligned} x_t & = Ax_{t-1} + Cw_t \\ & = A^2 x_{t-2} + ACw_{t-1} + Cw_t \nonumber \\ & \qquad \vdots \nonumber \\ & = \sum_{j=0}^{t-1} A^j Cw_{t-j} + A^t x_0 \nonumber \end{aligned} \tag{5} $$ Representation (5) is a moving average representation. 
It expresses $ \{x_t\} $ as a linear function of - current and past values of the process $ \{w_t\} $ and - the initial condition $ x_0 $ As an example of a moving average representation, let the model be $$ A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$ You will be able to show that $ A^t = \begin{bmatrix} 1 & t \cr 0 & 1 \end{bmatrix} $ and $ A^j C = \begin{bmatrix} 1 & 0 \end{bmatrix}' $. Substituting into the moving average representation (5), we obtain $$ x_{1t} = \sum_{j=0}^{t-1} w_{t-j} + \begin{bmatrix} 1 & t \end{bmatrix} x_0 $$ Unconditional Moments¶ Using (1), it’s easy to obtain expressions for the (unconditional) means of $ x_t $ and $ y_t $. We’ll explain what unconditional and conditional mean soon. Letting $ \mu_t := \mathbb{E} [x_t] $ and using linearity of expectations, we find that $$ \mu_{t+1} = A \mu_t \quad \text{with} \quad \mu_0 \text{ given} \tag{6} $$ Here $ \mu_0 $ is a primitive given in (1). The variance-covariance matrix of $ x_t $ is $ \Sigma_t := \mathbb{E} [ (x_t - \mu_t) (x_t - \mu_t)'] $. Using $ x_{t+1} - \mu_{t+1} = A (x_t - \mu_t) + C w_{t+1} $, we can determine this matrix recursively via $$ \Sigma_{t+1} = A \Sigma_t A' + C C' \quad \text{with} \quad \Sigma_0 \text{ given} \tag{7} $$ As with $ \mu_0 $, the matrix $ \Sigma_0 $ is a primitive given in (1). A useful fact about multivariate normals is that an affine function of a Gaussian vector is again Gaussian: $$ u \sim N(\bar u, S) \quad \text{and} \quad v = a + B u \implies v \sim N(a + B \bar u, B S B') \tag{10} $$ In particular, given our Gaussian assumptions on the primitives and the linearity of (1), with $ \mu_t $ and $ \Sigma_t $ defined by (6) and (7), we have $$ x_t \sim N(\mu_t, \Sigma_t) \tag{11} $$ By similar reasoning combined with (8) and (9), $$ y_t \sim N(G \mu_t, G \Sigma_t G') \tag{12} $$ Ensemble Interpretations¶ How should we interpret the distributions defined by (11)–(12)?
The values of $ y_T $ are represented by black dots in the left-hand figure In the right-hand figure, these values are converted into a rotated histogram that shows relative frequencies from our sample of 20 $ y_T $’s. (The parameters and source code for the figures can be found in file linear_models/paths_and_hist.py) Here is another figure, this time with 100 observations Let’s now try with 500,000 observations, showing only the histogram (without rotation) The black line is the population density of $ y_T $ calculated from (12). The histogram and population distribution are close, as expected. By looking at the figures and experimenting with parameters, you will gain a feel for how the population distribution depends on the model primitives listed above, as intermediated by the distribution’s sufficient statistics. The dots in the figures are draws used to form the ensemble average $$ \bar y_T := \frac{1}{I} \sum_{i=1}^I y_T^i $$ which, by the law of large numbers, approximates the population mean $ \mathbb{E} [y_T] = G \mu_T $ when the number of sample paths $ I $ is large. The ensemble mean for $ x_t $ is $$ \bar x_T := \frac{1}{I} \sum_{i=1}^I x_T^i \to \mu_T \qquad (I \to \infty) $$ The limit $ \mu_T $ is a “long-run average”. (By long-run average we mean the average for an infinite ($ I = \infty $) number of sample $ x_T $’s) Another application of the law of large numbers assures us that $$ \frac{1}{I} \sum_{i=1}^I (x_T^i - \bar x_T) (x_T^i - \bar x_T)' \to \Sigma_T \qquad (I \to \infty) $$ Joint Distributions¶ The joint distribution of the sequence can be built up from the familiar rule $$ p(x, y) = p(y \, | \, x) p(x) \qquad \text{(joint }=\text{ conditional }\times\text{ marginal)} $$ From this rule we get $ p(x_0, x_1) = p(x_1 \,|\, x_0) p(x_0) $. The Markov property $ p(x_t \,|\, x_{t-1}, \ldots, x_0) = p(x_t \,|\, x_{t-1}) $ and repeated applications of the preceding rule lead us to $$ p(x_0, x_1, \ldots, x_T) = p(x_0) \prod_{t=0}^{T-1} p(x_{t+1} \,|\, x_t) $$ The marginal $ p(x_0) $ is just the primitive $ N(\mu_0, \Sigma_0) $.
In view of (1), the conditional densities are $$ p(x_{t+1} \,|\, x_t) = N(Ax_t, C C') $$ Autocovariance Functions¶ An important object related to the joint distribution is the autocovariance function $$ \Sigma_{t+j, t} := \mathbb{E} [ (x_{t+j} - \mu_{t+j})(x_t - \mu_t)' ] \tag{13} $$ Elementary calculations show that $$ \Sigma_{t+j,t} = A^j \Sigma_t \tag{14} $$ Notice that $ \Sigma_{t+j,t} $ in general depends on both $ j $, the gap between the two dates, and $ t $, the earlier date. Visualizing Stability¶ Let’s look at some more time series from the same model that we analyzed above. This picture shows cross-sectional distributions for $ y $ at times $ T, T', T'' $. Stationary Distributions¶ In our setting, a distribution $ \psi_{\infty} $ is said to be stationary for $ x_t $ if $$ x_t \sim \psi_{\infty} \quad \text{and} \quad x_{t+1} = A x_t + C w_{t+1} \quad \implies \quad x_{t+1} \sim \psi_{\infty} $$ Since - in the present case, all distributions are Gaussian - a Gaussian distribution is pinned down by its mean and variance-covariance matrix we can restate the definition as follows: $ \psi_{\infty} $ is stationary for $ x_t $ if $$ \psi_{\infty} = N(\mu_{\infty}, \Sigma_{\infty}) $$ where $ \mu_{\infty} $ and $ \Sigma_{\infty} $ are fixed points of (6) and (7) respectively. Covariance Stationary Processes¶ Let’s see what happens to the preceding figure if we start $ x_0 $ at the stationary distribution. By choosing the initial conditions to be fixed points of (6) and (7) respectively, we’ve ensured that $$ \mu_t = \mu_{\infty} \quad \text{and} \quad \Sigma_t = \Sigma_{\infty} \quad \text{for all } t $$ Moreover, when the moduli of the eigenvalues of $ A $ are all strictly less than one, (6) has the unique fixed point $ \mu_\infty = 0 $, and (7) also has a unique fixed point in this case; moreover, $$ \mu_t \to \mu_{\infty} = 0 \quad \text{and} \quad \Sigma_t \to \Sigma_{\infty} \quad \text{as} \quad t \to \infty $$ regardless of the initial conditions $ \mu_0 $ and $ \Sigma_0 $.
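In practice $ \Sigma_\infty $ can be found by simply iterating (7) until it stops changing. A minimal sketch with an illustrative stable $ A $ (the matrices here are made up for the example):

```python
import numpy as np

A = np.array([[0.8, -0.2],
              [0.1,  0.7]])   # eigenvalues inside the unit circle
C = np.array([[0.5],
              [0.2]])

# Iterate Sigma_{t+1} = A Sigma_t A' + C C' from Sigma_0 = 0
Sigma = np.zeros((2, 2))
for _ in range(1000):
    Sigma = A @ Sigma @ A.T + C @ C.T

# At the fixed point, Sigma solves the discrete Lyapunov equation
assert np.allclose(Sigma, A @ Sigma @ A.T + C @ C.T)
```

Because the spectral radius of $ A $ is below one, the iteration converges from any positive semi-definite starting point, which is exactly the stability claim in the text.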
Processes with a Constant State Component¶ To investigate such a process, suppose that $ A $ and $ C $ take the form $$ A = \begin{bmatrix} A_1 & a \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} C_1 \\ 0 \end{bmatrix} $$ where - $ A_1 $ is an $ (n-1) \times (n-1) $ matrix - $ a $ is an $ (n-1) \times 1 $ column vector Let $ x_t = \begin{bmatrix} x_{1t}' & 1 \end{bmatrix}' $ where $ x_{1t} $ is $ (n-1) \times 1 $. It follows that $$ \begin{aligned} x_{1,t+1} & = A_1 x_{1t} + a + C_1 w_{t+1} \end{aligned} $$ Let $ \mu_{1t} = \mathbb{E} [x_{1t}] $ and take expectations on both sides of this expression to get $$ \mu_{1,t+1} = A_1 \mu_{1,t} + a \tag{15} $$ Assume now that the moduli of the eigenvalues of $ A_1 $ are all strictly less than one. Then (15) has a unique stationary solution, namely, $$ \mu_{1\infty} = (I-A_1)^{-1} a $$ The stationary value of $ \mu_t $ itself is then $ \mu_\infty := \begin{bmatrix} \mu_{1\infty}' & 1 \end{bmatrix}' $. The stationary values of $ \Sigma_t $ and $ \Sigma_{t+j,t} $ satisfy $$ \begin{aligned} \Sigma_\infty & = A \Sigma_\infty A' + C C' \\ \Sigma_{t+j,t} & = A^j \Sigma_\infty \nonumber \end{aligned} \tag{16} $$ Provided the eigenvalue condition above holds, the iterates of (7) converge to the fixed point of the discrete Lyapunov equation in the first line of (16). Averages over Time¶ Ensemble averages across simulations are interesting theoretically, but in real life, we usually observe only a single realization $ \{x_t, y_t\}_{t=0}^T $. So now let’s take a single realization and form the time-series averages $$ \bar x := \frac{1}{T} \sum_{t=1}^T x_t \quad \text{and} \quad \bar y := \frac{1}{T} \sum_{t=1}^T y_t $$ Under the stationarity conditions just described, these time-series averages converge to the same limits as the corresponding ensemble averages; this property is known as ergodicity. Noisy Observations¶ In some settings, the observation $ y_t $ is contaminated by statistical noise. Modifying (1) accordingly gives $$ \begin{aligned} x_{t+1} & = A x_t + C w_{t+1} \\ y_t & = G x_t + H v_t \nonumber \\ x_0 & \sim N(\mu_0, \Sigma_0) \nonumber \end{aligned} \tag{17} $$ The sequence $ \{v_t\} $ is assumed to be independent of $ \{w_t\} $. The process $ \{x_t\} $ is not modified by noise in the observation equation and its moments, distributions and stability properties remain the same.
The unconditional moments of $ y_t $ from (8) and (9) now become $$ \mathbb{E} [y_t] = \mathbb{E} [G x_t + H v_t] = G \mu_t \tag{18} $$ The variance-covariance matrix of $ y_t $ is easily shown to be $$ \textrm{Var} [y_t] = \textrm{Var} [G x_t + H v_t] = G \Sigma_t G' + HH' \tag{19} $$ The distribution of $ y_t $ is therefore $$ y_t \sim N(G \mu_t, G \Sigma_t G' + HH') $$ Forecasting Formulas – Conditional Means¶ The natural way to predict variables is to use conditional distributions. For example, the optimal forecast of $ x_{t+1} $ given information known at time $ t $ is $$ \mathbb{E}_t [x_{t+1}] := \mathbb{E} [x_{t+1} \mid x_t, x_{t-1}, \ldots, x_0 ] = Ax_t $$ The one-step-ahead forecast error is $$ x_{t+1} - \mathbb{E}_t [x_{t+1}] = Cw_{t+1} $$ The covariance matrix of the forecast error is $$ \mathbb{E} [ (x_{t+1} - \mathbb{E}_t [ x_{t+1}] ) (x_{t+1} - \mathbb{E}_t [ x_{t+1}])'] = CC' $$ More generally, we’d like to compute the $ j $-step ahead forecasts $ \mathbb{E}_t [x_{t+j}] $ and $ \mathbb{E}_t [y_{t+j}] $. With a bit of algebra, we obtain $$ x_{t+j} = A^j x_t + A^{j-1} C w_{t+1} + A^{j-2} C w_{t+2} + \cdots + A^0 C w_{t+j} $$ Taking conditional expectations and using the martingale difference property of the shocks gives $$ \mathbb{E}_t [x_{t+j}] = A^j x_t $$ The $ j $-step ahead forecast of $ y $ is therefore $$ \mathbb{E}_t [y_{t+j}] = \mathbb{E}_t [G x_{t+j} + H v_{t+j}] = G A^j x_t $$ Covariance of Prediction Errors¶ It is useful to obtain the covariance matrix of the vector of $ j $-step-ahead prediction errors $$ x_{t+j} - \mathbb{E}_t [ x_{t+j}] = \sum^{j-1}_{s=0} A^s C w_{t-s+j} \tag{20} $$ Evidently, $$ V_j := \mathbb{E}_t [ (x_{t+j} - \mathbb{E}_t [x_{t+j}] ) (x_{t+j} - \mathbb{E}_t [x_{t+j}] )^\prime ] = \sum^{j-1}_{k=0} A^k C C^\prime (A^k)^\prime \tag{21} $$ $ V_j $ defined in (21) can be calculated recursively via $ V_1 = CC' $ and $$ V_j = CC^\prime + A V_{j-1} A^\prime, \quad j \geq 2 \tag{22} $$ $ V_j $ is the conditional covariance matrix of the errors in forecasting $ x_{t+j} $, conditioned on time $ t $ information $ x_t $.
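To convince yourself that the recursion (22) really reproduces the sum in (21), here is a small numerical check (the matrices $ A $ and $ C $ are made up for the example):

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[0.3],
              [0.1]])

# V_j via the recursion: V_1 = CC', V_j = CC' + A V_{j-1} A'
V = C @ C.T
for j in range(2, 6):
    V = C @ C.T + A @ V @ A.T

# V_5 via the direct sum of A^k C C' (A^k)' for k = 0, ..., 4
V_sum = sum(np.linalg.matrix_power(A, k) @ C @ C.T @ np.linalg.matrix_power(A, k).T
            for k in range(5))

assert np.allclose(V, V_sum)
```

Either route gives the same $ V_5 $; the recursion is just cheaper when many horizons are needed.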
Under particular conditions, $ V_j $ converges to $$ V_\infty = CC' + A V_\infty A' \tag{23} $$ Equation (23) is an example of a discrete Lyapunov equation in the covariance matrix $ V_\infty $; a sufficient condition for $ V_j $ to converge is that the moduli of the eigenvalues of $ A $ all be strictly less than one. Forecasts of Geometric Sums¶ In several contexts, we want to compute forecasts of geometric sums of future random variables governed by the linear state-space system (1). Formulas¶ Fortunately, it is easy to use a little matrix algebra to compute these objects. Exercise 3¶ Replicate this figure modulo randomness using the same class. The state space model and parameters are the same as for the preceding exercise. Exercise 4¶ Replicate this figure modulo randomness using the same class. The state space model and parameters are the same as for the preceding exercise, except that the initial condition is the stationary distribution. Hint: You can use the stationary_distributions method to get the initial conditions. The number of sample paths is 80, and the time horizon in the figure is 100. Producing the vertical bars and dots is optional, but if you wish to try, the bars are at dates 10, 50 and 75.
import numpy as np
import matplotlib.pyplot as plt
from quantecon import LinearStateSpace

ϕ_0, ϕ_1, ϕ_2 = 1.1, 0.8, -0.8

A = [[1,   0,   0  ],
     [ϕ_0, ϕ_1, ϕ_2],
     [0,   1,   0  ]]
C = np.zeros((3, 1))
G = [0, 1, 0]

ar = LinearStateSpace(A, C, G, mu_0=np.ones(3))
x, y = ar.simulate(ts_length=50)

fig, ax = plt.subplots(figsize=(10, 6))
y = y.flatten()
ax.plot(y, 'b-', lw=2, alpha=0.7)
ax.grid()
ax.set_xlabel('time')
ax.set_ylabel('$y_t$', fontsize=16)
plt.show()

ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5
σ = 0.2

A = [[ϕ_1, ϕ_2, ϕ_3, ϕ_4],
     [1,   0,   0,   0  ],
     [0,   1,   0,   0  ],
     [0,   0,   1,   0  ]]
C = [[σ], [0], [0], [0]]
G = [1, 0, 0, 0]

ar = LinearStateSpace(A, C, G, mu_0=np.ones(4))
x, y = ar.simulate(ts_length=200)

fig, ax = plt.subplots(figsize=(10, 6))
y = y.flatten()
ax.plot(y, 'b-', lw=2, alpha=0.7)
ax.grid()
ax.set_xlabel('time')
ax.set_ylabel('$y_t$', fontsize=16)
plt.show()

from scipy.stats import norm
import random

I = 20
T = 50
ar = LinearStateSpace(A, C, G, mu_0=np.ones(4))

ymin, ymax = -0.5, 1.15
fig, ax = plt.subplots(figsize=(8, 5))
ax.set_ylim(ymin, ymax)
ax.set_xlabel('time', fontsize=16)
ax.set_ylabel('$y_t$', fontsize=16)
# ...
ax.legend(ncol=2)
plt.show()

T0 = 10
T1 = 50
T2 = 75
T4 = 100

ar = LinearStateSpace(A, C, G, mu_0=np.ones(4))
μ_x, μ_y, Σ_x, Σ_y = ar.stationary_distributions()
ar.mu_0 = μ_x
ar.Sigma_0 = Σ_x

ymin, ymax = -0.6, 0.6
fig, ax = plt.subplots(figsize=(8, 5))
ax.grid(alpha=0.4)
ax.set_ylim(ymin, ymax)
ax.set_ylabel('$y_t$', fontsize=16)
ax.vlines((T0, T1, T2), -1.5, 1.5)
ax.set_xticks((T0, T1, T2))
ax.set_xticklabels(("$T$", "$T'$", "$T''$"), fontsize=14)

for i in range(80):
    rcolor = random.choice(('c', 'g', 'b'))
    x, y = ar.simulate(ts_length=T4)
    y = y.flatten()
    ax.plot(y, color=rcolor, lw=0.8, alpha=0.5)
    ax.plot((T0, T1, T2), (y[T0], y[T1], y[T2]), 'ko', alpha=0.5)
plt.show()

Footnotes [1] The eigenvalues of $ A $ are $ (1,-1, i,-i) $. [2] The correct way to argue this is by induction. Suppose that $ x_t $ is Gaussian.
Then (1) and (10) imply that $ x_{t+1} $ is Gaussian. Since $ x_0 $ is assumed to be Gaussian, it follows that every $ x_t $ is Gaussian. Evidently, this implies that each $ y_t $ is Gaussian.
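Returning to the geometric sums mentioned earlier: since $ \mathbb{E}_t [x_{t+j}] = A^j x_t $, when the eigenvalues of $ \beta A $ lie strictly inside the unit circle the discounted forecast collapses to a Neumann series, $ \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j x_{t+j} = (I - \beta A)^{-1} x_t $. A quick numerical sanity check of that identity (the matrix, discount factor, and state vector below are made up):

```python
import numpy as np

A = np.array([[0.7, 0.2],
              [0.0, 0.5]])
beta = 0.95
x_t = np.array([1.0, -2.0])

# Closed form: (I - beta*A)^{-1} x_t
closed = np.linalg.solve(np.eye(2) - beta * A, x_t)

# Truncated sum of beta^j A^j x_t; terms decay geometrically
truncated = sum((beta**j) * (np.linalg.matrix_power(A, j) @ x_t)
                for j in range(500))

assert np.allclose(closed, truncated)
```

The same algebra applied to $ y_t = G x_t $ gives $ G (I - \beta A)^{-1} x_t $ for the discounted sum of observables.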
https://lectures.quantecon.org/py/linear_models.html
Hi Stefan, On Thu, Sep 9, 2010 at 10:09 PM, Stefan Seelmann <seelmann@apache.org>wrote: > On Thu, Sep 9, 2010 at 5:11 PM, Emmanuel Lecharny <elecharny@gmail.com> > wrote: > > Hi guys, > > > > it seems that when I did the big modification (merging all the Messages) > > last month, I forgot to uncomment the dsml-parser which is part of > shared. > > > > I had it working by pointing to the ldap-api project, as it now depends > on > > it, but that was not enough to be able to build it when uncommented in > the > > shared/pom.xml file, as shared does not depend on ldap-api. > > > > However, in my mind, the next step was to integrate the ldap-api project > > into shared (well, imo, shared <==> ldap API up to a point). > > > >? > Sometimes I feel this shared thing grew big fast and started gaining things from everywhere like from the LDAP efforts as well as Kerberos and other aspects. Maybe we need a good review/tally of our package namespace to see which direction is best for us. > - about the number of modules, should we merge some? Especially the > ldap-schema* modules contain only 10 classes splitted into 3 modules. > Don't think we should worry about the size or contents but rather more about coherence and coupling issues as well as possible snags we can get into with potential Maven Module dependency cycles. We really need to review and map out our existing top level project => maven module => package layout as well as a nice dependency map of what exists today to formulate a nice plan as we expand into the future. > - the shared-ldap module contains some packages that are not directly > related to a client API: aci, sp, trigger, subtree.Should we still > keep them in the ldap-api project or move them to a server module? > > Yep these are some of the sticking points that I've had in mind as well. We need to think more about this. 
This is why this is not such a simple thing to do with IDE refactoring we must plan or else we're going to have another mess in the future to clean up yet again. Every time we do this our users will get pissed off at us and perhaps rightfully so. > At last before publishing an API we should decide which classes we > consider as public API and which classes are for internal use only. > > Absolutely agree 100% with you on this. API's are not like our internals and we're going to have to have a contract with our users and manage deprecation etc. We need to be thorough and careful. > Thought? > > Kind Regards, > Stefan > Regards, -- Alex Karasulu My Blog :: Apache Directory Server :: Apache MINA :: To set up a meeting with me:
http://mail-archives.apache.org/mod_mbox/directory-dev/201009.mbox/%3CAANLkTimexbOLW+6T2A6+=Pa=mcCg2A_MxqJRYAB0-RRL@mail.gmail.com%3E
Note: this article is the first in a two-part series, you can find the second post here. Cool urls don't change, it is said. And according to itself, that URL must be pretty cool, because it hasn't changed in 18 years! The gits[1] of it is: "the net is vast and infinite" and people will reference your site so you should be a good citizen and not shuffle resource URLs because we'll get dangling links and dangling links are bad, m'kay? Not changing URLs is the best thing to do, and Varnish can help you with that, but there are cases where it's not applicable, and Varnish can help too! Let's dive into the world of URL rewriting and HTTP redirection. This post will cover URL rewriting, and the next post in the series will focus on 301, 302 and all sorts of things about HTTP redirection. Rewriting URLs Stop me if you've heard this one before: you were using CMS-A but realized (or management realized it for you) that it just wasn't good enough, so you are now migrating to CMS-B. Both platforms are pretty strict about locations and they are of course incompatible! One wants images to be reached from /image/, the other from /images/, and articles that were in /cmsa/post/ should now be accessible from /content/articles, and so on. Migration was mostly ok (you used a custom script to migrate) but clearly missed a few spots, plus the internet is full of links pointing to the old URLs instead of the new ones. VTC-Driven Development Instead of brutally changing the file locations, we can use Varnish to rewrite URLs from the old locations to the new. Let's take a hint from programmers here, and apply some TDD (Test-Driven Development) philosophy. First, let's specify what we want.
In our case, we'd like Varnish to rewrite: - URLs looking like /cmsa/post/* into /content/articles/* - URLs looking like /images/* into /image/* This sounds easy enough, let's write rewriting.vtc:

varnishtest "Testing URL rewriting"

server s1 {
	rxreq
	txresp
	expect req.url == "/content/articles/my_post.html"
	rxreq
	txresp
	expect req.url == "/image/cutter_otter.jpg"
} -start

varnish v1 -vcl+backend {
	# VCL logic goes here
} -start

client c1 {
	txreq -url "/cmsa/post/my_post.html"
	rxresp
	txreq -url "/images/cutter_otter.jpg"
	rxresp
} -run

This spawns a server, a Varnish, then runs a client, with a few notable points: - v1 and s1 are started and run in the background (-start), c1 is started and we then wait for it to return (-run) - s1 and c1 are "fake" HTTP server and client, running a minimal HTTP stack, while Varnish is a real instance - -vcl+backend automatically creates a vcl with "vcl 4.0;" and backends (here, s1) prepended to it. - c1 connects to the first Varnish instance available (here, v1). - in s1, expect is done after the resp to make varnishtest fail faster. It's counterintuitive, I know, but trust me on this one. Let's run this!

gquintard@home:master:~/work/varnish-cache$ varnishtest foo.vtc
...
**   s1    0.4 === expect req.url == "/content/articles/my_post.html"
---- s1    0.4 EXPECT req.url (/cmsa/post/my_post.html) == "/content/articles/my_post.html" failed
...

Unsurprisingly, that didn't go too well, but the error message (look for the line starting with '----') is helpful: s1 didn't receive what it expected (req.url was resolved as "/cmsa/post/my_post.html"), which is normal because we gave no instruction to Varnish. Time to change that! ~, regsub, regsuball Copied from somewhere on the intarwebz, and adapted for our needs, this should do the trick, right?
varnishtest "Testing URL rewriting, for real" server s1 { rxreq txresp expect req.url == "/content/articles/my_post.html" rxreq txresp expect req.url == "/image/cutter_otter } -run What does varnishtest say about it? gquintard@home:master:~/work/varnish-cache$ varnishtest foo.vtc # top TEST foo.vtc passed (1.607) Cool, it works! Job done then! Or is it? Let's try to add a few tests: varnishtest "Testing URL rewriting" server s1 { rxreq txresp expect req.url == "/content/articles/my_post.html" rxreq txresp expect req.url == "/image/cutter_otter.jpg" rxreq txresp expect req.url == "/image/user-pics/images/avatar1234.jpg" rxreq txresp expect req.url == "/othersite/images/moon-landing txreq -url "/images/user-pics/images/avatar1234.jpg" rxresp txreq -url "/othersite/images/moon-landing.jpg" rxresp } -run And BOOM! gquintard@home:master:~/work/varnish-cache$ varnishtest foo.vtc ... ** s1 0.5 === expect req.url == "/image/user-pics/images/avatar1234.jpg" ---- s1 0.5 EXPECT req.url (/image/user-pics/image/avatar1234.jpg) == "/image/user-pics/images/avatar1234.jpg" failed ... We may have been overzealous here, and the second "/images/" also got converted into "/image/", not good. Let's take a step back and look at the code; after all, we are using regular expressions without having introduced them first: - ~: checks the pattern match, and in our case, if it does, we enter the if statement. This prevent us from trying to execute all the regsuballs in sequence, we only want to run them on the original req.url. - regsuball(STRING, PAT, REP): looks at STRING, and replaces all PAT occurrences with REP. So the error comes from regsuball, that captures all the matching patterns. Maybe we should be better with regsub? It does the same thing, but only on the first match. Asking varnishtest, we get: gquintard@home:master:~/work/varnish-cache$ varnishtest foo.vtc ... 
**   s1    0.5 === expect req.url == "/othersite/images/moon-landing.jpg"
---- s1    0.5 EXPECT req.url (/othersite/image/moon-landing.jpg) == "/othersite/images/moon-landing.jpg" failed
...

Okay, good news and bad news here. Good news is we passed the previous test, bad news is we failed the next one because /images/ is only interesting to us if it starts the location, and we didn't tell that to the VCL. When using regular expressions, we can signify the beginning of a string with "^" and the end of it with "$", so this should work:

sub vcl_recv {
	if (req.url ~ "/cmsa/post/") {
		set req.url = regsub(req.url, "^/cmsa/post/", "/content/articles/");
	} else if (req.url ~ "/images/") {
		set req.url = regsub(req.url, "^/images/", "/image/");
	}
}

gquintard@home:master:~/work/varnish-cache$ varnishtest foo.vtc
#     top  TEST foo.vtc passed (1.607)

And it does! That's a relief! Let's stop here, on a victory. The point here is to understand that even if the logic seems sane, tests will often reveal problems, and so, you really should write them. Doing more complicated stuff Regular expressions are a powerful tool allowing you to describe and change text. For example, you can swap the first and second directories of a URL:

set req.url = regsub(req.url, "^/([^/]+)/([^/]+)/", "/\2/\1/");

Underneath the shiny VCL coating, Varnish uses libpcre, a standard among regex implementations, meaning the regex you write in VCL should be compatible pretty much everywhere (excluding character escapes). Notably, if you want to try out a regular expression, have a look at sites like regex101.com, which will explain what's going on. Scaling up "What if I have thousands of rules?" you may be wondering. Well, first, you should not have thousands of rules because it's going to make your life harder, but stuff happens... Having thousands of rewrite rules isn't really a problem: VCL requires the regexes to be literals (known-in-advance strings), so they are compiled and optimized when the VCL is loaded.
Not only that, but don't forget that it's all C behind the curtain, so it's super fast. Going back to your hypothetical question, put all your rules inside one file:

"/cmsa/post/","/content/articles/"
"/images/","/image/"
"/foo/","bar"
"baz/([^/]+)/","/\1/"
...

Use a script to generate the VCL, for example, in python:

import csv

out = []
with open('foo') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        out.append('''if (req.url ~ "{}") {{
		set req.url = regsuball(req.url, "{}", "{}");
	}}'''.format(row[0], row[0], row[1]))

print("# generated code, do not edit")
print("sub rewrite_url {\n\t", end="")
print(" else ".join(out), end="")
print("\n}", end="")

# generated code, do not edit
sub rewrite_url {
	if (req.url ~ "/cmsa/post/") {
		set req.url = regsuball(req.url, "/cmsa/post/", "/content/articles/");
	} else if (req.url ~ "/images/") {
		set req.url = regsuball(req.url, "/images/", "/image/");
	} else if (req.url ~ "/foo/") {
		set req.url = regsuball(req.url, "/foo/", "bar");
	} else if (req.url ~ "baz/([^/]+)/") {
		set req.url = regsuball(req.url, "baz/([^/]+)/", "/\1/");
	}
	...
}

Include and use it in your VCL, and voilà:

include "urls.vcl";

sub vcl_recv {
	call rewrite_url;
}

Oooooooooooor, if you are a Varnish Plus customer, you can use vmod-kvstore to map old URLs to new ones (with no regex mumbo-jumbo) and load the file directly from VCL, with no script to transform the data. Your file would look the same, but with no quotes:

/url/1,/newurl/1
/url/2,/newurl/2
Behind this, there a very mathematical reason: the pool of URLs you have to rewrite can only grow, never shrink, and that can a lot of rules to keep track of. This is why your first rule source should never be the VCL. It should be generated from a git repository, or from a database: keep it in a neutral format, replicated so you can re-use and transform the data. Also, remember when I said that regex are powerful, two sections ago? I meant it. They are so powerful that if you use them wrong and shoot yourself in the foot, you generally vaporize the floor, burn your whole leg, AND hurt your feelings. People telling you otherwise may be perl users, beware! More seriously, regular expressions should be used carefully, and this is why we kicked off by using varnishtest. But also, regex should only be used when they are the right tool for the job, but sometimes people forget that, and do crazy things, like this. In passing, note that this article is 7 years old, and the URL still works! In VCL, you'll be tempted to use regex to manipulate querystrings and cookies. If it happens, restrain yourself! We have vmod-cookie in varnish-modules and vmod-querystring to deal with them in a safe and sane manner, and you should definitely use them. And that's all for now, stay tuned for the next post! [1]: not a typo, just a play on words and pop culture, a cyber-pun(k), if you will... Image (c) 2012 astroshots42 used under Creative Commons license.
https://info.varnish-software.com/blog/rewriting-urls-with-varnish
#include "my_dbug.h" #include "plugin/group_replication/include/group_actions/group_action.h" #include "plugin/group_replication/include/member_version.h" Go to the source code of this file. Result data type for user_has_gr_admin_privilege. There are three cases: error: There was an error fetching the user's privileges ok: The user has the required privileges no_privilege: The user does not have the required privileges In the no_privilege case, the result contains the user's name and host for the caller to create a helpful error message. Checks if tables are locked, and logs to message if so. Checks whether the group contains a member older than the specified version. Checks if a member in recovery exists in the group. Checks if an unreachable member exists in the group. Logs the group action action_name result from result_area into result_message. Logs the privilege status of privilege into Checks whether the server is ONLINE and belongs to the majority partition. Throws an error on a UDF function with mysql_error_service_printf. Checks whether the user has GROUP_REPLICATION_ADMIN privilege. Checks if the uuid is valid to use in a function. It checks:
https://dev.mysql.com/doc/dev/mysql-server/latest/udf__utils_8h.html
pkginfo 1.2b1
1.2b1 (2013-12-05) - Added support for the "wheel" distribution format, along with minimal metadata 2.0 support (not including new PEP 426 JSON properties). Code (re-)borrowed from Donald Stufft's twine package. 1.1 (2013-10-09) - Fix tests to pass with current PyPy releases. 1.1b1 (2013-05-05) - Support "develop" packages which keep their *.egg-info in a subdirectory. See. - Add support for "unpacked SDists" (thanks to Mike Lundy for the patch). 1.0 (2013-05-05) - No changes from 1.0b2. 1.0b2 (2012-12-28) - Suppress resource warning leaks reported against clients. - Fix 'commandline' module under Py3k. 1.0b1 (2012-12-28) - Add support for Python 3.2 and 3.3, including testing them under tox. - Add support for PyPy, including testing it under tox. - Test supported Python versions under tox. - Drop support for Python 2.5. - Add a setup.py dev alias: runs setup.py develop and installs testing extras (nose and coverage). 0.9.1 (2012-10-22) - Fix test failure under Python >= 2.7, which is enforcing 'metadata_version == 1.1' because we have classifiers. 0.9 (2012-04-25) - Fix introspection of installed namespace packages. They may be installed as eggs or via dist-installed 'egg-info' files. - Avoid a regression in 0.8 under Python 2.6 / 2.7 when parsing unicode. 0.8 (2011-03-12) - Work around Python 2.7's breakage of StringIO. Fixes - Fixed bug in introspection of installed packages missing the __package__ attribute. 0.7 (2010-11-04) - Preserve newlines in the description field. Thanks to Sridhar Ratnakumar for the patch. - 100% test coverage. 0.6 (2010-06-01) - Replaced use of StringIO.StringIO with io.StringIO, where available (Python >= 2.6). - Replaced use of rfc822 stdlib module with email.parser, when available (Python >= 2.5). Ensured that distributions "unfold" wrapped continuation lines, stripping any leading / trailing whitespace, no matter which module was used for parsing. - Removed bogus testing dependency on zope.testing.
- Added tests that the "environment markers" spelled out in the approved PEP 345 are captured. - Added Project-URL for 1.2 PKG-INFO metdata (defined in the accepted version of PEP 345). 0.5 (2009-09-11) - Marked package as non-zip-safe. - Fixed Trove metadata misspelling. - Restored compatibility with Python 2.4. - Noted that the introspection of installed packages / modules works only in Python 2.6 or later. - Added Index class as an abstraction over a collection of distributions. - Added download_url_prefix argument to pkginfo script. If passed, the script will use the prefix to synthesize a download_url for distributions which do not supply that value directly. 0.4.1 (2009-05-07) - Fixed bugs in handling of installed packages which lack __file__ or PKG-INFO. 0.4 (2009-05-07) - Extended the console script to allow output as CSV or INI. Also, added arguments to specify the metadata version and other parsing / output policies. - Added support for the different metadata versions specified in PEPs 241, 314, and 345. Distributions now parse and expose only the attributes corresponding to their metadata version, which defaults to the version parsed from the PKG-INFO file. The programmer can override that version when creating the distribution object. 0.3 (2009-05-07) - Added support for introspection of "development eggs" (checkouts with PKG-INFO, perhaps created via setup.py develop). - Added a console script, pkginfo, which takes one or more paths on the command line and writes out the associated information. Thanks to runeh for the patch! - Added get_metadata helper function, which dispatches a given path or module across the available distribution types, and returns a distribution object. Thanks to runeh for the patch! - Made distribution objects support iteration over the metadata fields. Thanks to runeh for the patch! - Made Distribution and subclasses new-style classes. Thanks to runeh for the patch! 
0.2 (2009-04-14) - Added support for introspection of bdist_egg binary distributions. 0.1.1 (2009-04-10) - Fixed packaging errors. 0.1 (2009-04-10) - Initial release. - Downloads (All Versions): - 139 downloads in the last day - 839 downloads in the last week - 3562 downloads in the last month - Author: Tres Seaver, Agendaless Consulting - Documentation: pkginfo package documentation - Keywords: distribution sdist installed metadata - License: Python - Platform: Unix,Windows - Categories - Intended Audience :: Developers - License :: OSI Approved :: Python Software Foundation License - Operating System :: OS Independent - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3.2 - Programming Language :: Python :: 3.3 - Programming Language :: Python :: Implementation :: CPython - Programming Language :: Python :: Implementation :: PyPy - Topic :: Software Development :: Libraries :: Python Modules - Topic :: System :: Software Distribution - Package Index Owner: tseaver - DOAP record: pkginfo-1.2b1.xml
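The 0.6 entry above notes that pkginfo switched from the rfc822 module to email.parser and "unfolds" wrapped continuation lines in PKG-INFO. A minimal, stdlib-only sketch of what that kind of parsing looks like — the helper function and sample document are illustrative, not pkginfo's actual code:

```python
from email.parser import Parser

# A tiny PKG-INFO document, as found inside an sdist or an egg-info
# directory; the Description header is folded across two lines.
PKG_INFO = """\
Metadata-Version: 1.1
Name: example
Version: 1.0
Summary: An example distribution
Description: First line
        folded continuation line
"""

def parse_pkg_info(text):
    """Parse PKG-INFO with the stdlib email parser, then "unfold" the
    wrapped Description field, stripping leading/trailing whitespace."""
    msg = Parser().parsestr(text)
    fields = dict(msg.items())
    fields["Description"] = " ".join(
        line.strip() for line in fields["Description"].splitlines()
    )
    return fields
```

Since PKG-INFO shares the RFC 822 header syntax with email messages, the email parser handles the folding for free; only the whitespace normalisation needs doing by hand.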
https://pypi.python.org/pypi/pkginfo
Topic: Navair Publications Online

Not finding your answer? Try searching the web for Navair Publications Online.

Answers to Common Questions

How to Block Public Information Online
Public information is information that is made accessible to the public. With the rise of modern technology, though, much information is made public on the Internet that does not have to be so. If you want to prevent your information from b... Read More » Source:...

How to Become a Notary Public for Free Online
A notary public is a public officer tied to a state's laws and regulations. Notaries public witness signatures, certify the validity of documents and in some states perform marriages. Becoming a notary public is not a free process, and beco... Read More » Source:

How to Make Money by Writing Online for Web Publications
A good place to get started with your online writing is a company called Demand Media. You can go to their website and apply to become a writer for their various online publications. They will walk you through the process, and you will get ... Read More » Source:...

More Common Questions

Answers to Other Common Questions

There was a time when the only way to find public records information was to physically visit a courthouse. Now public records are available 24 hours a day if you have a computer and internet access. Public records encompass everything from... Read More » Source:...

Decide on how much you're willing to spend on the process before committing. You need to be aware of the various costs: the class itself (either online or in an actual classroom); then the exam (which you cannot take online, but must take i... Read More » Source:

Public speaking is often a difficult subject to teach because many people are simply afraid of speaking in public. Public-speaking teachers in physical classes find this subject tricky to teach. Teaching it online can be even more tricky be... Read More » Source:

If you don't have a library card, visit your local branch and obtain one. Just make sure you bring a valid picture I.D. and proof of address. You can look up your local library branch on-line by visiting: Once... Read More » Source:

The Internet has made it much easier to find information, including free public records online. However, there isn't a friendly office clerk to ask how and where to find those records, so how can you be sure you are looking in the best plac... Read More » Source:

Film copyrights are not forever. The owner of the movie must renew the copyright before it expires to retain ownership rights. When a copyright on a film is not renewed by the owner, the film becomes public domain. This means that there are... Read More » Source:....

Creating a public class online can give your website visitors an incentive to come back again and again. To impart knowledge and interact with students on the Internet, several methods are available. The public class can be an effective too... Read More » Source:
http://www.ask.com/questions-about/Navair-Publications-Online
kirupaForum > Art and Design > Drawing and Design > anime

DDD (March 19th, 2003, 12:07 PM)
Here is a first attempt at anime. Ya'll let me know what you think. These are raw pencil sketches so......

hojo (March 19th, 2003, 12:09 PM)
very nice import those into flash, trace, and color and they'll be awesome

DDD (March 19th, 2003, 12:14 PM)
Yeah I plan on doing the vector thing they are going to be characters for a project I am working on.

hojo (March 19th, 2003, 12:14 PM)
so far so good :)

mdipi (March 19th, 2003, 12:15 PM)
it seems more like manga but either way, very nice work, and where have you been? busy?

DDD (March 19th, 2003, 12:18 PM)
I have put a new definition on the word busy......But it is starting to calm down now so I am back. I don't know the diff between manga and anime :crazy: but I just picked up a pencil and this is what came out! Who knows the diff between manga and anime.

mdipi (March 19th, 2003, 12:33 PM)
me, manga is Japanese comics, like comic books. Anime is Japanese cartoon shows... Manga is usually more detailed and smooth cause it's still...

andr.in (March 19th, 2003, 01:15 PM)
kewl! =)

DDD (March 19th, 2003, 04:37 PM)
pleez more input.....

nobody (March 19th, 2003, 05:15 PM)
hey 3d-iva I wouldn't really classify it as anime, its more american anime-esque stuff... not that thats bad, i kind of like a mixture like that. I really like your pic... great job

mdipi (March 19th, 2003, 06:36 PM)
thats what i thought, but thats why i thought it was manga too

vts31 (March 19th, 2003, 07:43 PM)
the second one to the left looks like a chick i know. VERY cool stuff i like them alot.!

DDD (March 19th, 2003, 08:17 PM)
Thanks a bunch edwin...a thumbs up from you is worth 4 thumbs up. Man that chick you know must be hot. My friend wants me to make him a poster of that one. He is a perv. I guess I have my own style somewhere between manga and anime.... Thanks ya'll I am going to touch em up then make them vector or I may try a lil sumthin sumthin...In painter. More comments welcome.

iLikePie (March 19th, 2003, 11:40 PM)
they look really schweet... but xxviii is correct when he says it's not true anime/manga, and also correct when he says there's nothing wrong with that! having said that, i don't really like the hair on the far right one... it looks like you were trying to do a specific anime hairstyle, but it's not quite the right way to go about it. I found a tute somewhere that explains how to do it, but i forgot :( if you are interested in developing the manga side more, try this site : then when you choose tutorials go to the one on hair, they might cover it there :) alternatively, try they have some also methinks overall it's really cool, and i think a vector version would be so good, not sure about painter :block: Stuart

DDD (March 20th, 2003, 12:52 AM)
Thanks pie I think I am going to try to develop my own style but I will use whats on that site as a guide thanks for the links. So I think I am going to roll with these. Kinda like the idea of them being neither manga nor anime. Hehe and you're right about the hair mind you I was looking at a pic of angelina jolie when I created her.

Powered by vBulletin® Version 4.1.10 Copyright © 2012 vBulletin Solutions, Inc. All rights reserved.
http://www.kirupa.com/forum/archive/index.php/t-17910.html
ity with -bootstrap- handling an option on command input

I just found a situation in which -bootstrap- appears to be confused by the presence of the letter "l" ("el") as a name for one of the parameters of the command it is given to execute. -bootstrap- appears to confuse this for its own level() option. Here is a nonsense example that demonstrates this behavior:

cap prog drop test_with_l
prog test_with_l, rclass
    syntax varlist, l(integer -999)
    summ `varlist'
    return scalar m = r(mean)
end
//
cap prog drop test_with_hidel
prog test_with_hidel, rclass
    syntax varlist, hidel(integer) // option renamed "l" to "hidel" to not confuse -bootstrap-
    summ `varlist'
    return scalar m = r(mean)
end
//
sysuse auto
local somevalue = 4
//
// This works.
bootstrap r(m), reps(10) : test_with_hidel weight price, hidel(`somevalue')
//
// The following gives an error, as -bootstrap- thinks "l" refers to its level() option
bootstrap r(m), reps(10) : test_with_l weight price, l(`somevalue')

-------------------------------------

I can produce the same problem with -permute-. Is this a feature, i.e., are option names generally to be regarded as reserved words in relation to commands that accept a command name? I would have thought that -bootstrap-'s own macros, and those of its input commands, would not share a namespace.

I'm running version 11.2, updated, Win XP, for what it's worth.

Regards,
Mike Lacy
Dept. of Sociology
Colorado State University
Fort Collins CO 80523-1784
http://www.stata.com/statalist/archive/2011-09/msg00787.html
#include <iostream>

main()
{
    int employeeid;
    int hoursworked;
    float hourlyrate, grosspay;

    cout << "ENTER THE EMPLOYEE ID: ";
    cin >> employeeid;
    cout << "ENTER THE HOURS WORKED: ";
    cin >> hoursworked;
    cout << "ENTER THE HOURLY RATE: ";
    cin >> hourlyrate;

    grosspay = hoursworked * hourlyrate;

    cout << "EMPLOYEE ID IS " << employeeid << endl;
    cout << "THE HOURS WORKED ARE " << hoursworked << endl;
    cout << "THE HOURLY RATE IS " << hourlyrate << endl;
    cout << "THE GROSSPAY IS " << grosspay << endl;
    return 0;
} //MAIN

I am running the program in Xcode (command line utility, C++ tool). Where do I place the numbers (id, hours worked, hourly rate) to receive an output (build and run)?

P.S. I also get an error message, when I try, next to main() { saying "warning: ISO C++ forbids declaration of 'main' with no type".

I am lost.
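For anyone hitting the same two questions: the warning means standard C++ requires main to be declared with an explicit return type, as int main(), and the numbers are typed into the console window Xcode opens after you press Build and Run, one value after each prompt. A small sketch of the calculation, pulled into a function so it can be checked without typing anything (the function name is mine, not from the post):

```cpp
// ISO C++ requires 'int main()' -- a bare 'main()' with no return type is
// exactly what triggers the warning. Inside main, the three cin >> reads
// stay as in the original program; the digits are typed into the console
// window that Xcode opens on Build and Run.
float grossPay(int hoursWorked, float hourlyRate) {
    // Same arithmetic as the original: grosspay = hoursworked * hourlyrate
    return hoursWorked * hourlyRate;
}
```

With that change (and a using namespace std; or std:: qualifiers on cout/cin, which newer compilers also insist on), the program compiles cleanly.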
https://www.daniweb.com/programming/software-development/threads/117044/noob
1st Arduino project, beyond the very basic intros, and no coding experience before this endeavor, so I'm sure I'm just not searching the right things/way to figure this out.

The project - replacing gauges in my truck with Arduino+TFT display. As a starter, I'm working strictly on single fuel gauge functionality, eventually including dual fuel gauges (2 separate fuel tanks in the truck), a voltage gauge, a coolant temp gauge, and a GPS-driven speedometer. Yeah... I'm already realizing I'm in for a bit of a steep learning curve here, lol.

The setup - Genuine Arduino Mega 2560, Seeed Studio 2.8" touchscreen shield V1.0, aftermarket universal-style fuel sender. The sender is connected to analog pin 9 through a voltage divider circuit running roughly 1.5VDC-4.95VDC, and I get appropriate numbers from the serial monitor when cycling the sender. I'm not currently utilizing the touch features of the screen, though I may in the future. Right now it's strictly a display device. I did find out how to modify the TFT.h file to get the display to function on the Mega board, and am writing static text to it currently.

The problem - how the heck do I get the value read from the analog pin to display on the screen? I've spent the last couple of days searching the forums here and on Adafruit, as well as various other sites found on Google. I've spent hours looking at others' code to try and figure this out, but not being a coder before this, I'm finding it difficult to determine which parts of the code are relevant to what I'm attempting to do, and I think I may be confusing myself/WAY overthinking it, lol. It seems like it should be a simple thing...

This is my current code. I started with the Draw Text example sketch and modified it for my use. dTankPin is the variable I set for the driver's side fuel tank, with dLevel being the variable set to store the reading I get from the sender.
I set static text lines for Tank - D, Tank - P (driver and passenger side fuel tanks), Volts, and C/T (coolant temp), then MPH for the future GPS speedometer. The commented-out lines in there are just static values I added to initially set font size to fit the screen, but that I want to replace with the dynamic values I get from reading the various sensors. That part I'm good with, but I can't figure out how to get a value read from the analog pins to display as numbers on the screen. I'm not looking to be spoon-fed the answers, but if I could maybe get some guidance on what functions I'm missing, or what I should be searching for to figure this out?

// Draw Texts - Demonstrate drawChar and drawString
#include <stdint.h>
#include <TouchScreen.h>
#include <TFT.h>

#ifdef SEEEDUINO
#define YP A2 // must be an analog pin, use "An" notation!
#define XM A1 // must be an analog pin, use "An" notation!
#define YM 14 // can be a digital pin, this is A0
#define XP 17 // can be a digital pin, this is A3
#endif

#ifdef MEGA
#define YP A2 // must be an analog pin, use "An" notation!
#define XM A1 // must be an analog pin, use "An" notation!
#define YM 54 // can be a digital pin, this is A0
#define XP 57 // can be a digital pin, this is A3
#endif

int dTankPin = A9; // driver's side tank sender pin
int dLevel = 0;    // variable to store the value from sender

void setup() {
  Serial.begin(9600);
  Tft.init(); // init TFT library
  Tft.drawString("Tank-D",0,0,3,CYAN);
  //Tft.drawString("25%",150,0,3, RED);
  Tft.drawString("Tank-P",0,30,3,CYAN);
  //Tft.drawString("100%",150,30,3, GREEN);
  Tft.drawString("Volts",0,60,3, WHITE);
  //Tft.drawString("13.0",150,60,3, YELLOW);
  Tft.drawString("C/T",0,90,3, WHITE);
  //Tft.drawString("195*",150,90,3, GREEN);
  Tft.drawString("MPH",60,140,5, BLUE);
  //Tft.drawString("53",10,190,15, BLUE);
}

void loop() {
  dLevel = analogRead(dTankPin); // read the value from the sender
  Serial.println(dLevel, DEC);   // print the value from the sender to the serial monitor
}
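The piece the question is circling around is converting the integer reading into a character buffer before handing it to a string-drawing call. Sketched in plain C++ so it can be checked off-device; the Tft calls in the comments reuse the library names from the sketch above, but the helper function, coordinates, and the fillRectangle erase step are mine, purely illustrative:

```cpp
#include <cstdio>

// Turn an analogRead() value (0-1023) into the character string a
// drawString-style routine needs. Returns the number of digits written.
int levelToText(int dLevel, char *buf, unsigned bufSize) {
    return std::snprintf(buf, bufSize, "%d", dLevel);
}

// On the Arduino, loop() would then look roughly like this (illustrative):
//
//   char buf[8];
//   dLevel = analogRead(dTankPin);
//   levelToText(dLevel, buf, sizeof(buf));
//   Tft.fillRectangle(150, 0, 80, 25, BLACK); // erase the old digits first
//   Tft.drawString(buf, 150, 0, 3, RED);      // draw the fresh reading
```

Erasing (or overdrawing in the background colour) before redrawing matters, because otherwise successive readings paint on top of each other.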
https://forum.arduino.cc/t/displaying-variable-on-tft/252339
scobleizer wrote:
Macromedia is asking "What do you find to be the most obnoxious thing about Macromedia right now?" So, thought it'd be a good time to ask something similar: "what would you like Microsoft to do differently this year?"

- Official support for IronPython (sorry, pet peeve).
- Polish up the anti-spyware software offering and get it out the door as a 1.0 version so I can install it on my parents' computer.
- Give Beer28 the rabies shot he so desperately needs.
- A XAML parser - not Avalon, just an "official" XML format and .NET library for it, to be deserialized into Windows.Forms controls; you know, let people start hammering out language-agnostic designers instead of seeing the 60 bazillion "C# Forms Designer support!" crap everywhere, which does me no good.
- Higher-level API support - how many times have you had to create a simple dialog box with an edit control, an 'OK' button and a 'Cancel' button? Over and over again. System.Windows.Forms.Prefabs: an entire namespace devoted to prefabricated, basic Windows.Forms controls that most developers end up creating on their own anyway. I don't know, maybe System.Windows.Forms.HamSandwich.
- Jim Hugunin on Channel9.
https://channel9.msdn.com/Forums/Coffeehouse/33596-What-would-you-like-to-see-Microsoft-do-differently-this-year?page=4
> > > Eh... Arbitrary limitations are fun, aren't they?
> >
> > But these mounts _are_ special. There is really no point in moving or
> > pivoting them.
>
> pivoting - probably true, moving... why not?

I don't see any use for that. But indeed, it should not be too hard to do.

> > > What about MNT_SLAVE stuff being set up prior to that lookup?
> >
> > These mounts are not propagated. Or at least I hope so. Propagation
> > stuff is a bit too complicated for my poor little brain.
>
> Er... These mounts might not be propagated, but what about a bind
> over another instance of such file in master tree?

So your question is, which mount takes priority on the lookup? It probably should be the propagated real mount, rather than the dir-on-file one, shouldn't it?

> > I think they should be the same superblock, same dentry. What would
> > be the advantage of doing otherwise?
>
> Then you are going to have interesting time with locking in final mntput().

Final mntput of what?

> BTW, what about having several links to the same file? You have i_mutex
> on the inode, so serialization of those is not a problem, but...

Sorry, I lost it...

> > I think doing this recursively should be allowed. "Releasing last ref
> > cleans up the mess" should work in that case.
>
> Releasing the last reference will lead to cascade of umounts in that
> case... IOW, need to be careful with locking.

I think it's done right: detach_mnt() with namespace_sem and vfsmount_lock, then release locks, and path_release(&old_nd).

If the recursion is extremely deep we could have stack overflow problems though, aargh...

Miklos
http://lkml.org/lkml/2007/5/23/59
Stop Order with Short Trades

wendellmartin:
I have been using your fantastic software to do some testing of some technical trading strategies I have been learning in a course. I have a basic framework operational. The strategy calls for long and short trades, and setting stops, and adjusting those stops over time.

What I am seeing is that the stop seems to work as expected for the long trades, but not work at all for the short trades. Essentially as soon as the stop order is processed, the short trade triggers immediately, rather than taking the direction of the trade into account, as it would if I were to set a stop higher than the current price for a long trade. It makes me suspect that either I am doing something wrong, or there is a hidden assumption of long in the implementation of the stop order. Here's how I'm creating the stop order:

def adjustStop(self):
    if self.openTradeDirection == "long":
        self.log('CLOSE-STOP created long at {}'.format(self.last.low))
        self.order = self.close(exectype=bt.Order.Stop, price=self.last.low)
    else:
        self.log('CLOSE-STOP created short at {}'.format(self.last.high))
        self.order = self.close(exectype=bt.Order.Stop, price=self.last.high)

Can anyone help me understand what I am doing wrong? Is the stop functionality not working for short trades?

wendellmartin:
Please disregard! A copy paste error in another part of the code was causing this behavior, nothing in the stop functionality.
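For anyone landing here with the same symptom, the direction-dependent trigger rule a stop order is expected to follow can be sketched in a few lines of plain Python. This illustrates the semantics the poster expected; it is not backtrader's actual implementation:

```python
def stop_triggered(side, stop_price, market_price):
    """Direction-aware stop trigger rule.

    A sell stop (protecting a long) fires once the market trades at or
    below the stop price; a buy stop (closing a short) fires once the
    market trades at or above it. A stop that ignored direction would
    fire immediately on one side, which is the symptom described above.
    """
    if side == "sell":
        return market_price <= stop_price
    if side == "buy":
        return market_price >= stop_price
    raise ValueError("side must be 'buy' or 'sell'")
```

As the follow-up post confirms, backtrader does apply this rule correctly; the immediate trigger came from a copy-paste bug elsewhere in the strategy.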
https://community.backtrader.com/topic/2043/stop-order-with-short-trades
Before diving into the tutorial, don't forget to read the introduction to IronPython Windows Forms and also the tutorial on how to use labels in IronPython. We're going to continue with the program from the previous tutorial on labels. This is going to be a short tutorial on adding and displaying a textbox widget in your program. In future tutorials we'll cover how to handle the text input from events and how to change the label or other parts of the program.

Step 1: In order to create a textbox, call the constructor with no arguments. Now our textbox is added to the current program; we just need to set the properties of the text box.

self.textbox = TextBox()

Step 2: Add custom text to the textbox widget.

self.textbox.Text = "Baby Text Widget"

Step 3: You can also edit the position of the widget as you wish. Use the following position for now in your program.

self.textbox.Location = Point(50, 50)

Step 4: After the position, it's time to set the width of the text widget. I'm setting a width of 200 for the widget; you can increase or decrease it as you wish.

self.textbox.Width = 200

Now our program is ready to display the text widget on the Windows form. You can run this program from the terminal or command prompt.

import clr
clr.AddReference("System.Drawing")
clr.AddReference("System.Windows.Forms")

from System.Drawing import Point
from System.Windows.Forms import Application, Form, Label, TextBox

class LabelDemoForm(Form):
    def __init__(self):
        self.Text = 'Text Widget Demo'
        self.label = Label()
        self.label.Text = "This is text widget Demo"
        self.label.Location = Point(100, 150)
        self.label.Height = 50
        self.label.Width = 250
        self.textbox = TextBox()
        self.textbox.Text = "Baby Text Widget"
        self.textbox.Location = Point(50, 50)
        self.textbox.Width = 200
        self.Controls.Add(self.label)
        self.Controls.Add(self.textbox)

form = LabelDemoForm()
Application.Run(form)

Note: Make sure you're keeping proper indentation in the program, otherwise there is a chance of warnings being thrown in your program.
Also don’t forget to define ‘textbox’ in line no 5.
https://onecore.net/ironpython-using-textbox-widget.htm
The guys behind the Play framework have been hard at work on the new version, Play 2.0. In Play 2.0, Scala plays a much more important role, and especially the complete build process has been immensely improved. The only problem I've encountered with Play 2.0 so far is the lack of good documentation. The guys are hard at work updating the wiki, but it's often still a lot of trial and error to get what you want. Note, though, that this often isn't just caused by Play; I also sometimes still struggle with the more exotic Scala constructs ;-)

In this article, I'll give you an introduction into how you can accomplish some common tasks in Play 2.0 using Scala. More specifically I'll show you how to create an application that:

- uses sbt based dependency management to configure external dependencies
- is edited in Eclipse (with the Scala-ide plugin) using the play eclipsify command
- provides a Rest API using Play's routes
- uses Akka 2.0 (provided by the Play framework) to asynchronously call the database and generate Json (just because we can)
- converts Scala objects to Json using the Play provided Json functionality (based on jerkson)

I won't show the database access using Querulous; if you want to know more about that, look at this article. I'd like to convert the Querulous code to using Anorm, but since my last experiences with Anorm were, how do I put this, not convincingly positive, I'm saving that for a later day.

Creating an application with Play 2.0

Getting up and running with Play 2.0 is very easy and is well documented, so I won't spend too much time on this. For complete instructions see the Play 2.0 Wiki. To get up and running, after you have downloaded and extracted Play 2.0, take the following steps.

Execute the following command from the console:

$ play new FirstStepsWithPlay20

This will create a new project, and show you something like the following output:

 _ __ | | __ _ _  _| |
| '_ \| |/ _' | || |_|
| __/|_|\____|\__ (_)
|_|            |__/

play!
2.0-RC2,

The new application will be created in /Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay20

What is the application name?
> FirstStepsWithPlay20

Which template do you want to use for this new application?
1 - Create a simple Scala application
2 - Create a simple Java application
3 - Create an empty project
> 1

OK, application FirstStepsWithPlay20 is created.
Have fun!

You've now got an application you can run. Change to the just created directory and execute play run.

$ play run

(Running the application from SBT, auto-reloading is enabled)
[info] play - Listening for HTTP on port 9000...
(Server started, use Ctrl+D to stop and go back to the console...)

If you navigate to, you can see your first Play 2.0 application. And you're done with the basic installation of Play 2.0.

Dependency management

I mentioned in the introduction that I didn't start this project from scratch. I rewrote a Rest service I made with Play 1.2.4, Akka 1.x, JAX-RS and Json-Lift to the components provided by the Play 2.0 framework. Since dependency management changed between Play 1.2.4 and Play 2.0, I needed to configure my new project with the dependencies I needed. In Play 2.0 you do this in a file called build.scala, which you can find in the project folder in your project.
After adding the dependencies from my previous project, this file looked like this:

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

    val appName         = "FirstStepsWithPlay2"
    val appVersion      = "1.0-SNAPSHOT"

    val appDependencies = Seq(
      "com.twitter" % "querulous" % "2.6.5",
      "net.liftweb" %% "lift-json" % "2.4",
      "com.sun.jersey" % "jersey-server" % "1.4",
      "com.sun.jersey" % "jersey-core" % "1.4",
      "postgresql" % "postgresql" % "9.1-901.jdbc4"
    )

    val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
      // Add extra resolvers
      resolvers += "Twitter repo" at "",
      resolvers += "DevJava repo" at ""
    )
}

How to use this file is rather straightforward, once you've read the sbt documentation (). Basically we define the libraries we want, using appDependencies, and we define some extra repositories where sbt should download its dependencies from (using resolvers). A nice thing to mention is that you can specify a %% when defining dependencies. This implies that we also want to search for a library that matches our version of Scala. sbt looks at our currently configured version and adds a qualifier for that version. This makes sure we get a version that works for our Scala version.

Like I mentioned, I wanted to replace most external libraries I used with functionality from Play 2.0. After removing the stuff I didn't use anymore, this file looks like this:

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

    val appName         = "FirstStepsWithPlay2"
    val appVersion      = "1.0-SNAPSHOT"

    val appDependencies = Seq(
      "com.twitter" % "querulous" % "2.6.5",
      "postgresql" % "postgresql" % "9.1-901.jdbc4"
    )

    val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
      // Add extra resolver for the twitter repo
      resolvers += "Twitter repo" at ""
    )
}

With the dependencies configured, I can configure this project for my IDE.
Even though all my colleagues are big IntelliJ proponents, I keep coming back to what I'm used to: Eclipse. So let's see what you need to do to get this project up and running in Eclipse.

Work from Eclipse

In my Eclipse version I've got the Scala plugin installed, and the Play 2.0 framework works nicely together with this plugin. To get your project into Eclipse all you have to do is run the following command:

play eclipsify

jos@Joss-MacBook-Pro.local:~/dev/play-2.0-RC2/FirstStepsWithPlay2$ ../play eclipsify
[info] About to create Eclipse project files for your project(s).
[info] Compiling 1 Scala source to /Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay2/target/scala-2.9.1/classes...
[info] Successfully created Eclipse project files for project(s): FirstStepsWithPlay2

Now you can use "import project" from Eclipse, and you can edit your Play 2.0 / Scala project directly from Eclipse. It's possible to start the Play environment directly from Eclipse, but I haven't used that. I just start the Play project from the command line, once, and all the changes I make in Eclipse are immediately visible. For those of you who've worked with Play longer, this is probably not so special anymore. For me, personally, I am still amazed by the productivity of this environment.

Provide a Rest API using Play's routes

In my previous Play project I used the jersey module to be able to use JAX-RS annotations to specify my Rest API. Since Play 2.0 contains a lot of breaking API changes and is pretty much a rewrite from the ground up, you can't expect all the old modules to work. This was also the case for the Jersey module. I did dive into the code of this module to see if the changes were trivial, but since I couldn't find any documentation on how to create a plugin for Play 2.0 that interacts with the route processing, I decided to just switch to the way Play 2.0 does Rest.
And using the "routes" file, it was very easy to connect the (just) two operations I exposed to a simple controller:

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

GET /resources/rest/geo/list controllers.Application.processGetAllRequest
GET /resources/rest/geo/:id  controllers.Application.processGetSingleRequest(id:String)

The corresponding controller looks like this:

package controllers

import akkawebtemplate.GeoJsonService
import play.api.mvc.Action
import play.api.mvc.Controller

object Application extends Controller {

  val service = new GeoJsonService()

  def processGetSingleRequest(code: String) = Action {
    val result = service.processGetSingleRequest(code)
    Ok(result).as("application/json")
  }

  def processGetAllRequest() = Action {
    val result = service.processGetAllRequest
    Ok(result).as("application/json")
  }
}

As you can see, I've just created two very simple, basic actions. I haven't looked at fault and exception handling yet, but the Rest API offered by Play really makes additional Rest frameworks unnecessary. That's the first of the frameworks. The next part of my original application that needed to change was the Akka code. Play 2.0 includes the latest version of the Akka library (2.0-RC1). Since my original Akka code was written against 1.2.4, there were a lot of conflicts. Updating the original code wasn't so easy, though.
In the next listing you can see the corresponding code (I’ve intentionally left the imports, since in a lot of examples you can find, they are omitted, which makes an easy job, that much harder) import akka.actor.actorRef2Scala import akka.actor.Actor import akka.actor.Props import akka.dispatch.Await import akka.pattern.ask import akka.util.duration.intToDurationInt import akka.util.Timeout import model.GeoRecord import play.libs.Akka import resources.commands.Command import resources.commands.FULL import resources.commands.SINGLE import resources.Database /** * This actor is responsible for returning JSON objects from the database. It uses querulous to * query the database and parses the result into the GeoRecord class. */ sender ! some.toJson(records) } case _ => sender ! null } } /** * Handle the specified path. This rest service delegates the functionality to a specific actor * and if the result from this actor isn't null return the result */ class GeoJsonService { def processGetSingleRequest(code: String) = { val command = SINGLE(); command.parameters = List(code); runCommand(command); } /** * Operation that handles the list REST command. This creates a command * that forwards to the actor to be executed. */ def processGetAllRequest:String = { runCommand(FULL()); } /** * Function that runs a command on one of the actors and sets the response */ private def runCommand(command: Command):String = { // get the actor val actor = Akka.system.actorOf(Props[JsonActor]) implicit val timeout = Timeout(5 seconds) val result = Await.result(actor ? command, timeout.duration).asInstanceOf[String] // return result as String result } } A lot of code, but I wanted to show you the actor definition and how to use them. 
Summarizing, the Akka 2.0 code you need to use, to execute a request/reply pattern with Akka is this: private def runCommand(command: Command):String = { // get the actor val actor = Akka.system.actorOf(Props[JsonActor]) implicit val timeout = Timeout(5 seconds) val result = Await.result(actor ? command, timeout.duration).asInstanceOf[String] // return result as String result } This uses the global Akka configuration to retrieve an actor of the required type. We then send a command to the actor, and are returned a Future, on which we wait 5 seconds for a result, which we cast to a String. This Future waits for our Actor to send a reply. This is done in the actor itself: sender ! some.toJson(records) With Akka replaced I finally got a working system again. When looking through the documentation on Play 2.0 I noticed that they provided their own Json library, starting from 2.0. Since I used Json-Lift in the previous version, I thought it would be a nice exercise to move this code to the Json library, named Jerkson, provided by Play. Moving to Jerkson The move to the new library was a fairly easy one. Both Lift-Json and Jerkson use pretty much the same concept of building Json objects. In the old version I didn’t use any automatic marshalling (since I had to comply with the jsongeo format) so in this version I also did the marshalling manually. In the next listing you can see the old version and the new version together, As you can see the concepts used in both are pretty much the same. 
#New version using jerkson
val jsonstring = JsObject(
  List("type" -> JsString("featureCollection"),
    "features" -> JsArray(
      records.map(r => (JsObject(List(
        "type" -> JsString("Feature"),
        "gm_naam" -> JsString(r.name),
        "geometry" -> Json.parse(r.geojson),
        "properties" -> ({
          var toAdd = List[(String, play.api.libs.json.JsValue)]()
          r.properties.foreach(entry => (toAdd ::= entry._1 -> JsString(entry._2)))
          JsObject(toAdd)
        })))))
      .toList)))

#Old version using Lift
})))))

And after all this, I have exactly the same as I already had, but now with Play 2.0 and not using any external libraries (except Querulous). So far my experiences with Play 2.0 have been very positive. The lack of good concrete examples and documentation can be annoying sometimes, but is understandable. They do provide a couple of extensive examples in their distribution, but nothing that matched my use cases. So hats off to the guys who are responsible for Play 2.0. What I’ve seen so far is a great and comprehensive framework, with lots of functionality and a great environment to program Scala in. In the next couple of weeks I’ll see if I can get enough courage up to start with Anorm, and I’ll look at what Play has to offer on the client side. So far I’ve looked at LESS, which I really like, so I’ve got my hopes up for their template solution ;-)

Reference: Play 2.0: Akka, Rest, Json and dependencies from our JCG partner Jos Dirksen at the Smart Java blog.
http://www.javacodegeeks.com/2012/03/play-20-akka-rest-json-and-dependencies.html/comment-page-1/
Ticket #70 (closed defect: worksforme) really slow on large input ... Description yaml.load appears to be really (really) slow on large input. My string is --- seqno : 1 outof : 1 result : - <1.6Mbyte string> Both the input and output are fairly simple. Why does it take a long time to parse? Lal Change History comment:2 Changed 7 years ago by xi - Status changed from new to closed - Resolution set to worksforme Have you tried to parse the file with a libyaml based parser? import yaml yaml.load(input, Loader=yaml.CLoader) comment:3 Changed 7 years ago by anonymous Indeed, someone kindly suggested this. However, I am having trouble installing with libyaml. I managed to build LibYaml? and Pyrex successfully, however when I try and install yaml with libyaml using: python setup_with_libyaml.py install I get the errors shown below. It's complaining about no yaml.h file which is correct, and also a syntax error at the lines: 71 struct pyx_obj_5_yaml_CParser { 72 PyObject_HEAD 73 struct pyx_vtabstruct_5_yaml_CParser *pyx_vtab; 74 yaml_parser_t parser; Any thoughts. My version of gcc is 3.2.3! Any help is appreciated. Best, Lal comment:4 Changed 7 years ago by xi Could you attach the pyrex-generated C source and post the complete output of the compiler? comment:5 Changed 7 years ago by anonymous xi, I finally managed to build with LibYaml?, but it was a real pain ... The primary issues had to do with the options to gcc for the include path to yaml.h, and the library path to libyaml.so. I haven't include the generated C file, or the compiler output, as the C file was OK. Included below is what I did eventually. If the correct method for installation is explained somewhere, then I must have missed it; however, what I went through should not have been necessary. 
Best, Lal

Step 1: Command: python setup_with_libyaml.py install --prefix <path>
Failed on: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -I/home/lageorge/Tools/include/python2.5 -c ext/_yaml.c -o build/temp.linux-i686-2.5/ext/_yaml.o
In file included from ext/_yaml.c:26: ext/_yaml.h:2:18: yaml.h: No such file or directory
Fix: redid gcc command with manual -I option added for yaml.h
Result: Compiled with warnings. ext/_yaml.c: In function `pyx_f_5_yaml_7CParser_init': ext/_yaml.c:461: warning: passing arg 2 of `yaml_parser_set_input' from incompatible pointer type ext/_yaml.c: In function `pyx_f_5_yaml_8CEmitter_init': ext/_yaml.c:4427: warning: passing arg 2 of `yaml_emitter_set_output' from incompatible pointer

Step 2) Command: python setup_with_libyaml.py install --prefix <path>
Failed on: gcc -pthread -shared build/temp.linux-i686-2.5/ext/_yaml.o -lyaml -o build/lib.linux-i686-2.5/_yaml.so
/usr/bin/ld: cannot find -lyaml collect2: ld returned 1 exit
Fix: manually executed gcc with -L option to gcc. Defining environment variable LD_LIBRARY_PATH did not help
Result: gcc ran to completion.

Step 3) Command: python setup_with_libyaml.py install --prefix <path>
Result: went to completion.
is this with libyaml?
http://pyyaml.org/ticket/70
#include <sys/ddi.h>
#include <sys/sunddi.h>

int ddi_dma_nextseg(ddi_dma_win_t win, ddi_dma_seg_t seg, ddi_dma_seg_t *nseg);

This interface is obsolete. ddi_dma_nextcookie(9F) should be used instead.

win: A DMA window.
seg: The current DMA segment or NULL.
nseg: A pointer to the next DMA segment to be filled in. If seg is NULL, a pointer to the first segment within the specified window is returned.

The ddi_dma_nextseg() function gets the next DMA segment within the specified window win. If the current segment is NULL, the first DMA segment within the window is returned. A DMA segment is always required for a DMA window. A DMA segment is a contiguous portion of a DMA window (see ddi_dma_nextwin(9F)) which is entirely addressable by the device for a data transfer operation. An example where multiple DMA segments are allocated is where the system does not contain DVMA capabilities and the object may be non-contiguous. In this example the object will be broken into smaller contiguous DMA segments. Another example is where the device has an upper limit on its transfer size (for example an 8-bit address register) and has expressed this in the DMA limit structure (see ddi_dma_lim_sparc(9S) or ddi_dma_lim_x86(9S)). In this example the object will be broken into smaller addressable DMA segments.

The ddi_dma_nextseg() function returns:
DDI_SUCCESS: Successfully filled in the next segment pointer.
DDI_DMA_DONE: There is no next segment. The current segment is the final segment within the specified window.
DDI_DMA_STALE: win does not refer to the currently active window.

See also: ddi_dma_nextcookie(9F), ddi_dma_nextwin(9F), ddi_dma_segtocookie(9F), ddi_dma_sync(9F), ddi_dma_lim_sparc(9S), ddi_dma_lim_x86(9S), ddi_dma_req(9S), Writing Device Drivers for Oracle Solaris 11.2
http://docs.oracle.com/cd/E36784_01/html/E36886/ddi-dma-nextseg-9f.html
Difficulty in reading top bar

The black bar under the penguin logo was never easy to read but now it seems to have got worse. The upper row of links is just about visible as ... Could we have a more user-friendly colour scheme?

Which browser are you using, hazel?
What do we want? Time machines! When do we want 'em? Doesn't really matter does it!? The Fifth Continent

For Firefox, install the Stylish extension, add a new rule for the site, and add the following to it.
Code:
@namespace url();
@-moz-document domain("") {
  .navbar a {
    color: #FFFFFF !important;
    font-size: 12px !important;
  }
}
Last edited by elija; 07-16-2013 at 04:22 PM. Reason: Added font size

Thanks, Elija! I would never have thought of looking in the styles menu. Opera turns out to have a high-contrast black-and-white style which is very reminiscent of the black-on-grey that Links uses. It isn't pretty but it's easy to read so I have switched to that. That still doesn't explain why the default colour scheme for the site is so murky.
http://www.linuxforums.org/forum/feedback-suggestions/197397-difficulty-reading-top-bar.html
11: Asynchronous Programming

Who can wait quietly while the mud settles? Who can remain still until the moment of action?

The central part of a computer, the part that carries out the individual steps that make up our programs, is called the processor. The programs we have seen so far are things that will keep the processor busy until they have finished their work. The speed at which something like a loop that manipulates numbers can be executed depends pretty much entirely on the speed of the processor. But many programs interact with things outside of the processor. For example, they may communicate over a computer network or request data from the hard disk—which is a lot slower than getting it from memory. When such a thing is happening, it would be a shame to let the processor sit idle—there might be some other work it could do in the meantime. In part, this is handled by your operating system, which will switch the processor between multiple running programs. But that doesn’t help when we want a single program to be able to make progress while it is waiting for a network request.

Asynchronicity

In a synchronous programming model, things happen one at a time. When you call a function that performs a long-running action, it returns only when the action has finished and it can return the result. This stops your program for the time the action takes. An asynchronous model allows multiple things to happen at the same time. When you start an action, your program continues to run. When the action finishes, the program is informed and gets access to the result (for example, the data read from disk). We can compare synchronous and asynchronous programming using a small example: a program that fetches two resources from the network and then combines results. In a synchronous environment, where the request function returns only after it has done its work, the easiest way to perform this task is to make the requests one after the other.
This has the drawback that the second request will be started only when the first has finished. The total time taken will be at least the sum of the two response times. The solution to this problem, in a synchronous system, is to start additional threads of control. A thread is another running program whose execution may be interleaved with other programs by the operating system—since most modern computers contain multiple processors, multiple threads may even run at the same time, on different processors. A second thread could start the second request, and then both threads wait for their results to come back, after which they resynchronize to combine their results. In the diagram that accompanies this section (not reproduced here), thick lines represent time the program spends running normally, and thin lines represent time spent waiting for the network. In the synchronous model, the time taken by the network is part of the timeline for a given thread of control. In the asynchronous model, starting a network action conceptually causes a split in the timeline. The program that initiated the action continues running, and the action happens alongside it, notifying the program when it is finished. Another way to describe the difference is that waiting for actions to finish is implicit in the synchronous model, while it is explicit, under our control, in the asynchronous one. Asynchronicity cuts both ways. It makes expressing programs that do not fit the straight-line model of control easier, but it can also make expressing programs that do follow a straight line more awkward. We’ll see some ways to address this awkwardness later in the chapter. Both of the important JavaScript programming platforms—browsers and Node.js—make operations that might take a while asynchronous, rather than relying on threads. Since programming with threads is notoriously hard (understanding what a program does is much more difficult when it’s doing multiple things at once), this is generally considered a good thing.
I’ve been told by a reputable (if somewhat eccentric) expert on corvids that crow technology is not far behind human technology, and they are catching up. For example, many crow cultures have the ability to construct computing devices. These are not electronic, as human computing devices are, but operate through the actions of tiny insects, a species closely related to the termite, which has developed a symbiotic relationship with the crows. The birds provide them with food, and in return the insects build and operate their complex colonies that, with the help of the living creatures inside them, perform computations. Such colonies are usually located in big, long-lived nests. The birds and insects work together to build a network of bulbous clay structures, hidden between the twigs of the nest, in which the insects live and work. To communicate with other devices, these machines use light signals. The crows embed pieces of reflective material in special communication stalks, and the insects aim these to reflect light at another nest, encoding data as a sequence of quick flashes. This means that only nests that have an unbroken visual connection can communicate. Our friend the corvid expert has mapped the network of crow nests in the village of Hières-sur-Amby, on the banks of the river Rhône, producing a map of the nests and their connections (the map itself is not reproduced here). In an astounding example of convergent evolution, crow computers run JavaScript. In this chapter we’ll write some basic networking functions for them. Callbacks One approach to asynchronous programming is to make functions that perform a slow action take an extra argument, a callback function. The action is started, and when it finishes, the callback function is called with the result. As an example, the setTimeout function, available both in Node.js and in browsers, waits a given number of milliseconds (a second is a thousand milliseconds) and then calls a function.
setTimeout(() => console.log("Tick"), 500); Waiting is not generally a very important type of work, but it can be useful when doing something like updating an animation or checking whether something is taking longer than a given amount of time. Performing multiple asynchronous actions in a row using callbacks means that you have to keep passing new functions to handle the continuation of the computation after the actions. Most crow nest computers have a long-term data storage bulb, where pieces of information are etched into twigs so that they can be retrieved later. Etching, or finding a piece of data, takes a moment, so the interface to long-term storage is asynchronous and uses callback functions. Storage bulbs store pieces of JSON-encodable data under names. A crow might store information about the places where it’s hidden food under the name "food caches", which could hold an array of names that point at other pieces of data, describing the actual cache. To look up a food cache in the storage bulbs of the Big Oak nest, a crow could run code like this: import {bigOak} from "./crow-tech"; bigOak.readStorage("food caches", caches => { let firstCache = caches[0]; bigOak.readStorage(firstCache, info => { console.log(info); }); }); (All binding names and strings have been translated from crow language to English.) This style of programming is workable, but the indentation level increases with each asynchronous action because you end up in another function. Doing more complicated things, such as running multiple actions at the same time, can get a little awkward. Crow nest computers are built to communicate using request-response pairs. That means one nest sends a message to another nest, which then immediately sends a message back, confirming receipt and possibly including a reply to a question asked in the message. Each message is tagged with a type, which determines how it is handled. 
Our code can define handlers for specific request types, and when such a request comes in, the handler is called to produce a response. The interface exported by the "./crow-tech" module provides callback-based functions for communication. Nests have a send method that sends off a request. It expects the name of the target nest, the type of the request, and the content of the request as its first three arguments, and it expects a function to call when a response comes in as its fourth and last argument.

bigOak.send("Cow Pasture", "note", "Let's caw loudly at 7PM",
            () => console.log("Note delivered."));

But to make nests capable of receiving that request, we first have to define a request type named "note". The code that handles the requests has to run not just on this nest-computer but on all nests that can receive messages of this type. We’ll just assume that a crow flies over and installs our handler code on all the nests.

import {defineRequestType} from "./crow-tech";

defineRequestType("note", (nest, content, source, done) => {
  console.log(`${nest.name} received note: ${content}`);
  done();
});

The defineRequestType function defines a new type of request. The example adds support for "note" requests, which just sends a note to a given nest. Our implementation calls console.log so that we can verify that the request arrived. Nests have a name property that holds their name. The fourth argument given to the handler, done, is a callback function that it must call when it is done with the request. If we had used the handler’s return value as the response value, that would mean that a request handler can’t itself perform asynchronous actions. A function doing asynchronous work typically returns before the work is done, having arranged for a callback to be called when it completes. So we need some asynchronous mechanism—in this case, another callback function—to signal when a response is available. In a way, asynchronicity is contagious.
Any function that calls a function that works asynchronously must itself be asynchronous, using a callback or similar mechanism to deliver its result. Calling a callback is somewhat more involved and error-prone than simply returning a value, so needing to structure large parts of your program that way is not great. Promises Working with abstract concepts is often easier when those concepts can be represented by values. In the case of asynchronous actions, you could, instead of arranging for a function to be called at some point in the future, return an object that represents this future event. This is what the standard class Promise is for. A promise is an asynchronous action that may complete at some point and produce a value. It is able to notify anyone who is interested when its value is available. The easiest way to create a promise is by calling Promise.resolve. This function ensures that the value you give it is wrapped in a promise. If it’s already a promise, it is simply returned—otherwise, you get a new promise that immediately finishes with your value as its result. let fifteen = Promise.resolve(15); fifteen.then(value => console.log(`Got ${value}`)); // → Got 15 To get the result of a promise, you can use its then method. This registers a callback function to be called when the promise resolves and produces a value. You can add multiple callbacks to a single promise, and they will be called, even if you add them after the promise has already resolved (finished). But that’s not all the then method does. It returns another promise, which resolves to the value that the handler function returns or, if that returns a promise, waits for that promise and then resolves to its result. It is useful to think of promises as a device to move values into an asynchronous reality. A normal value is simply there. A promised value is a value that might already be there or might appear at some point in the future. 
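The chaining behavior of then can be seen in a small example of my own (the numbers are arbitrary): a handler may return a plain value or a promise, and in both cases the promise returned by then resolves to the eventual result.

```javascript
Promise.resolve(5)
  .then(value => value * 2)                  // handler returns a plain value
  .then(value => Promise.resolve(value + 1)) // handler returns a promise;
                                             // then waits for it to resolve
  .then(value => console.log(`Got ${value}`));
// → Got 11
```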
Computations defined in terms of promises act on such wrapped values and are executed asynchronously as the values become available. To create a promise, you can use Promise as a constructor. It has a somewhat odd interface—the constructor expects a function as argument, which it immediately calls, passing it a function that it can use to resolve the promise. It works this way, instead of for example with a resolve method, so that only the code that created the promise can resolve it. This is how you’d create a promise-based interface for the readStorage function: function storage(nest, name) { return new Promise(resolve => { nest.readStorage(name, result => resolve(result)); }); } storage(bigOak, "enemies") .then(value => console.log("Got", value)); This asynchronous function returns a meaningful value. This is the main advantage of promises—they simplify the use of asynchronous functions. Instead of having to pass around callbacks, promise-based functions look similar to regular ones: they take input as arguments and return their output. The only difference is that the output may not be available yet. Failure Regular JavaScript computations can fail by throwing an exception. Asynchronous computations often need something like that. A network request may fail, or some code that is part of the asynchronous computation may throw an exception. One of the most pressing problems with the callback style of asynchronous programming is that it makes it extremely difficult to make sure failures are properly reported to the callbacks. A widely used convention is that the first argument to the callback is used to indicate that the action failed, and the second contains the value produced by the action when it was successful. Such callback functions must always check whether they received an exception and make sure that any problems they cause, including exceptions thrown by functions they call, are caught and given to the right function. Promises make this easier. 
They can be either resolved (the action finished successfully) or rejected (it failed). Resolve handlers (as registered with then) are called only when the action is successful, and rejections are automatically propagated to the new promise that is returned by then. And when a handler throws an exception, this automatically causes the promise produced by its then call to be rejected. So if any element in a chain of asynchronous actions fails, the outcome of the whole chain is marked as rejected, and no success handlers are called beyond the point where it failed. Much like resolving a promise provides a value, rejecting one also provides one, usually called the reason of the rejection. When an exception in a handler function causes the rejection, the exception value is used as the reason. Similarly, when a handler returns a promise that is rejected, that rejection flows into the next promise. There’s a Promise.reject function that creates a new, immediately rejected promise. To explicitly handle such rejections, promises have a catch method that registers a handler to be called when the promise is rejected, similar to how then handlers handle normal resolution. It’s also very much like then in that it returns a new promise, which resolves to the original promise’s value if it resolves normally and to the result of the catch handler otherwise. If a catch handler throws an error, the new promise is also rejected. As a shorthand, then also accepts a rejection handler as a second argument, so you can install both types of handlers in a single method call. A function passed to the Promise constructor receives a second argument, alongside the resolve function, which it can use to reject the new promise. The chains of promise values created by calls to then and catch can be seen as a pipeline through which asynchronous values or failures move. 
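How a rejection travels down such a pipeline until a handler matches it can be sketched as follows (my own example; the error and the messages are arbitrary):

```javascript
// A rejected promise skips success handlers until a rejection handler
// (here the second argument to then) picks it up.
Promise.reject(new Error("boom"))
  .then(value => "step 1: " + value)            // skipped: no rejection handler
  .then(value => "step 2: " + value,            // success handler skipped
        reason => "recovered from " + reason.message)
  .then(result => console.log(result));
// → recovered from boom
```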
Since such chains are created by registering handlers, each link has a success handler or a rejection handler (or both) associated with it. Handlers that don’t match the type of outcome (success or failure) are ignored. But those that do match are called, and their outcome determines what kind of value comes next—success when it returns a non-promise value, rejection when it throws an exception, and the outcome of a promise when it returns one of those. new Promise((_, reject) => reject(new Error("Fail"))) .then(value => console.log("Handler 1")) .catch(reason => { console.log("Caught failure " + reason); return "nothing"; }) .then(value => console.log("Handler 2", value)); // → Caught failure Error: Fail // → Handler 2 nothing Much like an uncaught exception is handled by the environment, JavaScript environments can detect when a promise rejection isn’t handled and will report this as an error. Networks are hard Occasionally, there isn’t enough light for the crows’ mirror systems to transmit a signal or something is blocking the path of the signal. It is possible for a signal to be sent but never received. As it is, that will just cause the callback given to send to never be called, which will probably cause the program to stop without even noticing there is a problem. It would be nice if, after a given period of not getting a response, a request would time out and report failure. Often, transmission failures are random accidents, like a car’s headlight interfering with the light signals, and simply retrying the request may cause it to succeed. So while we’re at it, let’s make our request function automatically retry the sending of the request a few times before it gives up. And, since we’ve established that promises are a good thing, we’ll also make our request function return a promise. In terms of what they can express, callbacks and promises are equivalent. Callback-based functions can be wrapped to expose a promise-based interface, and vice versa. 
Even when a request and its response are successfully delivered, the response may indicate failure—for example, if the request tries to use a request type that hasn’t been defined or the handler throws an error. To support this, send and defineRequestType follow the convention mentioned before, where the first argument passed to callbacks is the failure reason, if any, and the second is the actual result. These can be translated to promise resolution and rejection by our wrapper. class Timeout extends Error {} function request(nest, target, type, content) { return new Promise((resolve, reject) => { let done = false; function attempt(n) { nest.send(target, type, content, (failed, value) => { done = true; if (failed) reject(failed); else resolve(value); }); setTimeout(() => { if (done) return; else if (n < 3) attempt(n + 1); else reject(new Timeout("Timed out")); }, 250); } attempt(1); }); } Because promises can be resolved (or rejected) only once, this will work. The first time resolve or reject is called determines the outcome of the promise, and further calls caused by a request coming back after another request finished are ignored. To build an asynchronous loop, for the retries, we need to use a recursive function—a regular loop doesn’t allow us to stop and wait for an asynchronous action. The attempt function makes a single attempt to send a request. It also sets a timeout that, if no response has come back after 250 milliseconds, either starts the next attempt or, if this was the fourth attempt, rejects the promise with an instance of Timeout as the reason. Retrying every quarter-second and giving up when no response has come in after a second is definitely somewhat arbitrary. It is even possible, if the request did come through but the handler is just taking a bit longer, for requests to be delivered multiple times. We’ll write our handlers with that problem in mind—duplicate messages should be harmless. 
In general, we will not be building a world-class, robust network today. But that’s okay—crows don’t have very high expectations yet when it comes to computing. To isolate ourselves from callbacks altogether, we’ll go ahead and also define a wrapper for defineRequestType that allows the handler function to return a promise or plain value and wires that up to the callback for us. function requestType(name, handler) { defineRequestType(name, (nest, content, source, callback) => { try { Promise.resolve(handler(nest, content, source)) .then(response => callback(null, response), failure => callback(failure)); } catch (exception) { callback(exception); } }); } Promise.resolve is used to convert the value returned by handler to a promise if it isn’t already. Note that the call to handler had to be wrapped in a try block to make sure any exception it raises directly is given to the callback. This nicely illustrates the difficulty of properly handling errors with raw callbacks—it is easy to forget to properly route exceptions like that, and if you don’t do it, failures won’t get reported to the right callback. Promises make this mostly automatic and thus less error-prone. Collections of promises Each nest computer keeps an array of other nests within transmission distance in its neighbors property. To check which of those are currently reachable, you could write a function that tries to send a "ping" request (a request that simply asks for a response) to each of them and see which ones come back. When working with collections of promises running at the same time, the Promise.all function can be useful. It returns a promise that waits for all of the promises in the array to resolve and then resolves to an array of the values that these promises produced (in the same order as the original array). If any promise is rejected, the result of Promise.all is itself rejected. 
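A minimal demonstration of that ordering guarantee: the resulting array follows the order of the input array, not the order in which the promises happen to resolve.

```javascript
Promise.all([
  Promise.resolve(1),
  new Promise(resolve => setTimeout(() => resolve(2), 20)), // slowest
  Promise.resolve(3)
]).then(values => console.log(values));
// → [1, 2, 3] — input order, even though 2 resolved last
```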
requestType("ping", () => "pong"); function availableNeighbors(nest) { let requests = nest.neighbors.map(neighbor => { return request(nest, neighbor, "ping") .then(() => true, () => false); }); return Promise.all(requests).then(result => { return nest.neighbors.filter((_, i) => result[i]); }); } When a neighbor isn’t available, we don’t want the entire combined promise to fail since then we still wouldn’t know anything. So the function that is mapped over the set of neighbors to turn them into request promises attaches handlers that make successful requests produce true and rejected ones produce false. In the handler for the combined promise, filter is used to remove those elements from the neighbors array whose corresponding value is false. This makes use of the fact that filter passes the array index of the current element as a second argument to its filtering function ( map, some, and similar higher-order array methods do the same). Network flooding The fact that nests can talk only to their neighbors greatly inhibits the usefulness of this network. For broadcasting information to the whole network, one solution is to set up a type of request that is automatically forwarded to neighbors. These neighbors then in turn forward it to their neighbors, until the whole network has received the message. import {everywhere} from "./crow-tech"; everywhere(nest => { nest.state.gossip = []; }); function sendGossip(nest, message, exceptFor = null) { nest.state.gossip.push(message); for (let neighbor of nest.neighbors) { if (neighbor == exceptFor) continue; request(nest, neighbor, "gossip", message); } } requestType("gossip", (nest, message, source) => { if (nest.state.gossip.includes(message)) return; console.log(`${nest.name} received gossip '${ message}' from ${source}`); sendGossip(nest, message, source); }); To avoid sending the same message around the network forever, each nest keeps an array of gossip strings that it has already seen. 
To define this array, we use the everywhere function—which runs code on every nest—to add a property to the nest’s state object, which is where we’ll keep nest-local state. When a nest receives a duplicate gossip message, which is very likely to happen with everybody blindly resending them, it ignores it. But when it receives a new message, it excitedly tells all its neighbors except for the one who sent it the message. This will cause a new piece of gossip to spread through the network like an ink stain in water. Even when some connections aren’t currently working, if there is an alternative route to a given nest, the gossip will reach it through there. This style of network communication is called flooding—it floods the network with a piece of information until all nodes have it. We can call sendGossip to see a message flow through the village. sendGossip(bigOak, "Kids with airgun in the park"); Message routing If a given node wants to talk to a single other node, flooding is not a very efficient approach. Especially when the network is big, that would lead to a lot of useless data transfers. An alternative approach is to set up a way for messages to hop from node to node until they reach their destination. The difficulty with that is it requires knowledge about the layout of the network. To send a request in the direction of a faraway nest, it is necessary to know which neighboring nest gets it closer to its destination. Sending it in the wrong direction will not do much good. Since each nest knows only about its direct neighbors, it doesn’t have the information it needs to compute a route. We must somehow spread the information about these connections to all nests, preferably in a way that allows it to change over time, when nests are abandoned or new nests are built. 
We can use flooding again, but instead of checking whether a given message has already been received, we now check whether the new set of neighbors for a given nest matches the current set we have for it. requestType("connections", (nest, {name, neighbors}, source) => { let connections = nest.state.connections; if (JSON.stringify(connections.get(name)) == JSON.stringify(neighbors)) return; connections.set(name, neighbors); broadcastConnections(nest, name, source); }); function broadcastConnections(nest, name, exceptFor = null) { for (let neighbor of nest.neighbors) { if (neighbor == exceptFor) continue; request(nest, neighbor, "connections", { name, neighbors: nest.state.connections.get(name) }); } } everywhere(nest => { nest.state.connections = new Map; nest.state.connections.set(nest.name, nest.neighbors); broadcastConnections(nest, nest.name); }); The comparison uses JSON.stringify because ==, on objects or arrays, will return true only when the two are the exact same value, which is not what we need here. Comparing the JSON strings is a crude but effective way to compare their content. The nodes immediately start broadcasting their connections, which should, unless some nests are completely unreachable, quickly give every nest a map of the current network graph. A thing you can do with graphs is find routes in them, as we saw in Chapter 7. If we have a route toward a message’s destination, we know which direction to send it in. This findRoute function, which greatly resembles the findRoute from Chapter 7, searches for a way to reach a given node in the network. But instead of returning the whole route, it just returns the next step. That next nest will itself, using its current information about the network, decide where it sends the message. 
function findRoute(from, to, connections) { let work = [{at: from, via: null}]; for (let i = 0; i < work.length; i++) { let {at, via} = work[i]; for (let next of connections.get(at) || []) { if (next == to) return via; if (!work.some(w => w.at == next)) { work.push({at: next, via: via || next}); } } } return null; } Now we can build a function that can send long-distance messages. If the message is addressed to a direct neighbor, it is delivered as usual. If not, it is packaged in an object and sent to a neighbor that is closer to the target, using the "route" request type, which will cause that neighbor to repeat the same behavior. function routeRequest(nest, target, type, content) { if (nest.neighbors.includes(target)) { return request(nest, target, type, content); } else { let via = findRoute(nest.name, target, nest.state.connections); if (!via) throw new Error(`No route to ${target}`); return request(nest, via, "route", {target, type, content}); } } requestType("route", (nest, {target, type, content}) => { return routeRequest(nest, target, type, content); }); We can now send a message to the nest in the church tower, which is four network hops removed. routeRequest(bigOak, "Church Tower", "note", "Incoming jackdaws!"); We’ve constructed several layers of functionality on top of a primitive communication system to make it convenient to use. This is a nice (though simplified) model of how real computer networks work. A distinguishing property of computer networks is that they aren’t reliable—abstractions built on top of them can help, but you can’t abstract away network failure. So network programming is typically very much about anticipating and dealing with failures. Async functions To store important information, crows are known to duplicate it across nests. That way, when a hawk destroys a nest, the information isn’t lost. 
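Before moving on, the next-hop behavior of `findRoute` can be checked against a toy graph (the function is reproduced from above; the nest names here are invented for the demonstration).

```javascript
function findRoute(from, to, connections) {
  let work = [{at: from, via: null}];
  for (let i = 0; i < work.length; i++) {
    let {at, via} = work[i];
    for (let next of connections.get(at) || []) {
      if (next == to) return via;
      if (!work.some(w => w.at == next)) {
        work.push({at: next, via: via || next});
      }
    }
  }
  return null;
}

// Toy graph: A — B — C (A and C are not direct neighbors).
let connections = new Map([
  ["A", ["B"]],
  ["B", ["A", "C"]],
  ["C", ["B"]]
]);
console.log(findRoute("A", "C", connections)); // → B (the first hop toward C)
console.log(findRoute("A", "Z", connections)); // → null (no such nest reachable)
```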
To retrieve a given piece of information that it doesn’t have in its own storage bulb, a nest computer might consult random other nests in the network until it finds one that has it. requestType("storage", (nest, name) => storage(nest, name)); function findInStorage(nest, name) { return storage(nest, name).then(found => { if (found != null) return found; else return findInRemoteStorage(nest, name); }); } function network(nest) { return Array.from(nest.state.connections.keys()); } function findInRemoteStorage(nest, name) { let sources = network(nest).filter(n => n != nest.name); function next() { if (sources.length == 0) { return Promise.reject(new Error("Not found")); } else { let source = sources[Math.floor(Math.random() * sources.length)]; sources = sources.filter(n => n != source); return routeRequest(nest, source, "storage", name) .then(value => value != null ? value : next(), next); } } return next(); } Because connections is a Map, Object.keys doesn’t work on it. It has a keys method, but that returns an iterator rather than an array. An iterator (or iterable value) can be converted to an array with the Array.from function. Even with promises this is some rather awkward code. Multiple asynchronous actions are chained together in non-obvious ways. We again need a recursive function ( next) to model looping through the nests. And the thing the code actually does is completely linear—it always waits for the previous action to complete before starting the next one. In a synchronous programming model, it’d be simpler to express. The good news is that JavaScript allows you to write pseudo-synchronous code to describe asynchronous computation. An async function is a function that implicitly returns a promise and that can, in its body, await other promises in a way that looks synchronous. 
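Before rewriting `findInStorage`, here is the shape of an async function in miniature. The helper `later` is an invented stand-in for any promise-producing operation, such as `storage` or `routeRequest`.

```javascript
// Invented helper: resolves with the given value after a short delay.
function later(value) {
  return new Promise(resolve => setTimeout(() => resolve(value), 10));
}

async function addSlowly(a, b) {
  let x = await later(a); // execution is frozen here until the promise resolves
  let y = await later(b); // ...and again here, without blocking other code
  return x + y;           // resolves the implicitly returned promise
}

addSlowly(2, 3).then(sum => console.log(sum)); // → 5
```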
We can rewrite findInStorage like this: async function findInStorage(nest, name) { let local = await storage(nest, name); if (local != null) return local; let sources = network(nest).filter(n => n != nest.name); while (sources.length > 0) { let source = sources[Math.floor(Math.random() * sources.length)]; sources = sources.filter(n => n != source); try { let found = await routeRequest(nest, source, "storage", name); if (found != null) return found; } catch (_) {} } throw new Error("Not found"); } An async function is marked by the word async before the function keyword. Methods can also be made async by writing async before their name. When such a function or method is called, it returns a promise. As soon as the body returns something, that promise is resolved. If it throws an exception, the promise is rejected. findInStorage(bigOak, "events on 2017-12-21") .then(console.log); Inside an async function, the word await can be put in front of an expression to wait for a promise to resolve and only then continue the execution of the function. Such a function no longer, like a regular JavaScript function, runs from start to completion in one go. Instead, it can be frozen at any point that has an await, and can be resumed at a later time. For non-trivial asynchronous code, this notation is usually more convenient than directly using promises. Even if you need to do something that doesn’t fit the synchronous model, such as perform multiple actions at the same time, it is easy to combine await with the direct use of promises. Generators This ability of functions to be paused and then resumed again is not exclusive to async functions. JavaScript also has a feature called generator functions. These are similar, but without the promises. When you define a function with function* (placing an asterisk after the word function), it becomes a generator. When you call a generator, it returns an iterator, which we already saw in Chapter 6. 
function* powers(n) { for (let current = n;; current *= n) { yield current; } } for (let power of powers(3)) { if (power > 50) break; console.log(power); } // → 3 // → 9 // → 27 Initially, when you call powers, the function is frozen at its start. Every time you call next on the iterator, the function runs until it hits a yield expression, which pauses it and causes the yielded value to become the next value produced by the iterator. When the function returns (the one in the example never does), the iterator is done. Writing iterators is often much easier when you use generator functions. The iterator for the Group class (from the exercise in Chapter 6) can be written with this generator: Group.prototype[Symbol.iterator] = function*() { for (let i = 0; i < this.members.length; i++) { yield this.members[i]; } }; There’s no longer a need to create an object to hold the iteration state—generators automatically save their local state every time they yield. Such yield expressions may occur only directly in the generator function itself and not in an inner function you define inside of it. The state a generator saves, when yielding, is only its local environment and the position where it yielded. An async function is a special type of generator. It produces a promise when called, which is resolved when it returns (finishes) and rejected when it throws an exception. Whenever it yields (awaits) a promise, the result of that promise (value or thrown exception) is the result of the await expression. The event loop Asynchronous programs are executed piece by piece. Each piece may start some actions and schedule code to be executed when the action finishes or fails. In between these pieces, the program sits idle, waiting for the next action. So callbacks are not directly called by the code that scheduled them. If I call setTimeout from within a function, that function will have returned by the time the callback function is called. 
And when the callback returns, control does not go back to the function that scheduled it. Asynchronous behavior happens on its own empty function call stack. This is one of the reasons that, without promises, managing exceptions across asynchronous code is hard. Since each callback starts with a mostly empty stack, your catch handlers won’t be on the stack when they throw an exception. try { setTimeout(() => { throw new Error("Woosh"); }, 20); } catch (_) { // This will not run console.log("Caught!"); } No matter how closely together events—such as timeouts or incoming requests—happen, a JavaScript environment will run only one program at a time. You can think of this as it running a big loop around your program, called the event loop. When there’s nothing to be done, that loop is stopped. But as events come in, they are added to a queue, and their code is executed one after the other. Because no two things run at the same time, slow-running code might delay the handling of other events. This example sets a timeout but then dallies until after the timeout’s intended point of time, causing the timeout to be late. let start = Date.now(); setTimeout(() => { console.log("Timeout ran at", Date.now() - start); }, 20); while (Date.now() < start + 50) {} console.log("Wasted time until", Date.now() - start); // → Wasted time until 50 // → Timeout ran at 55 Promises always resolve or reject as a new event. Even if a promise is already resolved, waiting for it will cause your callback to run after the current script finishes, rather than right away. Promise.resolve("Done").then(console.log); console.log("Me first!"); // → Me first! // → Done In later chapters we’ll see various other types of events that run on the event loop. Asynchronous bugs When your program runs synchronously, in a single go, there are no state changes happening except those that the program itself makes. 
For asynchronous programs this is different—they may have gaps in their execution during which other code can run. Let's look at an example. One of the hobbies of our crows is to count the number of chicks that hatch throughout the village every year. Nests store this count in their storage bulbs. The following code tries to enumerate the counts from all the nests for a given year: function anyStorage(nest, source, name) { if (source == nest.name) return storage(nest, name); else return routeRequest(nest, source, "storage", name); } async function chicks(nest, year) { let list = ""; await Promise.all(network(nest).map(async name => { list += `${name}: ${ await anyStorage(nest, name, `chicks in ${year}`) }\n`; })); return list; } The async name => part shows that arrow functions can also be made async by putting the word async in front of them. The code doesn't immediately look suspicious...it maps the async arrow function over the set of nests, creating an array of promises, and then uses Promise.all to wait for all of these before returning the list they build up. But it is seriously broken. It'll always return only a single line of output, listing the nest that was slowest to respond. chicks(bigOak, 2017).then(console.log); Can you work out why? The problem lies in the += operator, which takes the current value of list at the time where the statement starts executing and then, when the await finishes, sets the list binding to be that value plus the added string. But between the time where the statement starts executing and the time where it finishes there's an asynchronous gap. The map expression runs before anything has been added to the list, so each of the += operators starts from an empty string and ends up, when its storage retrieval finishes, setting list to a single-line list—the result of adding its line to the empty string.
This could have easily been avoided by returning the lines from the mapped promises and calling join on the result of Promise.all, instead of building up the list by changing a binding. As usual, computing new values is less error-prone than changing existing values. async function chicks(nest, year) { let lines = network(nest).map(async name => { return name + ": " + await anyStorage(nest, name, `chicks in ${year}`); }); return (await Promise.all(lines)).join("\n"); } Mistakes like this are easy to make, especially when using await, and you should be aware of where the gaps in your code occur. An advantage of JavaScript's explicit asynchronicity (whether through callbacks, promises, or await) is that spotting these gaps is relatively easy. Summary Asynchronous programming makes it possible to express waiting for long-running actions without freezing the program during these actions. JavaScript environments typically implement this style of programming using callbacks, functions that are called when the actions complete. An event loop schedules such callbacks to be called when appropriate, one after the other, so that their execution does not overlap. Programming asynchronously is made easier by promises, objects that represent actions that might complete in the future, and async functions, which allow you to write an asynchronous program as if it were synchronous. Exercises Tracking the scalpel The village crows own an old scalpel that they occasionally use on special missions—say, to cut through screen doors or packaging. To be able to quickly track it down, every time the scalpel is moved to another nest, an entry is added to the storage of both the nest that had it and the nest that took it, under the name "scalpel", with its new location as the value.
This means that finding the scalpel is a matter of following the breadcrumb trail of storage entries, until you find a nest where that entry points at the nest itself. Write an async function locateScalpel that does this, starting at the nest on which it runs. You can use the anyStorage function defined earlier to access storage in arbitrary nests. The scalpel has been going around long enough that you may assume that every nest has a "scalpel" entry in its data storage. Next, write the same function again without using async and await. Do request failures properly show up as rejections of the returned promise in both versions? How? async function locateScalpel(nest) { // Your code here. } function locateScalpel2(nest) { // Your code here. } locateScalpel(bigOak).then(console.log); // → Butcher Shop This can be done with a single loop that searches through the nests, moving forward to the next when it finds a value that doesn't match the current nest's name and returning the name when it finds a matching value. In the async function, a regular for or while loop can be used. To do the same in a plain function, you will have to build your loop using a recursive function. The easiest way to do this is to have that function return a promise by calling then on the promise that retrieves the storage value. Depending on whether that value matches the name of the current nest, the handler returns that value or a further promise created by calling the loop function again. Don't forget to start the loop by calling the recursive function once from the main function. In the async function, rejected promises are converted to exceptions by await. When an async function throws an exception, its promise is rejected. So that works. If you implemented the non-async function as outlined earlier, the way then works also automatically causes a failure to end up in the returned promise.
If a request fails, the handler passed to then isn't called, and the promise it returns is rejected with the same reason. Building Promise.all Given an array of promises, Promise.all returns a promise that waits for all of the promises in the array to finish. It then succeeds, yielding an array of result values. If a promise in the array fails, the promise returned by all fails too, with the failure reason from the failing promise. Implement something like this yourself as a regular function called Promise_all. Remember that after a promise has succeeded or failed, it can't succeed or fail again, and further calls to the functions that resolve it are ignored. This can simplify the way you handle failure of your promise. function Promise_all(promises) { return new Promise((resolve, reject) => { // Your code here. }); } // Test code. Promise_all([]).then(array => { console.log("This should be []:", array); }); function soon(val) { return new Promise(resolve => { setTimeout(() => resolve(val), Math.random() * 500); }); } Promise_all([soon(1), soon(2), soon(3)]).then(array => { console.log("This should be [1, 2, 3]:", array); }); Promise_all([soon(1), Promise.reject("X"), soon(3)]) .then(array => { console.log("We should not get here"); }) .catch(error => { if (error != "X") { console.log("Unexpected failure:", error); } }); Tracking when every promise has resolved can be done with a counter that is initialized to the length of the input array and from which we subtract 1 every time a promise succeeds. When it reaches 0, we are done. Make sure you take into account the situation where the input array is empty (and thus no promise will ever resolve). Handling failure requires some thought but turns out to be extremely simple. Just pass the reject function of the wrapping promise to each of the promises in the array as a catch handler or as a second argument to then so that a failure in one of them triggers the rejection of the whole wrapper promise.
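One possible sketch along those lines, using a pending counter plus a results array; this is one way to meet the exercise's requirements, not necessarily the only solution.

```javascript
function Promise_all(promises) {
  return new Promise((resolve, reject) => {
    let results = [];
    let pending = promises.length;
    if (pending == 0) resolve(results); // empty input: resolve immediately
    for (let i = 0; i < promises.length; i++) {
      promises[i].then(value => {
        results[i] = value;             // store at the original index
        pending--;
        if (pending == 0) resolve(results);
      }, reject);                       // any failure rejects the wrapper
    }
  });
}

Promise_all([Promise.resolve(1), Promise.resolve(2)])
  .then(array => console.log(array)); // → [1, 2]
```

Note that because a promise's outcome is fixed on the first call to resolve or reject, it is harmless that reject may be handed to several promises: only the first failure takes effect.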
Swift Help: View Controller Is Unreachable Storyboards are great for building user interfaces, but it can sometimes be frustrating to debug the cryptic warnings and errors Xcode throws at you. In this episode of Swift Help, I show you how to avoid unreachable view controllers when working with storyboards. I have created a starter project to get us started. You can download the project if you want to follow along with me. The project has two storyboards, Main.storyboard and LaunchScreen.storyboard. Xcode creates these storyboards for us. We are only interested in Main.storyboard. The project defines two UIViewController subclasses, ViewController and MyViewController. Main.storyboard contains two scenes, View Controller Scene and My View Controller Scene. If we build the project, Xcode warns us that one of the view controllers in Main.storyboard is unreachable. The warning reads "My View Controller is unreachable because it has no entry points, and no identifier for runtime access via -[UIStoryboard instantiateViewControllerWithIdentifier:]". The goal of this episode of Swift Help is to dissect this warning and understand what it means. That should teach you how to use storyboards without becoming frustrated. View Controller Is Unreachable What you need to understand before we continue is that a storyboard can only instantiate a view controller if the view controller:
- is the initial view controller of the storyboard.
- is reachable via a segue.
- has a storyboard ID.
If none of these requirements are met, the view controller is unreachable. That is what Xcode warns us about. The My View Controller Scene has no entry points because (1) its view controller is not the initial view controller of the storyboard and (2) it isn't reachable via a segue.
We cannot instantiate it programmatically because MyViewController doesn't have a storyboard ID or storyboard identifier. Let's take a look at each of these issues in detail. Setting the Initial View Controller of the Storyboard Open Main.storyboard and select the View Controller Scene. Notice that an arrow is pointing to the view controller in the storyboard. The arrow indicates that the view controller is the initial view controller of the storyboard. With the view controller selected, open the Attributes Inspector on the right. In the View Controller section, the Is Initial View Controller checkbox is selected. Unchecking the checkbox removes the arrow. Checking the checkbox adds the arrow. We don't want to change the initial view controller of the storyboard, but we have two other options to resolve the warning Xcode outputs. Creating a Segue One option to resolve the warning is by connecting the My View Controller Scene with the View Controller Scene via a segue. A segue defines the visual transition between two view controllers. By creating a segue, the view controller is no longer unreachable. The View Controller Scene contains two buttons, Segue and Code. Select the Segue button, press Control, and drag from the button to the My View Controller Scene. Choose Action Segue > Show from the menu that pops up. Defining a Storyboard ID/Storyboard Identifier The second option to resolve the warning is by defining the storyboard ID or storyboard identifier of the view controller. Select the My View Controller Scene and open the Identity Inspector on the right. In the Identity section, set Storyboard ID to MyViewController. The storyboard ID can be any string. Open ViewController.swift. When the user taps the Code button, the showMyViewController(_:) method is executed. Let's instantiate a MyViewController instance and programmatically present it to the user. We load the main storyboard and ask it to instantiate the view controller with identifier MyViewController.
The string we pass to the instantiateViewController(identifier:) method is the string we defined in the storyboard a moment ago. It is the storyboard ID or storyboard identifier. Notice that we use a guard statement and throw a fatal error if the storyboard isn't able to instantiate the MyViewController instance. Why is that? This operation should never fail. If it fails, it means we made a mistake we need to fix. With the MyViewController instance instantiated, we ask the view controller to present it to the user by invoking the present(_:animated:completion:) method. import UIKit class ViewController: UIViewController { ... // MARK: - Actions @IBAction func showMyViewController(_ sender: Any) { // Instantiate My View Controller guard let myViewController = UIStoryboard(name: "Main", bundle: .main).instantiateViewController(identifier: "MyViewController") as? MyViewController else { fatalError("Unable to Instantiate My View Controller") } // Present My View Controller present(myViewController, animated: true) } } Build and Run Build and run the application in a simulator or on a physical device. By tapping the Segue button, the segue we defined in Main.storyboard is executed. By tapping the Code button, the showMyViewController(_:) method is executed. Notice that Xcode no longer shows us a warning, that is, the view controller is no longer unreachable. That is another problem solved.
Ok, I probably can't quite deliver on a promise like that (instead, I suggest you simply buy an easy button). But, when it comes to the various things that can go wrong in your Version Control workspace, "resolve" is the command that's responsible for putting things back together. Resolve handles three "classes" of conflicts - version conflicts, namespace conflicts, and local overwrite conflicts. By and large, there are three commands (currently) that can generate conflicts - get, checkin, and merge. The classic version conflict scenario (in the parallel development world) is the situation where you try to get or checkin a file, but someone else has checked in one or more new versions since you checked it out. Depending on how the two files changed, we may be able to automatically resolve the conflicts. For example, if you added 3 lines to the end of the file, and someone else checked in a delete of the first line of the file, we can delete that line from your file and then you'll be up to date with the merged file (yes, we've overloaded the term 'merge'). Or maybe you made content changes to the file and someone else just renamed it. We can move the file in your workspace but preserve your changes, and again you'll be ready to check in. There are many other scenarios (many, MANY others). We'll ask you if you want to attempt to automatically resolve the changes, along with various appropriate 'manual' resolution options (keep your changes, discard your changes, use a manual merge tool, etc.). The next kind of conflict is a namespace conflict - these arise when two items want to have the same fully-qualified name at the same time. The classic example is where you try to add a file and someone else has (presumably recently) checked in a file with the same name. There are other scenarios - you try to rename a file but someone just undeleted a file with the same name, for example. In these cases, we can't automatically fix the problem. 
We're still tweaking the final behavior and UI, but the current thinking is that you'll be prompted to either rename your local item, or keep it and pend a rename of the conflicting item, or undo your change. Get, checkin, and merge can all generate namespace conflicts.

The final conflict type is pretty straightforward compared to the other two. There are situations where, during a get or merge, something about the state of your workspace blocks the server from moving, writing, or overwriting one or more files. We'll prompt you to try the operation again (in case you fixed whatever the problem was on your own, perhaps by changing permissions on a folder), or overwrite the item, or cancel (as in, let me fix it!). The overwrite scenario is the classic one that SourceSafe users may be familiar with - you couldn't check out the file for some reason, so you just cleared the readonly flag and made some changes yourself. Overwrite gives the server permission to overwrite the local contents with the version you specified in get (or the tip, if you didn't specify a version).

The nice thing to note about using resolve, instead of just retrying get, is that it will "finish the get". So, if you were trying to sync your workspace to a specific point in time, or changeset, etc., resolve will finish getting your workspace into that state, instead of you having to fix the problems and then run get again (and possibly reentering a complex date or label specification).

As you can probably imagine, things get slightly more complicated when merge is involved. If there's a conflict when trying to merge changes between items, you'll be prompted whether to keep the merge source's version, or the merge target's, or to automatically or manually merge the differences (if appropriate), and you'll have similar options where renames are involved as above. We also are working on some UI to streamline the process.
We want you to be able to select a course of action for a whole class of conflicts (for example, automatically merge all version conflicts, ignore namespace conflicts so I can address them individually, and overwrite locally writable items). Any questions about resolve? Obviously there's a lot more to the story where various wacky scenarios and edge cases come into play...

99% of the time, in the "local overwrite" case the behavior I actually want is something like "pretend I’d checked out the file as of the last version that I did a ‘get’ operation on, and do a ‘merge’ resolve operation accordingly". For example, suppose we have version 1 of Hello.txt with the contents:

    Hello world!
    Goodbye world!

Now I fail to checkout Hello.txt so I make it locally modifiable and add a new line:

    Hello world!
    Goodbye world!
    Hello again world!

In the meantime someone else checked in version 2 with a change to the first line:

    Hello there world!
    Goodbye world!

When I do a get, it’s pretty obvious what behavior I want and there’s no fundamental reason the source control system can’t provide it. It knows that my working copy corresponds to version 1, and it can tell simply enough that it doesn’t actually match version 1 (not least because it’s writable). So when it’s faced with this conflict it should be trivial to pick version 1 as the common ancestor and merge my local version with version 2:

    Hello there world!
    Goodbye world!
    Hello again world!

I’ve complained on various blogs that it sounds like the offline support in Team Foundation is barely improved over SourceSafe, and hopeless compared to CVS’s (and even CVS’s still sucks compared to what you’d really like to be able to do). But if the "local overwrite resolve" operation were smart like this, that would be enough to match (if not improve on) CVS’s functionality. It would change offline use from "horribly painful" to "comfortably tolerable".
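The three-way merge described above (version 1 as the common ancestor) can be sketched with a naive line-based merge. This is only an illustration of the idea, not how TFS implements it; real tools use smarter diff algorithms and emit conflict markers instead of raising.

```python
from difflib import SequenceMatcher

def three_way_merge(base, ours, theirs):
    """Naive line-based three-way merge.

    Applies each side's non-overlapping edits to the common ancestor.
    Raises on overlapping edits; note that two insertions at the exact
    same point are applied in order rather than flagged, which a real
    merge tool would treat as a conflict.
    """
    def edits(side):
        sm = SequenceMatcher(None, base, side)
        return [(i1, i2, side[j1:j2])
                for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

    merged_edits = sorted(edits(ours) + edits(theirs), key=lambda e: e[0])
    # Overlapping regions mean both sides touched the same lines.
    for (a1, a2, _), (b1, b2, _) in zip(merged_edits, merged_edits[1:]):
        if b1 < a2:
            raise ValueError("conflicting edits; manual merge needed")

    result, pos = [], 0
    for i1, i2, replacement in merged_edits:
        result.extend(base[pos:i1])   # unchanged lines before this edit
        result.extend(replacement)    # the edited/inserted lines
        pos = i2
    result.extend(base[pos:])
    return result

# The Hello.txt scenario from the comment above:
base = ["Hello world!", "Goodbye world!"]
ours = ["Hello world!", "Goodbye world!", "Hello again world!"]
theirs = ["Hello there world!", "Goodbye world!"]
merged = three_way_merge(base, ours, theirs)
# merged == ["Hello there world!", "Goodbye world!", "Hello again world!"]
```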
Still a long way from "excellent" but a massive, massive improvement over today’s world.

Stuart, I have mostly good news. For the most part, your specific example is pretty decent in Hatteras. When you were ready to get ‘back online’, you’d run h checkout Hello.txt, which would move it from being just writable to actually being checked out (it does NOT clobber your changes). Then you’d run get, and you’d get the version conflict. Your specific scenario CAN be automatically merged, so you’d end up with the merged file exactly as you described. I even tried out your exact scenario just now to make sure 🙂 In the case where there wasn’t a newer version on the server, you’d just check yours out and then check it in. The thing we’re missing is something that looks at all of your local files and figures out which ones you ‘edited offline’, and checks them out for you.

Awesome 🙂 "Something that looks at all of your local files and figures out which ones you edited offline and checks them out for you" is probably less than a screenful of Perl code that I can write myself. I’d have to use an "h status" command and parse the response (I’m assured that the output of the various h commands is scriptability-friendly, no filename truncation like ss.exe output) to see which files I had checked out, and then compare that with the results of a directory scan looking for non-readonly files. Pretty trivial stuff 🙂 If I wanted to be clever I could combine that with an "h diff /format:brief" or whatever to tell me whether files were really changed or not regardless of their writability.

The most interesting part of your response is the implication that "h checkout" checks out whatever version you last got (version 1 in my example), rather than automatically incorporating a "get" operation as well. Is this only because the file was locally writable and/or modified, or is this always the behavior?
(either way it’s cool, but I’m intrigued…)

Stuart, Yes, the command line client has scripting in mind. We’ll be looking for feedback anywhere we do things that make this great, and especially places where we could make it better – your filename truncation is a classic example of things we’ll want to find and fix during the CTP/Beta process.

We’ll always *only* checkout the file, rather than issuing a get in the process. There are various scenarios where you want a specific, non-tip version to pend changes against; for example, if a bug is reported in some specific build, you might want to sync back to a label that describes that exact build, repro the bug, fix it, THEN do a get to latest to merge your bugfix with subsequent changes to the affected files. It’s a break from the SourceSafe mentality on checkout, but I like it better this way. Since you might always have to deal with newer versions by the time you check in anyway…

Note that we will issue a warning if there’s a newer version "already" when you check out. So if you didn’t mean to check out an earlier version, you can run get and resolve to get to latest. Or you could undo your checkout, run get to get to latest, and check out again. We also report if other people already have it checked out (again, just a warning), so you’ll have some inkling that some toe-stepping may be in the near future.

We have a set of files that our whole QA team finds the need to update periodically. So when I check one of them out, it’s quite common to see that 3 or 4 other people also have it checked out. It’s also quite common to see that several versions get created while I’m working on a modification. So, I often run get two or three times along the way, incorporating changes as I go, then finally do one last content merge when I’m ready to check in.
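The offline-edit detection discussed above (find files that are writable on disk but not checked out) really is a small script. Here is a hedged sketch in Python rather than Perl; the idea is the same. The `h status` output format is hypothetical, so this version takes the parsed checked-out list and the directory-scan results as plain inputs.

```python
import stat

def offline_edits(workspace_files, checked_out):
    """Return paths that were likely 'edited offline': writable on disk
    but not known to the server as checked out.

    workspace_files: iterable of (path, mode) pairs, e.g. gathered with
        os.walk + os.stat in a real script.
    checked_out: set of paths parsed from a status command
        (a hypothetical `h status` here).
    """
    return sorted(
        path for path, mode in workspace_files
        if mode & stat.S_IWUSR and path not in checked_out)

# Example: a.txt is checked out, b.txt is read-only, c.txt was made
# writable by hand, so only c.txt needs a checkout pended.
files = [("a.txt", 0o644), ("b.txt", 0o444), ("c.txt", 0o644)]
# offline_edits(files, {"a.txt"}) == ["c.txt"]
```

A fancier version could, as the commenter suggests, also run a brief diff to skip files that are writable but unchanged.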
It works out pretty well in our case, since we’re usually modifying different areas of the files in question, which means the changes can usually be automatically resolved with no conflicts (or build breaks – a whole different animal, of course).

All of those answers are exactly what I’d hope them to be. That’s awesome, awesome news and it addresses one of my two biggest worries about VSTF (the other being the lack of VS2002/2003 integration – I’ve been specifically told in other blogs that this is *not* planned). One particular place where this merging behavior will be a *huge* improvement will be with VS project files. It’s a major bone of contention right now that only one person can be in the middle of a change that involves adding or deleting files at a time. Decent merging on the project file would be a godsend.

Note that in a *really* ideal world you ought to be able to do better on the project file than on an arbitrary text file. Consider this project file (syntax is approximate; I know it’s changed since VS2003 anyway):

    <Files>
      <file name="A.txt" …/>
      <file name="B.txt" …/>
    </Files>

If one person adds an "A1.txt" and another adds an "A2.txt", a pure text-file style merge on the project file would treat these as a conflict because the new lines were added in the same place (between A and B in alphabetical order). But knowing the project file format it’s possible to resolve this conflict and know that both insertions should be honored and "A1.txt" goes first.

I’m not sure how to deal with this issue architecturally – the TF layer is rightly well below the layer that understands project files. Perhaps a plugin architecture is needed that can provide filetype-specific help to TF on how to resolve merge conflicts. (Just to be clear – I’m still really happy to learn this news even if TF doesn’t have any answer to this obscure class of project file conflict. It’s still vastly improved over today’s world and I’ll be thrilled to get the chance to use it.
I just thought of the project file thing while I was digesting the implications of your comment, and I thought it would be worth mentioning, either to see if you have an answer already or as a suggestion for a future release)

Well, I can see your point, that a tool "aware" of project structure might be able to reconcile both changes. But, I don’t think you can get away from a conflict in the A1/A2 case – they’ll always be different changes to the same location in the file. Your tool would have to actually be smart enough to go see that files with both those names now exist, or something. In the case where one person adds A1 and another adds B1 – and assuming you’re right in saying that VS maintains a sorted order – even our built-in tools will be able to recognize these as two distinct changes. Also, we’re planning on shipping with an XML diff/merge tool as well as a more typical text tool, so if you do have to manually reconcile changes in a project file, you’ll at least have an XML-aware tool to do it with.

Yeah, I wasn’t suggesting that tf could handle this case without special help from something that understands the filetype. And you’re right, the 90% case is where the files are added in different places in the list and can be merged just fine with builtin tools. I guess really I’d have to say I consider it a VS-level bug rather than a TF-level one. Anything that exposes the implementation details of the syntax of the project file to the end user – especially in the form of a conflict that requires manual resolution only if two files were added that are alphabetically adjacent in name – is a user-experience problem. But it’s perfectly reasonable to say that handling this case is really VS’s problem, not yours. The *most* I’d expect of TF here is to allow applications calling it through its API to be able to programmatically identify merge failures and programmatically resolve them.
Then VS could intercept merge conflicts in the project file and apply its greater knowledge of the filetype to see if it can do better resolution than TF can alone.
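The format-aware merge the commenters are describing for the A1.txt/A2.txt case can be sketched like this: because the tool knows the project file's `<Files>` section is a sorted set of entries, two additions at the same textual position are not a conflict at all. The `<Files>`/`<file>` element names follow the approximate syntax quoted above and are illustrative, not any real VS project schema.

```python
import xml.etree.ElementTree as ET

def merge_project_files(base_xml, ours_xml, theirs_xml):
    """Format-aware merge of a hypothetical <Files> project section.

    Treats the file list as a set: entries either side added are kept,
    entries either side removed are dropped, and the result is
    re-sorted, so same-position additions never conflict.
    """
    def names(xml_text):
        root = ET.fromstring(xml_text)
        return {f.get("name") for f in root.iter("file")}

    base, ours, theirs = map(names, (base_xml, ours_xml, theirs_xml))
    merged = (base & ours & theirs) | (ours - base) | (theirs - base)

    root = ET.Element("Files")
    for name in sorted(merged):  # keep the list alphabetized
        ET.SubElement(root, "file", name=name)
    return ET.tostring(root, encoding="unicode")

base = '<Files><file name="A.txt"/><file name="B.txt"/></Files>'
ours = '<Files><file name="A.txt"/><file name="A1.txt"/><file name="B.txt"/></Files>'
theirs = '<Files><file name="A.txt"/><file name="A2.txt"/><file name="B.txt"/></Files>'
merged = merge_project_files(base, ours, theirs)
# Both additions survive, with A1.txt sorted before A2.txt.
```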
https://blogs.msdn.microsoft.com/crathjen/2005/02/22/to-magically-fix-all-of-your-problems-type-resolve/
Yes, I had to build a little test bed to find items that printed differently in debugPrint than they did in print. I thought I’d share. Surely this kind of bizarre little implementation of a command-line test will be of interest to someone out there. You might ask: why not playgrounds. The answer is that printing to any output stream type (including strings) crashes playgrounds. A playground version would use XCPlayground to continue execution instead of a run loop.

    import Cocoa

    // Items to check
    //let x = "Snoop"
    //let x = 1...5
    let x = UnicodeScalar(0x1f601)

    // Build test cases
    var a = ""; print(x, &a, appendNewline: false); print(a)
    var b = ""; debugPrint(x, &b, appendNewline: false); print(b)

    // Compare and alert
    func ExitAfter(t: Double, _ status: Int32) {
        dispatch_after(
            dispatch_time(DISPATCH_TIME_NOW,
                numericCast(UInt64(t * Double(NSEC_PER_SEC)))),
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
            { exit(status) })
    }

    if a != b { // Success
        print("YES!")
        NSSound(named: "Sosumi")?.play()
        ExitAfter(0.5, 0)
        CFRunLoopRun() // stick around to produce sound
    } else { print("No") }
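For comparison, the same hunt in another language: Python's str() and repr() play roughly the roles of Swift's print and debugPrint output, and a tiny predicate finds values whose two forms differ. This is an analogy, not a claim about Swift's internals.

```python
def prints_differently(x):
    """Analogue of the Swift test bed: does the plain display form
    (str, like print) differ from the debug form (repr, like
    debugPrint) for this value?"""
    return str(x) != repr(x)

# Strings differ: str drops the quotes, repr keeps them.
# prints_differently("Snoop") -> True
# Integers render identically both ways.
# prints_differently(1) -> False
```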
https://ericasadun.com/2015/07/03/swift-hunting-down-the-fleeting-debugprint-differentiation/
March 6, 2009

This article was contributed by Tom Chance.

In my last article on OpenStreetMap I looked at the recent mass imports of public data — everything from British oil wells to the entire road network for the United States. But for those interested in more than an alternative to Google Maps, the ability to extract or add data to the project is what really makes OpenStreetMap shine. Whether you want to get an SVG of a campus map or import a local government's database of every building in the city, Linux users will find plenty of tools that cater to their needs.

The export tab on the web site provides the simplest way to access data. Users can draw an area on the main map view and then grab an image (in PNG, JPEG, PDF or PS formats); some HTML to embed the map into your web site; or the raw XML data. To further modify the data, either in the OpenStreetMap database or a local copy (stored as an XML .osm file on your disk), download the data using an editor like JOSM (the 'Java OpenStreetMap editor'). To make life easier when selecting the area to download, open up the preferences dialog and install the namefinder and slippy_map_chooser plugins.

Grabbing larger amounts of data would be difficult, slow and clumsy with these methods. More advanced users can get data directly through the API. Check the latitude and longitude coordinates for the area you want — an easy method for this is to use the export tab to draw an area, then note down the coordinates it records — then fire up wget or curl and download the data: wget 'zappy').

Access to really enormous amounts of data, such as the entire planet or a country, can be found in the frequently updated dumps listed on the Planet.osm wiki page. Once you have the data there are all manner of uses - your GPS navigation device, rendering your own maps for the web or print, or converting the data into another standard GIS format with tools like the Ruby osmlib.
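A small sketch of building such an API request. The 0.6 map endpoint shown here is an assumption (the article's original example URL did not survive); check the current OSM API documentation before relying on it.

```python
def osm_map_url(left, bottom, right, top):
    """Build a bounding-box map download URL for the OSM API.

    The endpoint path is assumed to be the 0.6-style /api/0.6/map;
    verify against the current API docs. Coordinates are in degrees:
    left/right are longitudes, bottom/top are latitudes.
    """
    return ("https://api.openstreetmap.org/api/0.6/map"
            f"?bbox={left},{bottom},{right},{top}")

# A box roughly around Reading, UK, could then be fetched with
# wget or curl using the returned URL:
url = osm_map_url(-0.99, 51.43, -0.94, 51.46)
```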
The documentation for each tool varies enormously, but the toolchains tend to be relatively straightforward.

Of course, extracting data is only half the story. Not only should all good open source citizens be contributing back, but you will get the most value from the data if you collaborate with others in developing a rich data set that will lead to tools and use cases you can later replicate. OpenStreetMap abounds with methods and tools for entering data. You might like the "old school" method of tracing a breadcrumb GPS trail — much more fun in the early days when I mapped much of Reading with some friends from a completely blank slate. Many mappers have traced basic road layouts and buildings from aerial imagery donated from Yahoo! so that others can go in and identify street names and points of interest. The main editing tools are Potlatch, a flash interface on the main web site (just click on the 'Edit' tab once you're zoomed into your local area), and the previously-mentioned JOSM. The wiki has plenty of guidance.

When importing large sets of existing data, things get a little more complicated. The first step is to step back and have a good think. Imports can cause two kinds of headaches for other contributors if done wrong: you might put a load of new data over the top of somebody else's efforts and make a complete mess in the process; or worse, you might import data without proper permission, causing legal difficulties for the project and technical difficulties in taking the data back out again. It's always best to begin by asking a few questions on the relevant mailing list; there are localized lists for many areas, a general (high traffic) "talk" list, and a "legal-talk" list for legal issues such as licensing for imports. It's especially important to avoid convenient interpretations of web site notices regarding copyright and database rights when deciding if you can import the data.
You need to get written confirmation so that the OpenStreetMap project is immune from legal attacks. There are some nice general guidelines on the wiki, which are worth a read.

If you have data with written permission to use it, you can begin the import process. The first, and most laborious, step is to map out the data against standard OSM tags, as in this UK public transport example or this really comprehensive exercise for CanVec data. You'll notice that oftentimes source-specific data (like unique IDs for features and really niche data) is retained in a namespace like "CanVec:FID" and "naptan:StopAreaCode". This can also be useful where you don't want the data to appear until volunteers have gone through checking it against existing data in the database, for example to merge two bus stops (one crowdsourced, the other from the import).

For large chunks of data, importers have tended to write custom scripts to then bring the data in. If the data is in the OpenStreetMap format, and it is in a state suitable to go straight into the database, this bulk import script makes the process quick and painless. The Canvec2osm code shows how to pull in more complicated data; this converts 11 different shape files into themed osm files with correct tagging, which can then be worked into a suitable state for importing.

A more cautious approach can be appropriate in areas with a lot of existing data. One quite technically challenging route is to set up your own Web Map Service (WMS) using a tool like mapserver, and then set up the JOSM WMS plugin to pull those maps in as a layer underneath your map data so it can be traced. This Map Warper tool is in beta and tries to make this process easier. If the data is quite simple you could just put the source and editor side-by-side on your screen and use your judgement to copy over points of interest. However you want to proceed, you're probably best off getting in touch with some local or more experienced community members.
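The namespacing convention described above (keep standard fields as OSM tags, stash everything source-specific under a prefix like "naptan:") is easy to sketch. The field names and mapping here are illustrative, not the real NaPTAN schema.

```python
def namespace_tags(record, mapping, source_prefix):
    """Map a source record's fields to OSM tags.

    Fields listed in `mapping` become standard OSM tags; everything
    else is kept under a source namespace (e.g. "naptan:...") so
    volunteers can later review and merge it against existing data.
    """
    tags = {}
    for field, value in record.items():
        if field in mapping:
            tags[mapping[field]] = value              # standard OSM tag
        else:
            tags[f"{source_prefix}:{field}"] = value  # namespaced field
    return tags

# Hypothetical bus-stop record from an import source:
record = {"CommonName": "High St Stop", "StopAreaCode": "490G000123"}
tags = namespace_tags(record, {"CommonName": "name"}, "naptan")
# tags == {"name": "High St Stop", "naptan:StopAreaCode": "490G000123"}
```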
Interested people could even just lobby local government officers and public institutions to get the data, then pass it along to somebody with more of an appetite for the technical stage. Given 6 months to study, process, and import the data, you should find richly detailed maps and underlying data available under a Creative Commons BY-SA license; the license, incidentally, may soon change to one more suitable for databases. Whatever you do, just remember to have fun.

Comments

"Printing a good-looking map" (epa, Mar 6, 2009): In theory you should be able to export a PDF and print that... but I have found it difficult to get one of the right size, or send it to the printer without odd errors. Is there a foolproof way to 'download PDF of the current view' but using your normal paper size (A4 or US-Letter)?

"altitude data?" (roskegg, Mar 7, 2009). Reply from flewellyn: I don't know if OpenStreetMap has elevation data, but I would imagine not; it is, after all, OpenSTREETMap. Reply from flewellyn: Especially since the Earth's core is, y'know, liquid. It moves around. Reply from peterh: Anyone want to start OpenTopographicMaps? :-) Reply from pabs: The site seems a bit broken though.
"you left out OpenStreetBugs" (vblum, Mar 7, 2009): should definitely be added to the article.

"SVG Output?" (kfiles, Mar 9, 2009): Thanks, --kirby. Reply from michel: No clue if that's any help...

"How to download the data" (deleteme, Mar 19, 2009):
Live map data, always current:
    wget
    wget
Filtering server that lets you choose what to download (10 min old) (read more about XAPI):
    curl -g -o data.osm
    curl -g -o data.osm
Delivering bulk data (10 min old) (read more about ROMA):
    wget
    wget

Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/322163/
A. A JUDF is referred to as a function in the SQL-99 standard. It provides a rich object-oriented feature set including networking. It is also platform independent, which allows code developed on one platform to be easily moved to another. Java objects lend themselves to be developed as components for business logic. Applications written in Java and User Defined Functions can be written in the same style and language. This article will show how the Teradata Plug-in for Eclipse facilitates the process of creating and installing the Java parts and DDL for a scalar JUDF using the JUDF Wizard and Multi-page Editor.

Prerequisite for this Article

If you have not worked through the guide Getting Started with Teradata Plug-in for Eclipse, do so now before you continue. Also, the 13.0 version of the Teradata database is required for this feature.

Using the JUDF Wizard

In this example, you will be creating a JUDF function which takes a full name as a parameter and returns the last name from the name passed into the function. You will use the JUDF Wizard to implement this JUDF.

Create a Java Project

You will need to create a Java project to store the Java source of the JUDF. Go to the top of the Eclipse IDE and select the pull-down menu Window -> Perspective -> Other... -> Java (Default). Then go to the package explorer in Eclipse and right click New -> Java Project. Enter "teradata_judf" for the Java project name. Once this is done, select the "Next" button.

Remove "src" Folder

Remove your default source folder by selecting "src" and then picking the "Remove source folder" option.

Add "src/java" Folder

In this example, the "src" folder is going to be deleted and the "src/java" folder will be added. This is done to give the Java project structure. Later, in following articles, different source folders will be added such as "src/config" and "src/test". Now select the "Create a new source folder" option, enter "src/java" for the folder name and select "Finish".
Now select "Finish" again in the New Project Wizard.

Launch the JUDF Wizard

Expand the "User Defined Function" tree node for the schema in which you wish to create your JUDF. Right click on the "User Defined Function" tree node and select the menu item Teradata -> Create Java User Defined Function.

JUDF Wizard

At this point, the JUDF Wizard will come up. You will need to enter the container name "/teradata_judf" and the name of the JUDF properties file "ReturnLastName". Once you have done this, select the "Next" button.

JUDF Class Definition

The next page is where the JUDF class is defined. Enter the source folder "teradata_judf/src/java", the package name "judf" and the class name "ReturnLastName". It is best practice not to use the default package name. Once this is done hit the "Next" button.

Define JUDF Method

This page allows the user to define the function method for the JUDF. SQL data types are used to define the parameter types in this panel. The SQL data types are mapped to specific Java types as shown in Appendix A. Now, enter the method name "returnLastName". Next go to the "New" button for the parameter list and then add the parameter name "name". Pick the data type VARCHAR. Enter in the size of 256 and hit the "Apply" button. Now select the "Next" button.

Define Return Type

The next page of the Wizard defines the return type for the JUDF. The return types use SQL data types. The SQL data types are mapped the same as the JUDF parameters to specific Java types as shown in Appendix A. Now select VARCHAR from the "Type" combo box. Then enter the size of 256 for the type option. Once this is done, select the "Finish" button.

Using the JUDF Multi-Page Editor

The JUDF Multi-Page Editor brings up the contents of the JUDF properties defined in the JUDF Wizard. The Multi-Page Editor allows the user to edit and deploy a JUDF.

JUDF Function Source

Go to the "Source" page of the editor. You will see the generated Java code for the JUDF.
In this case, it is up to you to enter the content of the JUDF.

Change Source

Edit the Java source on this page, entering the code shown below (see Appendix B) to get the last name from a full name. Once this is done, go to the "JAR File" page in the editor. A popup dialog will come up asking you if you want to save the source you just changed. Now select the "Yes" button.

Deploy JAR

On the JAR Files page, you will see the JAR file and JAR ID for the JUDF. Select the "Deploy" button. This will deploy the JAR for the JUDF on the database server.

Install DDL

Go to the "SQL" page in the editor. Select the "Run SQL" button. This will install your JUDF on the database server.

Run JUDF

Once your JUDF is installed you can run it. Go to the DTP tree in the Data Source Explorer. Select the User Defined Function tree node in which you launched the Wizard and select refresh. You will now see the "returnLastName" function. Select the function and right click. Now select the "Run" menu item. A popup dialog will come up called "Configure Parameters". Enter "John Smith" in the value column of the dialog and hit the "OK" button.

Results

The results of running your JUDF will end up at the bottom of the Eclipse IDE. Select the "Results1" tab and see the results of the execution of your JUDF.

Conclusion

This article has shown how the Teradata Plug-in for Eclipse facilitates the process of creating and installing the Java parts and DDL for a JUDF using the JUDF Wizard and Multi-page Editor. Using these tools will help the user create, edit and run JUDFs easily and efficiently.

Appendix A: SQL Data Types Mapping

The following table defines how data is mapped between SQL and Java. The SQL data type is converted to/from the corresponding Java data type based on the type of parameter mapping. The JUDF defaults to simple mapping. The user may specify object mapping via the External Name clause.
Appendix B: JUDF Example Source Code

    /**
     *
     */
    package judf;

    /**
     * @author js185064
     */
    public class ReturnLastName {

        /**
         * @param name the full name
         * @return the last name portion of the given full name
         */
        public static String returnLastName(String name) {
            String returnValue = null;
            if (name != null) {
                int index = name.indexOf(" ");
                if (index == -1) {
                    returnValue = name;
                } else {
                    // Note: substring(index) starts at the space itself,
                    // so the result keeps a leading space.
                    returnValue = name.substring(index, name.length());
                }
            }
            return returnValue;
        }
    }
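Before deploying, it can help to check the expected output of the function logic outside the database. This is a direct Python transcription of the Java in Appendix B, useful only as a desk-check; note that, like the Java original, it returns the substring starting at the space itself, so the result keeps a leading space.

```python
def return_last_name(name):
    """Python transcription of the returnLastName JUDF logic.

    Mirrors the Java exactly: if no space is found the whole name is
    returned; otherwise the slice starts AT the first space, so the
    returned last name carries a leading space.
    """
    if name is None:
        return None
    index = name.find(" ")
    if index == -1:
        return name
    return name[index:]

# return_last_name("John Smith") -> " Smith"   (leading space kept)
# return_last_name("Cher")       -> "Cher"     (no space found)
```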
http://community.teradata.com/t5/Tools/Creating-a-Simple-Java-User-Defined-Function-using-the-Teradata/m-p/47605/highlight/true
Creating a Gatsby Theme with John Otander

We take the gatsby-starter-egghead-blog and convert it to a Gatsby theme. The process uses Yarn workspaces and is a great demonstration of the flexibility Gatsby gives us for creating and sharing our work.

Joel Hooks: ...it's talent. This is John Otander. He's a developer extraordinaire and a friend of our very own, Taylor. That's where I met John at, anyway. As it happens, we have a lot of crossover interests and associations. Now, he's on the Gatsby core team, fixing it, to make the content web wicked.

John Otander: Finally. At least we'll see.

Joel: Yeah, man. We'll see how it goes. Will WordPress reign supreme?

John: Not if I have anything to say about it.

Joel: I just want to get into it, honestly. We don't really have to do a bunch of formal stuff. This is just a casual get together to look at Gatsby theming. I've read a little bit. I've talked to Chris Biscardi about it briefly. I've followed along with what Gatsby's doing. My general take...and I wanted to tell you how I look at Gatsby themes, and then you can tell me how off the mark I might be.

I look at Gatsby themes as a way to separate configuration, to where we have repeatable configuration, and maybe design and other stuff, but mostly the configuration. I want to use the same setup across multiple sites and share it, and let other people use it, but then also allow them to customize and do their own design work, and build their Gatsby site to their needs, but saving them the hassle, the groundwork of the core configuration. Is that the...?

John: Yeah. Pretty much.
You have it right about perfect. I think the one thing I would also add is that with theming, themes can be like an app unit. You can have particular collections of concerns. You can have a theme for Gatsby theme blog, and then also Gatsby theme e-commerce. As an end user, if you're starting out with just a blog, you only need to install that one theme to get running, writing some markdown. As soon as you decide that, "Oh, yeah. I'm going to add a store," you just add in that next theme and you just compose them together. You can make themes as modular or monolithic as you'd like.

I would be really interested to see how the community runs with it, because you can decide how you want to architect themes, and share them with the community, or internally as a company. I think that will be one of the very powerful use cases: I'm an open source company, and we have a bunch of different projects that we want documentation for. You can create one theme and install it in each of these different repos. You just point it at a docs directory or something, and it will construct the whole doc site for you.

Joel: It's like meta plugins, almost?

John: Pretty much. A lot of times people think it's only for skinning and aesthetic concerns, but really, a theme can be anything that a Gatsby site can do, which is pretty cool.

Joel: It's funny because, in my head, I actually had it as the opposite. I was thinking more the data and the underlying functionality of the site, being able to package it in a theme. Visually, it makes a lot of sense, too. You could do that, right? I'm stuck in functionality, but then I can also put a visual theme on top of that as well, and get a combination of these things.

John: We've just recently introduced...I guess it's called child theming. That's now actually supported. It's still, of course, experimental.
It allows you as a theme author to install other themes, compose them together, customize them, and then publish that as your own theme. It's a way where you can really powerfully remix things. If I'm a theme author, and I want to create six different blog themes, you can create your blog base that sets up all the data structures and creates pages, and then create 10 more themes that just inherit from that initial base. You can use it as building blocks to start with one layer of abstraction, stack another thing on top, and go with it from there. Joel: That's cool. Where are the themes at, in the release cycle? Maybe Jason can answer, since he's joined us. Are they ready to go? Is this something that's good to go in Gatsby as it sits today, or are we still in the development of themes, in the thick of it? Jason: Where themes are right now is the code is good, but there aren't a lot of docs. We're working on that now. There aren't a lot of examples. You'd be adventuring. Joel: We're adventuring today, but it's a guided adventure. That's the best kind. Jason: Yes. Joel: If we want to just jump into it, what we're going to do is we have a naked blog starter which I linked to in the chat. It's a good candidate to...honestly, it probably has multiple themes that we could pull out of it. Just in general, it's something that we're going to reuse and stamp multiple sites with this basic functionality. For us, that's newsletter sign-ups. It's a blog theme, so we have blog posts, we're using MDX, and all of this other core functionality. So far, we've used it to build three or four sites. The starter works just fine, but it feels like something that is perfectly suited for theming, or using as a theme overall. I posted it on Twitter, and John was like, "Oh, that should be a theme." Here we are, and John's going to generally show us how we could take this and make it into a Gatsby theme. Jason and I are here for color commentary.
Then Ian's going to...if anybody has any questions in the chat, he's going to be your mouthpiece on the broadcast panel here. Does that sound good? Should we just jump into it, get to coding? John: Yeah, let's do it. Essentially, where we're starting right now is I've just cloned the starter, and we have it all right here in this directory. I guess the first step that we'll do here is to convert it so we can use it as a Yarn workspace. That way, we'll be able to build our starter and also install the theme into the starter and use that as the development workflow. That way, we don't have to do things like Yarn linking or having to publish it to NPM to then install. It gives us this seamless workflow while we're just trying to hack this out really quickly. Joel: Does NPM have anything analogous to that, or is that the killer feature of Yarn at this point? John: For me, it's the killer feature of Yarn. It's basically the only reason why I use it, is for the workspaces. It's really cool, especially if you're developing a handful of packages in tandem. You can link everything together and have your examples that pull them all in. Joel: For plugins or themes and this kind of thing, it works out really good. John: Yeah. Joel: I've never used those, either, so I get to learn that as well. John: Essentially, what I'm just going to do here is create this packages directory. We'll call this gatsby-theme-egghead-blog. Of course, you've got to have the dashes too. Then we'll just move all of this actual code straight into that directory. Then from here, we can pretend that this is our package. Then to actually start the workspace, we can just quickly init a new package and go ahead and modify that. I imagine I'll probably need to make this bigger, too. Here, essentially, in order to create the workspaces, we just have to put in a workspaces key and tell Yarn where to look. Firstly, we'll tell it that every directory in packages is a standalone package.
We'll also create a starter directory and use that as...that's our example. Every time we specify gatsby-theme-egghead-blog as a dependency in starter, Yarn'll just link them all together. We can just make changes in the Gatsby theme, and it will just do that hot reloading for us. We'll also tell this that it's private, since we're not actually going to be trying to publish this. Then from here we'll also create our starter directory. This will be our example, where we install the theme, and eventually, all the content that we scaffold out will also live there. That's the idea: starters are never going to go away in Gatsby with themes. The starter can actually just install its app code as a theme itself. We can update it when needed. Here, we'll just scaffold out another package.json for our starter. Joel: The way workspaces work, each of these is just a separate project. Each one has its own package. It's an independent, multi-package repository at that point. John: Exactly. Here, we'll just add our start script, of course. That's gatsby develop. We don't need this main, description, or anything like that. We'll also specify this as private as well. We don't necessarily need it today, but it's just so you don't accidentally publish things. Then from here, we just need to specify our dependencies. One of those, of course, is Gatsby, and then the other one will be the theme that we're going to build out. Here, we'll just say gatsby latest, because we just want to make sure that we pull in the most recent releases. They're adding to themes every day and making them better. Then gatsby-theme-egghead-blog. This is what we're going to be creating. John: Go ahead. Joel: When Yarn looks at that, just in terms of the workspace, it knows that and relates the two together? John: Yep. Yeah, it'll see, "Oh, I've specified this dependency," but it knows it's another package. It just does all the linking.
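A sketch of the two package.json files John is setting up here. The workspace globs and the version specifiers are assumptions based on the conversation, not the exact files from the stream. The root package.json declares the workspaces:

```json
{
  "private": true,
  "workspaces": [
    "packages/*",
    "starter"
  ]
}
```

And the starter's package.json declares the start script and depends on the theme, which Yarn then links from the packages directory instead of fetching from NPM:

```json
{
  "name": "starter",
  "private": true,
  "scripts": {
    "start": "gatsby develop"
  },
  "dependencies": {
    "gatsby": "latest",
    "gatsby-theme-egghead-blog": "*"
  }
}
```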
Then, I guess, to finish tying this all together, we just change the name to use the theme convention. Then also, specify it as the theme. Then the one last little bit we have to add is actually to specify main, like a package. This will allow Gatsby to be able to resolve the package and know that it's good to go. We add all that in, and then we get to do the Yarn. Joel: The main is the config, because as I understand it, that's effectively what a theme is, a layering of configs? John: Yeah, but this is really just...it's almost there for hand-waving. When Gatsby builds themes, it just merges them all together. It's just so that, when Gatsby first tries to build the site, it makes sure that it can resolve all the packages. If you specify main to an existing file, node won't get mad, saying, "Oh, there's not a package there." Another convention that you often see, especially in plugins, is you can also just add an index.js file that's essentially empty, as a no-op. Either of those approaches works. Joel: You just need to point it at something? John: Yeah. Joel: OK. John: Now, we're doing the Yarn install. Joel: This is a random question. Do Yarn and NPM work together? Like, if you have a package lock and a Yarn lock, do they care? Have you run into any problems with that? John: I haven't run into problems, but they just ignore each other. They both pretend, at least as far as I can tell, "I don't care about the package lock." Jason: The problem that you run into with it is that if you've got a package lock, Yarn will ignore it. NPM ignores the Yarn lock. Depending on which build system your team is using, they'll have a different set of dependencies, because they'll be using different lock files. Joel: Probably stick to one as the...? Jason: If you're working on a team, you'd want everybody to use the same one. Joel: Since we have a killer feature in Yarn, it's probably Yarn.
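The theme's package.json John describes might look like the sketch below. The version number is an assumption; main is pointed at gatsby-config.js here, though as John says an empty index.js works just as well, since it only exists so node can resolve the package:

```json
{
  "name": "gatsby-theme-egghead-blog",
  "version": "1.0.0",
  "private": true,
  "main": "gatsby-config.js"
}
```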
Jason: If you're working in a monorepo, where you would have the workspaces, then yeah, definitely, you would want to use Yarn, because it makes it much easier. Joel: When you run Yarn in the root, does it go and do it for everybody? It's like a subproject in a workspace? John: It'll walk every package, every workspace you specify. It'll go ahead and facilitate the Yarn install. Then once this happens and we're done, we can basically run scripts from the root by just specifying which workspace we want to run from. We can say something like yarn workspace starter start, and that will run the start script in the package.json of the starter workspace. Joel: I can't get the gigabit here. It still makes me sad. Jason: Like I said, the best thing that ever happened was when I moved into the place I'm in now, and I asked if I could get fiber Internet. They said yes, and all I had to do was drill a hole in the wall. I was like, "Well, hold on, wait. I can just drill a hole in my wall?" It's the first time I've ever bought anything instead of renting. It was like, "Wait, I can just do that?" Joel: What, you go buy a drill? Ready. [drill sounds] Done. I've asked them never to call me unless they're offering me faster Internet. They'll call me, because, "Do you want the phone, do you want the TV?" I want none of those things. Is this call about faster Internet? My account should be flagged to never call me unless you're offering me faster Internet. To which the answer will always be yes. I was asking about the fiber, and they were like, "What do you want this for?" They're like, "It's like $1,000 to set up," and this, that, and the other, to my house. I was like, "I mean, I want it for Netflix and chill. Does it matter? I want the fastest Internet." I went through 20 minutes, and the end result was, "No, you can't actually have that at your house." I was like, "We could have started there, without the quiz about my Internet usage.
It's none of your business why I want fast Internet." I assume that's so they don't leave money on the table by having some lucrative business opportunity slip by them. John: I guess we can get started with some of the actual Gatsby config changes, since it's taking its time. One of the big distinctions that's a little bit different when it comes to developing a starter versus a theme is that in themes, you have to use a require.resolve call instead of a plain path. Since the theme will be installed as a dependency in node_modules, it will otherwise attempt to look for files in the relative directory. We have to make sure we specifically do that. Otherwise, once this Yarn install happens -- which it's not done yet -- if we were to attempt to run it right now without these changes, when it tries to programmatically create the pages from the blog posts, it will actually blow up. Right now, it'll search for the templates in the starter's directory. Right now, we have these path.resolves. The way we can actually change that from the theme author's perspective, to ensure that it attempts to resolve in the correct way, is to change the path.resolve to require.resolve. That ensures that it looks relative to the file it's in. For both these templates, we can just go ahead and change these to require.resolve. Now, we'll actually add the relative file path there. When it actually goes to create the pages with the template components, it will, I guess, connect everything correctly. Joel: Is that something you'd do just normally? Could you have just used require.resolve in lieu of that and avoided this entirely from the beginning? John: Yeah. I think one of the things we'll probably work on with the docs is to update them with slight tweaks, so it'll work in a theme context or a local project context. That way, you don't have to do this dance. Since this was built as a starter, following the existing docs, we have to make these quick little changes.
Joel: It makes sense. John: This is literally the longest Yarn install I've ever had in my entire life. Jason: You sounded insulted. Joel: 397 seconds. John: That's crazy. My computer is quite slow right now. The change we've essentially made now is we've just changed how we're looking for the templates. We can actually now yarn workspace starter start. When we do this, it's going to attempt to build the Gatsby theme. Actually, in this starter, I think I didn't add the Gatsby config yet. We can do that. Right now, this will actually blow up on us, because we haven't given Gatsby any configuration. We'll add our gatsby-config. This is probably one of the parts I'm most excited about with themes: how nice and simple the actual gatsby-config is now. Since we're using a theme, all we have to do is just say, "Use this theme." Right now, it's namespaced under __experimentalThemes, because we haven't gone totally stable yet. We don't really anticipate any breaking changes now. It's more about improving documentation and that kind of stuff. The way to configure it to pull in a Gatsby theme is to tell it just the name of the package. This would be gatsby-theme-egghead-blog. Now, this is really all the configuration that you need to do. Then later, if you decide, "Oh, I want to add a couple of plugins," you just add those keys in there, and it'll all just work together. It's a pretty nice way to get up and running. If the theme gets updated, you publish a new version of that to NPM. All you have to do in the starter is just tell it the new version. Anyone that started from that starter, they just have to update their theme package. They can pull in all the new changes, whether it's bug fixes or added features, anything like that. Joel: They get versioned through NPM, so you can lock it down, or you can track with the newest version, that sort of thing. John: Exactly. As we see, Gatsby couldn't find the config, so it's like, "There's nothing to do here."
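The starter's entire gatsby-config.js at this point would be a few lines, using the experimental key John mentions (this key was the convention while themes were experimental; it was retired once themes stabilized into the regular plugins array):

```javascript
// starter/gatsby-config.js
module.exports = {
  // pull in the theme by package name; additional plugin keys
  // can be added here later and will be merged with the theme's
  __experimentalThemes: ['gatsby-theme-egghead-blog'],
};
```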
Now, we've just added it. What we'll expect now, since I've done this once or twice before, is that it'll blow up on the gatsby-config that's in the theme itself, because we haven't also fixed the path resolution for typography.js. What Typography.js does behind the scenes is it caches a bunch of the stuff that it's doing internally. You basically just have to slightly change the way it requires those files from the theme. Joel: Is that specific to typography.js? John: Yeah, just based off the way it does caching right now. It's something we'll hopefully be able to update to make it a more seamless theme experience. Right now, it's kind of a missing piece. Here, we'll just do the same thing, where we do require.resolve. This just ensures that it looks for the cache in the right spot. After all that ceremony, here's the typography.js explosion. What was happening there is it was trying to look for the cache in the wrong spot when it was initially built. We changed the require.resolve there. Now, when we do the Yarn workspace, we basically rebuild everything. It will, I guess, be able to resolve the things. Joel: Is this actually starting the original starter project? Is that what's getting built here? John: Yeah, we're building the starter project, which is then linked to the theme project code. Joel: Do we name it? It's in packages, right? You just migrated the entirety of that over to the packages Gatsby theme blog. Then the starter folder is empty, but it's all building into the starter folder. How is that functioning? John: Since we have the name starter, that's the name that Yarn then decides is the Yarn workspace. Any script you want to run here, like if you added a test script, it's basically just yarn workspace starter test, and it will run that script. Now, it's building the images. I guess we can get started with the content now, since obviously, the content lives in the theme directory itself.
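The typography fix John makes is the same require.resolve trick applied to the plugin's config path. A sketch, assuming the theme keeps its typography config in the conventional src/utils/typography location:

```javascript
// packages/gatsby-theme-egghead-blog/gatsby-config.js (excerpt)
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-plugin-typography',
      options: {
        // require.resolve so the config module is found inside the
        // theme package, not relative to the end user's site
        pathToConfigModule: require.resolve('./src/utils/typography'),
      },
    },
  ],
};
```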
The way we'll want to change that is to move it into the starter. That way, when you scaffold out your app, you see the content, and you can just start writing from there. That's the cool part, too. The stuff that you want to live in the starter, like the content, can. Then all your application code is living in node_modules, in your Gatsby theme. Joel: Really nice separation of concerns at the end of the day. John: Yeah, because a lot of times, with starters, the idea is, here's the default content of your project. But right now, we bring in all of the application code, too, which is something you typically want to hide underneath the covers. Then down the road, we can add in developer tooling, so you can actually decide, "Oh, I want to eject from a theme," and then you can opt into pulling in that code, if you don't want an upgrade path, and that good stuff. That's the thought. Joel: Kind of like create-react-app -- not totally, but in a way -- it's similar, it feels like. John: Now, we actually have the page built, which is cool. As we'll see, all of that content is currently being sourced directly from the theme itself. We just want to move that over. I think the first thing we'll go ahead and start with is to actually move, or properly source, the pages. Right now, Gatsby will only look for pages in its relative src/pages. Right now, it's trying to build pages from the starter. If we create an index.js in the source pages here, then it would find it, and there wouldn't be a 404. Since by default, it would probably make sense for this theme to actually build out the index page and everything, we can use Gatsby's page creator as a plugin. We can point it to the theme's source pages as well. Until an end user creates their own index page, we'll actually render a page for them instead. That will be the theme page that we see, at least when you initially run the starter.
We just have to add this gatsby-plugin-page-creator into our project. Instead of running this in your starter workspace, we're actually going to make sure this is installed in the theme. We're going to yarn workspace gatsby-theme-egghead-blog add our page creator. As this installs, we can actually go into the gatsby-config and tell it to source pages from that page directory that we have. We can actually just cut and paste some of this configuration we have here. Right now, what the page creator does is, you can tell it a directory to look into, and it knows to create pages and infer the URLs based off of whatever's in that directory. We just tell it to look in the theme's source pages. We just add a new path to it. Joel: You can effectively make as many page directories as you want or feel like having? John: Yeah, and that's a lot of times what large, big, complex apps will do. They can group their pages based off of different concerns. This will allow you to do the same thing. We'll make sure that we tell it to look relative to the theme itself, and just go to content blog. Also, of course, we have to actually require path, so that... Joel: Is this replacing the source file system? John: It's replacing it. Well, it's not replacing it. It's additive to it. What it's doing is creating the index and subscribe pages that are there until...Of course, if you add them to the starter, it will override them. Gatsby will see the user, or the child theme, has that page defined. That's the one that it'll choose. Joel: You pointed it at content blog, though, versus...I thought we would point it at source pages. Is that not the...? John: Oh, yes. You are right. Joel: Oh, OK. John: Source pages. Good catch. That might have been a moment of, "I'm so confused. Why isn't this working?" I told it to look in the wrong place. Now, we're actually sourcing those pages. We can restart the workspace.
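The page-creator configuration being described might look like the sketch below; the exact plugin options follow gatsby-plugin-page-creator's documented `path` option, while the file layout is assumed from the conversation:

```javascript
// packages/gatsby-theme-egghead-blog/gatsby-config.js (excerpt)
const path = require('path');

module.exports = {
  plugins: [
    {
      resolve: 'gatsby-plugin-page-creator',
      options: {
        // resolve relative to the theme itself, pointing at the
        // theme's src/pages (not content/blog, the mix-up Joel catches)
        path: path.join(__dirname, 'src/pages'),
      },
    },
  ],
};
```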
Normally, once you have everything set up in your config, it's a lot more seamless when you're developing your theme. It's just when you have to resolve the paths, like the initial setup, like a couple... Joel: It's Gatsby in general. When you're building your config, it's a lot of stopping and starting. Then once you have that all built, and you're working on content, then it's nice and hot reloading. John: Yep. It's much faster and super quick. Once this builds, then we'll actually see the nice little splash page and everything that the starter creates. I guess while we're building this, we can start moving the content files over, too. We can just grab all these. That's the config. We'll want to bring over the content directory, so it lives in the starter itself. The one little change that we'll have to make is just to tell gatsby-source-filesystem to look...We can get rid of this directory name that's here and just say content/blog. By default, it will just search where Gatsby's running. We can actually clean that up and make it simple, because...What's going on here? Oh, I moved it over while it was still being installed. That's a bad idea. Joel: Oh, yeah. It hated you. John: Here now, we'll have the content that will be sourced from the starter directory, where our concerns are now separated in the way that we would like them to be. Joel: Your starter ends up being super light. It could be a place where somebody that is just concerned about content entirely would go and could work in that. I guess it builds up, though, right? We'd add to that over time. A lot of the time, things in the theme migrate over there, just depending on where they would best fit. Jason: You don't have to. The thing that's really cool about themes is that this gives you the opportunity that you could create a Gatsby site that was a gatsby-config and a folder called docs that's just markdown files. It's a really, really powerful model in that way.
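The gatsby-source-filesystem change John makes is a one-liner in the theme's config. A sketch (the `name` value is an assumption):

```javascript
// packages/gatsby-theme-egghead-blog/gatsby-config.js (excerpt)
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-filesystem',
      options: {
        name: 'blog',
        // a bare relative path resolves against where Gatsby is
        // running, i.e. the starter, so the content can live there
        path: 'content/blog',
      },
    },
  ],
};
```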
Joel: I love that, because it's super simple. The setup is easy. They don't have to worry about it. We could basically have sites up lickety-split, and just worry about content for them. John: It's a much gentler introduction to Gatsby for those that aren't familiar with React yet, potentially, or GraphQL. It's a nicer abstraction if you don't want to see the knobs and dials. Some people love the knobs and dials. Then underneath the theme, you can peel back that layer. Then whenever you have some bespoke thing that you want to build out, like I want to create a different tags page, then you can start writing some GraphQL, make these customizations, and add some new components to do all that stuff. Joel: It's like opting into complexity, in a way. John: Mm-hmm, yep. Actually, now, we're all built. This is essentially a working theme now. It's actually sourcing all the content from the starter, but all the configuration and stuff is still handled, like the MDX parsing and rendering, and all the themes and components are all actually in gatsby-theme-egghead-blog. It's now essentially, for all intents and purposes, a functioning Gatsby theme. Joel: Can I say that my placeholder makes me laugh pretty much every time I see it? John: Yeah. Joel: Anyway, that's amazing, though, actually. There's a lot to the setup. It's pretty complex, Yarn workspaces, all that stuff. Once that's done, now it's like a folder with our content in it, a super simple config, and we're off to the races. We could just start adding content to this or dig into it. The next step, I guess, to me would be to start breaking this down into thinking about the visual theme versus the data theme, for lack of a better word. I don't know if y'all are calling them anything in particular. Would that be the way you think about this, or how are y'all looking at how to divide this up further?
John: A lot of times, I guess, the way we've been thinking about it -- of course, this is still experimental, and we're trying to figure out what the best practices are -- ideally, you have a couple of layers, where one is setting up your data model. Basically, just saying, "This is the shape of the content, and this is where I might source it from." Then you have the blog base that creates pages and handles tagging. Then you can have another theme that sits on top that doesn't even necessarily need to care about writing GraphQL. It just has presentational components, essentially. You would be like, "I create my header, and I'm just passing props for the links, the title, and that kind of stuff." That's also something that you can define however it's best, whatever context or whatever makes sense for your use case. It could be super monolithic, or it can have one that's presentational and one that sets up the data. That's where we've been leaning the most so far: to have a theme that only cares about presentational components, just accepting props, styling, and CSS, and handling that. Whereas underneath the covers, you can have your base theme that does everything the data cares about, like creating the pages and everything like that. Joel: The plugins right now, you have a source plugin. That's a naming convention for source plugins. Would you have a source theme, then? Would you have a WordPress theme, basically, that you could put in that would be the base configuration for a standard WordPress site, Contentful, or however you might grab your data? John: Some of the next steps, and we're working on this now, is to have this way to model your data. This way, you'll be able to connect any type of data source to your same blog. That's coming. At this point in time, you have your base theme.
If you have the same conventions of, once you source your data, create it in the same way and always render the post, you can have two different themes with different data sources, with a child theme that sits in front of it that handles things like theming, the skin, the CSS, and the presentation of it. Joel: It depends on how much, because sometimes, granularity can just bite you, too. It can get too granular. You have to figure out the appropriate level for whatever your goals are at that point. John: I'm definitely a person where I like to copypasta a few times before I start to figure out what the next abstraction is, and that type of stuff. Joel: I think that's a very pragmatic approach. Somebody had a question about MDX, actually. The question is, can you use MDX with Contentful or some other CMS provider? I'm pretty sure the answer's yes, right? It's just text until it's processed. Is that a correct answer? John: Yeah, it's just text. The one part that will need to be improved is, there's no way to easily live preview MDX that is sourced from another place. Now, there's a prototype that I think Chris and one of the people from Contentful actually built as a little browser extension. That will live render, using CodeSandbox, the MDX that you write. Essentially, if you're writing your component directly into the text of your Contentful, then gatsby-mdx doesn't really care. It's just a string. You can form your queries, connect to that data source, and then just render it on the fly. All MDX really cares about is that string that you have. Joel: Once it gets into your system, and the components, if you're referencing them, are there, then it just works. Otherwise, they just appear as text, I would think. John: Yeah. We've actually used that with some prototypes with Contentful, where I just write in a video and the actual YouTube embed code.
It looks funny in Contentful, but as soon as it's rendered to the page, of course, it will render the video embed. Joel: Where are we at with this in terms of this theme? Is this basically ready to rock as far as that goes? John: Yeah, it's ready to rock, but the next step we have to get into is, I guess, a bit more complex. My machine's being slow. One of the big pieces that I've noticed in the starter right now is we need to break out a lot of the static queries; that's the way the starter currently works. The use case of a starter is a little different than a theme author's. As a theme, you want to be able to pass in your config. It can use Gatsby's static query, so you can pass in things like...we can actually look at it really quick. Here in gatsby-theme-egghead-blog, we have this theme, and we have this website. When we actually turn this into a full-fledged, functioning library, we would want to pull a bunch of this stuff out and pass it in as metadata. That way, we're reusing it. Like in the header, you can use the static query feature that Gatsby provides to structure your content. That would be one of the last steps to really make this a truly functioning theme. Then it's a matter of how you want to break it up, whether you want a blog base and then a theme that sits on top, or all of that kind of stuff. Using static files for your configuration doesn't typically quite work from a theme context. You have to pass things like options or use component shadowing. Actually, that's one of the parts we haven't gotten into yet, too, which we can show really quickly. Joel: Let's see that. John: You can use component shadowing to override a component. For example here, let's just say we want to change the header. Using themes in Gatsby, you can use this feature called component shadowing. The way you do that is you can actually create a particular file. In this case, you namespace everything under gatsby-theme-egghead-blog.
Any component that you define, when we actually build the site, will first be looked for in the user's directory before going into the theme. It's a way where you can tap into and customize rendering, which is really powerful for when you have this one-off thing, like we want the My Blog header to have more padding, and be black-on-white instead. You can just override that header to do so. Right now, we're also building some developer tooling around this, too, so that through the CLI, or in your development mode, you can explore the components that you can shadow. For now, you just have to look really quick at the theme source code. Here, we see there's a components directory with that header file. In our starter, if we want to customize that, we can basically create a new file that Gatsby will decide to search for instead. We have src/gatsby-theme-egghead-blog. This is basically the way. Everything you put in the gatsby-theme-egghead-blog directory will be looked at first, and if it's not found there, Gatsby will then go to the theme, if that makes sense. It's like [inaudible]. You can override whatever you want. Anything that's in the theme, or the source directory of the theme, can be overridden by the user, so that they can change theme tokens or design tokens. They can override the header or even override templates. Here, we'll create this header.js in gatsby-theme-egghead-blog. Joel: You can put the templates folder in there, or any of the other folders, basically. If it's a component, it's going to go in there and look there first, then drill down into the theme. John: Exactly. Here, we can just add in an h1. Then we'll have to do one more, I guess, build with my slow machine. Joel: Is that preemptive, where you know it's not going to work without killing that cache?
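A sketch of the shadowed file John creates here. The file path follows the shadowing convention he describes; the component body is an assumption based on the h1 he adds and the "Hello Egghead" heading that shows up a bit later:

```jsx
// starter/src/gatsby-theme-egghead-blog/components/header.js
// This file shadows the theme's src/components/header.js: Gatsby
// resolves it from the user's site first, then falls back to the
// copy shipped inside the theme package.
import React from 'react'

export default () => <h1>Hello Egghead</h1>
```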
John: There's a little bug in the component shadowing, where we're over-aggressively caching right now; when you add in a new component, you'll have to basically remove the cache before it knows to look there. That's something that, of course, before we go stable, we'll be... Joel: I heard caching's hard. John: It's weird. It's like this one problem. Joel: One day, the robots will solve it for us. That's what I'm holding out for, anyway. I just don't mess with caching. I'm waiting for the robots to catch up. John: One of the interesting things, too, to consider as you're authoring themes is how to separate your queries and the data from your components a lot of the time. We've been experimenting with a way where we use static queries that wrap the presentational components. That way, you can query the site metadata to get the site title and other data points that you might want. Then you can pass it to your actual header component, just as props. That way, you can still override things without... As a user that knows React but maybe doesn't know Gatsby yet, you can use that approach. You are separating your data concerns, even in your React components that are shadowed. That way, you can almost design your API around how you structure your components. As we see here now, the My Blog is gone, and "Hello Egghead" is instead being rendered. Joel: That's really cool. John: It's just based off file system conventions. You can go to town whenever you want to; just modify, perhaps, the footer and the header. You can do so just by writing some React. Joel: As all this is changing over time -- because this is all in-flight at the moment -- where's the best place to watch this and see the progress? John: The Gatsby blog is one of the best places right now. There's also a talk that Chris Biscardi did at Gatsby Days. We can give links for those. That's definitely one of the best places to start.
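The pattern John describes, a static query wrapping a presentational component, might look something like this sketch (the component structure and metadata fields are assumptions, not the theme's actual code):

```jsx
// packages/gatsby-theme-egghead-blog/src/components/header.js
import React from 'react'
import { useStaticQuery, graphql } from 'gatsby'

// Purely presentational: a user who knows React but not Gatsby
// can shadow this and still receive the same props, no GraphQL needed.
export const Header = ({ title }) => (
  <header>
    <h1>{title}</h1>
  </header>
)

// The data concern lives in the wrapper, not the component.
export default () => {
  const data = useStaticQuery(graphql`
    query {
      site {
        siteMetadata {
          title
        }
      }
    }
  `)
  return <Header title={data.site.siteMetadata.title} />
}
```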
Next week, we'll have the initial guides together, or at least the first in-flux docs of how to get started with themes. Right now, unfortunately, we're reverse engineering the way it works, but we'll have a guide to get started for everyone at different Gatsby levels. You can use it that way. One cool part about the themes, too, is that they essentially act as a site that you can compose together. Aside from making your path resolution more flexible, it's just like building a site. You don't have to learn a lot of new concepts, aside from thinking about it as a theme author. You have to expose an API for your end user. Aside from that, it's just building the site. Joel: All the stuff that you did looked really straightforward to me. Once I learned Yarn workspaces, all the structure of this, it's just logical. It just makes logical sense. I think if you want to, you can feel free to push this. If you want to push this to a branch, or did you use a starter? I don't know. It would be nice to have this, and then the folks watching could reference it, too. John: We can push it to a branch. It's all in Git. One of the things, too, that's important to note is Yarn workspaces aren't required. It's just one type... Joel: It just makes it easier, right? John: ...to use this workflow. You can just Yarn link a thing. If you want, you can publish it to NPM and install it in your starter. The one part of this is where you want your starter to live, because they're still Gatsby starters. They're just Git repos. A lot of times, we'll potentially want the starter itself to live in a separate repo. That way, when it just clones everything down, until at least we support sub-pathing for... Joel: This is like a development mode, basically, but it's not necessarily how you publish it for general audience consumption. John: Right, yeah. This is what you develop against, but you'll then still have your starter repo. Joel: It's like a plugin, though.
With a Gatsby plugin, you make a plugins folder, and then you develop it separately, too. John: Yeah, exactly. Joel: That's awesome, John. I appreciate the demo. I'm going to probably just expand on your demo to make our thing into a full-blown theme next week. We have a few minutes. Your video froze in a very fun way. We have audio, and I was going to say, if anybody had any questions about the themes, now's a good time to ask. John Otander is the inventor, creator of MDX. Is that how you would describe it? That's probably how I'd describe it. John: Co-creator, I guess. Joel: Co-creator. Also, did quite a bit in terms of Tachyons. That's one of the things I know. You're like an unsung hero of mine. I don't think you get enough credit for all this awesome work you do. If anybody has any questions about themes, MDX, or programming computers in general, I think it's a pretty good opportunity. Feel free to type them in the chat, or whatever, if anybody has anything. Otherwise, how's working at Gatsby so far? How's your boss? You can tell us. We won't tell him. John: It's a blast, man. I get to work on what I'm passionate about for my day job. I have to pinch myself sometimes. It's pretty sweet. Joel: As a company, I've really just enjoyed what they've been doing. Then seeing you join, one, it really aligns with my own personal selfish interests, because I'm all about MDX and Gatsby right now. The way that we publish on the Internet is, I think, in a nice state of flux. I love this stuff. It's really cool to see you in the mix. We don't have any more questions, so I'll let you all get back to it. Thanks a lot. Appreciate it. Thanks, Ian, for your presence and your dutiful monitoring of the chat, and thanks, Jason. Jason: No problem. Joel: Talk to you all soon. John: Thanks for having me. I'll have a faster computer next time. Joel: All right, sounds good. See you.
https://egghead.io/lessons/gatsby-creating-a-gatsby-theme-with-john-otander
I've been working through the e-book fine, until this program. I can make the menu work fine, as long as you choose an item. Anything off the list won't re-print the menu. It must be a syntax error (yes, I'm old school: syntax error) which I can't see. Any help would be appreciated, Dave H.

Code:
# include<iostream>
using namespace std;
int main()
{
    int a = 0 ;
    {
        cout<<"\n\n\n";
        cout<<"\t Please choose 1 item.\n\n";
        cout<<"\t 1 : one\n\n";
        cout<<"\t 2 : two\n\n";
        cout<<"\t 3 : three\n";
        cout<<"\t -------------\n";
        cout<<"\t Choice : ";
        cin>>a;
    }
    while (a ==1 && a == 2 && a ==3) ;
    {
        if (a == 1)
        {
            cout<<"\a\n";
            return 0;
        }
        else if (a == 2)
        {
            cout<<"\a\a\n";
            return 0;
        }
        else if (a == 3)
        {
            cout<<"\a\a\a\n";
            return 0;
        }
    }
}
http://cboard.cprogramming.com/cplusplus-programming/155956-chapter-5-practice-program-2-difficulties.html
Visit the Ultimate TCP-IP main page for an overview and configuration guide to the Ultimate Toolbox library. We've placed some answers here to a few of the most common questions folks have asked when using various aspects of Ultimate TCP/IP. If you've come across an issue with Ultimate TCP/IP that needed a work-around, or a bit of documentation that didn't quite explain all the ins and outs of a particular function call, please feel free to submit it for inclusion; you can be pretty sure it will be of help to someone else out there!

Q: When I upload or download a file using the Ultimate TCP/IP CUT_FTPClient or CUT_HTTPClient classes, I would like to be able to show the user the file download/upload progress. How do I do that using the Ultimate TCP/IP classes?

A: CUT_FTPClient and CUT_HTTPClient are derived from CUT_WSClient. When these classes are calling receive functions, they call the base class's function. When you are receiving the file to disk, the class will call the virtual ReceiveFileStatus to inform you of how much data has been downloaded so far; override this call in a derived class to trap file progress notifications. ReceiveFileStatus will also allow you to decide whether to continue downloading the file or not: if you return TRUE from the overridden function, the class will continue downloading more data. If you return FALSE, the process will be aborted and ReceiveToFile (for example) will return the "Aborted by User" error. The same process is used when you are sending a file, through the SendFileStatus function.

Q: I am using the toolkit for sending SMTP mail. The toolkit is very easy to use, but I cannot get it to work properly with some recipient mail servers. For example, if I use the toolkit to send mail to an account in a FirstClass server, the message body is presented as an attachment with no name.
It is pretty obvious that the error is related to the encoding of the message. I am not sure if your toolkit is the problem or if the mail server is the problem. I have, however, never seen this behavior in FirstClass, no matter the origin of the mail, except when the mail is sent using your toolkit, i.e. the mail body appearing as an attachment with no name. However, if I send a mail to, for example, Hotmail with your toolkit, everything looks fine! Is there a work-around for this issue?

A: When you add a MIME attachment to your message, the text body of the message may show up as an attachment when delivered by some email servers. The reason for this issue is that some email servers expect the text body to be encoded only in 7bit or Quoted-Printable, and treat other encodings as being used with attachments only. To work around this limitation of these servers, make sure that the function CUT_MimeEncode::Encode() in file utmime.cpp is setting the message body to the correct encoding type. See the following code sample. The default standard encoding type is 7bit encoding.

MIMEATTACHMENTLISTITEM *msg_body_item = new MIMEATTACHMENTLISTITEM;
msg_body_item->next = NULL;
msg_body_item->ptrDataSource = &msg_body;
msg_body_item->lpszName = new char[2];
msg_body_item->lpszName[0] = 0;
msg_body_item->lpszContentType = new char[15];
msg_body_item->nEncodeType = CUT_MIME_QUOTEDPRINTABLE; // or, if you want, CUT_MIME_7BIT
strcpy(msg_body_item->lpszContentType, "text/Plain");

And make sure that you encode the body using the correct function based on your selection of the encoding type.
So, you must call EncodeQuotedPrintable(msg_body_item, dest); or Encode7bit(msg_body_item, dest); instead of EncodeBase64() in the same function, just before you delete msg_body_item:

// Encode message body to the destination
EncodeQuotedPrintable(msg_body_item, dest);
// Clean up
delete [] msg_body_item->lpszName;
delete [] msg_body_item->lpszContentType;
delete msg_body_item;

Q: Is there a way to retrieve only new messages since the last time I connected to the POP3 server, without deleting the read messages? How?

A: Yes, you can achieve this by keeping track of the unique ID (UID) of the messages you have already received. First call POP3Connect() with valid parameters to connect to your POP3 server, then call RetrieveUID(msgNumber), where msgNumber is the number of the message to get the UID for. If msgNumber is set to -1 (the default), the class will retrieve UIDs for all messages. This call will populate an internal vector member of the class whose elements can be accessed by calling GetUID(). Having done that, you can then compare the list of returned UIDs with the list you stored the last time you read your email. If a UID is not among the list of UIDs you have stored, then retrieve that message.

Q: When I connect to a POP3 server using the CUT_POP3Client, I can retrieve my email. But when a new message arrives I don't get notified. Why?

A: This is normal behaviour for a POP3 client. If the lock is successfully acquired, the POP3 server responds with a positive status indicator. The POP3 session then enters the TRANSACTION state, with no messages marked as deleted.
If the maildrop cannot be opened for some reason (for example, a lock cannot be acquired, the client is denied access to the appropriate maildrop, or the maildrop cannot be parsed), the POP3 server responds with a negative status indicator. After the POP3 server has opened the maildrop, it assigns a message-number to each message, and notes the size of each message in octets. The first message in the maildrop is assigned a message-number of "1", the second is assigned "2", and so on, so that the nth message in a maildrop is assigned a message-number of "n". This exclusive-access lock allows new messages to be stored in the maildrop; however, these new messages will not be included in the list of messages available to the client until the client re-authenticates.

Q: I have added the history control class to a dialog-based application. I have set up the member of the class to be a CUH_Control, and I called the following function in OnInitDialog():

m_control.AttachHistoryWindow(m_hWnd, IDC_CUSTOM1);

But the dialog doesn't show up. Why?

A: When you add the CUH_Control as a custom control to your application, you need to register its window class or it may prevent the dialog from showing up. The History control is a handy class that allows you to log and view your application's notifications to a file or to the screen. The reason the dialog is not showing up is that you are using the CUH_Control class without registering its window. To register the window, make sure you add the following line to the InitInstance() of the App class:

CUH_Control::RegisterWindowClass(AfxGetInstanceHandle());

Q: I want to be able to let the user decide whether to allow a redirect when accessing an HTTP server.

A: Redirection happens when a web server response indicates that further action needs to be taken by the user agent in order to fulfill the request submitted by the user.
When a user agent (a browser or an HTTP client) submits a request to a web server, the server may elect to indicate that the requested resource (e.g. a web page) has been moved to a different location, and that a new action is required of the user agent (such as requesting a new web page, or requesting the resource from a different location). Here we'll show how to allow the user to accept or refuse redirection. The CUT_HTTPClient class has the following virtual function:

virtual BOOL OnRedirect(LPCSTR szUrl);

The default implementation is not to accept redirection; this is done by returning FALSE from this function. This function is called by the CUT_HTTPClient class to ask if it is OK to request the URL identified by the LPCSTR szUrl parameter. If you wish to submit a GET request for the specified URL, simply return TRUE from this function. The code below shows how a user might be notified of a redirect:

#include "stdafx.h"
#include <windows.h>
#include <iostream.h>
#include "http_c.h"

class MyHttp : public CUT_HTTPClient
{
public:
    MyHttp(){}

    // overrides the default OnRedirect implementation
    BOOL OnRedirect(LPCSTR szUrl)
    {
        char msgText[2*URL_RESOURCE_LENGTH];
        // ask the user if redirection is ok
        sprintf(msgText, "The server is requesting a redirect to"
            "\r\n\"%s\"\r\n continue?", szUrl);
        // show the Message Box
        if (MessageBox(NULL, msgText, "Page Redirect!", MB_YESNO) == IDYES)
            return TRUE;    // go ahead, get the redirection
        else
            return FALSE;   // no, don't continue with redirection
    }
};

int main()
{
    MyHttp web;
    // user has accepted the redirect, if asked, at this point
    if (web.GET("msdn.microsoft.com/123") == UTE_SUCCESS)
    {
        // First display the header of the server response
        long count = web.GetHeaderLineCount();
        long loop = 0;
        LPCSTR line;
        // loop through all header lines and display them to the user
        for (loop = 0; loop < count; loop++)
        {
            line = web.GetHeaderLine(loop);
            if (line != NULL)
                cout << line << endl;
        }
        cout << endl;

        // body parts
        count = web.GetBodyLineCount();
        // loop through all body lines and display them to the user
        for (loop = 0; loop < count; loop++)
        {
            // display line
            line = web.GetBodyLine(loop);
            if (line != NULL)
                cout << line << endl;
        }
    }
}

Q: How do I send an email to a password-protected SMTP server?

A: To prevent mass mailing, some SMTP servers implement password access. This answer shows you how to connect to an ESMTP server that requires a password. Mass mailing, or 'spamming', is usually achieved by relaying mail through an external mail server; this allows the spammer to use the resources of someone else's computers/network to reach a large number of email addresses without incurring any expense or damage to their own resources. With systems such as the MAPS RBL, networks identified as open for relaying (and therefore a target for spammers to use for mail relaying) can be blocked from sending mail, causing mail delivery problems for valid users who have done nothing wrong. SMTP Authentication is an ESMTP extension that allows a client mail application to specify a means of authenticating with an SMTP server; RFC 2554 details the implementation of this SMTP extension. The Authentication extension to SMTP addresses this problem by allowing mail clients to authenticate themselves with an SMTP server using a username and password; this allows open relays to be closed off, and also gives roaming users the ability to continue using a single SMTP server without having to be concerned with being blocked. When you attempt to send an email to this type of server without authenticating first, you may receive the following response:

530 Authentication required

This response may be returned by any command other than AUTH, EHLO, HELO, NOOP, RSET, or QUIT.
It indicates that server policy requires authentication in order to perform the requested action. The authentication mechanisms supported by Ultimate TCP/IP are CRAM-MD5 and LOGIN. The following sample demonstrates how to use the Ultimate TCP/IP CUT_SMTPClient class to connect to a password-protected email server:

#include "stdafx.h"
#include "smtp_c.h"
#include <iostream.h>

int main()
{
    // SMTP client instance
    CUT_SMTPClient mailSender;
    // Set the user name string
    mailSender.SetUserName(STRING_USER_ACCOUNT);
    // Set the password string
    mailSender.SetPassword(STRING_ACCOUNT_PASSWORD);
    // My server needs authentication
    mailSender.EnableSMTPLogin(TRUE);
    // If we connected fine then we have authenticated without a problem
    int rt = mailSender.SMTPConnect(
        "MY_ESMTP_SERVER_NAME_OR_ADDRESS",
        "MY_MACHINE_NAME");
    // Display connection result
    cout << " Connection " << CUT_ERR::GetErrorString(rt) << endl;
    // if we connected then send the email message
    if (rt == UTE_SUCCESS)
    {
        cout << CUT_ERR::GetErrorString(mailSender.SendMail(
            "To_emailaddress", "From_EmailAddress",
            "Testing through Server",
            "Hello,\r\n This is a test Message using the modified "
            "class\r\n Kindest regards\r\n Self",
            "CC_EmailAddress", NULL)) << endl;
        // close the connection
        mailSender.SMTPClose();
    }
    return 0;
}

Q: How do I transfer data when the client is behind a firewall?

A: First set the FireWallMode property to TRUE and then connect to the proxy, using the proxy address as the host name (i.e. the hostname argument). Also set the userName parameter as "Userid@TargetServer". For example:

SetFireWallMode(TRUE);
FTPConnect("[put proxy address here]", "anonymous@TargetServer", "anonymous@anonymous.com");

Q: When sending an email message using Ultimate TCP/IP, how do I request that a receipt is sent back to me to confirm that the user has read the email message?
A: The process of requesting read receipt confirmation is achieved by adding an Internet message header to the delivered message. The name of this header is "Disposition-Notification-To:". This header expects a parameter of an email address to which the read receipt is to be sent. When an (RFC 2298 compliant) email client (such as MS Outlook) receives a message with this header, it prompts the user as to whether the receipt should be sent back to the original sender. Upon the user's approval, a new email message is composed. This email message will be a MIME-encoded message containing a text attachment report informing the original sender that the message was displayed. Note that this notification does not mean that the user has read the message; it just confirms that the message was displayed by the recipient. This code shows the addition of the custom header to the message before sending:

#include "stdafx.h"
#include "smtp_c.h"
#include "utMessage.h" // we need to include the CUT_Msg class

using namespace std;

int main()
{
    CUT_SMTPClient smtp;
    CUT_Msg msg; // our one and only message object
    // add the From field
    msg.AddHeaderField("the_from_field@somewhere.com", UTM_FROM);
    // add the To field
    msg.AddHeaderField("the_to_field@somewhere.com", UTM_TO);
    // add the Subject field
    msg.AddHeaderField("This is the Subject", UTM_SUBJECT);
    // Add the custom header to ask for a read receipt
    msg.AddHeaderField("the_person_to_beNotified@somewhere.com",
        UTM_CUSTOM_FIELD, "Disposition-Notification-To:");
    // set the message body
    msg.SetMessageBody("Hi\n\tThis is my first message."
        "\nI love TCP/IP \n\t yours Truly");

    int errorCode = UTE_SUCCESS;
    errorCode = smtp.SMTPConnect("Your_email_SMTP_server");
    if (errorCode == UTE_SUCCESS)
    {
        errorCode = smtp.SendMail(msg);
        if (errorCode == UTE_SUCCESS)
        {
            cout << "Message was Sent" << endl;
        }
        else
            cout << CUT_ERR::GetErrorString(errorCode) << endl;
    }
    else
        cout << CUT_ERR::GetErrorString(errorCode) << endl;
    smtp.SMTPClose();
    return 0;
}

Q: I need to enable my clients around the globe to upload documents to my FTP servers over SSL/TLS. Since I don't have control over which client application (sometimes shareware) they will be using, I want to create servers that support both mechanisms of FTP over SSL (explicit security and implicit security). Can Ultimate TCP/IP allow me to do that?

A: When both a client and server support SSL or TLS, the utilization of security is accomplished through a sequence of commands passed between the two machines. When using FTP over SSL, there are at least two distinct mechanisms by which this sequence is initiated: explicit (active) and implicit (passive) security. When explicit security is used, the FTP client must issue a specific command to the FTP server after establishing a connection in order to establish the SSL link. In this implementation, the default FTP server port is used. The default setting of the CUT_FTPServer class employs this mechanism. The other mechanism is implicit security, by which security is automatically turned on as soon as the FTP client makes a connection to the FTP server. This requires that the client start the negotiation as soon as the socket connection is established. In this case, the FTP server defines a specific port (usually 990) for the client to use for secure connections.
As of version 4.x, the CUT_FTPServer class has been updated with a new function to accommodate selection of the security type:

// Sets the FTP connection to use SSL from the start, or to wait for
// negotiation
void SetFtpSslConnectionType(
    enumFtpSSLConnectionType type = FTP_SSL_EXPLICIT);

This function takes one of two enumerations of type enumFtpSSLConnectionType. This enumeration is defined as:

typedef enum enumFtpSSLConnectionType {
    FTP_SSL_EXPLICIT, // Explicit = SSL after we do the negotiation
    FTP_SSL_IMPLICIT  // SSL all the way, from the minute we are connecting
} enumFtpSSLConnectionType;

Q: How can I send a message to all connected clients from the server class?

A: The CUT_WSServer class maintains a linked list of all open sockets (clients) that you can iterate through to service the connections. A broadcast method might look like this:

bool CMyWSServer::BroadcastServerMessage()
{
    ::EnterCriticalSection(&m_criticalSection);
    UT_THREADLIST* pItem = m_ptrThreadList;
    // Walk the list of connected client threads
    while (pItem != NULL)
    {
        MSAASSERT(pItem->WSThread != NULL);
        CMyWSThread* pConnection = (CMyWSThread*)pItem->WSThread;
        pConnection->Send("This is a message from server\r\n");
        pItem = pItem->next;
    }
    ::LeaveCriticalSection(&m_criticalSection);
    return true;
}

Initial CodeProject release.
http://www.codeproject.com/Articles/20771/The-Ultimate-TCP-IP-FAQ?fid=469142&df=90&mpp=10&sort=Position&spc=None&tid=3544650
Commit bccg1aea: Preemptively Structuring the Chaos — Rovani in C♯

I have found that it is better to put structure in place around a project before going hog wild on implementation. While I recognize that some practices grow organically, a healthy amount of structure up-front can save a project from technical bankruptcy down the line. The two tools that I use early and often are interfaces and contract classes.

Interfaces and Contract Classes

There is a collection of interfaces that I regularly utilize throughout most projects.

- ICreated
  - VigilUser CreatedBy
  - DateTime CreatedOn
- IModified : ICreated
  - VigilUser ModifiedBy
  - DateTime? ModifiedOn
  - bool MarkModified(VigilUser, DateTime)
- IDeleted
  - VigilUser DeletedBy
  - DateTime? DeletedOn
  - bool MarkDeleted(VigilUser, DateTime)
- IOrdered
  - int Ordinal
- IEffective
  - DateTime EffectiveOn
  - bool SetEffectiveOn(DateTime)
- IEffectiveRange : IEffective
  - DateTime EffectiveUntil
  - bool SetEffectiveRange(DateTime, DateTime)

The Obvious — ICreated, IModified, IDeleted, IOrdered

These four interfaces serve to ensure that all properties are identically named throughout the solution. They also serve as a guide to future developers about what restrictions should be placed on the class. Every class that persists data to storage should implement ICreated — everything was created by someone and at some time. An object that doesn't implement IModified should never actually be modified. A class with IDeleted should not have its records actually removed from storage; instead, these fields are filled in to flag them as deleted. IOrdered means we know to set a default sort on the Ordinal field.

Instead of Deleting It — Effective It!

For values where we need historical data to be easily accessible, such as changing schedules or payment information, an interface is created to track the EffectiveOn date. Starting at that specific date and time, the new record becomes effective.
There needs to be some way to group the different records, which is usually performed with a header-type table. By definition, there can only be one effective record at any given date. An example of its use is storing the payment information for a recurring transaction. The historical data for payment information is useful when looking into the past, and frequently the patron may want to change the payment option effective some future date. However, at no point will two or more payment methods be active. By implementing IEffective and ICreated, the details of who made the change (and when) fully replace any functionality that might be lost by not implementing IModified.

Need Multiple, Overlapping Effectives — Range It!

When multiple records need to have their history saved or allow for changes effective in the future, IEffectiveRange comes into play. For IEffective records, the end of their effectiveness is the beginning of the next record's effective date. However, for classes where there can be multiple records that come and go, an explicit range is required. This is useful for line items on a gift — to see when any given detail was the authoritative record, especially useful after the gift has been posted and receipted.

Contract Classes

I battled back and forth with whether to include the shell for code contracts with every interface I write, or to only build it out when needed. I quickly found that I was including, at minimum, a contract invariant method on each interface, so I decided to just make it a standard part of every interface. Even if the contract class contains no contracts, at least the shell is there, and I can easily add them later, as needed. A quick standard I have put together is that all contract classes are contained in the same file as the interface, in a child namespace of "Contracts," and named after the interface.
Keeping the classes internal and abstract means they can only be called within the assembly.

Minimum Viable Framework

I finally feel that I am at a stage where I have the minimum of a framework on which I can start to build an actual product. From here, I will slowly add and expand modules, slowly drifting towards the user interface, and tinker with refactoring, templating, scripting, and finding other ways that I can reduce the complexity of complex tasks.

Added generic interfaces, and began cleaning up some Code Analysis rules from the "All Rules" playlist. —Commit 4010e840a8b168a1ab65466fa5c61cc342b56d8e

Added new Code Analysis rule sets to eliminate rules that I would be ignoring anyway. Resolved several Code Contracts and Code Analysis issues. —Commit 2b1ff8a55711cb585750d4241df9c97895b7edfc

Decided I didn't like having fields for the context, testuser, and now. Put them back as variables in each test method, and added a ContractVerification(false) attribute to the whole class. I think I'm going to find myself adding that attribute to a lot of testing classes. —Commit df9e61081f22471546dd1fd1132c8ecfecc3a26c

Added Contract Classes for each of the Interfaces. Created the Identity and TypeBase classes to expand the foundation of classes before working on POCOs. Need to add more unit tests to fill out the Code Coverage. —Commit 716da101f6da28158ab0ed52aaba037b4c027676

Added ExcludeFromCodeCoverage attributes to the ObjectInvariant methods in the interface contract classes. Added tests for the TypeBase class (via inheritance through a TestTypeBase class). Filled out the ChangeLogTests tests. —Commit bcc51aeaa46dc8780ffcd9c33f88d8ecb1c2fffd
https://rovani.net/Commit-bccg1aea/
hi thomas

> I saw "boolean AccessControlManager.hasPrivilege(String absPath,
> String[] privileges)", but I thought it's something different so
> far...

the main difference from my point of view: the absPath in this new method must point to an existing Node, whereas the absPath in Session.checkPermission (as far as i understood) points to the Item to be added/modified/removed... that goes along with the mapping...

> String[] privileges: "an array of Privileges" (should it say "an array
> of privilege names"?).

i think that's a bug (see also issue#309) and i assume that it was meant to be Privilege[].

> interface Privilege: "Each privilege is identified by a NAME that is
> unique across the set of privileges supported by a repository. JCR
> defines a set of standard privileges in the jcr namespace".
>
> I was a bit confused, those privilege names are not the same as the
> 'action strings' defined in Session.checkPermission(..). I found a
> mapping between 'action strings' and 'privilege names' on page 383 in
> the spec.

without being involved during the initial discussions regarding access control, i imagine that an attempt was made to create a content representation of access control, where the privilege was defined as a jcr property of type NAME. later on that approach was rejected but the spec still has some leftovers (see also the PrincipalManager that mentions a nodetype nt:ace).

> Is this mapping required, why not just one or the other?

honestly, i find it quite confusing. if the access control feature has been introduced as a replacement (improvement?) for Session.checkPermission, then Session.checkPermission should be deprecated. if however the access control discovery is intended to extend the limited access control functionality present in jcr 1.0, it should be specified accordingly without introducing additional confusion. after all i find it strange that a-c-discovery is optional but Session.checkPermission is a level 1 feature...
and i don't understand the explanation in '4.9 Permission Checking' of how a jcr impl that does not support the a-c-discovery feature should implement Session.checkPermission.

> Should they be documented in the Javadocs of the Privilege interface
> as well? What about adding constants in the Privilege interface?

I have the impression that all NAME constants have been omitted intentionally due to the namespace remapping feature. and yes, that's why the JcrConstants.java interface is present in the jackrabbit-jcr-commons project...

> AccessControlManager.hasPrivilege uses an array of privileges, while
> Session.checkPermission(..) uses a string of comma separated 'action
> strings'. This is not consistent; is there a reason for that?

see above.

gruss
angela
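The action-string-to-privilege-name mapping that confused thomas can at least be pictured as a simple lookup table. The pairs below are placeholders for illustration only; the authoritative pairs are the ones on page 383 of the spec, which are not reproduced here:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: a lookup table from checkPermission() action strings
// to privilege names. The entries are placeholder assumptions showing the
// shape of such a mapping, not the spec's authoritative values.
public class ActionToPrivilege {
    private static final Map<String, String> MAP = new HashMap<>();
    static {
        MAP.put("read", "jcr:read");                     // placeholder pair
        MAP.put("add_node", "jcr:addChildNodes");        // placeholder pair
        MAP.put("set_property", "jcr:modifyProperties"); // placeholder pair
    }

    // Translate an action string; null when no privilege corresponds.
    public static String privilegeFor(String action) {
        return MAP.get(action);
    }

    public static void main(String[] args) {
        System.out.println(privilegeFor("read"));
        System.out.println(privilegeFor("remove")); // null, not in this sketch
    }
}
```

Note how the table makes angela's complaint concrete: two parallel vocabularies for the same checks invite exactly this kind of translation layer, which is why she argues one of the two APIs should be deprecated or the relationship specified cleanly.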
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200707.mbox/%3C46A9AD30.5070501@day.com%3E
Operating System

Abstract. For the version of this white paper for Windows Server 2003, see Microsoft Windows Server 2003 TCP/IP Implementation Details.

Introduction
Capabilities and Functionality
Architectural Model
The NDIS Interface and Below
Core Protocol Stack Components and the TDI Interface
Network Application Interfaces
Critical Client Services and Stack Components
TCP/IP Troubleshooting Tools and Strategies
Summary
Appendix A: TCP/IP Configuration Parameters
Appendix B: NetBIOS over TCP Configuration Parameters
Appendix C: Windows Sockets and DNS Registry Parameters
Appendix D: Tuning TCP/IP Response to Attack

Introduction

Microsoft® Windows® 2000 includes a TCP/IP implementation with features that enhance performance and reliability. The goals in designing the TCP/IP stack were to make it:

Standards-compliant
Interoperable
Portable
Scalable
High performance
Versatile
Self-tuning
Easy to administer
Adaptable

This paper describes Windows 2000 implementation details and is a supplement to the Microsoft Windows 2000 TCP/IP manuals. It examines the Microsoft TCP/IP implementation from the bottom up and is intended for network engineers and support professionals who are familiar with TCP/IP. This paper uses network traces to help illustrate concepts. These traces were gathered and formatted using Microsoft Network Monitor 2.0, a software-based protocol tracing and analysis tool included in the Microsoft Systems Management Server product. Windows 2000 Server includes a reduced-functionality version of Network Monitor that does not allow the network adapter to be placed in promiscuous mode. It also does not support connecting to remote Network Monitor Agents. The TCP/IP suite for Windows 2000 was designed to make it easy to integrate Microsoft systems into large-scale corporate, government, and public networks, and to provide the ability to operate over those networks in a secure manner. Windows 2000 is an Internet-ready operating system.
The Windows 2000 Server family of operating systems provides the following services:
Dynamic Host Configuration Protocol (DHCP) client and service
Windows Internet Name Service (WINS), a NetBIOS name client and server
Dynamic Domain Name Server (DDNS)
Dial-up (PPP/SLIP) support
Point-to-Point Tunneling Protocol (PPTP) and Layer 2 Tunneling Protocol (L2TP), used for remote virtual private networks
TCP/IP network printing (lpr/lpd)
SNMP agent
NetBIOS interface
Windows Sockets version 2 (Winsock2) interface
Remote Procedure Call (RPC) support
Network Dynamic Data Exchange (NetDDE)
Wide Area Network (WAN) browsing support
High-performance Microsoft Internet Information Services (IIS)
Basic TCP/IP connectivity utilities, including: finger, ftp, rcp, rexec, rsh, telnet, and tftp
Server software for simple network protocols, including: Character Generator, Daytime, Discard, Echo, and Quote of the Day
TCP/IP management and diagnostic tools, including: arp, ipconfig, nbtstat, netstat, ping, pathping, route, nslookup, and tracert

Table 1 lists features and the operating system versions in which they are present (N = No, Y = Yes, D = Disabled by Default); features are described in more detail throughout this document. The products compared are Windows 95, Windows 95 with Winsock 2, Windows 98, Windows 98 SE, Windows NT 4.0 SP5, and Windows 2000. The features compared are: Dead Gateway Detect, VJ, Fast Retransmit, AutoNet, SACK (Selective ACK), jumbo frame support, Large Windows, Dynamic DNS, Media Sense, Wake-On-LAN, IP Forwarding, NAT, Kerberos v5, IPSec (IP Security), PPTP, L2TP, IP Helper API, Winsock2 API, GQoS API, IP Filtering API, Firewall Hooks, Packet Scheduler, RSVP, ISSLOW, Trojan Filtering, blocking of source routing, ICMP Router Discovery, Offload-TCP, and Offload-IPSec.

Requests for Comments (RFCs) are a constantly evolving series of reports, proposals for protocols, and protocol standards used by the Internet community.
You can use FTP to obtain RFCs from any of the following:
nis.nsf.net
nisc.jvnc.net
wuarchive.wustl.edu
src.doc.ic.ac.uk
normos.org

Table 2 RFCs supported by this version of Microsoft TCP/IP

RFC                     Title
768                     User Datagram Protocol (UDP)
783                     Trivial File Transfer Protocol (TFTP)
791                     Internet Protocol (IP)
792                     Internet Control Message Protocol (ICMP)
793                     Transmission Control Protocol (TCP)
816                     Fault Isolation and Recovery
826                     Address Resolution Protocol (ARP)
854                     Telnet Protocol (TELNET)
862                     Echo Protocol (ECHO)
863                     Discard Protocol (DISCARD)
864                     Character Generator Protocol (CHARGEN)
865                     Quote of the Day Protocol (QUOTE)
867                     Daytime Protocol (DAYTIME)
894                     IP over Ethernet
919, 922                IP Broadcast Datagrams (broadcasting with subnets)
950                     Internet Standard Subnetting Procedure
959                     File Transfer Protocol (FTP)
1001, 1002              NetBIOS Service Protocols
1034, 1035, 1123, 1886  Domain Name System (DNS)
1042                    A Standard for the Transmission of IP Datagrams over IEEE 802 Networks
1055                    Transmission of IP over Serial Lines (IP-SLIP)
1112                    Internet Group Management Protocol (IGMP)
1122, 1123              Host Requirements (communications and applications)
1144                    Compressing TCP/IP Headers for Low-Speed Serial Links
1157                    Simple Network Management Protocol (SNMP)
1179                    Line Printer Daemon Protocol
1188                    IP over FDDI
1191                    Path MTU Discovery
1201                    IP over ARCNET
1256                    ICMP Router Discovery Messages
1323                    TCP Extensions for High Performance (see the TCP1323opts registry parameter)
1332                    PPP Internet Protocol Control Protocol (IPCP)
1518                    Architecture for IP Address Allocation with CIDR
1519                    Classless Inter-Domain Routing (CIDR): An Address Assignment and Aggregation Strategy
1534                    Interoperation Between DHCP and BOOTP
1542                    Clarifications and Extensions for the Bootstrap Protocol
1552                    PPP Internetwork Packet Exchange Control Protocol (IPXCP)
1661                    The Point-to-Point Protocol (PPP)
1662                    PPP in HDLC-like Framing
1748                    IEEE 802.5 MIB using SMIv2
1749                    IEEE 802.5 Station Source Routing MIB using SMIv2
1812                    Requirements for IP Version 4 Routers
1828                    IP Authentication using Keyed MD5
1829                    ESP DES-CBC Transform
1851                    ESP Triple DES-CBC Transform
1852                    IP Authentication using Keyed SHA
1886                    DNS Extensions to Support IP Version 6
1994                    PPP Challenge Handshake Authentication Protocol (CHAP)
1995                    Incremental Zone Transfer in DNS
1996                    A Mechanism for Prompt DNS Notification of Zone Changes
2018                    TCP Selective Acknowledgment Options
2085                    HMAC-MD5 IP Authentication with Replay Prevention
2104                    HMAC: Keyed Hashing for Message Authentication
2131                    Dynamic Host Configuration Protocol
2136                    Dynamic Updates in the Domain Name System (DNS UPDATE)
2181                    Clarifications to the DNS Specification
2205                    Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification
2236                    Internet Group Management Protocol, Version 2
2308                    Negative Caching of DNS Queries (DNS NCACHE)
2401                    Security Architecture for the Internet Protocol
2402                    IP Authentication Header
2406                    IP Encapsulating Security Payload (ESP)
2581                    TCP Congestion Control

Architectural Model
The Microsoft TCP/IP suite contains core protocol elements, services, and the interfaces between them. The Transport Driver Interface (TDI) and the Network Device Interface Specification (NDIS) are public, and their specifications are available from Microsoft. In addition, there are a number of higher-level interfaces available to user-mode applications. The most commonly used are Windows Sockets, remote procedure call (RPC), and NetBIOS.

Windows 2000 introduces support for Plug and Play. Plug and Play has the following capabilities and features:
Automatic and dynamic recognition of installed hardware. This includes initial system installation, recognition of static hardware changes that may occur between boots, and response to run-time hardware events, such as dock or undock, and insertion or removal of cards.
Streamlined hardware configuration in response to automatic and dynamic recognition of hardware, including dynamic hardware activation, resource arbitration, device driver loading, drive mounting, and so on.
Support for particular buses and other hardware standards that facilitate automatic and dynamic recognition of hardware and streamlined hardware configuration, including Plug and Play ISA, PCI, PCMCIA, PC Card/CardBus, USB, and 1394. This includes promulgation of standards and advice about how hardware should behave.
An orderly Plug and Play framework in which driver writers can operate. This includes infrastructure, such as device information (INF) interfaces, APIs, kernel-mode notifications, executive interfaces, and so on.
Mechanisms that allow user-mode code and applications to learn of changes in the hardware environment so that they can take appropriate actions.

Plug and Play operation does not require Plug and Play hardware. To the degree possible, the first two items above apply to legacy hardware as well as Plug and Play hardware. In some cases, orderly enumeration of legacy devices is not possible because the detection methods are destructive or inordinately time-consuming. The primary impact that Plug and Play support has on protocol stacks is that network interfaces can come and go at any time. The Windows 2000 TCP/IP stack and related components have been adapted to support Plug and Play.

The NDIS Interface and Below
Microsoft networking protocols use the Network Device Interface Specification (NDIS) to communicate with network card drivers. Much of the OSI model link layer functionality is implemented in the protocol stack. This makes development of network card drivers much simpler. NDIS 3.1 supports basic services that allow a protocol module to send raw packets over a network device and allow that same module to be notified of incoming packets received by a network device.
NDIS 4.0 added the following new features to NDIS 3.1:
Out-of-band data support (required for Broadcast PC)
WirelessWAN media extension
High-speed packet send and receive (a significant performance win)
Fast IrDA media extension
Media Sense (required for the Designed for Windows logo in the PC 97 and later Hardware Design Guides). The Microsoft Windows 2000 TCP/IP stack utilizes media sense information, which is described in the "Automatic Client Configuration" section of this white paper.
All-local packet filter (prevents Network Monitor from monopolizing the CPU)
Numerous new NDIS system functions (required for miniport binary compatibility across Windows 95, Windows 98, Windows NT, and Windows 2000)

NDIS 5.0 includes all functionality defined in NDIS 4.0, plus the following extensions:
NDIS power management (required for Network Power Management and Network Wake-up)
Plug and Play. (Windows 95 NDIS already had Plug and Play support; this change applies to Windows 2000 network drivers only.)
Support for Windows Management Instrumentation (WMI), which provides Web-based Enterprise Management (WBEM)–compatible instrumentation of NDIS miniports and their associated adapters
Support for a single INF format across Windows operating systems. The new INF format is based on the Windows 98 INF format.
Deserialized miniports for improved performance
Task offload mechanisms, such as TCP and UDP checksum and Fast Packet Forwarding
Broadcast media extension (needed for Broadcast Services for Windows)
Connection-oriented NDIS (required to support Asynchronous Transfer Mode [ATM], Asymmetric Digital Subscriber Line [ADSL], and Windows Driver Model–Connection Streaming Architecture [WDM-CSA])
Support for Quality of Service (QoS)
Intermediate driver support (required for Broadcast PC, virtual LANs, packet scheduling for QoS, and NDIS support of IEEE 1394 network devices)

NDIS can power down network adapters when the system requests a power level change.
Either the user or the system can initiate this request. For example, the user may want to put the computer in sleep mode, or the system may request a power level change based on keyboard or mouse inactivity. In addition, disconnecting the network cable can initiate a power-down request if the network interface card (NIC) supports this functionality. In this case, the system waits a configurable time period before powering down the NIC because the disconnect could be the result of temporary wiring changes on the network, rather than the disconnection of a cable from the network device itself. NDIS power management policy is no-network-activity based; this means that all overlying network components must agree to the request before the NIC can be powered down.

Core Protocol Stack Components and the TDI Interface
The core protocol stack components are those shown between the NDIS and TDI interfaces in figure 1. They are implemented in the Windows 2000 Tcpip.sys driver. The Microsoft stack is accessible through the TDI interface and the NDIS interface. The Winsock2 interface also provides some support for direct access to the protocol stack.

Address Resolution Protocol (ARP)
ARP performs IP address-to-Media Access Control (MAC) address resolution for outgoing packets. As each outgoing IP datagram is encapsulated in a frame, source and destination media access control addresses must be added. Determining the destination media access control address for each frame is the responsibility of ARP. ARP compares the destination IP address on every outbound IP datagram to the ARP cache for the NIC over which the frame will be sent. If there is a matching entry, the MAC address is retrieved from the cache. If not, ARP broadcasts an ARP Request packet on the local subnet, requesting that the owner of the IP address in question reply with its media access control address.
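The lookup-then-broadcast behavior just described can be modeled in a few lines. This is an illustrative sketch, not Windows code: the class name, the injectable clock, and the 120-second lifetime are assumptions made for the example (the real aging interval is governed by the ArpCacheLife registry parameter described later).

```python
import time

class ArpCache:
    """Toy model of a per-interface ARP cache with entry aging.

    The lifetime plays the role of the ArpCacheLife registry parameter;
    the 120-second default here is illustrative only.
    """

    def __init__(self, lifetime_seconds=120, clock=time.monotonic):
        self.lifetime = lifetime_seconds
        self.clock = clock           # injectable for testing
        self.entries = {}            # ip -> (mac, insert_time, is_static)

    def add(self, ip, mac, static=False):
        self.entries[ip] = (mac, self.clock(), static)

    def lookup(self, ip):
        """Return the cached MAC address, or None (meaning: broadcast an
        ARP Request and queue one outbound datagram for this IP)."""
        entry = self.entries.get(ip)
        if entry is None:
            return None
        mac, born, static = entry
        if not static and self.clock() - born > self.lifetime:
            del self.entries[ip]     # dynamic entry aged out
            return None
        return mac
```

Static entries, such as those added with arp -s, are returned regardless of age; a dynamic entry that has aged out forces a fresh ARP Request on the next lookup.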
If the packet is going through a router, ARP resolves the media access control address of the next-hop router rather than that of the final destination host. A computer that is multihomed (has more than one NIC) maintains a separate ARP cache for each interface. In the following example, the command arp -s is used to add a static entry to the ARP cache used by the second interface for the host whose IP address is 10.57.10.32 and whose NIC address is 00608C0E6C6A:

C:\>arp -s 10.57.10.32 00-60-8c-0e-6c-6a 10.57.8.190
10.57.10.32    00-60-8c-0e-6c-6a    static

ARP Cache Aging
Windows NT and Windows 2000 age unused ARP cache entries and remove them automatically. A new registry parameter, ArpCacheLife, was added in Windows NT 3.51 Service Pack 4 to allow more administrative control over aging. This parameter is described in Appendix A. Use the command arp -d to delete entries from the cache, as shown below:

C:\>arp -d 10.57.10.32

ARP queues only one outbound IP datagram for a specified destination address while that IP address is being resolved to a media access control address. If a User Datagram Protocol (UDP) application sends multiple IP datagrams to a single destination without pausing while the address is being resolved, some of the datagrams may be dropped. See Microsoft Knowledge Base article 193059 or the Platform SDK for IP Helper API details.

Internet Protocol (IP)
Unlike the media access control addresses, the IP addresses in a datagram remain the same throughout a packet's journey across an internetwork. IP layer functions are described below.

Routing
Routing is a primary function of IP. Datagrams are handed to IP from UDP and TCP above, and from the NICs below. The destination address of each datagram is compared against the route table, and one of three actions is taken: the datagram can be passed up to a protocol driver on the local host, it can be forwarded out one of the NICs, or it can be discarded. The route table maintains four different types of routes. They are listed below in the order that they are searched for a match:
Host (a route to a single, specific destination IP address)
Subnet (a route to a subnet)
Network (a route to an entire network)
Default (used when there is no other match)
To determine a single route to use to forward an IP datagram, IP uses the following process: for each route in the route table, a bit-wise logical AND is performed between the destination IP address and the route's netmask, and the result is compared to the route's network destination for a match. Of the matching routes, the route with the longest match (the largest number of matching bits, that is, the most specific route) is used; if several routes match equally well, the one with the lowest metric is chosen.
You can use the route print command to view the route table from the command prompt, as shown below:

C:\>route print
===========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x2 ...00 a0 24 e9 cf 45 ...... 3Com 3C90x Ethernet Adapter
0x3 ...00 53 45 00 00 00 ...... NDISWAN Miniport
0x4 ...00 53 45 00 00 00 ...... NDISWAN Miniport
0x5 ...00 53 45 00 00 00 ...... NDISWAN Miniport
0x6 ...00 53 45 00 00 00 ...... NDISWAN Miniport
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0     10.99.99.254      10.99.99.1       1
       10.99.99.0    255.255.255.0       10.99.99.1      10.99.99.1       1
       10.99.99.1  255.255.255.255        127.0.0.1       127.0.0.1       1
   10.255.255.255  255.255.255.255       10.99.99.1      10.99.99.1       1
        127.0.0.0        255.0.0.0        127.0.0.1       127.0.0.1       1
        224.0.0.0        224.0.0.0       10.99.99.1      10.99.99.1       1
  255.255.255.255  255.255.255.255       10.99.99.1      10.99.99.1       1
Default Gateway:      10.99.99.254
===========================================================================
Persistent Routes:
None

The route table above is for a computer with the class A IP address of 10.99.99.1, the subnet mask of 255.255.255.0, and the default gateway of 10.99.99.254. It contains the following entries:
The first entry, to address 0.0.0.0, is the default route.
The second entry is for the subnet 10.99.99.0, on which this computer resides.
The third entry, to address 10.99.99.1, is a host route for the local host. It specifies the loopback address, which makes sense because a datagram bound for the local host should be looped back internally.
The fourth entry is for the network broadcast address.
The fifth entry is for the loopback address, 127.0.0.0.
The sixth entry is for IP multicasting, which is discussed later in this document.
The final entry is for the limited broadcast (all-ones) address. The Default Gateway shown is the currently active default gateway. This is useful to know when multiple default gateways are configured. On this host, if a packet is sent to 10.99.99.40, the closest matching route is the local subnet route (10.99.99.0, with the mask of 255.255.255.0). The packet is sent via the local interface 10.99.99.1. If a packet is sent to 10.200.1.1, the closest matching route is the default route. In this case, the packet is forwarded to the default gateway. Routes can also be added to the table upon receipt of an ICMP Redirect message for a network, subnet, or host, using ICMP, which is explained later in this white paper. Routes also may be added manually using the route command, or by a routing protocol. The -p (persistent) switch can be used with the route command to specify permanent routes. Persistent routes are stored in the registry under the registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\PersistentRoutes

Windows 2000 TCP/IP introduces a new metric configuration option for default gateways. When multiple default gateways are configured, Windows 2000 uses the one with the lowest metric unless it appears to be inactive, in which case dead gateway detection may trigger a switch to the next-lowest-metric default gateway in the list. Default gateway metrics can be set using TCP/IP Advanced Configuration properties. DHCP servers provide a base metric and a list of default gateways. If a DHCP server provides a base of 100 and a list of three default gateways, the gateways will be configured with metrics of 100, 101, and 102, respectively. A DHCP-provided base does not apply to statically configured default gateways. Most Autonomous System (AS) routers use a protocol such as Routing Information Protocol (RIP) or Open Shortest Path First (OSPF) to exchange routing tables with each other. Windows 2000 Server includes support for these protocols. Windows 2000 Professional includes support for silent RIP.
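The longest-match selection described above can be sketched against the route table from the route print example. The ROUTES list and select_route function are hypothetical names used only for this illustration; the prefix lengths are the CIDR equivalents of the netmasks shown in the table.

```python
import ipaddress

# Route table from the route print example: (destination/prefix, gateway, interface, metric)
ROUTES = [
    ("0.0.0.0/0",          "10.99.99.254", "10.99.99.1", 1),
    ("10.99.99.0/24",      "10.99.99.1",   "10.99.99.1", 1),
    ("10.99.99.1/32",      "127.0.0.1",    "127.0.0.1",  1),
    ("10.255.255.255/32",  "10.99.99.1",   "10.99.99.1", 1),
    ("127.0.0.0/8",        "127.0.0.1",    "127.0.0.1",  1),
    ("224.0.0.0/3",        "10.99.99.1",   "10.99.99.1", 1),
    ("255.255.255.255/32", "10.99.99.1",   "10.99.99.1", 1),
]

def select_route(dest):
    """Longest-match selection: keep the routes whose network contains the
    destination, prefer the longest netmask, then the lowest metric."""
    dest = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(net), gw, iface, metric)
               for net, gw, iface, metric in ROUTES
               if dest in ipaddress.ip_network(net)]
    if not matches:
        return None
    net, gw, iface, _ = max(matches, key=lambda m: (m[0].prefixlen, -m[3]))
    return net.with_prefixlen, gw, iface
```

For 10.99.99.40 the /24 subnet route wins over the default route; for 10.200.1.1 only the default route matches, so the packet is handed to the default gateway 10.99.99.254.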
By default, Windows-based systems do not behave as routers and do not forward IP datagrams between interfaces. However, the Routing and Remote Access service is included in Windows 2000 Server. It can be enabled and configured to provide full multiprotocol routing services. To administer the Routing and Remote Access service: on the Start menu, point to Programs, point to Administrative Tools, and then click Routing and Remote Access. When running multiple logical subnets on the same physical network, a route can be added to tell IP to treat all subnets as local and to use ARP directly for the destination.

Duplicate IP Address Detection
When the stack is first initialized, gratuitous ARPs are broadcast for the local host's IP addresses so that address conflicts can be detected. The number of ARPs to send is controlled by the ArpRetryCount registry parameter, which defaults to 3. If another host replies to any of these ARPs, the IP address is already in use. When this happens, to repair the damage possibly done to the ARP caches on other computers, the offending computer re-broadcasts another ARP, restoring the original mapping, and the Windows NT–based computer with a conflicting address detects the conflict. The computer detecting the conflict displays an error message and logs a detailed event in the system log. A sample event log entry is shown below:

The system detected an address conflict for IP address 199.199.40.123 with the system having network hardware address 00:DD:01:0F:7A:B5. Network operations on this system may be disrupted as a result.

DHCP-enabled clients inform the DHCP server when an IP address conflict is detected and, instead of invalidating the stack, they request a new address from the DHCP server and request that the server flag the conflicting address as bad. This capability is commonly known as DHCP Decline support.

Multihoming
When a computer is configured with more than one IP address, it is referred to as a multihomed system. Multihoming is supported in three different ways:
Multiple IP addresses per NIC. To add addresses for an interface, on the Start menu, point to Settings, and then click Network and Dial-up Connections.
Right-click Local Area Connection, and click Properties. Select Internet Protocol (TCP/IP), click Properties, and then add the addresses in the user interface (UI).
Multiple NICs per physical network. There are no restrictions, other than hardware.
Multiple networks and media types. There are no restrictions, other than hardware and media support. See the section "The NDIS Interface and Below" for details.

On a multihomed computer, the source media access control address on the frame is that of the interface that actually transmitted the frame to the media, and the source IP address is the one that the sending application sourced it from, not necessarily one of the IP addresses associated with the sending interface in the Network Connections UI. When a computer is multihomed with NICs attached to disjoint networks (networks that are separate from and unaware of each other, such as a remote access-connected network and a local connection), routing problems may arise. It is often necessary to set up static routes to remote networks in this situation. When configuring a computer to be multihomed on two disjoint networks, the best practice is to set the default gateway on the main or largest and least-known network. Then, either add static routes or use a routing protocol to provide connectivity to the hosts on the smaller or better-known network. Avoid configuring a different default gateway on each side; this can result in unpredictable behavior and loss of connectivity. Note: there can be only one active default gateway for a computer at any moment in time. More details on name registration, resolution, and choice of NIC on outbound datagrams with multihomed computers are provided in the "Transmission Control Protocol (TCP)," "NetBIOS over TCP/IP," and "Windows Sockets" sections of this paper.

Classless Interdomain Routing (CIDR)
CIDR, described in RFCs 1518 and 1519, removes the concept of class from the IP address assignment and management process.
In place of predefined, well-known boundaries, CIDR allocates addresses defined by a starting address and a range, which makes more efficient use of the available space. The range defines the network part of the address. For example, an assignment from an ISP to a corporate client might be expressed as 10.57.1.128 /25. This would result in a 128-address block for local use, with the upper 25 bits being the network identifier part of the address. A legacy, class-full allocation would be expressed as <net>.0.0.0 /8, <net>.<net>.0.0 /16, or <net>.<net>.<net>.0 /24. As these are reclaimed, they will be reallocated using classless CIDR techniques. Given the installed base of class-full systems, the initial implementation of CIDR was to concatenate pieces of the Class C space. This process was called supernetting. Supernetting can be used to consolidate several class C network addresses into one logical network. To use supernetting, the IP network addresses that are to be combined must share the same high-order bits, and the subnet mask is shortened to take bits away from the network portion of the address and add them to the host portion. For example, the class C network addresses 199.199.4.0, 199.199.5.0, 199.199.6.0, and 199.199.7.0 can be combined by using a subnet mask of 255.255.252.0 for each:

NET  199.199.4      (1100 0111.1100 0111.0000 0100.0000 0000)
NET  199.199.5      (1100 0111.1100 0111.0000 0101.0000 0000)
NET  199.199.6      (1100 0111.1100 0111.0000 0110.0000 0000)
NET  199.199.7      (1100 0111.1100 0111.0000 0111.0000 0000)
MASK 255.255.252.0  (1111 1111.1111 1111.1111 1100.0000 0000)

When routing decisions are made, only the bits covered by the subnet mask are used, thus making all of these addresses appear to be part of the same network for routing purposes. Any routers in use must also support CIDR and may require special configuration. Windows 2000 TCP/IP includes support for CIDR. Windows 2000 is level-2 (send and receive) compliant with RFC 1112.
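The supernetting arithmetic above is easy to check: AND each network address with the shortened mask and confirm that all four class C networks collapse to the same route entry. network_part is a helper name invented for this sketch.

```python
import ipaddress

SUPERNET_MASK = "255.255.252.0"   # the shortened mask from the example

def network_part(addr, mask):
    """Bitwise AND of an address with a mask, as a router does when
    comparing a destination against a route entry."""
    a = int(ipaddress.ip_address(addr))
    m = int(ipaddress.ip_address(mask))
    return str(ipaddress.ip_address(a & m))

# The four class C networks from the example all collapse to one route:
for net in ("199.199.4.0", "199.199.5.0", "199.199.6.0", "199.199.7.0"):
    print(net, "->", network_part(net, SUPERNET_MASK))
```

Each line prints 199.199.4.0 as the network part, which is why a single route with this mask covers all four former class C networks.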
IGMP is the protocol used to manage IP multicasting, which is described later in this document.

IP over ATM
Windows 2000 introduces support for IP over ATM. RFC 1577 (and successors) define the basic operation of an IP over ATM network, or more precisely, a Logical IP Subnet over an ATM network. A Logical IP Subnet (or LIS) is a set of IP hosts that can communicate directly with each other. Two hosts belonging to different Logical IP Subnets can communicate only through an IP router that is a member of both subnets.

ATM Address Resolution
Because ATM does not support broadcasts, address resolution on a Logical IP Subnet is performed through an ATMARP server rather than by broadcast ARP, as described in RFC 1577.

Internet Control Message Protocol (ICMP)
ICMP is a maintenance protocol specified in RFC 792 and is normally considered part of the IP layer. ICMP messages are encapsulated within IP datagrams, so that they can be routed throughout an internetwork. Windows NT and Windows 2000 use ICMP to:
Build and maintain route tables.
Perform router discovery.
Assist in Path Maximum Transmission Unit (PMTU) discovery.
Diagnose problems (ping, tracert, pathping).
Adjust flow control to prevent link or router saturation.

Windows-based hosts update their route tables in response to ICMP Redirect messages, which can specify redirection for one host, a subnet, or an entire network.

Path Maximum Transmission Unit (PMTU) Discovery
TCP employs Path Maximum Transmission Unit (PMTU) discovery, as described later in the "Transmission Control Protocol (TCP)" section of this paper. Ping is explored in more detail in the troubleshooting section of this paper. Tracert works by sending ICMP Echo Requests with incrementally increasing Time To Live (TTL) values; each router that decrements the TTL to zero returns an ICMP Time Exceeded error message. Tracert prints out an ordered list of the routers in the path that returned these error messages. If the -d (do not do a DNS inverse query on each IP address) switch is used, the IP address of the near-side interface of each router is reported. The example below illustrates using tracert to find the route from a computer dialed in over Point-to-Point Protocol (PPP) to an Internet host. Pathping combines features of ping and tracert: it sends packets to each router in the path for a set period of time and shows delay and packet loss, which will help determine if there is a weak link in the path.
Another new feature in Windows 2000 is support for Quality of Service (QoS). Windows 2000 supports several QoS mechanisms, such as the Resource ReSerVation Protocol (RSVP), Differentiated Services (DiffServ), IEEE 802.1p, ATM QoS, and so on. The QoS mechanisms supported in Windows 2000 are abstracted through a simple Generic QoS (GQoS) API. An overview of support for QoS from the stack and related system components is presented here. The GQoS API is an extension to the Winsock programming interface. It includes APIs and system components that provide applications with a method of reserving network bandwidth between client and server. Windows 2000 automatically maps GQoS requests to QoS mechanisms such as RSVP, DiffServ, 802.1p, or ATM QoS. RSVP is a layer 3 signaling protocol that is used to reserve bandwidth for individual flows on a network. RSVP is a per-flow QoS mechanism because it sets up a reservation for each flow. DiffServ is another layer 3 QoS mechanism. DiffServ defines 6 bits in the IP header that determine how the IP packet is prioritized. DiffServ traffic can be prioritized into 64 possible classes known as Per Hop Behaviors (PHBs). 802.1p, on the other hand, is a layer 2 QoS mechanism that defines how layer 2 devices such as Ethernet switches should prioritize traffic. 802.1p defines 8 priority classes, ranging from 0 to 7. DiffServ and 802.1p are called aggregate QoS mechanisms because they classify all traffic into a finite number of priority classes. The following sequence of events characterizes an application's interaction with GQoS:
The application requests QoS in abstract terms via GQoS.
The application's request translates into RSVP signaling messages.
RSVP signaling messages go out onto the network and reserve bandwidth on all RSVP-aware nodes in the network path.
In addition to setting up reservations, RSVP messages are subject to scrutiny by policy servers on the network. Policy servers can reject the RSVP request if it is in violation of network policy.
This gives the network administrator a means of enforcing who gets QoS. Once the RSVP reservation has been installed, Windows 2000 starts marking all outgoing packets for that flow with the appropriate DiffServ class and 802.1p priority. As the traffic from the flow makes its way through the network, it gets the benefit of 802.1p prioritization in 802.1p-enabled Ethernet switches, the benefit of RSVP reservations in RSVP-enabled routers, and the benefits of DiffServ prioritization in DiffServ-enabled clouds in the network. There are several other QoS mechanisms—such as Integrated Services over ATM (ISATM), which automatically maps GQoS requests to ATM QoS on Classical IP over ATM networks. Integrated Services Over Low Bit Rate (ISSLOW) is another QoS mechanism that improves latency for prioritized traffic on slow WAN links. In addition to the GQoS API, a control or management application has access to traffic control functionality via the Traffic Control (TC) API. The TC API allows a control or management application to assist in providing some quality of service for non-QoS-enabled applications. Windows 2000 also provides a policy server called the QoS Admission Control Service (QoS ACS). The QoS ACS allows network administrators to control who gets QoS on the network. The QoS ACS also exposes an API called the Local Policy Module (LPM) API. The LPM API allows ISVs to build customized policy modules that add to the policy enforcement functionality in the QoS ACS. Figure 2, below, illustrates the system components involved in QoS and RSVP. GQoS is a QoS provider that can invoke RSVP signaling, trigger traffic control, and provide notification of events to the application. Rsvp.exe is responsible for RSVP signaling to or from the network, and for invoking Traffic.dll to add flows and filters to the stack. The packet classifier is responsible for classifying packets according to the packet filters indicated by Traffic.dll. 
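The per-packet marking step described above (a DiffServ class plus an 802.1p priority) can be sketched numerically. The DSCP occupies the upper 6 bits of the former IPv4 TOS byte, giving the 64 classes mentioned earlier, while 802.1p provides 8 priorities; the mapping table below is purely illustrative, since real networks define their own marking policy.

```python
def dscp_from_tos(tos_byte):
    """The DiffServ code point is the upper 6 bits of the (former) IPv4
    TOS byte, giving 64 possible classes (Per Hop Behaviors)."""
    return (tos_byte >> 2) & 0x3F

# Illustrative mapping of a few well-known DSCP values onto the eight
# 802.1p priority classes (0-7); deployments choose their own policy.
DSCP_TO_8021P = {
    0:  0,   # best effort
    26: 3,   # AF31
    46: 5,   # EF (e.g., voice)
}

def priority_for_packet(tos_byte):
    """Aggregate a per-packet DSCP into an 802.1p priority class."""
    dscp = dscp_from_tos(tos_byte)
    return DSCP_TO_8021P.get(dscp, 0)   # unknown classes fall back to best effort
```

For example, a TOS byte of 0xB8 carries DSCP 46 (Expedited Forwarding), which this illustrative table maps to 802.1p priority 5.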
The packet scheduler maintains separate queues for each classification of traffic and includes a conformance analyzer, shaper, and packet sequencer. The shaper manages flows into the packet queues at the agreed-upon rate, and the sequencer feeds packets to the network interface in the order of priority from the queues that it manages. Traffic that has no QoS specification goes into the best effort queue, which is lowest in priority. The flowchart in figure 2 illustrates how an application uses QoS RSVP to deliver a flow of data to a client or clients. The application is an audio server, and it needs 1 megabit-per-second of reliable bandwidth to provide acceptable audio quality to a client. RSVP supports both unicast and multicast flows. This example uses a unicast flow to a single client. The application initializes and completes a structure to be provided to GQoS. This structure includes a sending and receiving flow specification. Flow specifications include parameters such as peak bandwidth, latency, delay variation, service type, and so on. Examples of service types include Best Effort and Guaranteed. The application then calls WSAConnect to connect to the client. A call to this function triggers a number of events. RSVP is invoked to signal the network by sending special path messages. A path message is sent to the same destination IP address that the flow goes to; however, it is intended to set up the routers in the flow and to identify the flow. A router receiving a path message inserts its own IP address into the path message's last hop and forwards the message to the next router in the path until it reaches the client. This gives the client the ability to understand the path between the sender and itself and to reserve bandwidth along that path for the application. The client returns a reservation request (again describing the desired flow) back along the same path. 
The routers along the path are responsible for examining the resources available to them and determining if they can accept the reservation. If all of the routers along the path agree to accept the reservation, the application can count on having the desired network bandwidth and other characteristics available. Because networks are dynamic and the server or client could mistakenly abandon their resources without notifying the network, both path messages and reservation requests must be refreshed frequently. If there were no changes in the network, additional path messages and reservations refresh only the existing path. However, if a new route appears, the path taken by the flow could change on the fly as the network makes adjustments. When a server application is used to multicast to many clients, a similar sequence of events occurs. One interesting difference is that when routers receive reservation requests from various clients referencing the same flow, they can merge reservation requests, rather than maintaining individual reservations for the same information flow. For more detailed information on these topics, see the Winsock2 specification and RFC 2205.

IP Security (IPSec)
IP Security (IPSec) is another new feature in Windows 2000. IPSec features and implementation details are very complex and are described in detail in a series of RFCs and IETF drafts and in other Microsoft white papers. IPSec uses cryptography-based security to provide access control, connectionless integrity, data origin authentication, protection against replays, and confidentiality. IPSec policies can be assigned through Windows 2000 Group Policy mechanisms using the Active Directory™ services. When using the Active Directory, hosts detect policy assignment at startup, retrieve the policy, and then periodically check for policy updates. The IPSec policy specifies how computers trust each other. IPSec can use either certificates or Kerberos as an authentication method. The easiest trust to use is the Windows 2000 domain trust based on Kerberos.
Predefined IPSec policies are configured to trust computers in the same or other trusted Windows 2000 domains. Each IP datagram processed at the IP layer is compared to a set of filters that are provided by the security policy, which is maintained by an administrator for a computer that belongs to a domain. IP can do one of three things with any datagram: provide IPSec services to it, allow it to pass unmodified, or discard it. An IPSec policy contains a filter, filter action, authentication, tunnel setting, and connection type. For example, consider two computers in the same Windows 2000 domain. When IP traffic (including something as simple as a ping in this case) is directed at one host by the other, a Security Association (SA) is established through a short conversation over UDP port 500, handled by the Internet Key Exchange (IKE) service, and then the traffic begins to flow. The following network trace illustrates setting up a TCP connection between two such IPSec-enabled hosts. The only parts of the IP datagram that are unencrypted and visible to Netmon after the SA is established are the media access control and IP headers:

Source IP      Dest IP        Prot  Description
davemac-ipsec  calvin-ipsec   UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 216 (0xD8)
calvin-ipsec   davemac-ipsec  UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 216 (0xD8)
davemac-ipsec  calvin-ipsec   UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 128 (0x80)
calvin-ipsec   davemac-ipsec  UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 128 (0x80)
davemac-ipsec  calvin-ipsec   UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 76 (0x4C)
calvin-ipsec   davemac-ipsec  UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 76 (0x4C)
davemac-ipsec  calvin-ipsec   UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 212 (0xD4)
calvin-ipsec   davemac-ipsec  UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 172 (0xAC)
davemac-ipsec  calvin-ipsec   UDP
Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 84 (0x54)
calvin-ipsec   davemac-ipsec  UDP   Src Port: ISAKMP (500); Dst Port: ISAKMP (500); Length = 92 (0x5C)
davemac-ipsec  calvin-ipsec   IP    ID = 0xC906; Proto = 0x32; Len: 96
calvin-ipsec   davemac-ipsec  IP    ID = 0xA202; Proto = 0x32; Len: 96

The following frame detail shows how little remains visible to Netmon if ESP is used.

Src IP         Dest IP        Protoc  Description
===================================================
davemac-ipsec  calvin-ipsec
IP: Source Address = 172.30.250.139
IP: Destination Address = 157.59.24.37
IP: Data: Number of data bytes remaining = 76 (0x004C)
00000: 52 A4 68 7B 94 80 00 00 90 1D 84 80 08 00 45 00 R.h{..........E.
00010: 00 60 C9 06 40 00 80 32 D5 5A AC 1E FA 8B 9D 3B .`..@..2.Z.....;
00020: 18 25 18 D9 03 E8 00 00 00 01 F6 EF D0 23 1C 59 .%...........#.Y
00030: BD 01 78 BE 69 24 D6 EB AE 4F 08 DA 0F D4 6C 04 ..x.i$...O....l.
00040: 5F BC A6 E0 8D BE 5C 89 2D 56 60 80 FA 8B CC 5E _.....\.-V`....^
00050: 4E 61 3D 46 75 B9 D1 5B 52 45 79 7D 1E 36 1F 01 Na=Fu..[REy}.6..
00060: FF 25 E5 BA 48 AF D7 7A D5 9A 34 3E 5D 7D .%..H..z..4>]}

Using a secure server policy also restricts all other types of traffic from reaching destinations that do not understand IPSec or are not part of the same trusted group. Because Windows 2000 supports task offloading, it is feasible to include encryption hardware on NICs. NICs supporting IPSec hardware offload are available from several vendors. IPSec promises to be popular for protecting both public network traffic and internal corporate/government traffic that requires confidentiality. One common implementation may be to apply secure server IPSec policies only to specific servers that are used to store and/or serve confidential information. Windows 2000 provides level 2 (full) support for IP multicasting (IGMP version 2), as described in RFC 1112 and RFC 2236. Use of IGMP by Windows Components: Some Windows NT and Windows 2000 components use IGMP. For example, router discovery uses multicasts, by default.
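The per-datagram decision described earlier (provide IPSec services, allow the datagram to pass unmodified, or discard it) amounts to a first-match filter lookup. The sketch below is a hypothetical Python model, not the actual policy-agent logic; the filter entries are invented for illustration.

```python
NEGOTIATE, PERMIT, BLOCK = "negotiate security", "permit", "block"

# Hypothetical filter list; first match wins.
FILTERS = [
    {"proto": "udp", "dst_port": 500,  "action": PERMIT},     # IKE itself must flow
    {"proto": "tcp", "dst_port": 23,   "action": BLOCK},      # e.g. discard telnet
    {"proto": "any", "dst_port": None, "action": NEGOTIATE},  # default: secure the rest
]

def classify(proto, dst_port):
    """Return the action the IPSec policy applies to a datagram."""
    for f in FILTERS:
        if f["proto"] in ("any", proto) and f["dst_port"] in (None, dst_port):
            return f["action"]
    return PERMIT  # no matching policy: pass unmodified
```

With these example filters, IKE traffic on UDP port 500 passes in the clear (so the SA negotiation itself can happen), one port is blocked outright, and everything else triggers security negotiation.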
WINS servers use multicasting when attempting to locate replication partners. The Windows 2000 TCP receive window size defaults to a value calculated as follows: The first connection request sent to a remote host advertises a receive window size of 16 KB (16,384 bytes). Upon establishing the connection, the receive window size is rounded up to an increment of the maximum TCP segment size (MSS) that was negotiated during connection setup. If that is not at least four times the MSS, it is adjusted to 4 * MSS, with a maximum size of 64 KB unless a window scaling option (RFC 1323) is in effect. For Ethernet, the window is normally set to 17,520 bytes (16 KB rounded up to twelve 1460-byte segments). Support for scalable windows (RFC 1323) has been introduced in Windows 2000. This RFC details a method for supporting scalable windows by allowing TCP to negotiate a scaling factor for the window size at connection establishment. This allows for an actual receive window of up to 1 gigabyte (GB). Per RFC 1323 Section 2.2, the window size seen in a trace must be scaled by the negotiated scale factor. The scale factor can be observed in the connection establishment (three-way handshake) packets, as illustrated in the following Network Monitor capture:

Src Addr  Dst Addr  Protocol  Description
THEMACS1  NTBUILDS  TCP
+ TCP: Timestamps Option
TCP: Option Nop = 1 (0x1)
TCP: Option Nop = 1 (0x1)
+ TCP: SACK Permitted Option
00000: 8C 04 C8 BD A3 82 00 00 50 7D 83 80 08 00 45 00 ........P}....E.
00010: 00 40 B9 08 40 00 80 06 A7 1A 9D 36 15 FD AC 1F .@..@......6....
00020: 3B 42 04 C1 00 8B 00 0B 10 AB 00 00 00 00 B0 02 ;B..............
00030: FF FF 85 65 00 00 02 04 05 B4 01 03 03 05 01 01 ...e............
00040: 08 0A 00 00 00 00 00 00 00 00 01 01 04 02 ..............
An advertised window of 0x7FFF = 111 1111 1111 1111; left-shifted 5 bits this becomes 1111 1111 1111 1110 0000 = 0xFFFE0 (1,048,544 bytes). As a check, left-shifting a number 5 bits is equivalent to multiplying it by 2^5, or 32. 32767 * 32 = 1,048,544. The scale factor is not necessarily symmetrical, so it may be different for each direction of data flow. Windows 2000 uses window scaling automatically if the TcpWindowSize is set to a value greater than 64 KB, and the Tcp1323Opts registry parameter is set appropriately. See Appendix A for details on setting this parameter. Delayed Acknowledgments: As specified in RFC 1122, TCP uses delayed acknowledgments (ACKs) to reduce the number of packets sent on the media. TCP Selective Acknowledgment (RFC 2018): Support for Selective Acknowledgments (SACK) is new in Windows 2000 and is implemented using TCP header options. In one Network Monitor capture, a host acknowledges all data up to sequence number 54857341, plus the data from sequence numbers 54858789-54861685. Another RFC 1323 feature introduced in Windows 2000 is support for TCP time stamps. Like SACK, time stamps are important for connections using large window sizes. Time stamps were conceived to assist TCP in accurately measuring round-trip time (RTT) to adjust retransmission time-outs. The TCP header option for time stamps is shown here, from RFC 1323:

"TCP Timestamps Option (TSopt):
Kind: 8
Length: 10 bytes
+-------+-------+---------------------+---------------------+
|Kind=8 |  10   |  TS Value (TSval)   |TS Echo Reply (TSecr)|
+-------+-------+---------------------+---------------------+
    1       1             4                     4           "

The use of time stamps is disabled by default. It can be enabled using the Tcp1323Opts registry parameter, explained in Appendix A. When TCP segments are destined to a non-local network, the Don't Fragment bit is set in the IP header.
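The window arithmetic described above (rounding the default window to whole MSS segments, and applying the RFC 1323 left shift) can be checked with a few lines of Python. This is a sketch of the rules as stated in the text, not the actual stack code.

```python
import math

def initial_receive_window(default=16384, mss=1460, scaling=False):
    """Round the default window up to a whole number of MSS-sized
    segments, enforce a floor of 4 * MSS, and cap at 64 KB unless
    RFC 1323 window scaling is in effect."""
    window = math.ceil(default / mss) * mss
    window = max(window, 4 * mss)
    return window if scaling else min(window, 65535)

def true_window(advertised, shift):
    """RFC 1323: the real window is the 16-bit advertised value
    left-shifted by the negotiated scale factor."""
    return advertised << shift

assert initial_receive_window() == 17520        # the Ethernet value quoted above
assert true_window(0x7FFF, 5) == 1_048_544      # 32767 * 32 = 0xFFFE0
```

The scale factor of 5 corresponds to the `03 03 05` window-scale option bytes visible in the handshake capture.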
Any router or media along the path can have an MTU that differs from that of the two hosts. If a media segment has an MTU that is too small for the IP datagram being routed, the router attempts to fragment the datagram accordingly. It then finds that the Don't Fragment bit is set in the IP header. At this point, the router should inform the sending host, with an ICMP Destination Unreachable (fragmentation needed) message, that the datagram cannot be forwarded without fragmentation; the sender can then lower the MTU it uses for that destination. Windows 2000 TCP enforces this limit. Some noncompliant routers may silently drop IP datagrams that cannot be fragmented or may not correctly report their next-hop MTU. If this occurs, it may be necessary to make a configuration change to the PMTU detection algorithm. There are two registry changes that can be made to the TCP/IP stack in Windows 2000 to work around these problematic devices. These registry entries are described in more detail in Appendix A: EnablePMTUBHDetect—Adjusts the PMTU discovery algorithm to attempt to detect black hole routers. The PMTU between two hosts can be discovered manually using the ping command:

Packets: Sent = 1, Received = 0, Lost = 1 (100% loss),
Approximate round trip times in milliseconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

In the example shown above, the IP layer returned an ICMP error message that ping interpreted. If the router had been a black hole router, ping would simply not be answered once its size exceeded the MTU that the router could handle. Ping can be used in this manner to detect such a router. A sample ICMP Destination Unreachable error:

ICMP: (IP) Version = 4 (0x4)
ICMP: (IP) Header Length = 20 (0x14)
ICMP: (IP) Service Type = 0 (0x0)
ICMP: Precedence = Routine
ICMP: ...0.... = Normal Delay
ICMP: ....0... = Normal Throughput
ICMP: .....0.. = Normal Reliability
ICMP: (IP) Total Length = 1028 (0x404)
ICMP: (IP) Identification = 45825 (0xB301)
ICMP: Flags Summary = 2 (0x2)
ICMP: .......0 = Last fragment in datagram
ICMP: ......1.
= Cannot fragment datagram
ICMP: (IP) Fragment Offset = 0 (0x0) bytes
ICMP: (IP) Time to Live = 32 (0x20)
ICMP: (IP) Protocol = ICMP - Internet Control Message
ICMP: (IP) Checksum = 0xC91E
ICMP: (IP) Source Address = 10.99.99.9
ICMP: (IP) Destination Address = 10.99.99.10
ICMP: (IP) Data: Number of data bytes remaining = 8 (0x0008)
ICMP: Description of original ICMP frame
ICMP: Checksum = 0xBC5F
ICMP: Identifier = 256 (0x100)
ICMP: Sequence Number = 38144 (0x9500)
00000: 00 AA 00 4B B1 47 00 AA 00 3E 52 EF 08 00 45 00 ...K.G...>R...E.
00010: 00 38 44 01 00 00 80 01 1B EB 0A 63 63 0A 0A 63 .8D........cc..c
00020: 63 09 03 04 A0 5B 00 00 02 40 45 00 04 04 B3 01 c....[...@E.....
00030: 40 00 20 01 C9 1E 0A 63 63 09 0A 63 63 0A 08 00 @. ....cc..cc...
00040: BC 5F 01 00 95 00

The next-hop MTU reported in the ICMP message is 0x240, or 576 bytes. Dead Gateway Detection: Dead gateway detection is used to allow TCP to detect failure of the default gateway and to adjust the IP routing table to use another default gateway. The Microsoft TCP/IP stack uses the triggered reselection method described in RFC 816, with slight modifications based upon customer experience. Several related retransmission behaviors, discussed further in RFC 1122, are implemented by Windows NT and Windows 2000 TCP/IP and can be seen in a trace captured by Microsoft Network Monitor. The trace was captured by using PPP to dial up an Internet provider at 9600 BPS. A Telnet (character-mode) session was established, and then the Y key was held down on the Windows NT Workstation. The length of time that the socket-pair should not be reused is specified by RFC 793 as 2 MSL (two maximum segment lifetimes), or four minutes. This is the default setting for Windows NT and Windows 2000. However, with this default setting, some network applications that perform many outbound connections in a short time may use up all available ports before the ports can be recycled. Windows NT and Windows 2000 offer two methods of controlling this behavior.
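The port-exhaustion problem described above is easy to quantify with the default values from the text. This is back-of-the-envelope arithmetic, not a measurement.

```python
FIRST_EPHEMERAL_PORT = 1024
LAST_EPHEMERAL_PORT = 5000     # default upper bound for outbound ports
TWO_MSL_SECONDS = 4 * 60       # RFC 793: 2 MSL = four minutes

ports = LAST_EPHEMERAL_PORT - FIRST_EPHEMERAL_PORT + 1   # 3977 usable ports

# Every closed connection parks its socket-pair in TIME_WAIT for 2 MSL,
# so the sustainable rate of new outbound connections is roughly:
rate = ports / TWO_MSL_SECONDS
print(f"{ports} ports / {TWO_MSL_SECONDS} s = {rate:.1f} connections/sec")
```

An application that opens connections faster than about 16 per second will eventually find no ephemeral port available, which is exactly the situation the two registry controls below are meant to relieve.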
First, the TcpTimedWaitDelay registry parameter can be used to alter this value. Windows NT and Windows 2000 allow it to be set as low as 30 seconds, which should not cause problems in most environments. Second, the number of user-accessible ephemeral ports that can be used to source outbound connections is configurable using the MaxUserPorts registry parameter. By default, when an application requests any socket from the system to use for an outbound call, a port between the values of 1024 and 5000 is supplied. The MaxUserPorts parameter is described in Appendix A. See also TcpWindowSize in Appendix A. Throughput can never exceed window size divided by round-trip time. If the link is unreliable or badly congested and packets are being dropped, using a larger window size may not improve throughput. Along with scaling windows support, Windows NT and Windows 2000 TCP/IP can adapt to most network conditions and can dynamically provide the best throughput and reliability possible on a per-connection basis. Attempts at manual tuning are often counter-productive unless a qualified network engineer first performs a careful study of data flow. Windows NT and Windows 2000 transports support TDI (DLC, however, does not). Documentation on TDI is available in the Windows 2000 Device Driver Kit (DDK). Network security is a serious consideration for administrators with machines exposed to public networks. Microsoft's TCP/IP stack has been hardened against many attacks and in its default state handles most of the common attacks. Some additional protection against popular Denial of Service attacks can be added by enabling the SynAttackProtect key in the registry. This key allows the administrator to choose several levels of protection against SYN attacks. Here are general guidelines that can lower your vulnerability to attack: Disable unnecessary or optional services (for instance, Client for Microsoft Networks on an IIS server).
Enable TCP/IP filtering and restrict access to only the ports that are necessary for the server to function. (See Microsoft Knowledge Base article number 150543 for a list of ports that Windows services use.) Unbind NetBIOS over TCP/IP where it is not needed. Configure static IP addresses and parameters for public adapters. Configure registry settings for maximum protection (see Appendix D). Consult the Microsoft Security Web site regularly for security bulletins. A quick overview of the Windows Sockets Interface and the NetBIOS Interface is presented here. Windows Sockets specifies a programming interface based on the familiar socket interface from the University of California at Berkeley. It includes a set of extensions designed to take advantage of the message-driven nature of Microsoft Windows. Version 1.1 of the specification was released in January 1993, and version 2.2.0 was published in May of 1996. Windows 2000 supports version 2.2, commonly referred to as Winsock2. IP multicasting is currently supported only on AF_INET sockets of the types SOCK_DGRAM and SOCK_RAW. Windows NT 3.51 accepts a backlog of up to 100 pending connections, Windows NT 4.0 and Windows 2000 Server accept a backlog of 200, and Windows NT 4.0 Workstation and Windows 2000 Professional accept a backlog of 5 (which reduces memory demands). NetBIOS defines a software interface and a naming convention, not a protocol. Early versions of Microsoft networking products provided only the NetBEUI local area networking protocol with a NetBIOS application-programming interface. NetBEUI is a small, fast protocol with no networking layer; thus, it is not routable and is often not suitable for WAN implementations. NetBEUI relies on broadcasts (described earlier in this paper) for many of its functions. NetBIOS over TCP/IP (NetBT) carries the NetBIOS programming interface over the routable TCP/IP protocol. Windows NT and Windows 2000 also include a NetBIOS emulator.
The emulator takes standard NetBIOS requests from NetBIOS applications and translates them to equivalent TDI primitives. Windows 2000 still uses NetBIOS over TCP/IP to communicate with prior versions of Windows NT and other clients, such as Windows 95. However, the Windows 2000 redirector and server components now also support direct hosting to communicate with other computers running Windows 2000. Direct hosting uses the DNS for name resolution. No NetBIOS name resolution (WINS or broadcast) is used, and the protocol is simpler. Direct hosting uses TCP port 445 instead of the traditional NetBIOS over TCP port 139. To disable NetBIOS over TCP/IP, from the Start menu point to Settings, and then click Network and Dial-up Connections. Right-click Local Area Connection and click Properties. Select Internet Protocol (TCP/IP), and click Properties. Click Advanced. Click the WINS tab, and select Disable NetBIOS over TCP/IP. Applications and services that depend on NetBIOS no longer function after this is done, so it is important that you verify that any clients and applications no longer need NetBIOS support before you disable it. For example, pre-Windows 2000 computers will be unable to browse, locate, or create file and print share connections to a Windows 2000 computer with NetBIOS disabled. NetBIOS Names.
Table 3 Examples of NetBIOS names used by Microsoft components

Unique name                 Service
computer_name [00h]         Workstation service
computer_name [03h]         Messenger service
computer_name [06h]         RAS Server service
computer_name [1Fh]         NetDDE service
computer_name [20h]         Server service
computer_name [21h]         RAS Client service
computer_name [BEh]         Network Monitor Agent
computer_name [BFh]         Network Monitor Application
user_name [03h]             Messenger service
domain_name [1Dh]           Master Browser
domain_name [1Bh]           Domain Master Browser

Group name                  Service
domain_name [00h]           Domain name
domain_name [1Ch]           Domain controllers
domain_name [1Eh]           Browser service elections
\\--__MSBROWSE__[01h]       Master browser

To see which names a computer has registered over NetBT, type the following from a command prompt: nbtstat -n. Windows 2000 also allows you to re-register names with the name server without rebooting, using nbtstat -RR. A client that cannot reach a name server can fall back to broadcasts, and switches back to p-node behavior when a name server becomes available. Microsoft-enhanced node types use the local Lmhosts file or WINS proxies plus Windows Sockets gethostbyname calls (using standard DNS and/or local Hosts files) in addition to the standard node types. Microsoft ships a NetBIOS name server known as the Windows Internet Name Service (WINS). Most WINS clients are set up as h-nodes; that is, they first attempt to register and resolve names using WINS, and if that fails, they try local subnet broadcasts. Using a name server to locate resources is generally preferable to broadcasting for two reasons: Broadcasts are not usually forwarded by routers. Broadcasts are received by all computers on a subnet, requiring processing time at each computer. NetBIOS Name Registration and Resolution for Multihomed Computers: As mentioned, NetBT binds to only one IP address per physical network interface. From the NetBT viewpoint, a computer is multihomed only if it has more than one NIC. When a name resolution yields a list of IP addresses and one of the addresses meets the selection criteria, that address is selected. If more than one of the addresses meets the criteria, one is picked at random from those that match.
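The unique-name suffixes in Table 3 can be captured in a small lookup table, which is handy when reading nbtstat -n output by hand. A Python sketch (the formatting of real nbtstat output differs):

```python
# 16th-byte suffixes for unique names, taken from Table 3 above.
UNIQUE_SUFFIXES = {
    0x00: "Workstation service",
    0x03: "Messenger service",
    0x06: "RAS Server service",
    0x1F: "NetDDE service",
    0x20: "Server service",
    0x21: "RAS Client service",
    0xBE: "Network Monitor Agent",
    0xBF: "Network Monitor Application",
}

def describe(name, suffix):
    """One nbtstat-style line for a registered unique name."""
    service = UNIQUE_SUFFIXES.get(suffix, "unknown")
    return f"{name:<15} <{suffix:02X}>  UNIQUE  {service}"

print(describe("DAVEMAC2", 0x20))
```

For example, a registered name with suffix 20h identifies the Server service on that computer.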
If one of the IP addresses in the list is on the same (classless) network as an interface of the calling computer, that address is preferred. Windows 2000 supports per-interface NetBT name caching; nbtstat -c displays the cache. An IP address can also be used directly in commands, for example:

net use \\198.105.232.1\data
net view \\198.105.232.1
dir \\\bussys\winnt

In addition, various applications, such as the Event Viewer Select Computer option on the Log menu, allow you to enter an FQDN or IP address directly. In Windows 2000, it is also possible to use direct hosting to establish redirector or server connections between Windows 2000 computers without the use of the NetBIOS namespace or mapping layer at all. By default, Windows attempts to make connections using both methods so that it can support connections to lower-level computers. However, in Windows 2000–only environments, you can disable NetBIOS completely from the Network Connections folder. The new interface in Windows 2000 that makes NetBIOS-less operation possible uses TCP port 445 in place of the traditional NetBIOS over TCP port 139. NetBIOS over TCP Sessions: NetBIOS datagrams are sent to the broadcast media access control address (FFFFFFFFFFFF); the source media access control address is that of the sending NIC. When NetBIOS over TCP/IP is disabled in Windows 2000 (as described earlier in this section), NetBIOS datagram services are not available. The focus of this paper is on core TCP/IP stack components, not on the many available services that use it. However, the stack itself relies upon a few services for configuration information and name and address resolution. A few of these critical client services are discussed here. One of the most important client services is the Dynamic Host Configuration Protocol (DHCP) client. The DHCP client has an expanded role in Windows 2000. By default, a Microsoft TCP/IP client is installed and set to dynamically obtain TCP/IP protocol configuration information from a DHCP server (instead of using manually configured parameters). Many TCP/IP networks use DHCP servers that are administratively configured to hand out information to clients on the network.
If this attempt to locate a DHCP server fails, the Windows 2000 DHCP client autoconfigures its stack with a selected IP address from the IANA-reserved class B network 169.254.0.0 with the subnet mask 255.255.0.0. It continues to check for a DHCP server in the background every 5 minutes. If a DHCP server is found, the autoconfiguration information is abandoned, and the configuration offered by the DHCP server is used instead. This autoconfiguration feature is known as Automatic Private IP Addressing (APIPA) and allows single subnet home office or small office networks to use TCP/IP without static configuration or the administration of a DHCP server. Windows 2000 relies on the network interface card (NIC) to notify the protocol stack of media connect and media disconnect events, and it can react when a Windows 2000 computer is unplugged from the network (assuming the NIC supports these media sense notifications). Windows 2000 includes support for dynamic updates to DNS as described in RFC 2136. Every time there is an address event (new address or renewal), the DHCP client sends option 81 and its fully qualified name to the DHCP server, and requests the DHCP server to register a DNS pointer resource record (PTR RR) on its behalf. The dynamic update client handles the A RR registration itself. Settings for the dynamic update DNS client are documented in Appendix C. The Windows 2000 DHCP server handles option 81 requests as specified in the draft RFC. If a Windows 2000 DHCP client talks to a down-level DHCP server that does not handle option 81, it registers a PTR RR on its own. The Windows 2000 DNS server is capable of handling dynamic updates. Statically configured (non-DHCP) clients register both the A RR and the PTR RR with the DNS server themselves. Windows 2000 includes a caching DNS resolver service, which is enabled by default. For troubleshooting purposes, this service can be viewed, stopped, and started like any other Windows service. Many network troubleshooting tools are available for Windows. Most are included in the product or the Windows 2000 Server Resource Kit.
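The APIPA address selection described above can be sketched with the standard ipaddress module. The real client also ARPs for the candidate address to make sure it is not already in use on the subnet; that duplicate-address check is omitted from this sketch.

```python
import random
import ipaddress

APIPA = ipaddress.ip_network("169.254.0.0/16")   # IANA-reserved block

def pick_candidate(rng=random):
    """Pick a host address from 169.254/16, skipping the all-zeros
    and all-ones host numbers (duplicate detection not shown)."""
    host = rng.randrange(1, APIPA.num_addresses - 1)
    return APIPA.network_address + host

addr = pick_candidate()
print(addr, "mask", APIPA.netmask)   # e.g. 169.254.x.y mask 255.255.0.0
```

A host configured this way can reach other APIPA hosts on the same subnet while it keeps polling for a DHCP server in the background.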
Microsoft Network Monitor is an excellent network-tracing tool. The full version is part of the Microsoft Systems Management Server product, and a more limited version is included in the Windows 2000 Server product. When troubleshooting any problem, it is helpful to use a logical approach. Some questions to ask are: What does work? What does not work? How are the things that do and do not work related? Have the things that do not work ever worked on this computer/network? If so, what has changed since it last worked? Troubleshooting a problem from the bottom up is often a good way to isolate the problem quickly. The tools listed below are organized for this approach. IPConfig is a command-line utility that prints out the TCP/IP-related configuration of a host. When used with the /all switch, it produces a detailed configuration report for all interfaces, including any configured serial ports (RAS). Output can be redirected to a file and pasted into other documents:

C:\>ipconfig /all

Windows 2000 IP configuration:

Host Name . . . . . . . . . . . . : DAVEMAC2
Primary DNS Suffix  . . . . . . . : mytest.microsoft.com
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : microsoft.com

Ethernet adapter Local Area Connection 2:

Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : 3Com EtherLink III EISA (3C579-TP)
Physical Address. . . . . . . . . : 00-20-AF-1D-2B-91
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IP Address. . . . . . . . . . . . : 10.57.8.190
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DNS Servers . . . . . . . . . . . : 10.57.9.254
Primary WINS Server . . . . . . . : 10.57.9.254

Ethernet adapter Local Area Connection:

Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : AMD Family PCI Ethernet Adapter
Physical Address. . . . . . . . .
: 00-80-5F-88-60-9A
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 199.199.40.22
Autoconfiguration Enabled . . . . : Yes
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 199.199.40.1
DNS Servers . . . . . . . . . . . : 199.199.40.254
Primary WINS Server . . . . . . . : 199.199.40.254

Ping. The Pathping command is a route-tracing tool that combines features of the ping and tracert commands with additional information that neither of those tools provides. The Pathping command sends packets to each router on the way to a final destination over a given period of time, and then computes results based on the packets returned from each hop. Since the command shows the degree of packet loss at any given router or link, it is easy to determine which routers or links might be causing network problems. The switches -R and -T can be used with Pathping to determine whether the devices on the path are 802.1p-compliant and RSVP-aware. The following example illustrates the default output when tracing the route to [200.1.247.2] over a maximum of 30 hops:

 0  warren.microsoft.com [163.15.2.217]
 1  tnt2.seattle2.wa.da.uu.net [206.115.150.106]
 2  206.115.169.217
 3  119.ATM1-0-0.HR2.SEA1.ALTER.NET [152.63.104.38]
 4  412.atm11-0.gw1.sea1.ALTER.NET [137.39.13.73]
 5  teleglobe2-gw.customer.ALTER.NET [157.130.177.222]
 6  if-0-3.core1.Seattle.Teleglobe.net [207.45.222.37]
 7  if-1-3.core1.Burnaby.Teleglobe.net [207.45.223.113]
 8  if-1-2.core1.Scarborough.Teleglobe.net [207.45.222.189]
 9  if-2-1.core1.Montreal.Teleglobe.net [207.45.222.121]
10  if-3-1.core1.PennantPoint.Teleglobe.net [207.45.223.41]
11  if-5-0-0.bb1.PennantPoint.Teleglobe.net [207.45.222.94]
12  BOSQUE-aragorn.tecoint.net [200.43.189.230]
13  ARAGORN-bosque.tecoint.net [200.43.189.229]
14  GANDALF-aragorn.tecoint.net [200.43.189.225]
15  Startel.tecoint.net [200.43.189.18]
16  200.26.9.245
17  200.26.9.26
18  200.1.247.2

Computing statistics for 450 seconds: Source to Here This
Node/Link Hop RTT Lost/Sent = Pct Lost/Sent = Pct Address 0 warren.microsoft.com [63.15.2.217] 0/ 100 = 0% | 1 115ms 0/ 100 = 0% 0/ 100 = 0% tnt2.seattle2.wa.da.uu.net [206.115.150.106] 0/ 100 = 0% | 2 121ms 0/ 100 = 0% 0/ 100 = 0% 206.115.169.217 0/ 100 = 0% | 3 122ms 0/ 100 = 0% 0/ 100 = 0% 119.ATM.ALTER.NET [152.63.104.38] 0/ 100 = 0% | 4 124ms 0/ 100 = 0% 0/ 100 = 0% 412.atm.sea1.ALTER.NET [137.39.13.73] 0/ 100 = 0% | 5 157ms 0/ 100 = 0% 0/ 100 = 0% teleglobe2-gw.ALTER.NET [157.130.177.222] 0/ 100 = 0% | 6 156ms 0/ 100 = 0% 0/ 100 = 0% if-0-3.Teleglobe.net [207.45.222.37] 0/ 100 = 0% | 7 198ms 0/ 100 = 0% 0/ 100 = 0% if-1-3.core1.Teleglobe.net [207.45.223.113] 0/ 100 = 0% | 8 216ms 0/ 100 = 0% 0/ 100 = 0% if-1-2.core1. Teleglobe.net [207.45.222.189] 0/ 100 = 0% | 9 207ms 0/ 100 = 0% 0/ 100 = 0% if-2-1.Teleglobe.net [207.45.222.121] 0/ 100 = 0% | 10 220ms 0/ 100 = 0% 0/ 100 = 0% if-3-1.core1.Teleglobe.net [207.45.223.41] 0/ 100 = 0% | 11 240ms 0/ 100 = 0% 0/ 100 = 0% if-5-0-0.bb1.Teleglobe.net [207.45.222.94] 0/ 100 = 0% | 12 423ms 1/ 100 = 1% 1/ 100 = 1% BOSQUE-aragorn.tecoint.net [200.43.189.230] 0/ 100 = 0% | 13 412ms 0/ 100 = 0% 0/ 100 = 0% ARAGORN-bosque.tecoint.net [200.43.189.229] 0/ 100 = 0% | 14 415ms 1/ 100 = 1% 1/ 100 = 1% GANDALF-aragorn.tecoint.net [200.43.189.225] 0/ 100 = 0% | 15 578ms 0/ 100 = 0% 0/ 100 = 0% Startel.tecoint.net [200.43.189.18] 2/ 100 = 2% | 16 735ms 2/ 100 = 2% 0/ 100 = 0% 200.26.9.245 5/ 100 = 5% | 17 1005ms 8/ 100 = 8% 1/ 100 = 1% 200.26.9.26 0/ 100 = 0% | 18 1089ms 7/ 100 = 7% 0/ 100 = 0% 200.1.247.2 Trace complete. When Pathping is run, you first see the results for the route as it is tested for problems. This is the same path as that shown by the tracert command. The Pathping command then displays a busy message for the next 450 seconds (this time varies by the hop count). During this time, Pathping gathers information from all the routers previously listed and from the links between them. 
At the end of this period, it displays the test results. The two right-most columns—This Node/Link Lost/Sent=Pct and Address—contain the most useful information. The link between 200.26.9.245 (hop 16) and 200.26.9.26 (hop 17) is dropping 8 percent of the packets. The loss rates displayed for the links (marked as a | in the right-most column) indicate losses of packets being forwarded along the path. This loss indicates link congestion. The loss rates displayed for routers (indicated by their IP addresses in the right-most column) indicate that those routers' CPUs might be overloaded. Congested routers can also be a factor in end-to-end problems. Route is used to view or modify the route table. Route print displays a list of current routes known by IP for the host. Sample output is shown in the IP section of this document. Note that in Windows 2000 the current active default gateway is shown at the end of the list of routes. Route add adds routes to the table. Route delete removes routes from the table. Routes added to the table are not made persistent unless the -p switch is specified. Nonpersistent routes last only until the computer is rebooted. For two hosts to exchange IP datagrams, they must both have a route to each other, or they must use a default gateway that knows of a route. Normally, routers exchange information with each other by using a protocol such as Routing Information Protocol (RIP) or Open Shortest Path First (OSPF). Silent RIP is available for Windows 2000 Professional, and full routing protocols are supported by Windows 2000 Server in the Routing and Remote Access service. Netstat displays protocol statistics and current TCP/IP connections. Netstat -a displays all connections, and netstat -r displays the route table and any active connections. The -n switch tells netstat not to convert addresses and port numbers to names, which speeds up execution.
The -e switch displays Ethernet statistics and may be combined with the -s switch, which shows protocol statistics. Sample output is shown here: C:\>netstat -e Interface statistics: Received Sent Bytes 372959625 123567086 Unicast packets 134302 145204 Non-unicast packets 55937 886 Discards 0 0 Errors 0 0 Unknown protocols 1757381 C:\>netstat :1054 0.0.0.0:0 LISTENING TCP 0.0.0.0:1077 0.0.0.0:0 LISTENING TCP 0.0.0.0:1080 0.0.0.0:0 LISTENING TCP 0.0.0.0:1088 0.0.0.0:0 LISTENING TCP 0.0.0.0:1092:42 *:* UDP 0.0.0.0:88 *:* UDP 0.0.0.0:123 *:* UDP 0.0.0.0:135 *:* UDP 0.0.0.0:389 *:* UDP 0.0.0.0:445 *:* UDP 0.0.0.0:1073 *:* UDP 0.0.0.0:1076 *:* UDP 0.0.0.0:1087 *:* UDP 10.99.99.1:53 *:* UDP 10.99.99.1:67 *:* UDP 10.99.99.1:137 *:* UDP 10.99.99.1:138 *:* UDP 127.0.0.1:1052 *:* NBTStat is a useful tool for troubleshooting NetBIOS name-resolution problems. NBTStat -n displays the names that applications, such as the server and redirector, registered locally on the system. NBTStat -c shows the NetBIOS name cache, which contains name-to-address mappings for other computers. NBTStat -R purges the name cache and reloads it from the Lmhosts file. NBTStat –RR (new in Windows 2000 and NT 4.0 SP5) re-registers all names with the name server. NBTStat -a name performs a NetBIOS adapter status command against the computer that is specified by name. The adapter status command returns the local NetBIOS name table for that computer and the media access control address of the adapter card. NBTStat -s lists the current NetBIOS sessions and their status, including statistics. Nslookup, added in Windows NT 4.0, is a useful tool for troubleshooting DNS problems, such as host name resolution. When you start nslookup, it shows the host name and IP address of the DNS server that is configured for the local system, and then displays a command prompt. If you type a question mark (?), nslookup shows the different commands that are available. 
To look up the IP address of a host using the DNS, type the host name and press Enter. Nslookup defaults to the DNS server that is configured for the computer that it is running on, but you can focus it on a different DNS server by typing server name (where name is the host name of the server that you want to use for future lookups). When you use nslookup, you should be aware of the domain name devolution method. If you type in just a host name and press Enter, nslookup appends the domain suffix of the computer (such as cswatcp.microsoft.com) to the host name before it queries the DNS. If the name is not found, the domain suffix is devolved by one label (in this case, cswatcp is removed, and the suffix becomes microsoft.com), and the query is repeated. Windows 2000-based computers only devolve names to the second-level domain (microsoft.com in this example), so if this query fails, no further attempts are made to resolve the name. If a fully qualified domain name is typed in (as indicated by a trailing dot), the DNS server is only queried for that name and no devolution is performed. To look up a host name that is completely outside of your domain, you must type in a fully qualified name. An especially useful troubleshooting feature is debug mode, which you can invoke by typing set debug, or for even greater detail, set d2. In debug mode, nslookup lists the steps being taken to complete its commands, as shown in this example:

C:\>nslookup
(null)  davemac3.cswatcp.microsoft.com
Address:  10.57.8.190
> set d2
> rain-city
(null)  davemac3.cswatcp.microsoft.com
Address:  10.57.8.190
------------
SendRequest(), len 49
HEADER:
    opcode = QUERY, id = 2, rcode = NOERROR
    header flags:  query, want recursion
    questions = 1,  answers = 0,  authority records = 0,  additional = 0
QUESTIONS:
    rain-city.cswatcp.microsoft.com, type = A, class = IN
------------
Got answer (108 bytes):
HEADER:
    code = QUERY, id = 2, rcode = NOERROR
    header flags:  response, auth.
answer, want recursion, recursion avail.
    questions = 1,  answers = 2,  authority records = 0,  additional = 0
QUESTIONS:
    rain-city.cswatcp.microsoft.com, type = A, class = IN
ANSWERS:
    -> rain-city.cswatcp.microsoft.com
       type = CNAME, class = IN, dlen = 31
       canonical name = seattle.cswatcp.microsoft.com
       ttl = 86400 (1 day)
    -> seattle.cswatcp.microsoft.com
       type = A, class = IN, dlen = 4
       internet address = 10.1.2.3
       ttl = 86400 (1 day)
------------
(null)  seattle.cswatcp.microsoft.com
Address:  10.1.2.3
Aliases:  rain-city.cswatcp.microsoft.com

In this example, set d2 was issued to set nslookup to debug mode, then an address lookup was performed for the host name rain-city. The first two lines of output show the host name and IP address of the DNS server to which the lookup was sent. As the next paragraph shows, the domain suffix of the local machine (cswatcp.microsoft.com) was appended to the name rain-city, and nslookup submitted this question to the DNS server. The next paragraph indicates that nslookup received an answer from the DNS and that there were two answer records in response to one question. The question is repeated in the response, along with the two answer records. In this case, the first answer record indicates that the name rain-city.cswatcp.microsoft.com is actually a cname, or canonical name (alias), for the host name seattle.cswatcp.microsoft.com. The second answer record lists the IP address for that host as 10.1.2.3.

Microsoft Network Monitor is a tool developed by Microsoft to make the task of troubleshooting complex network problems easier and more economical. It is packaged as part of the Microsoft Systems Management Server product, but can be used as a stand-alone network monitor. In addition, Windows NT and Windows 95 include Network Monitor Agent software, and Windows NT Server and Windows 2000 include a limited version of Network Monitor.
Stations running Network Monitor can attach to stations running the agent software over the network or by using dial-up (remote access) to perform monitoring or tracing of remote network segments. This can be a very useful troubleshooting tool. Network Monitor works by placing the NIC on the capturing host into promiscuous mode so that it passes every frame on the wire up to the tracing tool. (The limited version of Network Monitor that ships with Windows 2000 Server allows only traffic to and from the computer to be traced.) Capture filters can be defined so that only specific frames are saved for analysis. Filters can be defined based on source and destination NIC addresses, source and destination protocol addresses, and pattern matches. Once the frames have been captured, display filtering can be used to further narrow down a problem. Display filtering allows specific protocols to be selected as well. Windows NT–based computers use the Server Message Block (SMB) protocol for many functions, including file and print sharing. The smb.hlp file in the Netmon parser directory is a good reference for interpreting this protocol. For the latest information on Windows 2000 Server, check out the Microsoft Web site.

The TCP/IP protocol suite implementation for Windows 2000 obtains all of its configuration data from the registry. This information is written to the registry by the Setup program. Some of this information is also supplied by the Dynamic Host Configuration Protocol (DHCP) client service, if it is enabled. This appendix defines all of the registry parameters used to configure the protocol driver, Tcpip.sys, which implements the standard TCP/IP network protocols. The implementation of the protocol suite should perform properly and efficiently in most environments using only the configuration information gathered by Setup and DHCP. Optimal default values for all other configurable aspects of the protocols for most cases have been encoded into the drivers.
Some customer installations may require changes to certain default values. To handle these cases, optional registry parameters can be created to modify the default behavior of some parts of the protocol drivers. Note: The Windows TCP/IP implementation is largely self-tuning. Adjusting registry parameters may adversely affect system performance.

All of the TCP/IP parameters are registry values located under the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters. Adapter-specific values are listed under subkeys for each adapter. Depending on whether the system or adapter is DHCP-configured and on whether static override values are specified, parameters may have both DHCP and statically configured values. If any of these parameters are changed using the registry editor, a reboot of the system is generally required for the change to take effect. A reboot is usually not required if values are changed using the network connections interface. The following parameters receive default values during the installation of the TCP/IP components. To modify any of these values, use the Registry Editor (Regedt32.exe). A few of the parameters are visible in the registry by default, but most must be created to modify the default behavior of the TCP/IP protocol driver. Parameters configurable from the user interface are listed separately.

AllowUserRawAccess Key: Tcpip\Parameters Value Type: REG_DWORD—Boolean Valid Range: 0, 1 (False, True) Default: 0 (False) Description: This parameter controls access to raw sockets. If true, non-administrative users have access to raw sockets. By default, only administrators have access to raw sockets. For more information on raw sockets, see the Windows Sockets specifications.
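The DHCP/static precedence noted above (a statically configured value overrides the corresponding DHCP-supplied value, as with DefaultGateway over DhcpDefaultGateway) can be sketched as a one-line lookup. This is an illustration only; the function name is invented and this is not part of any Windows API.

```python
def resolve_parameter(static_value, dhcp_value):
    """Return the effective value of a TCP/IP parameter.

    A statically configured value, when present, overrides the value
    supplied by the DHCP client (for example, DefaultGateway overrides
    DhcpDefaultGateway).
    """
    return static_value if static_value is not None else dhcp_value

# Static override present: the DHCP-supplied gateway is ignored.
print(resolve_parameter("10.0.0.1", "10.0.0.254"))  # 10.0.0.1
# No static value: the DHCP-supplied gateway is used.
print(resolve_parameter(None, "10.0.0.254"))        # 10.0.0.254
```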
ArpAlwaysSourceRoute Valid Range: 0, 1, or not present (false, true, or not present) Default: not present Description: By default, the stack transmits ARP queries without source routing first and retries with source routing enabled if no reply is received. Setting this parameter to 0 causes all IP broadcasts to be sent without source routing. Setting this parameter to 1 forces TCP/IP to transmit all ARP queries with source routing enabled on Token Ring networks. (A change to the definition of the parameter was introduced in Windows NT 4.0 SP2.) ArpCacheLife ArpCacheMinReferencedLife Default: 600 seconds (10 minutes) Description: ArpCacheMinReferencedLife controls the minimum time until a referenced ARP cache entry expires. This parameter can be used in combination with the ArpCacheLife parameter, as follows: If ArpCacheLife is greater than or equal to ArpCacheMinReferencedLife, referenced and unreferenced ARP cache entries expire in ArpCacheLife seconds. If ArpCacheLife is less than ArpCacheMinReferencedLife, unreferenced entries expire in ArpCacheLife seconds, and referenced entries expire in ArpCacheMinReferencedLife seconds. Entries in the ARP cache are referenced each time that an outbound packet is sent to the IP address in the entry. ArpRetryCount Value Type: REG_DWORD—Number Valid Range: 1–3 Default: 3 Description: This parameter controls the number of times that the computer sends a gratuitous ARP for its own IP address(es) while initializing. Gratuitous ARPs are sent to ensure that the IP address is not already in use elsewhere on the network. The value controls the actual number of ARPs sent, not the number of retries. ArpTRSingleRoute Valid Range: 0, 1 (false, true) Default: 0 (false) Description: Setting this parameter to 1 causes ARP broadcasts that are source-routed (Token Ring) to be sent as single-route broadcasts, instead of all-routes broadcasts. 
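The interaction between ArpCacheLife and ArpCacheMinReferencedLife described above reduces to a simple rule, sketched here (illustrative Python, not Windows code; the 120-second value in the example is an assumed ArpCacheLife setting, not a documented default):

```python
def arp_expiry_seconds(arp_cache_life, arp_cache_min_referenced_life, referenced):
    """Effective lifetime of an ARP cache entry, per the rules above.

    If ArpCacheLife >= ArpCacheMinReferencedLife, every entry expires in
    ArpCacheLife seconds. Otherwise unreferenced entries expire in
    ArpCacheLife seconds and referenced entries in
    ArpCacheMinReferencedLife seconds.
    """
    if arp_cache_life >= arp_cache_min_referenced_life:
        return arp_cache_life
    return arp_cache_min_referenced_life if referenced else arp_cache_life

# With ArpCacheLife = 120 and the default 600-second minimum referenced
# life, a referenced entry lives 600 seconds and an unreferenced one 120.
print(arp_expiry_seconds(120, 600, referenced=True))   # 600
print(arp_expiry_seconds(120, 600, referenced=False))  # 120
```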
ArpUseEtherSNAP Description: Setting this parameter to 1 forces TCP/IP to transmit Ethernet packets using 802.3 SNAP encoding. By default, the stack transmits packets in DIX Ethernet format. It always receives both formats. DatabasePath Value Type: REG_EXPAND_SZ—Character string Valid Range: A valid Windows NT file path Default: %SystemRoot%\system32\drivers\etc Description: This parameter specifies the path to the standard Internet database files (Hosts, Lmhosts, Network, Protocols, Services). It is used by the Windows Sockets interface. DefaultTTL Value Type: REG_DWORD—Number of seconds/hops Valid Range: 0–0xff (0–255 decimal) Default: 128 Description: Specifies the default time-to-live (TTL) value set in the header of outgoing IP packets. The TTL determines the maximum amount of time that an IP packet may live in the network without reaching its destination. It is effectively a limit on the number of routers that an IP packet is allowed to pass through before being discarded. DisableDHCPMediaSense Key: Tcpip\Parameters Description: This parameter can be used to control DHCP Media Sense behavior. If set to 1, the DHCP client will ignore Media Sense events from the interface. By default, Media Sense events trigger the DHCP client to take an action, such as attempting to obtain a lease (when a connect event occurs), or invalidating the interface and routes (when a disconnect event occurs). DisableIPSourceRouting Valid Range: 0, 1, 2 0 - forward all packets 1 - do not forward Source Routed packets 2 - drop all incoming Source Routed packets Default: 1 (true) Description: IP source routing is a mechanism allowing the sender to determine the IP route that a datagram should take through the network, used primarily by tools such as tracert.exe and ping.exe. This parameter was added to Windows NT 4.0 in Service Pack 5 (see Microsoft Knowledge Base article 217336). Windows 2000 disables IP source routing by default.
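DefaultTTL sets the system-wide default, but most socket APIs also allow a program to override the TTL per socket. A small illustration using Python's standard socket module (this shows the per-socket equivalent of the concept, not the Windows registry mechanism):

```python
import socket

# Create a UDP socket and set its time-to-live to 128, mirroring the
# Windows DefaultTTL of 128 for outgoing IP packets.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 128)

# Read the option back. Each router along the path decrements this
# value, and the packet is discarded when it reaches 0.
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))  # 128
s.close()
```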
DisableMediaSenseEventLog Description: This parameter can be used to disable logging of DHCP Media Sense events. By default, Media Sense events (connection/disconnection from the network) are logged in the event log for troubleshooting purposes. DisableTaskOffload Description: This parameter instructs the TCP/IP stack to disable offloading of tasks to the network card for troubleshooting and test purposes. DisableUserTOSSetting Description: This parameter can be used to allow programs to manipulate the Type Of Service (TOS) bits in the header of outgoing IP packets. In Windows 2000, this defaults to True. In general, individual applications should not be allowed to manipulate TOS bits, because this can defeat system policy mechanisms such as those described in the "Quality of Service (QoS) and Resource Reservation Protocol (RSVP)" section of this paper. DontAddDefaultGateway Key: Tcpip\Parameters\Interfaces\interface Default: 0 Description: When you install PPTP, a default route is installed for each LAN adapter. You can disable the default route on one of them by adding this value and setting it to 1. After doing so, you may need to configure static routes for hosts that are reached using a router other than the default gateway. EnableAddrMaskReply Description: This parameter controls whether the computer responds to an ICMP address mask request. EnableBcastArpReply Description: This parameter controls whether the computer responds to an ARP request when the source Ethernet address in the ARP is not unicast. Network Load Balancing Service (NLBS) will not work properly if this value is set to 0. EnableDeadGWDetect Key: Tcpip\Parameters. EnableICMPRedirects Value Type: REG_DWORD—Boolean Valid Range: 0, 1 (False, True) Default: 1 (True) for Beta 3.
Slated to change in RC1 to 1 (True) Recommendation: 0 (False) Description: This parameter controls whether Windows 2000 will alter its route table in response to ICMP redirect messages that are sent to it by network devices such as routers. EnableFastRouteLookup Description: Fast route look-up is enabled if this flag is set. This can make route lookups faster at the expense of non-paged pool memory. This flag is used only if the computer runs Windows 2000 Server and falls into the medium or large class (in other words, contains at least 64 MB of memory). This parameter is created by the Routing and Remote Access Service. EnableMulticastForwarding Description: The routing service uses this parameter to control whether or not IP multicasts are forwarded. This parameter is created by the Routing and Remote Access Service. EnablePMTUBHDetect Description: Setting this parameter to 1 (true) enables detection of "black hole" routers during Path MTU discovery: when retransmissions of a segment go unacknowledged, TCP retries the segment without the Don't Fragment bit set. If the segment is acknowledged as a result, the MSS is decreased and the Don't Fragment bit is set in future packets on the connection. Enabling black hole detection increases the maximum number of retransmissions that are performed for a given segment. EnablePMTUDiscovery Description:. FFPControlFlags Description: If this parameter is set to 1, Fast Forwarding Path (FFP) is enabled. If it is set to 0, TCP/IP instructs all FFP-capable adapters not to do any fast forwarding on this computer. Fast Forwarding Path–capable network adapters can receive routing information from the stack and forward subsequent packets in hardware without passing them up to the stack. FFP parameters are located in the TCP/IP registry key, but are actually placed there by the Routing and Remote Access Service (RRAS). See the RRAS documentation for more details.
FFPFastForwardingCacheSize Value Type: REG_DWORD—Number of bytes Default: 100,000 bytes Description: This is the maximum amount of memory that a driver that supports fast forwarding (FFP) can allocate for its fast-forwarding cache if it uses system memory for its cache. If the device has its own memory for fast-forwarding cache, this value is ignored. ForwardBufferMemory Valid Range: network MTU–some reasonable value smaller than 0xFFFFFFFF Default: 74240 (enough for fifty 1480-byte packets, rounded to a multiple of 256) Description: This parameter determines how much memory IP allocates initially to store packet data in the router packet queue. When this buffer space is filled, the system attempts to allocate more memory. This buffer memory is not used if the routing function is not enabled. The maximum amount of memory that can be allocated for this function is controlled by MaxForwardBufferMemory. GlobalMaxTcpWindowSize Valid Range: 0–0x3FFFFFFF (1073741823 decimal; however, values greater than 64 KB can only be achieved when connecting to other systems that support RFC 1323 window scaling, which is discussed in the TCP section of this document. Additionally, window scaling must be enabled using the Tcp1323Opts registry parameter.) Default: This parameter does not exist by default. Description: The TcpWindowSize parameter can be used to set the receive window on a per-interface basis. This parameter can be used to set a global limit for the TCP window size on a system-wide basis. This parameter is new in Windows 2000. IPAutoconfigurationAddress Key: Tcpip\Parameters\Interfaces\<interface> Value Type: REG_SZ—String Valid Range: A valid IP address Default: None Description: The DHCP client stores the IP address chosen by autoconfiguration here. This value should not be altered. IPAutoconfigurationEnabled Key: Tcpip\Parameters, Tcpip\Parameters\Interfaces\interface Description: This parameter enables or disables IP autoconfiguration.
See the "Automatic Client Configuration and Media Sense" section of this paper for details. This parameter can be set globally or per interface. If a per-interface value is present, it overrides the global value for that interface. IPAutoconfigurationMask Valid Range: A valid IP subnet mask Default: 255.255.0.0 Description: This parameter controls the subnet mask assigned to the client by autoconfiguration. See the "Automatic Client Configuration and Media Sense" section of this document for details. This parameter can be set globally or per interface. If a per-interface value is present, it overrides the global value for that interface. IPAutoconfigurationSeed Valid Range: 0-0xFFFF Description: This parameter is used internally by the DHCP client and should not be modified. IPAutoconfigurationSubnet Valid Range: A valid IP subnet Default: 169.254.0.0 Description: This parameter controls the subnet address used by autoconfiguration to pick an IP address for the client. See the "Automatic Client Configuration and Media Sense" section of this document for details. This parameter can be set globally or per interface. If a per-interface value is present, it overrides the global value for that interface. IGMPLevel Valid Range: 0,1,2 Default: 2 Description: This parameter determines to what extent the system supports IP multicasting and participates in the Internet Group Management Protocol. At level 0, the system provides no multicast support. At level 1, the system can send IP multicast packets but cannot receive them. At level 2, the system can send IP multicast packets and fully participate in IGMP to receive multicast packets. IPEnableRouter Description: Setting this parameter to 1 (true) causes the system to route IP packets between the networks to which it is connected. IPEnableRouterBackup Description: Setup writes the previous value of IPEnableRouter to this key. It should not be adjusted manually. 
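The autoconfiguration defaults above (IPAutoconfigurationSubnet 169.254.0.0 with IPAutoconfigurationMask 255.255.0.0) define the range an autoconfigured client picks from, so whether an address came from autoconfiguration can be checked with a simple membership test. A sketch using Python's standard ipaddress module:

```python
import ipaddress

# Default IPAutoconfigurationSubnet (169.254.0.0) combined with the
# default IPAutoconfigurationMask (255.255.0.0).
AUTOCONF_NET = ipaddress.ip_network("169.254.0.0/16")

def is_autoconfigured(addr):
    """True if addr falls in the range that autoconfiguration picks from."""
    return ipaddress.ip_address(addr) in AUTOCONF_NET

print(is_autoconfigured("169.254.17.5"))  # True  (autoconfigured address)
print(is_autoconfigured("10.99.99.1"))    # False (DHCP- or statically assigned)
```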
KeepAliveInterval Value Type: REG_DWORD—time in milliseconds Valid Range: 1–0xFFFFFFFF Default: 1000 (one second) Description: This parameter determines the interval between successive keep-alive retransmissions when no response is received. KeepAliveTime Default: 7,200,000 (two hours) Description:. MaxForwardBufferMemory Value Type: REG_DWORD—number of bytes Valid Range: network MTU–0xFFFFFFFF Default: 2097152 decimal (2 MB) Description: This parameter limits the total amount of memory that IP can allocate to store packet data in the router packet queue. This value must be greater than or equal to the value of the ForwardBufferMemory parameter. See the description of ForwardBufferMemory for more details. MaxForwardPending Key: Tcpip\Parameters\Interfaces\interface Value Type: REG_DWORD—number of packets Default: 0x1388 (5000 decimal) Description: This parameter limits the number of packets that the IP forwarding engine can submit for transmission to a specific network interface at any time. Additional packets are queued in IP until outstanding transmissions on the interface complete. Most network adapters transmit packets very quickly, so the default value is sufficient. A single RAS interface, however, may multiplex many slow serial lines. Configuring a larger value for this type of interface may improve its performance. The appropriate value depends on the number of outgoing lines and their load characteristics. MaxFreeTcbs Value Type: REG_DWORD—number Default: For Windows 2000 Server: Small system—500 Medium system—1000 Large system—2000 For Windows 2000 Professional: Small system—250 Medium system—500 Large system—1000 Description: This parameter controls the number of cached (pre-allocated) Transport Control Blocks (TCBs) that are available. A Transport Control Block is a data structure that is maintained for each TCP connection. MaxFreeTWTcbs Valid Range: 1-0xFFFFFFFF Default: 1000 Description: This parameter controls the number of Transport Control Blocks (TCBs) in the TIME-WAIT state that are allowed on the TIME-WAIT state list.
Once this number is exceeded, the oldest TCB will be scavenged from the list. In order to maintain connections in the TIME-WAIT state for at least 60 seconds, this value should be >= 60 * (the rate of graceful connection closures per second) for the computer. The default value is adequate for most cases. MaxHashTableSize Value Type: REG_DWORD—number (must be a power of 2) Valid Range: 0x40–0x10000 (64–65536 decimal) Default: 512 Description:. MaxNormLookupMemory Valid Range: Any DWORD (0xFFFFFFFF means no limit on memory.) Default: The following default values are used (Small is defined as a computer with less than 19 MB of RAM, Medium is 19–63 MB of RAM, and Large is 64 MB or more of RAM. Although this code still exists, nearly all computers are Large now): Small system—150,000 bytes, which accommodates 1000 routes Medium system—1,500,000 bytes, which accommodates 10,000 routes Large system—5,000,000 bytes, which accommodates 40,000 routes Description: This parameter controls the maximum amount of memory that the system allows for the route table data and the routes themselves. It is designed to prevent memory exhaustion on the computer caused by adding large numbers of routes. MaxNumForwardPackets Default: 0xFFFFFFFF Description: This parameter limits the total number of IP packet headers that can be allocated for the router packet queue. This value must be greater than or equal to the value of the NumForwardPackets parameter. See the description of NumForwardPackets for more details. MaxUserPorts Value Type: REG_DWORD—maximum port number Valid Range: 5000–65534 (decimal) Description:. MTU Valid Range: 88–the MTU of the underlying network Description: This parameter overrides the default Maximum Transmission Unit (MTU) for a network interface. The MTU is the maximum packet size, in bytes, that the transport can transmit over the underlying network. The size includes the transport header.
An IP datagram can span multiple packets. Values larger than the default for the underlying network cause the transport to use the network default MTU. Values smaller than 88 cause the transport to use an MTU of 88. NumForwardPackets Valid Range: 1–some reasonable value smaller than 0xFFFFFFFF Default: 0x32 (50 decimal) Description: This parameter determines the number of IP packet headers that are allocated initially for the router packet queue; the maximum is controlled by MaxNumForwardPackets. TcbTablePartitions Key: Tcpip\Parameters\ Value Type: REG_DWORD—number of TCB table partitions Valid Range: 1-0xFFFF Default: 4 Description: This parameter controls the number of partitions in the TCB table. PerformRouterDiscovery Value Type: REG_DWORD Valid Range: 0, 1, 2 0 (disabled) 1 (enabled) 2 (enable only if DHCP sends the router discover option) Default: 2, DHCP-controlled but off by default. Description: This parameter controls whether Windows 2000 attempts to perform router discovery per RFC 1256 on a per-interface basis. See also SolicitationAddressBcast. PerformRouterDiscoveryBackup Default: none Description: This parameter is used internally to keep a back-up copy of the PerformRouterDiscovery value. It should not be modified. PPTPTcpMaxDataRetransmissions Value Type: REG_DWORD—number of times to retransmit a PPTP packet Valid Range: 0–0xFF Default: 5 Description: This parameter controls the number of times that a PPTP packet is retransmitted if it is not acknowledged. This parameter was added to allow retransmission of PPTP traffic to be configured separately from regular TCP traffic. SackOpts Description: This parameter controls whether or not Selective Acknowledgment (SACK, specified in RFC 2018) support is enabled. SACK is described in more detail in the "Transmission Control Protocol (TCP)" section of this paper. SolicitationAddressBcast Value Type: REG_DWORD—Boolean Description: This parameter can be used to configure Windows to send router discovery messages as broadcasts instead of multicasts, as described in RFC 1256.
By default, if router discovery is enabled, router discovery solicitations are sent to the all-routers multicast group (224.0.0.2). See also PerformRouterDiscovery. SynAttackProtect. Recommendation: 2. Note that the actions taken by the protection mechanism only occur if TcpMaxHalfOpen and TcpMaxHalfOpenRetried settings are exceeded. Tcp1323Opts. Description: This parameter controls RFC 1323 time stamps and window-scaling options. Time stamps and window scaling are enabled by default, but can be manipulated with flag bits. Bit 0 controls window scaling, and bit 1 controls time stamps. TcpDelAckTicks Key: Tcpip\Parameters\Interfaces\interface Valid Range: 0–6 Default: 2 (200 milliseconds). TcpInitialRTT Valid Range: 0–0xFFFF Default: 3 seconds Description: This parameter controls the initial time-out used for a TCP connection request and initial data retransmission on a per-interface basis. Use caution when tuning with this parameter because exponential backoff is used. Setting this value to larger than 3 results in much longer time-outs to nonexistent addresses. TcpMaxConnectResponseRetransmissions Valid Range: 0–255 Description: This parameter controls the number of times that a SYN-ACK is retransmitted in response to a connection request if the SYN is not acknowledged. If this value is greater than or equal to 2, the stack employs SYN-ATTACK protection internally. If this value is less than 2, the stack does not read the registry values at all for SYN-ATTACK protection. See also SynAttackProtect, TCPMaxPortsExhausted, TCPMaxHalfOpen, and TCPMaxHalfOpenRetried. TcpMaxConnectRetransmissions Valid Range: 0–255 (decimal) Description: This parameter determines the number of times that TCP retransmits a connect request (SYN) before aborting the attempt. The retransmission time-out is doubled with each successive retransmission in a given connect attempt. The initial time-out is controlled by the TcpInitialRtt registry value. 
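TcpInitialRTT and TcpMaxConnectRetransmissions together bound how long a connect attempt can take: each retransmission time-out doubles, so the total wait is a geometric sum. A sketch of that arithmetic (illustrative Python, not Windows code):

```python
def connect_timeout_seconds(initial_rtt, retransmissions):
    """Worst-case time before TCP abandons a connect attempt.

    The initial SYN waits initial_rtt seconds, and each of the
    `retransmissions` retries waits twice as long as the previous one.
    """
    total, timeout = 0.0, float(initial_rtt)
    for _ in range(retransmissions + 1):
        total += timeout
        timeout *= 2  # exponential backoff, as described above
    return total

# With the default 3-second initial time-out and 2 retransmissions,
# the attempt is abandoned after 3 + 6 + 12 = 21 seconds.
print(connect_timeout_seconds(3, 2))  # 21.0
```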
TcpMaxDataRetransmissions Description: This parameter controls the number of times that TCP retransmits an individual data segment before aborting the connection. The retransmission time-out is doubled with each successive retransmission on a connection, and the base time-out is determined by the measured round-trip time (smoothed round-trip time, or SRTT) on each connection. The starting RTO on a new connection is controlled by the TcpInitialRtt registry value. TcpMaxDupAcks Description: This parameter determines the number of duplicate ACKs that must be received for the same sequence number of sent data before fast retransmit is triggered to resend the segment that has been dropped in transit. This mechanism is described in more detail in the "Transmission Control Protocol (TCP)" section of this paper. TcpMaxHalfOpen Valid Range: 100–0xFFFF Default: 100 (Professional, Server), 500 (Advanced Server) (see Appendix C, below, for more information). See the SynAttackProtect parameter for more details. TcpMaxHalfOpenRetried Valid Range: 80–0xFFFF Default: 80 (Professional, Server), 400 (Advanced Server) Description: This parameter controls the number of connections in the SYN-RCVD state for which there has been at least one retransmission of the SYN sent, before SYN-ATTACK protection begins to operate. See the SynAttackProtect parameter for more details. TcpMaxPortsExhausted Description: This parameter controls the point at which SYN-ATTACK protection starts to operate. SYN-ATTACK protection begins to operate when TcpMaxPortsExhausted connect requests have been refused by the system because the available backlog for connections is set at 0. TcpMaxSendFree Default: 5000 Description: This parameter controls the size limit of the TCP header table. On machines with large amounts of RAM, increasing this setting can improve responsiveness during a SYN attack. TcpNumConnections Valid Range: 0–0xFFFFFE Default: 0xFFFFFE Description: This parameter limits the maximum number of connections that TCP can have open simultaneously. TcpTimedWaitDelay Value Type: REG_DWORD—time in seconds Valid Range: 30-300 (decimal) Default: 0xF0 (240 decimal) Description:.
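The MaxFreeTWTcbs sizing guidance given earlier (the value should be >= 60 times the rate of graceful connection closures per second, to hold connections in TIME-WAIT for at least 60 seconds) is simple arithmetic; a sketch (illustrative Python, not Windows code):

```python
import math

def min_tw_tcbs(closures_per_second, tw_seconds=60):
    """Minimum MaxFreeTWTcbs needed to keep connections in TIME-WAIT
    for at least tw_seconds, given a rate of graceful closures."""
    return math.ceil(tw_seconds * closures_per_second)

# At 16 graceful closes per second the default of 1000 TCBs suffices;
# at 20 per second it does not.
print(min_tw_tcbs(16))  # 960
print(min_tw_tcbs(20))  # 1200
```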
TcpUseRFC1122UrgentPointer Description: This parameter determines whether TCP uses the RFC 1122 specification for urgent data or the mode used by BSD-derived systems. The two mechanisms interpret the urgent pointer in the TCP header and the length of the urgent data differently. They are not interoperable. Windows 2000 defaults to BSD mode. TcpWindowSize Key: Tcpip\Parameters, Tcpip\Parameters\Interface\interface Valid Range: See the "Transmission Control Protocol (TCP)" section of this document. Default: The smaller of the following values: 0xFFFF; GlobalMaxTcpWindowSize (another registry parameter); the larger of four times the maximum TCP data size on the network, or 16384, rounded up to an even multiple of the network TCP data size. The default can start at 17520 for Ethernet, but may shrink slightly when the connection is established to another computer that supports extended TCP header options, such as SACK and TIMESTAMPS, because these options increase the size of the TCP header. TrFunctionalMcastAddress Description: This parameter determines whether IP multicasts are sent using the Token Ring Multicast address described in RFC 1469 or using the subnet broadcast address. The default value of 1 configures the computer to use the RFC 1469 Token Ring Multicast address for IP multicasts. Setting the value to 0 configures the computer to use the subnet broadcast address for IP multicasts. TypeOfInterface Default: 0 (allow multicast and unicast) Description: This parameter determines whether the interface gets routes plumbed for unicast, multicast, or both traffic types, and whether those traffic types can be forwarded. If it is set to 0, both unicast and multicast traffic are allowed. If it is set to 1, unicast traffic is disabled. If it is set to 2, multicast traffic is disabled. If it is set to 3, both unicast and multicast traffic are disabled.
Since this parameter affects forwarding and routes, it may still be possible for a local application to send multicasts out over an interface, if there are no other interfaces in the computer that are enabled for multicast, and a default route exists. UseZeroBroadcast Description: If this parameter is set to 1 (true), IP will use 0s broadcasts (0.0.0.0) instead of 1s broadcasts (255.255.255.255). Most systems use 1s broadcasts, but some systems derived from BSD implementations use 0s broadcasts. Systems that use different broadcasts do not interoperate well on the same network. The following parameters are created and modified automatically by the NCPA as a result of user-supplied information. There should be no need to configure them directly in the registry. DefaultGateway Value Type: REG_MULTI_SZ—list of dotted decimal IP addresses Valid Range: Any set of valid IP addresses Description: This parameter specifies the list of gateways to be used to route packets that are not destined for a subnet that the computer is directly connected to, and for which a more specific route does not exist. This parameter, if it has a valid value, overrides the DhcpDefaultGateway parameter. There is only one active default gateway for the computer at any time, so adding multiple addresses is only done for redundancy. See the "Dead Gateway Detection" section in this paper for details. Domain Value Type: REG_SZ—character string Valid Range: Any valid DNS domain name Description: This parameter specifies the DNS domain name of the interface. In Windows 2000, this and NameServer are per-interface parameters, rather than system-wide parameters. This parameter overrides the DhcpDomain parameter (filled in by the DHCP client), if it exists. EnableDhcp Description: If this parameter is set to 1 (true), the DHCP client service attempts to use DHCP to configure the first IP interface on this adapter. 
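UseZeroBroadcast concerns the limited broadcast address (0.0.0.0 versus 255.255.255.255), but the same zeros-versus-ones distinction shows up in directed (subnet) broadcasts, which Python's ipaddress module can illustrate. The subnet below is an example value, not anything from the registry defaults:

```python
import ipaddress

net = ipaddress.ip_network("10.99.99.0/24")

# Ones-style directed broadcast: host bits set to 1 (the usual form).
print(net.broadcast_address)  # 10.99.99.255
# Zeros-style broadcast, as used by some BSD-derived stacks: host bits 0.
print(net.network_address)    # 10.99.99.0
```

As the document notes, hosts using different broadcast styles do not interoperate well on the same network, since each ignores the other's broadcast address.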
EnableSecurityFilters Description: If this parameter is set to 1 (true), IP security filters are enabled. See TcpAllowedPorts, UdpAllowedPorts, and RawIPAllowedPorts. To configure these values, on the Start menu, point to Settings, then click Network and Dial-up Connections, right-click Local Area Connection, and then click Properties. Select Internet Protocol (TCP/IP), and click Properties, then click Advanced. Click the Options tab, select TCP/IP filtering, and click Properties. Hostname Valid Range: Any valid DNS hostname Default: The computer name of the system Description: This parameter specifies the DNS host name of the system, which is returned by the hostname command. IPAddress Value Type: REG_MULTI_SZ—list of dotted-decimal IP addresses Description: This parameter specifies the IP addresses of the IP interfaces to be bound to the adapter. If the first address in the list is 0.0.0.0, the primary interface on the adapter is configured from DHCP. A system with more than one IP interface for an adapter is logically multihomed. There must be a valid subnet mask value in the SubnetMask parameter for each IP address that is specified in this parameter. To add parameters with Regedt32.exe, select this key and type the list of IP addresses, pressing Enter after each one. Then go to the SubnetMask parameter, and type a corresponding list of subnet masks. NameServer Value Type: REG_SZ—a space delimited list of dotted decimal IP addresses Valid Range: Any set of valid IP address Default: None (blank) Description: This parameter specifies the DNS name servers that Windows Sockets queries to resolve names. In Windows 2000, this and the DomainName are per-interface settings. PPTPFiltering Description: This parameter controls whether PPTP filtering is enabled on a per-adapter basis. If this value is set to 1, the adapter accepts only PPTP connections. This reduces exposure to hack attempts if the adapter is connected to a public network, such as the Internet. 
RawIpAllowedProtocols Value Type: REG_MULTI_SZ—list of IP protocol numbers Valid Range: Any set of valid IP protocol numbers Description: This parameter specifies the list of IP protocol numbers for which incoming datagrams are accepted on an IP interface when security filtering is enabled (EnableSecurityFilters = 1). The parameter controls the acceptance of IP datagrams by the raw IP transport, which is used to provide raw sockets. It does not control IP datagrams that are passed to other transports (for example, TCP). SearchList Value Type: REG_SZ—space delimited list of DNS domain name suffixes Valid Range: 1-50 Description: This parameter specifies a list of domain name suffixes to append to a name to be resolved through DNS if resolution of the unadorned name fails. By default, only the value of the Domain parameter is appended. This parameter is used by the Windows Sockets interface. See also the AllowUnqualifiedQuery parameter. SubnetMask Valid Range: Any set of valid IP addresses. Description: This parameter specifies the subnet masks to be used with the IP interfaces bound to the adapter. If the first mask in the list is 0.0.0.0, the primary interface on the adapter is configured using DHCP. There must be a valid subnet mask value in this parameter for each IP address specified in the IPAddress parameter. TcpAllowedPorts Value Type: REG_MULTI_SZ—list of TCP port numbers Valid Range: Any set of valid TCP port numbers Description: This parameter specifies the list of TCP port numbers for which incoming connection requests (SYNs) are accepted when security filtering is enabled (EnableSecurityFilters = 1). UdpAllowedPorts Value Type: REG_MULTI_SZ—list of UDP port numbers Valid Range: Any set of valid UDP port numbers Description: This parameter specifies the list of UDP port numbers for which incoming datagrams are accepted when security filtering is enabled (EnableSecurityFilters = 1). The route command can store persistent IP routes as values under the Tcpip\Parameters\PersistentRoutes registry key.
Each route is stored in the value name string as a comma-delimited list of the form: destination,subnet mask,gateway,metric For example, the command: route add 10.99.100.0 MASK 255.255.255.0 10.99.99.1 METRIC 1 /p produces the registry value: 10.99.100.0,255.255.255.0,10.99.99.1,1 The value type is a REG_SZ. There is no value data (empty string). Addition and deletion of these values can be accomplished using the route command. There should be no need to configure them directly. The following parameters are created and used internally by the TCP/IP components. They should never be modified using the Registry Editor. They are listed here for reference only. DhcpDefaultGateway Description: This parameter specifies the list of default gateways to be used to route packets that are not destined for a subnet to which the computer is directly connected and for which a more specific route does not exist. This parameter is written by the DHCP client service, if enabled. This parameter is overridden by a valid DefaultGateway parameter value. Although this parameter is set on a per-interface basis, there is always only one default gateway active for the computer. Additional entries are treated as alternatives if the first one is down. DhcpIPAddress Value Type: REG_SZ—dotted decimal IP address Valid Range: Any valid IP address Description: This parameter specifies the DHCP-configured IP address for the interface. If the IPAddress parameter contains a first value other than 0.0.0.0, that value overrides this parameter. DhcpDomain Value Type: REG_SZ—Character string Default: None (provided by DHCP server) Description: This parameter specifies the DNS domain name of the interface. In Windows 2000, this and NameServer are now per-interface parameters, rather than system-wide parameters. If the Domain key exists, it overrides the DhcpDomain value. 
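The persistent-route value name format described above (destination,subnet mask,gateway,metric) is simple to take apart. The following helper is purely illustrative, not part of any Windows tool:

```python
def parse_persistent_route(value_name):
    # A PersistentRoutes value name is a comma-delimited list:
    # destination,subnet mask,gateway,metric
    destination, mask, gateway, metric = value_name.split(",")
    return {"destination": destination, "mask": mask,
            "gateway": gateway, "metric": int(metric)}

route = parse_persistent_route("10.99.100.0,255.255.255.0,10.99.99.1,1")
print(route["gateway"], route["metric"])  # 10.99.99.1 1
```

The example string is the value name produced by the route command shown above.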
DhcpNameServer Value Type: REG_SZ—A space delimited list of dotted decimal IP addresses Description: This parameter specifies the DNS name servers to be queried by Windows Sockets to resolve names. It is written by the DHCP client service, if enabled. If the NameServer parameter has a valid value, it overrides this parameter. DhcpServer Description: This parameter specifies the IP address of the DHCP server that granted the lease on the IP address in the DhcpIPAddress parameter. DhcpSubnetMask Value Type: REG_SZ—dotted decimal IP subnet mask Valid Range: Any subnet mask that is valid for the configured IP address Description: This parameter specifies the DHCP-configured subnet mask for the address specified in the DhcpIPAddress parameter. DhcpSubnetMaskOpt Description: This parameter is filled in by the DHCP client service and is used to build the DhcpSubnetMask parameter, which the stack actually uses. Validity checks are performed before the value is inserted into the DhcpSubnetMask parameter. Lease Description: The DHCP client service uses this parameter to store the time, in seconds, for which the lease on the IP address for this adapter is valid. LeaseObtainedTime Value Type: REG_DWORD—absolute time, in seconds, since midnight of 1/1/70 Description: The DHCP client service uses this parameter to store the time at which the lease on the IP address for this adapter was obtained. LeaseTerminatesTime Description: The DHCP client service uses this parameter to store the time at which the lease on the IP address for this adapter expires. LLInterface Key: Tcpip\Parameters\Adapters\interface Value Type: REG_SZ—Windows 2000 device name Valid Range: A legal Windows 2000 device name Default: Empty string (blank) Description: This parameter is used to direct IP to bind to a different link-layer protocol than the built-in ARP module. The value of the parameter is the name of the Windows 2000 device to which IP should bind. 
This parameter is used in conjunction with the RAS component, for example. It is only present when ARP modules other than LAN bind to IP. NTEContextList Value Type: REG_MULTI_SZ—number Description: This parameter identifies the context of the IP address associated with an interface. Each IP address associated with an interface has its own context number. The values are used internally to identify an IP address and should not be altered. T1 Description: The DHCP client service uses this parameter to store the time at which the service first tries to renew the lease on the IP address for the adapter by contacting the server that granted the lease. T2 Description: The DHCP client service uses this parameter to store the time at which the service tries to renew the lease on the IP address for the adapter by broadcasting a renewal request. Time T2 should only be reached if the service is unable to renew the lease with the original server for some reason. The ATM ARP client parameters are located—along with the TCP/IP parameters for each interface—under the AtmArpC subkey. A sample dump of the registry for a single TCP/IP interface for an ATM adapter is shown below. 
HKEY_LOCAL_MACHINE \System \CurrentControlSet \Services \Tcpip \Parameters \Interfaces\{A24B73BE-D2CD-11D1-BE08-8FF4D413E1BE}\AtmArpC
SapSelector = REG_DWORD 0x00000001
AddressResolutionTimeout = REG_DWORD 0x00000003
ARPEntryAgingTimeout = REG_DWORD 0x00000384
InARPWaitTimeout = REG_DWORD 0x00000005
MaxResolutionAttempts = REG_DWORD 0x00000004
MinWaitAfterNak = REG_DWORD 0x0000000a
ServerConnectInterval = REG_DWORD 0x00000005
ServerRefreshTimeout = REG_DWORD 0x00000384
ServerRegistrationTimeout = REG_DWORD 0x00000003
DefaultVcAgingTimeout = REG_DWORD 0x0000003c
MARSConnectInterval = REG_DWORD 0x00000005
MARSRegistrationTimeout = REG_DWORD 0x00000003
JoinTimeout = REG_DWORD 0x0000000a
LeaveTimeout = REG_DWORD 0x0000000a
MaxJoinLeaveAttempts = REG_DWORD 0x00000005
MaxDelayBetweenMULTIs = REG_DWORD 0x0000000a
ARPServerList = REG_MULTI_SZ "4700790001020000000000000000A03E00000200"
MARServerList = REG_MULTI_SZ "4700790001020000000000000000A03E00000200"
MTU = REG_DWORD 0x000023dc
PVCOnly = REG_DWORD 0x00000000
A description of each of these parameters follows. SapSelector Key: Tcpip\Parameters\Interfaces\interface\AtmArpC Valid Range: 1–255 Default: 1 Description: Specifies the selector byte value used by the ATMARP client as the twentieth byte of its ATM address. The resulting address is used to register with the ATMARP server and the Multicast Address Resolution Server (MARS). AddressResolutionTimeout Value Type: REG_DWORD—number of seconds Valid Range: 1–60 Description: Specifies how long the ATMARP client waits for a response after sending an ARP request for a unicast IP address (or MARS request for a multicast/broadcast IP address). If this timer elapses, the ATMARP client retransmits the request a maximum of (MaxResolutionAttempts – 1) times. ARPEntryAgingTimeout Valid Range: 90–1800 Default: 900 seconds (15 minutes) Description: Specifies how long the ATMARP client retains address resolution information for a unicast IP address before it is invalidated.
If this timer expires, the ATMARP client does one of the following things:
If there are no virtual circuits (VCs) associated with the IP address, it deletes the ARP entry for this IP address.
If there is at least one permanent virtual circuit (PVC) associated with the IP address, it uses Inverse ARP on the PVC to revalidate the ARP entry.
If there is at least one SVC associated with the IP address, it sends an ARP request to the ARP server to revalidate the ARP entry.
InARPWaitTimeout Description: Specifies how long the ATMARP client waits for a response after sending an Inverse Address Resolution Protocol (InARP) request to revalidate a unicast IP address to ATM address mapping, that is, an ARP entry. If this timer expires, the ATMARP client deletes the ARP table entry that contains the IP address. MaxResolutionAttempts Description: Specifies the maximum number of attempts to be made by the ATMARP client to resolve a unicast, multicast, or broadcast IP address to an ATM address (or addresses). MinWaitAfterNak Default: 10 Description: Specifies how long the ATMARP client waits after receiving a failure (ARP NAK) response from the ARP server or MARS. This prevents the ATMARP client from flooding the server with queries for an IP address that is nonexistent or unknown. ServerConnectInterval Valid Range: 1–30 Description: Specifies how long the ATMARP client waits after a failed attempt to connect to the ARP server before retrying the connection. ServerRefreshTimeout Description: Specifies the interval at which the ATMARP client sends an ARP Request with its own IP/ATM address information to refresh the ATMARP server's cache. ServerRegistrationTimeout Description: Specifies how long the ATMARP client waits for an ARP Response packet in reply to an ARP Request packet that it sent to register its own IP/ATM information with the ATMARP server. If this timer expires, the ATMARP client retransmits the ARP Request packet.
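Putting the resolution timers together: the client sends one request, waits AddressResolutionTimeout seconds, and retransmits at most (MaxResolutionAttempts – 1) more times. A rough upper bound on resolution time, using the defaults from the sample dump (3 seconds, 4 attempts), can be sketched as follows; the function is ours, not a Windows API:

```python
def worst_case_resolution_time(timeout_s=3, max_attempts=4):
    # One initial request plus (max_attempts - 1) retransmissions,
    # each followed by a wait of timeout_s seconds
    return timeout_s * max_attempts

print(worst_case_resolution_time())  # 12 seconds with the sample-dump defaults
```

This is only an upper bound: a response from the ARP server or MARS ends the wait early.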
DefaultVcAgingTimeout Valid Range: 10–1800 Default: 60 Description: Specifies the inactivity time-out for all VCs initiated by the ATMARP client. This does not apply to PVCs. Inactivity is defined as a condition of no data activity in either direction. If this timer expires, the ATMARP client disconnects the VC. MARSConnectInterval Description: Specifies how long the ATMARP client waits after a failed attempt to connect to the MARS before retrying the connection. MARSRegistrationTimeout Description: Specifies how long the ATMARP client waits for a MARS Join packet in reply to a MARS Join packet that it sent to register its ATM address with the MARS. If this timer expires, the ATMARP client retransmits the MARS Join packet. JoinTimeout Valid Range: 5–60 Description: Specifies how long the ATMARP client waits for a MARS Join packet in reply to a MARS Join packet it sent to initiate membership in an IP multicast group (or the IP broadcast address). If this timer expires, the ATMARP client retransmits the MARS Join packet a maximum of MaxJoinLeaveAttempts times. LeaveTimeout Description: Specifies how long the ATMARP client waits for a MARS Leave packet in reply to a MARS Leave packet that it sent to terminate membership from an IP multicast group (or the IP broadcast address). If this timer expires, the ATMARP client retransmits the MARS Leave packet a maximum of MaxJoinLeaveAttempts times. MaxJoinLeaveAttempts Valid Range: 1–10 Description: Specifies the maximum number of attempts to be made by the ATMARP client to Join or Leave an IP multicast (or broadcast) group. MaxDelayBetweenMULTIs Valid Range: 2–60 Description: Specifies the maximum delay expected by the ATMARP client between successive MARS MULTI packets corresponding to a single MARS Request.
ARPServerList Value Type: REG_MULTI_SZ Valid Range: A list of strings containing ATM addresses Default: 4700790001020000000000000000A03E00000200 Description: This is the list of ARP servers that the ARP client is allowed to register with. This is used in a failover fashion; that is, the ARP client tries to register using each address in sequence until successful. MARServerList Value Type: REG_MULTI_SZ—list of strings Description: This is the list of MARS servers that the ARP client is allowed to register with. This is used in a failover fashion; that is, the ARP client tries to register using each address in sequence, until successful. MTU Valid Range: 9180–65527 Default: 9180 Description: Specifies the maximum transmission unit reported to the IP layer for this interface. All of the NetBT parameters are registry values located under one of two different subkeys of HKEY_LOCAL_MACHINE \SYSTEM \CurrentControlSet \Services:
NetBT\Parameters
NetBT\Adapters\Interfaces\interface, in which interface refers to the subkey for a network interface to which NetBT is bound
Values under the latter key(s) are specific to each interface. If the system is configured using DHCP, a change in parameters takes effect if you issue the command ipconfig /renew from a command prompt. Otherwise, you must reboot the system for a change in any of these parameters to take effect. The following parameters are installed with default values by the NCPA during the installation of the TCP/IP components. They may be modified using the Registry Editor (Regedt32.exe). A few of the parameters are visible in the registry by default, but most must be created in order to modify the default behavior of the NetBT driver. BacklogIncrement Key: Netbt\Parameters Valid Range: 3–0x14 (3–20 decimal) Description: This parameter was added in response to Internet SYN-ATTACK issues.
When a connection attempt is made to the NetBIOS TCP port (139), if the number of free connection blocks is below 2, a BackLogIncrement number of new connection blocks are created by the system. Each connection block consumes 78 bytes of memory. A limit on the total number of connection blocks allowed can be set using the MaxConnBackLog parameter. One connection block is required for each NetBT connection. BcastNameQueryCount Valid Range: 1–0xFFFF Description: This value determines the number of times NetBT broadcasts a query for a specific name without receiving a response. BcastQueryTimeout Valid Range: 100–0xFFFFFFFF Default: 0x2ee (750 decimal) Description: This value determines the time interval between successive broadcast name queries for the same name. BroadcastAddress Value Type: REG_DWORD—4-byte, little-endian encoded IP address Default: The 1s-broadcast address for each network Description: This parameter can be used to force NetBT to use a specific address for all broadcast name-related packets. By default, NetBT uses the 1s-broadcast address appropriate for each net (that is, for a network of 11.101.0.0 with a subnet mask of 255.255.0.0, the subnet broadcast address would be 11.101.255.255). This parameter would be set, for example, if the network uses the 0s-broadcast address (set using the UseZeroBroadcast TCP/IP parameter). The appropriate subnet broadcast address would then be 11.101.0.0 in the example above. This parameter would then be set to 0x0b650000. This parameter is global and is used on all subnets to which NetBT is bound. CachePerAdapterEnabled Description: This value determines whether NetBIOS remote name caching is done on a per-adapter basis. Nbtstat -c has been enhanced to show the per-adapter name cache.
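The BroadcastAddress worked example above implies that the first octet of the dotted address lands in the high-order byte of the hex value (0x0b is 11 decimal). Under that assumption, the conversion can be sketched as follows; the helper name is ours:

```python
def broadcast_address_value(dotted):
    # Pack the four octets so the first octet occupies the high-order byte,
    # matching the 0x0b650000 example for a 0s-broadcast of 11.101.0.0
    octets = bytes(int(part) for part in dotted.split("."))
    return int.from_bytes(octets, "big")

print(hex(broadcast_address_value("11.101.0.0")))  # 0xb650000
```

This reproduces the document's own hex example; how the DWORD is byte-ordered on disk is a separate question from how the value is written in the text.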
CacheTimeout Value Type: REG_DWORD—time, in milliseconds Valid Range: 0xEA60–0xFFFFFFFF Default: 0x927c0 (600000 milliseconds = 10 minutes) Description: This value determines the time interval that names are cached in the remote name table. The nbtstat –c command can be used to view the remaining time for each name in the cache. ConnectOnRequestedInterfaceOnly Description: This value can be used to allow NetBT connections on the requested adapter only. When the redirector on a multihomed computer calls another computername, it places calls on all NetBT transports (protocol/adapter combinations) to which it is bound. Each transport independently attempts to reach the target name. Setting this parameter limits each transport to connecting to other computers that are reachable via its own adapter, preventing crossover traffic. For more details, see the "NetBIOS Name Registration and Resolution for Multihomed Computers" section of this paper. EnableDns Description: If this value is set to 1 (true), NetBT queries the DNS server for names that cannot be resolved by WINS, broadcast, or the Lmhosts file. EnableProxyRegCheck Description: If this parameter is set to 1 (true), the proxy name server sends a negative response to a broadcast name registration if the name is already registered with WINS or is in the proxy's local name cache with a different IP address. This feature prevents a system from changing its IP address as long as WINS has a mapping for the name. For this reason, it is disabled by default. InitialRefreshT.O. Valid Range: 960000–0xFFFFFFFF Default: 960000 (16 minutes) Description: This parameter specifies the initial refresh time-out used by NetBT during name registration. NetBT tries to contact the WINS servers at one-eighth of this time interval when it is first registering names. When it receives a successful registration response, that response contains the new refresh interval to use.
LmhostsTimeout Valid Range: 1000–0xFFFFFFFF Default: 6000 (6 seconds) Description: This parameter specifies the time-out value for Lmhosts and DNS name queries submitted by NetBT. The timer has a granularity of the time-out value, so the actual time-out could be as much as twice the value. MaxConnBackLog Valid Range: 2–0x9c40 (2–40,000 decimal) Description: This value determines the maximum number of connection blocks that NetBT allocates. See the BackLogIncrement parameter for more details. MaxPreloadEntries Valid Range: 0x3E8–0x7D0 (1000–2000 decimal) Default: 1000 decimal Description: This value determines the maximum number of entries that are preloaded from the Lmhosts file. Entries to preload into the cache are flagged in the Lmhosts file with the #PRE tag. MaxDgramBuffering Valid Range: 0x20000–0xFFFFFFFF Default: 0x20000 (128K) Description: This parameter specifies the maximum amount of memory that NetBT dynamically allocates for all outstanding datagram sends. Once this limit is reached, further sends fail due to insufficient resources. MinimumRefreshSleepTime Valid Range: 21600000-4294967295 Default: 21600000 ms (6 hours) Description: This parameter is used to reset the TTL on the WakeupTimer if ½ of the TTL is less than 6 hours when the machine is put into sleep or hibernate mode. MinimumFreeLowerConnections Valid Range: 20-500 Default: 50 Description: This parameter is used to allocate the number of free handles that the system has upon boot to accept incoming connections. These handles are allocated in addition to the number of active connections that are being serviced. Once the machine is in a steady state, the number of free handles increases to ½ the value of the used handles. The number of free handles is never less than 50 unless specified in the registry.
NameServerPort Value Type: REG_DWORD—UDP port number Default: 0x89 Description: This parameter determines the destination port number to which NetBT sends name service-related packets, such as name queries and name registrations, to WINS. The Microsoft WINS Server listens on port 0x89 (137 decimal). NetBIOS name servers from other vendors may listen on different ports. NameSrvQueryCount Description: This value determines the number of times that NetBT sends a query to a WINS server for a specified name without receiving a response. NameSrvQueryTimeout Default: 1500 (1.5 seconds) Description: This value determines the time interval between successive name queries to WINS for a specified name. NodeType Valid Range: 1, 2, 4, 8 (b-node, p-node, m-node, h-node) Default: 1 or 8 based on the WINS server configuration Description: This parameter determines what methods NetBT uses to register and resolve names. A b-node system uses broadcasts. A p-node system uses only point-to-point name queries to a name server (WINS). An m-node system broadcasts first, then queries the name server. An h-node system queries the name server first, then broadcasts. Resolution through Lmhosts and DNS, if enabled, follows these methods. If this key is present, it overrides the DhcpNodeType key. If neither key is present, the system defaults to b-node if there are no WINS servers configured for the client. The system defaults to h-node if there is at least one WINS server configured. NoNameReleaseOnDemand Description: This parameter determines whether the computer releases its NetBIOS name when it receives a name-release request from the network. It was added to allow the administrator to protect the machine against malicious name-release attacks. RandomAdapter Description: This parameter applies to a multihomed host only. If it is set to 1 (true), NetBT randomly chooses the IP address to put in a name-query response from all of its bound interfaces.
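The NodeType resolution orders described above can be summarized in a small table. This mapping is our paraphrase of the text, not a Windows API:

```python
# Name-resolution method order for each NodeType value, per the description above
NODE_TYPES = {
    1: ["broadcast"],            # b-node: broadcasts only
    2: ["wins"],                 # p-node: point-to-point queries to WINS only
    4: ["broadcast", "wins"],    # m-node: broadcast first, then WINS
    8: ["wins", "broadcast"],    # h-node: WINS first, then broadcast
}

def resolution_order(node_type):
    return NODE_TYPES[node_type]

print(resolution_order(8))  # ['wins', 'broadcast']
```

Lmhosts and DNS resolution, if enabled, follow whichever of these methods apply.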
Usually, the response contains the address of the interface on which the query arrived. This feature would be used for load balancing by a server with two interfaces on the same network. RefreshOpCode Valid Range: 8, 9 Default: 8 Description: This parameter forces NetBT to use a specific opcode field in name-refresh packets. The specification for the NetBT protocol is somewhat ambiguous in this area. Although the default of 8 that is used by Microsoft implementations appears to be the intended value, some other implementations, such as those by Ungermann-Bass, use the value 9. Two implementations must use the same opcode field to interoperate. ScopeId Valid Range: Any valid DNS domain name consisting of two dot-separated parts or an asterisk (*). Description: This parameter specifies the NetBIOS name scope for the node. This value must not begin with a period. If this parameter contains a valid value, it overrides the DHCP parameter of the same name. A blank value (empty string) is ignored. Setting this parameter to the value "*" indicates a null scope and overrides the DHCP parameter. SessionKeepAlive Valid Range: 60,000–0xFFFFFFFF Default: 3,600,000 (1 hour) Description: This value determines the time interval between keep-alive transmissions on a session. Setting the value to 0xFFFFFFFF disables keep-alives. SingleResponse Description: This parameter applies to a multihomed host only. If this parameter is set to 1 (true), NetBT supplies only the IP address from one of its bound interfaces in name-query responses. By default, the addresses of all bound interfaces are included. Size/Small/Medium/Large Valid Range: 1, 2, 3 (small, medium, large) Default: 1 (small) Description: This value determines the size of the name tables that are used to store local and remote names. In general, a setting of 1 (small) is adequate. If the system is acting as a proxy name server, the value is automatically set to 3 (large) to increase the size of the name cache hash table.
Hash table buckets are sized as follows:
Small: 16
Medium: 128
Large: 256
SMBDeviceEnabled Description: Windows 2000 supports a new network transport known as the SMB Device, which is enabled by default. This parameter can be used to disable the SMB device for troubleshooting purposes. See the "NetBT Internet/DNS Enhancements and the SMB Device" section of this paper for more details. TryAllNameServers Description: This parameter controls whether the client continues to query additional name servers from the list of configured servers when a NetBIOS session setup request to one of the IP addresses fails. If this parameter is enabled, attempts are made to query all the WINS servers in the list and connect to all the IP addresses supplied before failing the request to the user. TryAllIPAddrs Description: When a WINS server returns a list of IP addresses in response to a name query, they are sorted into a preference order based on whether any of them are on the same subnet as an interface belonging to the client. This parameter controls whether the client pings the IP addresses in the list and attempts to connect to the first one that responds, or whether it tries to connect to the first IP address in the (sorted) list and fails if that connection attempt fails. By default, the client pings each address in the list and attempts to connect to the first one that answers the ping. UseDnsOnlyForNameResolutions Description: This parameter is used to disable all NetBIOS name queries. NetBIOS name registrations and refreshes are still used, and NetBIOS sessions are still allowed. To completely disable NetBIOS on an interface, see the NetbiosOptions parameter. WinsDownTimeout Default: 15,000 (15 seconds) Description: This parameter determines the amount of time that NetBT waits before trying to use WINS again after it fails to contact any WINS server.
This feature primarily allows computers that are temporarily disconnected from the network, such as laptops, to proceed through boot processing without waiting to time out each WINS name registration or query individually. The following parameters can be set using the Network Control Panel tool (NCPA). There should be no need to configure them directly. EnableLmhosts Description: If this value is set to 1 (true), NetBT searches the Lmhosts file, if it exists, for names that cannot be resolved by WINS or broadcast. By default, there is no Lmhosts file database directory (specified by Tcpip\Parameters\DatabasePath), so no action is taken. This value is written by the Advanced TCP/IP Configuration dialog box of the NCPA. EnableProxy Description: If this value is set to 1 (true), the system acts as a proxy name server for the networks to which NetBT is bound. A proxy name server answers broadcast queries for names that it has resolved through WINS. A proxy name server allows a network of b-node implementations to connect to servers on other subnets that are registered with WINS. NameServerList Key: Netbt\Parameters\Interfaces\interface Value Type: REG_MULTI_SZ—space separated, dotted decimal IP address (that is, 10.101.1.200) Valid Range: any list of valid WINS server IP addresses. Default: blank (no address) Description: This parameter specifies the IP addresses of the list of WINS servers configured for the computer. If this parameter contains a valid value, it overrides the DHCP parameter of the same name. This parameter replaces the Windows NT 4.0 parameters NameServer and NameServerBackup, which are no longer used. NetbiosOptions Valid Range: 1, 2 Description: This parameter controls whether NetBIOS is enabled on a per-interface basis. On the Start menu, point to Settings, and click Network and Dial-up Connections. Right-click Local Area Connection, and click Properties. Select Internet Protocol (TCP\IP), and click Properties, then click Advanced. 
Click the WINS tab. The NetBIOS options are Enable NetBIOS over TCP\IP, Disable NetBIOS over TCP\IP, or Use NetBIOS setting from the DHCP server, the default. When enabled, the value is 1. When disabled, the value is set to 2. If this key does not exist, the DHCPNetbiosOptions key is read. If this key does exist, DHCPNetbiosOptions is ignored. The following parameters are created and used internally by the NetBT components. They should never be modified using the Registry Editor; doing so can cause the component to become unstable. They are listed here for reference only. DHCPNameServerList Description: This parameter specifies the IP addresses of the list of WINS servers, as provided by the DHCP service. This parameter replaces the Windows NT 4.0 parameters DHCPNameServer and DHCPNameServerBackup, which are no longer used. See also NameServerList, which overrides this parameter if it is present. DHCPNetbiosOptions Description: This parameter is written by the DHCP client service. See the NetbiosOptions parameter for a description. DhcpNodeType Valid Range: 1–8 Description: This parameter specifies the NetBT node type. It is written by the DHCP client service, if enabled. A valid NodeType value overrides this parameter. See the entry for NodeType for a complete description. DhcpScopeId Valid Range: a dot-separated name string such as microsoft.com Description: This parameter specifies the NetBIOS name scope for the node. It is written by the DHCP client service, if enabled. This value must not begin with a period. See the entry for ScopeId for more information. NbProvider Valid Range: _tcp Default: _tcp Description: This parameter is used internally by the RPC component. The default value should not be changed. TransportBindName Valid Range: N/A Default: \Device\ Description: This parameter is used internally during product development. The default value should not be changed. Afd.sys is the kernel-mode driver that is used to support Windows Sockets applications.
When there are three default values, the default is calculated based on the amount of memory detected in the system: The first value is the default for smaller computers (less than 19 MB). The second value is the default for medium computers (<32 MB on Windows 2000 Professional, <64 MB on Windows 2000 Server). The third value is the default for large computers (>32 MB on Windows 2000 Professional, >64 MB on Windows 2000 Server). For example, if the default is given as 0/2/10, a Windows 2000 Professional system in the medium range (19 to 32 MB of RAM) would default to 2. The following values can be set under: HKEY_LOCAL_MACHINE \SYSTEM \CurrentControlSet \Services \Afd \parameters: DefaultReceiveWindow. DefaultSendWindow Description: This is similar to DefaultReceiveWindow, but for the send side of connections. DisableAddressSharing. DisableRawSecurity Description: Disables the check for administrative privileges when attempting to open a raw socket. This is not used for Windows 2000 transports (like TCP/IP, which manages its own security for raw sockets), which have TDI_SERVICE_FORCE_ACCESS_CHECK set. See the TCP/IP AllowUserRawAccess registry parameter. DynamicBacklogGrowthDelta Description: Controls the number of free connections to create when additional connections are necessary. Be careful with this value; a large value could lead to explosive free connection allocations. (Although this parameter still exists, the TCP stack itself has been hardened against SYN-ATTACK in Windows 2000; therefore, it should not be necessary to use this feature of AFD.) FastCopyReceiveThreshold. FastSendDatagramThreshold. IgnorePushBitOnReceives Description:. IrpStackSize Description: The count of IRP stack locations used by default for AFD. Changing this value is not recommended. LargeBufferSize Default: PAGE_SIZE (4096 bytes on i386, 8192 bytes on Alpha) Description: The size, in bytes, of large buffers used by AFD. Smaller values use less memory and larger values can improve performance.
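The three-value default selection described at the start of this section can be sketched as below. The 19/32/64 MB thresholds come from the text; the function itself is purely illustrative:

```python
def afd_default(triple, ram_mb, is_server=False):
    # Parse a "small/medium/large" default such as "0/2/10"
    small, medium, large = (int(v) for v in triple.split("/"))
    # The medium/large boundary depends on the product:
    # 32 MB on Windows 2000 Professional, 64 MB on Windows 2000 Server
    upper = 64 if is_server else 32
    if ram_mb < 19:
        return small
    if ram_mb < upper:
        return medium
    return large

print(afd_default("0/2/10", 24))  # a 24 MB Professional system falls in the medium tier: 2
```

The same rule applies to every AFD parameter below whose default is written as a slash-separated triple.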
LargeBufferListDepth
Default: 0/2/10
Description: Depth of the large-buffer look-aside list.

MaxActiveTransmitFileCount
Valid Range: 0–0xffff
Default: 0 (server), 2 (workstation)
Description: Allows configuration of the maximum number of concurrent TransmitFile requests outstanding. The value 0 means that it is not limited, except by system resources. This value is not configurable for Windows 2000 Professional.

MaxFastTransmit

MaxFastCopyTransmit

MediumBufferSize
Default: 1504
Description: The size, in bytes, of medium buffers used by AFD.

MediumBufferListDepth
Default: 4/8/24
Description: Depth of the medium-buffer look-aside list.

OverheadChargeGranularity
Default: 1 page
Valid Range: a power of 2
Description: This parameter determines in what increments overhead is actually charged. The default is one page, and the intention is to properly charge and contain attacker-type applications that try to run the system out of memory.

PriorityBoost
Valid Range: 0–16
Description: The priority boost that AFD gives to a thread when it completes I/O for that thread. If a multithreaded application experiences starvation of some threads, the problem may be remedied by reducing this value.

SmallBufferListDepth
Default: 8/16/32
Description: Depth of the small-buffer look-aside list.

SmallBufferSize
Description: The size, in bytes, of small buffers used by AFD.

StandardAddressLength
Default: 22
Description: The length of TDI addresses that are typically used for the computer. When using an alternate transport protocol, such as TP4, which uses very long addresses, increasing this value results in a slight performance improvement.

TransmitIoLength
Default: PAGE_SIZE/PAGE_SIZE*2/65536
Description: The default size for I/O (reads and sends) performed by TransmitFile(). For Windows 2000 Professional, the default I/O size is exactly one page.

TransmitWorker

These parameters control the behavior of the dynamic DNS registration client.
If a parameter is not present, the default value listed is used.

DNSQueryTimeouts
Value Type: REG_MULTI_SZ—list of timeouts terminated by a zero

DefaultRegistrationTTL
Value Type: REG_DWORD—seconds
Default: 0x4B0 (1200 decimal, or 20 minutes)
Description: This parameter can be used to control the TTL value sent with dynamic DNS registrations.

EnableAdapterDomainNameRegistration

DisableDynamicUpdate
Default: 0 (false; dynamic DNS updates enabled)

DisableReplaceAddressesInConflicts
Description: This parameter is used to turn off the address-registration conflict rule that the last writer wins. By default, a computer does not replace any current records on the DNS server that do not appear to have been owned by it at one time.

DisableReverseAddressRegistrations
Default: 0 (false; registration of PTR records enabled)
Description: This parameter can be used to turn off DNS dynamic update reverse address (PTR) record registration. If the DHCP server that configures this computer is a Windows 2000 Server,

UpdateSecurityLevel
Value Type: REG_DWORD—flags
Valid Range: 0, 0x00000010, 0x00000020, 0x00000100
Description: This parameter can be used to control the security that is used for DNS dynamic updates. It defaults to 0: try a nonsecure update and, if refused, send a Windows 2000 secure dynamic update. Valid values are listed below:
0x00000000—default, nonsecure updates
0x00000010—security OFF
0x00000100—secure only on Windows 2000

AdapterTimeoutCacheTime

CacheHashTableSize
Default: 0xD3 (211 decimal)
Valid Range: any prime number greater than 0
Description: This parameter can be used to control the maximum number of rows in the hash table used by the DNS caching resolver service. It should not be necessary to adjust this parameter.

CacheHashTableBucketSize
Default: 0xa (10 decimal)
Range: 0–0x32 (50 decimal)
Description: This parameter can be used to control the maximum number of columns in the hash table used by the DNS caching resolver service.
It should not be necessary to adjust this parameter.

DefaultRegistrationRefreshInterval
Default: 0x15180 (86400 decimal, or 24 hours)
Range: 0–0xFFFFFFFF
Description: This parameter can be used to control the dynamic DNS registration refresh interval.

MaxCacheEntryTtlLimit
Default: 0x15180 (86400 decimal)

MaxSOACacheEntryTtlLimit
Value Type: REG_DWORD—time, in seconds

NegativeCacheTime
Default: 0x12c (300 decimal, or 5 minutes)
Valid Range: 0–0xFFFFFFFF (the suggested value is less than one day, to prevent very stale records)
Description: This parameter can be used to control the cache time for negative records.

NegativeSOACacheTime
Default: 0x78 (120 decimal, or 2 minutes)

NetFailureErrorPopupLimit
Description: This parameter enables the UI popup that indicates the DNS resolver was unable to query (reach) the configured DNS servers for a repeated number of query attempts.

NetFailureCacheTime
Default: 0x1e (30 decimal)

AllowUnqualifiedQuery

DisjointNameSpace

PrioritizeRecordData

QueryIpMatching

UseDomainNameDevolution
Value Type: REG_DWORD—binary

In addition to the settings listed above, the following keys can be altered to help the system deal more effectively with an attack. It is important to note that these recommendations by no means make the system impervious to attack; they focus only on tuning the TCP/IP stack's response to an attack. Setting these keys does not address any of the many other components on the system that could be used to attack it. As with any change to the registry, the administrator needs to fully understand how these changes affect the default function of the system and whether they are appropriate in their environment. Also, note that the actions taken by the protection mechanism occur only if the TcpMaxHalfOpen and TcpMaxHalfOpenRetried settings are exceeded.
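As a hedged illustration only: hardening values such as the SYN-attack protection discussed in this appendix are typically applied under the standard Tcpip\Parameters key. The .reg export syntax below is standard, but the specific value data shown is just an example of enabling the protection; verify the recommended data against the parameter descriptions and your environment before applying anything.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Example only: enable SYN-attack protection
"SynAttackProtect"=dword:00000001
```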
Description: This parameter controls the number of connections in the SYN-RCVD state allowed before SYN-attack protection begins to operate. If SynAttackProtect is set to 1, ensure that this value is lower than the AFD listen backlog on the port that you want to protect (see the backlog parameters in Appendix C for more information). See the SynAttackProtect parameter for more details.
Recommendation: 0
Recommendation: 1
Recommendation: 300,000
Default: 2, DHCP-controlled but off by default.

1 Specifications and programming information are included in the Windows NT Device Driver Kit (DDK). Some information is also available from the Microsoft Internet site.
2 Most NICs have the ability to be placed into a mode in which the NIC does not perform any address filtering on frames that appear on the media. Instead, it passes every frame upwards that passes the cyclic redundancy check (CRC). This feature is used by some protocol-analysis software, such as Microsoft Network Monitor.
3 The 6 bits defined by DiffServ were previously known as the TOS bits. DiffServ makes obsolete the previous use of TOS. Hence, the setting of TOS bits through Winsock is not supported. All requests for IP TOS must be made through the GQoS API unless the DisableUserTOSSetting registry parameter (Appendix A) is modified.
4 Adding 1 to the registry parameter TcpMaxDataRetransmissions or TcpMaxConnectRetransmissions approximately doubles the total retransmission time-out period. If it is necessary to configure longer time-outs, these parameters should be increased very gradually.
5 Instead of sending one TCP segment when starting out, Windows NT/Windows 2000 TCP starts with two. This avoids the need to wait for the delayed-ACK timer to expire on the first send to the target computer, which improves performance for some applications.
6 See the Microsoft Windows NT/Windows 2000 Resource Kit or Microsoft Knowledge Base for Redirector registry parameters.
7 Stevens, Richard.
TCP/IP Illustrated, Volume 1: The Protocols. Reading, MA: Addison-Wesley Publishing Co., 1993.
8 Both specifications are available from the Microsoft Internet site.
9
10 See "draft-ietf-dhc-dhcp-dns-*.txt"
11
Hello, I am working on implementing a TSPTW solver according to Pesant et al. in their paper "An exact constraint logic programming algorithm for the traveling salesman problem with time windows". However, the model contains the following constraint:

S[i] = j => S[epsilon[j]] != beta[i], with the exception that epsilon[j] != (N+1), for all i in Vo (1)

In the above equation, subscripts are represented by square brackets and the operator "=>" means "implies". The constraint functionality that I wish to accomplish would then be something like:

// Subtour elimination
forall (i in Vo) {
  if (epsilon[S[i]] != (N+1))
    S[epsilon[S[i]]] != beta[i];
}

where epsilon and beta are arrays with the same length as Vo. This however does not work, since according to the documentation: "Conditions in if-else statements must be ground; that is, they must not contain decision variables."

Now this is very problematic, since the model presented in the paper implies very strongly that the subscript j I need in eq. (1) to access the epsilon and beta vectors is in fact a value of the dvar vector S. Is there a way around this? Should I be modelling the "implies" operator in a different way?

I also tried an alternative approach where I would write:

// (4) Subtour elimination
forall (i in Vo : S[i] != (N+1)) {
  S[epsilon[S[i]]] != beta[i];
}

This yielded an error regarding the usage of decision variables. My variables are:

int beta[Vo];
int epsilon[Vo];
dvar int S[Vo];

Thanks in advance.

PS. How does one invoke the proper formatting for code on this forum?

Topic: On conditional constraints involving decision variables (2012-10-12T11:36:53Z, updated 2012-10-12T12:05:02Z)

Re: On conditional constraints involving decision variables (accepted answer)

Hello. You can use just "=>" to model implications.
For example:

using CP;
dvar int x;
subject to {
  x != 1 => x > 5;
}

Regarding the code formatting, it is necessary to write "(code)" tags before and after your code; however, instead of () brackets use {} brackets. When you write a message there is a table of markups to the right of the edit box; it is very easy to overlook. There is also a preview tab above the edit box to see how the message will look.

Best regards,
Petr
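Combining the accepted answer with the model in the question, the subtour-elimination constraint can be written with "=>" by quantifying over the possible (data) values of j instead of indexing with the decision variable directly. This is only a sketch assuming the declarations from the question (N, Vo, epsilon, beta, S); it has not been run against the actual model:

```
using CP;

range Vo = 1..N;        // as in the question; N assumed declared elsewhere
int  epsilon[Vo] = ...; // data arrays, same length as Vo
int  beta[Vo]    = ...;
dvar int S[Vo];

subject to {
  // (1) Subtour elimination: S[i] = j  =>  S[epsilon[j]] != beta[i],
  // skipping the excepted case epsilon[j] = N+1.
  forall (i in Vo, j in Vo : epsilon[j] != N+1)
    (S[i] == j) => (S[epsilon[j]] != beta[i]);
}
```

Because j is now a plain index, epsilon[j] and beta[i] are ground, so the restriction on if-else conditions no longer applies; the implication itself is the only construct involving decision variables.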
SALT 12.2.2.0: Enhancement to Support soapenc:string Namespace in WSDL Conversion Tool wsdlcvt (Doc ID 2490164.1)

Last updated on MAY 11, 2021

Applies to: Oracle Service Architecture Leveraging Tuxedo (SALT) - Version 12.2.2 and later. Information in this document applies to any platform.

Symptoms

When compiling a WSDL file that includes soapenc:string field types, running "wsdlcvt -i ABC.wsdl -o ABC" does not give any errors, but the resulting .mif file does not contain any fields typed soapenc:string (soapenc is the namespace prefix).

The tested environment is:

> wsadmin -v
INFO: Oracle SALT, Version 12.2.2.0.0, 64-bit, Patch Level 015
INFO: Oracle Tuxedo, Version 12.2.2.0.0, 64-bit, Patch Level 027

Cause
Timeline 03/26/09: - 20:26 Changeset [20509] by Gracefully deal with the case where we resume a stalled task but there was no work left to be done. This fixes a divide by zero error. - 20:08 Ticket #190 (when doing non-recursive Add From Server, G3 should create only the last ...) reopened by - I'll rethink this after beta 1 - 19:51 Ticket #192 (Adding dashboard blocks is broken) closed by - fixed: Fixed in r20508 - 19:51 Changeset [20508] by Rename $block_id to $id to fix ticket #192 - 19:48 Ticket #192 (Adding dashboard blocks is broken) created by Try to add a dashboard block and you get "undeclared variable $id" - 19:43 Ticket #190 (when doing non-recursive Add From Server, G3 should create only the last ...) closed by - wontfix: Its non trivial to make that type of processing. If you don't want the whole tree then create another authorized source directory. - 19:06 Ticket #191 (Use EXIF data to populate photo caption) created by If you add a photo to the gallery that contains EXIF and IPTC data some of this data should be used for generating the photo caption. For example, if you click on a photo that has been added and choose the 'View more information' button the window that pops up has fields for keywords and captions (it is very nice that the EXIF module parses these out even though they are IPTC fields). It would be nice to be able to use the caption from the IPTC data to automatically populate the caption field for the photo (as the keywords already do as they automatically populate the photo's keyword data). - 17:49 Ticket #190 (when doing non-recursive Add From Server, G3 should create only the last ...) created by In G3, configure /home/user as the directory on the server where it should look to get images from. Now go ahead and create an album called "party". Enter the album, and import pictures to it from /home/user/some/random/things/here Expected behavior: those pictures are uploaded to the album "party". 
Actual behavior: within "party", a new album called "some" is created. Within it, another album called "random" is created. Within it, another album called "things" is created. Within it, another album called "here" is created. Within "here" the pictures are uploaded. I understand the reason for creating albums within albums: if you're importing a structure of folders which is already organized, as it were, in "albums", then you want to preserve that. E.g. like this: memories/ |-- 2007 | |-- overseas | |-- parents | `-- wedding |-- 2008 | |-- hiking | `-- school `-- 2009 |-- baby `-- house That is a good idea which G3 should not discard. It makes it easier to import photo repositories which are already organized hierarchically. That's good! But if you're importing from a single directory (non-recursive), the expectation is to upload those pictures to the current album, NOT to create sub-sub-sub-albums within it. Maybe a behavior switch should be exposed here to the users, and let them decide whether to preserve the folder structure, or whether to import everything flat into the current album? OTOH, this will complicate the UI, which is not good. Or maybe let G3 decide: if it's a single directory, with no sub-directories (or with subdirectories that don't contain pictures), then add the pictures to the current album. But if it looks like a recursive thing (directories within directories, some containing pictures) then create sub-albums accordingly. Yeah, I think that's better, let G3 decide. Anyway, I'm just saying, the current behavior (create recursive structure even when importing from a single folder) is unnatural. - 17:42 Changeset [20507] by Update comments to reflect that the results of item::children or item::descendants uses the specified sort order - 07:10 Changeset [20506] by Convert language updates over to task form. It's still very rough, the task only has one step from zero to 100. 
- 06:56 Changeset [20505] by Set csrf into the global theme for convenience. - 06:32 Changeset [20504] by Replace iterators with stack based scanner, which we can serialize into the task context. - 04:44 Changeset [20503] by Normalize code style. - 04:36 Changeset [20502] by Convert the L10n scanner from a library to a helper. In order to make the class static, I had to remove the index cache. I'll restore that and cache the index keys in the task context in a subsequent change. For now, I've put in a @todo to add the caching back in. - 04:09 Changeset [20501] by Guard the calling of the form closing event so its not called if there is no form. - 03:17 Changeset [20500] by - - 03:09 Changeset [20499] by Use pathinfo() instead of substr/strchr/etc to get the file extension. - 03:02 Changeset [20498] by Normalize exception format. - 03:01 Changeset [20497] by Optimize the way we lookup incoming translations. Undo last commit (accidentally committed benchmarking code) - 02:54 Changeset [20496] by Normalize exception format. - 02:53 Changeset [20495] by Normalize exception string format. - 02:50 Changeset [20494] by Fix typo, whitespace. 03/25/09: - 19:59 Ticket #189 (allow user to edit picture when viewing it) created by When viewing an album, I see the edit / delete / etc icons on each thumbnail in the album, which allow me to edit those pictures. That's OK. But if I click on a picture, so therefore I quit the album view and enter the picture view, those icons are not displayed anymore. This is illogical. I want to be able to edit a picture while I'm viewing it. Yes, editing pictures while in album view is faster if you're changing many pictures. But users might yet feel frustrated if the application tells them what to do instead of allowing them to use their own style, even if it's less than optimal. 
- 19:47 Ticket #188 (make Add Photo more prominent) created by I created an album and now I'm staring at the empty album page wondering how am I supposed to put pictures on it. Hmmm, maybe I should go wandering through random places in the app in search for a clue? Oh yeah, it's buried in a menu somewhere. :-( That's not nice. Look at these conditions: - I am a logged in user - I am browsing an album - I have the permissions to upload in that context If all of the above are true, an "Add Photo" button should magically appear in a visible place on the page. The UI must facilitate the user operations that make sense in the current context, not hinder them. If I own the album, adding pictures is something I will likely do, so make it easy for me. OTOH, if I don't have the rights to upload in that context, then of course I am not supposed to see the "Add Photo" button. - 19:39 Ticket #187 (text mode, image mode, and dual text/image mode for buttons) created by When I'm viewing an image, the buttons (show large, album view, add comments, slideshow, etc.) are not self-explanatory. I look at them and I can't figure out what they do. I don't think the icons are bad (they do lean towards the abstract style, but that may be okay in this day and age), I just think not enough visual cues are provided to the user. The tooltips requested in ticket #186 would help, and are definitely needed, but are probably not enough. It would be nice if G3 could be configured so that the buttons can be displayed in one of these three modes: - image only: it's the current mode, buttons are icons - text only: the buttons are textual links, no image - image/text combo: the buttons are icons with a textual link underneath This is a standard usability technique employed by many UIs. Let the owner choose the verbosity (or the terseness) of the app, and whether it's visually or textually oriented. 
It should probably be set only at the site level (not at the album level, it would make the site too inconsistent). - 19:31 Ticket #186 (tooltips for all buttons please) created by When I'm in image view mode, I'm looking at the buttons that are exposed on the interface (show large, album view, add comments, slideshow, etc.) and it's hard to figure out what they do. The interface feels dry and unfriendly because of it. Indeed, it's hard to interact with it unless you're an experienced user. All buttons must have tooltips. When the user hovers the mouse over it, the button must show a brief description as a tooltip. - 19:26 Ticket #185 (allow owner to control the slideshow behavior) created by The slideshow now starts "playing" automatically when it's launched. These should be config items which can be controlled by the owner: - set the slideshow to either start browsing the files automatically, or come up in Pause mode, waiting for user input (so the user can go from file to file manually, click / one file, click / another file) - set the slideshow to cycle back to first picture, or not cycle (and stop at the end) At the very least, these config items should operate at the site level. If possible, also make them per-album config bits and by default inherit the site-level values when the album is created (but it's OK if it can't be done because it makes the app too complicated - G3 must stay simple after all). - 18:15 Ticket #184 (Server Add should sort folder names alphabetically for findability) closed by - fixed: 2nd part of the fix implemented r20493. Sort the children before display. Sort the directories first then by the files - 18:14 Changeset [20493] by Fix for ticket #184. Sort the output children as DirectoryIterator? does not provide a sort order. Separate the directory and files, sort them individually and then merge them together so directories are at the top of the list - 17:21 Changeset [20492] by Fix for ticket #184. 
Set the default album sort order to "Title" - 16:21 Ticket #181 (Add user don't work) closed by - fixed: Fix implemented r20491 - 16:21 Changeset [20491] by Fix for ticket #181. Valiant take note of the change to admin_users.php. I had to remove the check for the locale as it hasn't been added to the form. - 16:20 Ticket #184 (Server Add should sort folder names alphabetically for findability) created by Details: - 16:13 Ticket #183 (Uploader filter should be case-insensitive) created by Details here: - 15:27 Ticket #182 (Unit tests delete album, resizes and thumbs directories) closed by - fixed: Fix implemented r20490 - 15:22 Changeset [20490] by Fix unit tests where the albums, resizes and thumbs directory were being deleted. Fix for ticket #182 - 15:21 Ticket #182 (Unit tests delete album, resizes and thumbs directories) created by Some unit tests don't use the album::create method to generate test albums and as a result key fields are not consistently populated. When the album is deleted as part of the test, the path works out to test/var/albums// and the albums directory then gets removed. - 08:32 Ticket #181 (Add user don't work) created by Hi, "Add a New User" button doesn't work.
Logs said : 2009-03-25 09:17:57 +01:00 --- debug: Global GET, POST and COOKIE data sanitized 2009-03-25 09:17:57 +01:00 --- debug: MySQLi Database Driver Initialized 2009-03-25 09:17:57 +01:00 --- debug: Database Library initialized 2009-03-25 09:17:57 +01:00 --- debug: Session Database Driver Initialized 2009-03-25 09:17:57 +01:00 --- debug: Session Library initialized 2009-03-25 09:17:57 +01:00 --- error: Uncaught PHP Error: Undefined variable: user in file modules/user/helpers/user.php on line 73 Debian Testing php-5.2.6 in module apache-2.2.11 mysql 5.0.51 Stéphane - 04:49 Ticket #63 (content administration view) closed by - fixed: if this was referring to the not working tag admin it should be solved - 04:47 Changeset [20489] by tag changes in the tag admin should now work as expected - 03:41 Ticket #180 (Session drops regularly) created by Apparently there's a bug in the Kohana session driver that causes the session to get dropped regularly when it tries to regenerate the session id. A workaround is to turn off session id regeneration (g3 defaults that to 100 in core/config/session.php-- set it to zero). - 02:13 Changeset [20488] by Remove debug line. - 02:00 Ticket #179 (Don't expose the "name" field to users) created by It's confusing, and most users don't care anyway. Because it affects the url, we should find some way to let advanced users edit it, but it doesn't need to be there for most users. - 01:34 Changeset [20487] by untabify 03/24/09: - 21:09 Changeset [20486] by change version from "3.0 Alpha 3" to "3.0 pre-beta svn" - 21:00 Ticket #178 (Delete the scaffolding controller and views) created by Requires ticket 177 to be completed - 20:58 Ticket #177 (Extract packaging code from the scaffolding and move to a standalone ...) created by Move the packaging from the scaffolding into the gallery3/packaging branch and create a standalone package to create the default installation. 
- 20:49 Ticket #176 (UX improvement: change label to add tag on album view) created by The label in the tag input field on the album view should read "Add tag to album" to reduce confusion - 20:42 Changeset [20485] by Exclude scaffold - 20:21 Ticket #175 (Installer won't allow multiple tables in database) created by If you attempt to install Gallery3 into a database that already has tables (using a different table prefix) the installer prevents this - 20:07 Ticket #174 (Deleting items leaves the file system objects) closed by - fixed - 20:07 Ticket #174 (Deleting items leaves the file system objects) created by When an item or object was deleted the corresponding file system object was not deleted. - 19:47 Changeset [20484] by Add some code to automatically generate the package name from the svn tag - 19:04 Changeset [20483] by Change the item model so it will actually delete all the file system objects when an item is deleted. - 18:03 Changeset [20482] by Fix the problem I created by trying to run the task again after it was completed. - 17:44 Ticket #173 (Change the server add dialog to allow the user to pause the import) closed by - fixed - 17:44 Ticket #173 (Change the server add dialog to allow the user to pause the import) created by - - 17:43 Ticket #172 (Add a form_closing event to the standard ui procesing) closed by - fixed - 17:43 Ticket #172 (Add a form_closing event to the standard ui procesing) created by When the dialog is closed, a form_closing event is triggered on the form contained in the dialog box. - 17:41 Changeset [20481] by Add a pause button to the server add dialog and if it is clicked then the upload is paused. If the dialog is closed and the task is not complete then a warning message is displayed on the album. - 17:30 Changeset [20480] by Add a "form_closing" custom event to the dialog processing. This allows the form in the dialog todo custom processing when the form closes. 
For example, the server_add dialog (next commit) uses this callback to determine if the upload task was cancelled and display a warning message when the page reloads. Usage: $("#gServerAdd form").bind("form_closing", function(target){...}); - 17:00 Ticket #171 (Change .gItem CSS width to proportionally match thumbnail width changes) created by Originally reported in the fourth point of the following comment: Thumbnail default is 200px. Default theme's gContent area width: need to retrieve Calculate new div.gItem width based on proportional difference of set thumbnail size to default. The new width should fit equally in the row, filling it completely. Consider the use of tables, if necessary. IRC discussion log (3/24/09) 10:43 talmdal 1) the user changes the thumbnail size and rebuild the thumbnails 10:43 talmdal 2) the theme takes the the thumbnail size and the expected width of the album grid and calcualtes the number pre row 10:43 talmdal *per 10:43 talmdal 3) It can then determine how much spacing is required to center the grid 10:44 thumb we could take the proportional difference of the new thumbnail width to the default and calculate the new gItem width 10:44 talmdal 4) determines the page size by the number of columns (calcualted) and the number of rows 10:45 talmdal i think that you can have number/page or thumbnail size driving the grid calculation but you can't have both and keep it simple 10:45 thumb talmdal: okay, that makes sense 10:46 talmdal whew :-) 10:47 thumb okay, we'll get much better performance by handling this on the server-side - 16:15 Ticket #170 (Add Theme generator to developer module) created by Create something like: to generate themes for g3 - 15:24 Milestone 3.0 Alpha 3 completed - - 14:59 Ticket #169 (Javascript dependencies - Need a Theme Javascript Api) created by In a lot of the javascript we do a document.location.reload() to refresh the screen to pick up content (i.e. 
status messages and such) I'm uncomfortable with this approach in that, we are making assumptions about how a theme may want to handle it. Doing a full page refresh is a decision on how a theme wants to do its presentation. Another theme, might have created static headers and footers and doing a full page refresh, may ruin or impact the effect that theme is trying to create. - 09:12 Changeset [20479] by First stab at a packaging script - 08:27 Changeset [20478] by Tag Gallery 3.0 Alpha 3 03/23/09: - 14:51 Ticket #168 (Display of search results) closed by - fixed: implemented fix r20477 - 14:51 Changeset [20477] by Fix for ticket #168: Set a default value for extra_attrs on the Item_Model::thumb_tag() - 14:45 Ticket #168 (Display of search results) created by Warning Message An error was detected which prevented the loading of this page. If this problem persists, please contact the website administrator. core/models/item.php [328]: Missing argument 1 for Item_Model::thumb_tag(), called in /var/www/gallery3/modules/search/views/search.html.php on line 32 and defined - 14:24 Ticket #167 (G3 Security Scan) created by I noticed when reading about WordPress?, there is a plugin (wp-security-scan) which looks at various settings both (WordPress? and file system) and makes suggestions based on what it finds. Do we want a similar feature in g3 - 14:21 Ticket #166 (Backup and restore module) created by Need to create a facility to backup the g3 installation. At a minimum it should: 1) dump the database 2) zip the contents of config directory 3) email both the database and the zip contents to the administrator 03/21/09: - 08:06 Ticket #165 (Escape l10n messages for JavaScript / for HTML attributes) created by Localization messages can contain single and double quotes, and other characters that are possibly harmful in JavaScript? context. E.g. consider: <script ..> MSG = "<?= t("Hello world"); ?>"; ... 
</script> The translation can contain single and double quotes (it's unreasonable to entity-fy all t() output, t() strings can contain HTML as well). Thus it's an easy XSS vector, and even for non-malicious translations, it's easy to break our JS code. The other scenario is HTML element attributes. E.g. <a href=......</a>, i.e. using the output of t() as a HTML element attribute value. If the translation contains a single or a double quote, it could break the page. Deliverables: - function escapeForJavaScript($string) {...} which doesn't just escape ', " and \. Google for other characters, which should be escaped in JS context. - function escapeForHtmlAttribute($string) {} which escapes single and double quotes. - Maybe provide an easy API. e.g. I18n::translate() (and thus t()) could return an instance of: class Localized_Message { private $string; function toString() { return $this->string; } function forJs() { return escapeForJavaScript($this->string); } function escaped_quotes() { return escapeForHtmlAttribute($this->string); } } (profile with xdebug if the overhead of object creation / implicit toString() significantly impacts the page load time) - 07:44 Changeset [20476] by Refactor all translation strings that have ambiguous placeholders. E.g. "%link_startClick here%link_end" is now '<a href="%url">Click here</a>'. Note: This isn't always the best solution. E.g. consider "Foo <a href='%url' class='gDialogLink'>bar</a>." Now the translator has to deal with preserving CSS classes too... - 05:15 Ticket #164 (Move Gallery Project credits to Footer text db storage variable) created by marketing vs. stemming the tide of "how do I remove that" questions - 05:04 Ticket #163 (If album is empty, display messages to users and privileged users) created by Guest message: This album is empty User w/ add photo rights: This album is empty. 
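The escaping deliverables described in ticket #165 are PHP functions; the sketch below illustrates the same semantics in Python for clarity (function names, the exact character set escaped, and the `<`/`>` handling are illustrative choices, not the ticket's final specification):

```python
def escape_for_javascript(s):
    """Escape a string for embedding inside a JS string literal.

    Beyond backslash and both quote styles, newlines are encoded and
    angle brackets are hex-escaped so a translation containing
    "</script>" cannot break out of the surrounding script tag.
    """
    out = []
    for ch in s:
        if ch in "\\'\"":
            out.append("\\" + ch)
        elif ch == "\n":
            out.append("\\n")
        elif ch == "\r":
            out.append("\\r")
        elif ch == "<":
            out.append("\\x3c")
        elif ch == ">":
            out.append("\\x3e")
        else:
            out.append(ch)
    return "".join(out)


def escape_for_html_attribute(s):
    """Escape ampersands and both quote styles for attribute values."""
    return (s.replace("&", "&amp;")
             .replace('"', "&quot;")
             .replace("'", "&#39;"))
```

With this, a translated string dropped into `MSG = "<?= ... ?>"` or into an attribute like `title='...'` cannot terminate the literal early, which addresses both scenarios the ticket describes.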
<a href="">Add some photos</a> 03/20/09: - 23:29 Ticket #162 (Provide upgrade path) created by Provide an upgrade path from one major release to the next. E.g. from beta 1 to beta 2, or from beta 1 to Gallery 3.0. Also figure out whether modules should have dedicated "major releases" or if there's just a major Gallery release. Non-goals: - Don't necessarily provide an upgrade path from / to every SVN version. - 20:33 Changeset [20475] by Don't divide by zero in thumb_tag if there's no thumbnail - 20:31 Changeset [20474] by Add title string to this page. Doesn't show yet because its in a gPanel - 18:25 Changeset [20473] by background image for the "select photos ..." button in the uploader - 18:19 Changeset [20472] by This change checks that the active theme is available and if its not, reverts to the default theme. - 17:22 Changeset [20471] by minor UI and CSS improvements (the styling of the flash button is a not themable solution and I put the CSS in the file for now) - 17:20 Changeset [20470] by rollback of r20469... see trac #161 - 16:15 Ticket #161 (Revisit Gallery3 theme) created by 1) I copied the default theme to theme/blackonblack 2) I downloaded a new themeroller file based on vader (I include UI Core, Draggable, Droppable, Resizable, Selectable, Dialog, Tabs, Datepicker, Progressbar, Effects Core, Effect Highlight and Effect Slide). 3) I put the compressed css file and images in theme/blackonblack/lib/themeroller (Files are found correctly) 4) Displayed the theme and there were very few visible changes. Some of the dialog boxes had partial effects. My thinking is that if we want to make it easy to customize the theme, we need to get as much color and style into the themeroller style as possible - 14:59 Changeset [20469] by Rather than moving the themeroller and all of its associated files into each theme. I chose to create methods Theme_View::file($path) and Admin_View::file($path). 
These methods check for a theme override file in the theme and return a link to it if it exists. So to override the themeroller files, just create a lib/themeroller in the theme and the files will be picked up.
- 08:25 Changeset [20468] by Oops, we need UNIX_TIMESTAMP() instead of NOW()
- 08:22 Changeset [20467] by Do some data normalization so that the install files will have stable ordering and known values. This way subsequent packaging runs won't have any differences unless there's a real data change.
- 08:21 Changeset [20466] by Set our version to 3.0 Alpha 3 and add 'logs' as a dir we create at install time
- 04:39 Changeset [20465] by Restore $extra_attrs in img tags. Roll back to using .gThumbnail in quick pane.

03/19/09:
- 19:45 Ticket #160 (Add basic BiDi support (Arabic, Hebrew, Persian)) created by Set the text flow directionality for the whole HTML text based on the locale. Set it to RTL for Arabic, Hebrew and Persian. Handling multiple texts with different directionalities on the same page (e.g. English comments on a page that is generally in Arabic) is outside of this task
- 16:01 Changeset [20464] by Remove YUI grids hd and ft ids; we don't need them and they're cluttering our HTML.
- 13:17 Ticket #113 (Using the scaffolding to reset the installation fails) closed by - wontfix: The code to reinstall gallery3 has been removed so this is no longer valid
- 09:10 Changeset [20463] by -
- 09:07 Changeset [20462] by -
- 08:43 Changeset [20461] by Update to reflect resolved tickets
- 08:38 Changeset [20460] by Normalize whitespace inconsistencies with upstream code that probably snuck in during various edits. These files are now the same as the upstream code.
- 03:17 Ticket #159 (Add translate.google.com suggestion in l10n client) created by In the l10n client, add a "translate with Google" button which populates the translation field with the translation result from translate.google.com.
See: AJAX API for translate.google.com: Note: Need to ensure that placeholders are preserved in the translate.google.com roundtrip.
- 02:37 Ticket #156 (Packager fails) closed by - fixed: Fixed in r20459
- 02:35 Changeset [20459] by Rejigger the way we do reinstalls while Kohana is running. core_installer::install() now takes an $initial_install param that allows us to enforce that we're doing a clean install. Use this in both the scaffolding and the unit test code. Greatly simplify the scaffolding uninstall/reinstall code.
- 01:47 Changeset [20458] by Fix syntax errors.
- 01:41 Changeset [20457] by -
- 01:36 Changeset [20456] by -
- 01:34 Changeset [20455] by -
- 01:33 Ticket #153 (Quick pane item delete doesn't actually delete the item) closed by - fixed: My mistake... this bug is fixed. Closing it again.
- 01:32 Ticket #152 (Quick pane has no visual effects) closed by - fixed: My mistake... this bug is fixed. Closing it again.
- 01:27 Ticket #158 ("Select photos..." button in "Add photos" dialog is unthemed) created by The "Select photos..." button in the "Add photos" dialog is not themed; it even lacks the style of a normal button or link, it just appears as normal text. At least it should be themed as a normal button. Ideally, it should be the prominent action of the dialog, making it intuitive to click that button to add photos.
- 00:19 Ticket #153 (Quick pane item delete doesn't actually delete the item) reopened by - This is not fixed. I get the loading dialog, but the page does not reload after it's done to show that the image is gone.
- 00:18 Ticket #152 (Quick pane has no visual effects) reopened by - Reopening. I now see the loading dialog, but I do not see the rotated image after the operation is complete.
- 00:11 Ticket #157 (Add support for locale variants (e.g. German Du vs. Sie)) created by Some languages have popular variations. E.g. in German, there are two different forms to address someone.
The English phrase "You've got mail" can be translated with the informal "Du" as "Du hast Post" or the formal "Sie" as "Sie haben Post." The formal vs. informal variants in German are a different issue than having alternative translations for a specific message: with alternatives, there's usually consensus on a single translation as the preferred one when there are many to choose from (barring "edit wars"), but with variants, maybe 30% of all users really want the informal variant and 70% want the formal variant. And most translations can be clearly categorized into one of the variants. Implementation: A locale identifier in G3 currently consists of a language tag and an optional territory (=country) tag, e.g. "de" or "de_DE" for German, or German in Germany. To support variants, variant subtags need to be added to the locale identifier. Example: de_formal, de_informal, de_DE_informal. Note: the variant subtag should be 5-8 letters long, according to BCP 47. See variant subtags in BCP 47: For now, let's start by replacing the "de" locale with de_formal and de_informal. de should continue to have the default country DE. If there are later more de locales (for different territories), we can add de_DE_formal, de_AT_formal, etc. When requesting translations for de_AT_formal, it should fetch translations for de_AT_formal, de_AT_informal, de.*, default_locale.
And when building messages_de_AT_formal.php, the following fallbacks should be applied: de_AT_formal -> de_formal -> de_<any-territory>_formal -> de_AT_<any-variant> -> de_<any-variant> -> de_<any-territory>_<any-variant> -> default_locale

03/18/09:
- 23:24 Ticket #156 (Packager fails) created by 1) I did a reset of my g3 installation 2) Set up modules to include 3) Went to mysql and listed all the tables, things are fine 4) Went to the scaffolding 5) Checked mysql and tables were still defined 6) Clicked on the packaging link 7) Got errors saying Table g3_groups does not exist and gallery3.g3_modules doesn't exist 8) Did show tables in mysql again and there were no tables in the database.
- 22:43 Ticket #155 (Delete of Album fails with "No such file or directory") closed by - fixed: Fixed r20454
- 22:42 Changeset [20454] by Fix for ticket #155. Delete the item record before unlinking the files.
- 21:37 Ticket #155 (Delete of Album fails with "No such file or directory") created by Haven't looked at it yet, but it looks like it can't find the child item to unlink so the delete fails.
- 19:39 Ticket #153 (Quick pane item delete doesn't actually delete the item) closed by - fixed: Fix implemented r20453
- 19:39 Changeset [20453] by Fix for ticket #153. The sort column was not initialized for movies or photos. Turns out that when you go to delete, ORM tries to check for children and apply the sort order.
- 18:56 Ticket #152 (Quick pane has no visual effects) closed by - fixed: Fixed with 20452
- 18:55 Changeset [20452] by Fix for ticket #152. Somewhere along the line we stopped using gThumbnail as a class.
Changed the selector to select the image from ".gThumbnail" to ".Quick img"
- 17:06 Ticket #154 (admin/languages doesn't prefix tables) closed by - fixed: Hopefully we got it with r20451
- 17:02 Changeset [20451] by This is the real fix to ticket #154
- 16:45 Changeset [20450] by Fix failed unit test: private methods are required to begin with an underscore (_)
- 16:36 Changeset [20449] by Remove trailing ?>
- 16:33 Changeset [20448] by Fix for ticket #154: Remove the raw count and use the ORM wrapper.
- 16:15 Ticket #154 (admin/languages doesn't prefix tables) created by To repro: 1) Set your table prefix to g3_ 2) Browse to admin/languages. There was an SQL error: Table 'g3dev.outgoing_translations' doesn't exist - SELECT COUNT(*) AS C FROM outgoing_translations. Suggest ORM::factory("outgoing_translations")->count_all()
- 15:38 Ticket #153 (Quick pane item delete doesn't actually delete the item) created by The thumbnail and the source image seem to go away, but the item is still there.
- 15:37 Ticket #152 (Quick pane has no visual effects) created by All the quick pane actions are lacking visual effects. None of them show the loading image, and the rotation ones don't replace the image with the rotated version. To reproduce, upload an image, hover over it and click one of the quick icons.
- 06:15 Changeset [20447] by Added rotate cw and ccw icons to themeroller theme and css to default theme, applied to quick pane rotate buttons. Hope that jQuery UI includes rotate icons eventually so we don't have to maintain this.
- 05:16 Changeset [20446] by Stop header height from collapsing when there's no breadcrumb present, as is the case with tag albums.
Thanks to gadulia for reporting:
- 03:36 Changeset [20445] by Remove semi-colons from single sql statements and correct another instance of {items` which won't get prefixed properly
- 03:35 Changeset [20444] by Remove back ticks from sql
- 02:49 Ticket #143 ("Additional options" quick link doesn't do anything) closed by - fixed: Implemented fix r20443
- 02:49 Changeset [20443] by -
- 02:41 Ticket #142 (t2() uses singular when the count is zero) closed by - fixed: Fixed in r20442.
- 02:40 Changeset [20442] by Fix for ticket #142: Choose plural form "other" for count == 0 (unless the locale has a specific plural form for zero)
- 02:25 Changeset [20441] by Make sure that the SPL library is installed
- 02:23 Ticket #142 (t2() uses singular when the count is zero) reopened by -
- 02:14 Changeset [20440] by Ticket 1156 is resolved
- 02:12 Changeset [20439] by Resolve deviation from the kohana code left around by r20297 and r20301 (which was an unclean rollback of r20297) by just rolling them both back: $ svn merge -c-20301 kohana/helpers/form.php $ svn merge -c-20297 kohana/helpers/form.php
- 02:00 Ticket #142 (t2() uses singular when the count is zero) closed by - fixed: Fix implemented r20438
- 01:58 Changeset [20438] by Fix for ticket #142. Valiant: u might want to check the implications of this.
- 01:39 Ticket #151 (Use default locale for each language to share translations) created by Current state: Each locale has a complete language and region tag (e.g. de_DE for German in Germany). Problem: Translations for en_US are not shared with en_UK. Each language should have a default region (e.g. US for en_US) such that "en" implies "en_US". When requesting en_UK translations, it should expand this to requesting [en_UK, en] and then build the message_en_UK.php file from en_UK, en and the default language (as fallback, if default isn't en or en_UK already).
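The plural-form rule behind the ticket #142 fix (r20442) above can be sketched as follows. This is an illustrative Python sketch, not the project's PHP code; the function names, the dict-based message table and the simplified one/other/zero category logic are assumptions:

```python
# Sketch of the zero-count plural rule from r20442: for count == 0, use the
# plural ("other") form unless the locale defines a dedicated "zero" form.
# Real CLDR plural logic has more categories; this models English-like locales.

def plural_form(count, locale_forms=("one", "other")):
    """Pick a plural category for the given count."""
    if count == 0 and "zero" in locale_forms:
        return "zero"
    if count == 1:
        return "one"
    return "other"  # includes count == 0 when no "zero" form exists

def t2(messages, count):
    """messages: dict mapping plural category -> template; %count is the placeholder."""
    form = plural_form(count, tuple(messages))
    return messages[form].replace("%count", str(count))
```

With this rule, the admin/tags text from the bug report renders as "There are 0 tags" instead of the singular "there is one tag" when the gallery has no tags.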
- 01:32 Ticket #150 (Compile message bundle for installed languages) created by Current state: translate() calls either result in a translation or fall back to the root locale (= the source message in en_US). Problem: Before falling back from locale xx_YY to root, it should traverse the locale tree, i.e. try xx_YY, then xx, (then maybe xx_*), then default_locale, then root. Proposed solution:
  - Generate a messages_xx_YY.php file for each installed locale xx_YY.
  - When the language settings are changed or translations are fetched, regenerate that messages_xx_YY.php file.
  - Change the I18n library to load messages_xx_YY.php instead of querying the DB.
- 01:27 Ticket #149 (l10n client / server - Manage resubmissions) created by Current state:
  - l10n_client submits all messages from the outgoing_translations table.
  - l10n_server adds all submitted messages to the translations table, without checking for duplicates / whether it has seen the exact same translation before.
  Problems:
  - l10n_server grows and grows, even when the same translator submits the same translations over and over again.
  Challenges:
  - Ensure that a translator NEVER loses his translations. E.g. consider that outgoing_translations is truncated after successful submission. Later, someone else submits different translations for the same messages. If there's no further logic, the translator's incoming_translations table would get overwritten with new translations, possibly with spam / undesired translations / translations that don't match the preferences of this translator.
  - There could be edit wars (changing a translation back and forth).
  Proposed solution:
  - On submission, update the base_rev in outgoing_translations with the rev returned by the server.
  - Include the base_rev in the submission
  - The server should ignore all submissions where sid, translation and tid=base_rev match
- 01:24 Changeset [20437] by Forgot to remove a back tick
- 01:20 Changeset [20436] by Couple of sql statements that had incorrect prefix handling or no prefix handling.
- 01:10 Ticket #55 (add localization client) closed by - fixed: Created ticket 148 to keep track of plural form translations. Closing this ticket since the basic implementation is done.
- 01:09 Ticket #148 (Add l10n_client UI / functionality to translate plural forms.) created by Plural forms are currently ignored by the l10n_client. Consequently, it's impossible to translate messages that have plural forms at this point. Find a way to handle plural forms as well (mostly JavaScript? / controller code) and change the UI appropriately.
- 01:07 Ticket #147 (Move l10n_client out of core) created by l10n_client has core and non-core components. Everything that is related to translating / submitting translations should be moved out of core. In core, we should keep:
  - admin_languages UI: select installed languages, select default language, fetch updates. Note: fetch updates currently depends on helpers/l10n_client.php.
  In a new l10n_client module, we should have:
  - js/l10n_client.js, css/l10n_client.css, controller/l10n_client.php
  - parts of helpers/l10n_client.php and controllers/admin_language.php which are for API key / submission of translations
  I guess the admin_language view could have a predefined area where it shows the l10n_client module forms if it's installed.
- 01:02 Ticket #76 (Add admin view/controller to submit translations) closed by - fixed: Added in r20435.
- 01:00 Ticket #75 (Admin view/controller to download translations) closed by - fixed: Added in r20435.
- 00:59 Ticket #59 (add translation server) closed by - fixed: Update: It's up and running now on GMC.
It's based on 2 drupal modules:
  - l10n_community (part of the drupal-contrib/modules/l10n_server/ module-package = a module with modules in subfolders; we only use the l10n_community module)
  - gallery_l10n_server module, our "RPC" server running inside drupal, serving as an adapter between gallery 3 clients and the l10n_community module / DB tables.
  l10n_server lives in vendor-contrib; gallery_l10n_server lives in trunk
- 00:53 Changeset [20435] by Functional l10n_client / server interaction:
  - Get / verify API key from l10n server
  - Submit translations
  - Fetch translations / updates
  Reference: Tasks: 75, 76, 55. TODO: Move out of core (and a series of other tasks).
- 00:01 Changeset [20434] by Corrections based on feedback

03/17/09:
- 22:39 Ticket #101 (The l10n scanner needs to extract module descriptions) closed by - fixed: implemented in r20433
- 22:38 Changeset [20433] by -
- 20:59 Changeset [20432] by Remove the in-place tag editing code from the default theme. It should be implemented in the tags module for now, and then possibly generalized out to lib later on.
- 20:57 Changeset [20431] by Don't allow empty tag names
- 20:22 Ticket #146 (Implement renaming tags in admin/tags) created by It hasn't been implemented since jhilden's ui overhaul. The code is there in the controller, but we need to make the in-place editing work and be wholly in the tags module (not in the theme)
- 20:19 Ticket #73 (Replace SimpleUploader with swfupload) closed by - fixed: Replaced. It's not complete, but it's functional.
- 20:17 Ticket #89 (Add Flash plugin detection for Add a New Photo, all other Flash uses) closed by - fixed
- 20:16 Ticket #123 (Simpleuploader reads 100% when not complete) closed by - fixed: Replaced SimpleUploader with swfupload
- 18:56 Changeset [20430] by This resolves ticket 1156: "Table prefix gets appended to column name". All tests pass.
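The locale-tree traversal proposed in tickets #150 and #151 above (try xx_YY, then xx, then default_locale, then root) can be sketched as a small helper. This is an illustrative Python sketch of the lookup order only; the real implementation is PHP, and the default locale and the wildcard xx_* step are omitted here as assumptions:

```python
# Sketch of the ticket #150 fallback order: full locale -> bare language ->
# site default locale -> root (the en_US source strings).

def fallback_chain(locale, default_locale="en_US"):
    """Return the lookup order for a locale, most specific first."""
    chain = [locale]
    lang = locale.split("_")[0]
    if lang != locale:
        chain.append(lang)              # de_AT -> de
    if default_locale not in chain:
        chain.append(default_locale)    # site-wide default
    chain.append("root")                # untranslated source messages
    return chain
```

A bundle compiler following ticket #150 would walk this chain when building messages_xx_YY.php, taking the first translation found for each message id.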
- 18:43 Changeset [20429] by -
- 18:41 Changeset [20428] by -
- 18:33 Changeset [20427] by Fix missing preambles
- 18:32 Changeset [20426] by Remove windows line endings
- 18:31 Changeset [20425] by Add a modified version which strips ^M's
- 18:17 Changeset [20424] by Treat calls to install() with TEST_MODE set to be initial installs. At least for now.
- 18:16 Changeset [20423] by Fix correctness issue if there are no tables (list_tables will return null). Clean out the module caches directly now that the module loading robustness code is gone.
- 18:09 Changeset [20422] by Fix minor correctness issues
- 18:07 Ticket #81 (Modal dialog shadow/mask doesn't resize when form height increases on form ...) closed by - wontfix: The jQuery UI team has removed the shadow due to this issue. I believe the change is in HEAD and is noted as fixed for their upcoming 1.8 release.
- 18:04 Ticket #88 (Unable to login or cancel login dialog in IE8 beta) closed by - invalid: This appears to be an issue with my IE security settings, not IE itself.
- 17:19 Changeset [20421] by Fix the locale field in the change user settings form
- 15:49 Changeset [20420] by Fix edit user form handler
- 06:05 Changeset [20419] by Even out the padding for #gAddPhotosQueue .box
- 06:00 Changeset [20418] by Update paths
- 06:00 Changeset [20417] by Updated lib/swfupload to SWFUpload v2.2.0 Beta 5
- 05:57 Changeset [20416] by Updated to SWFUpload v2.2.0 Beta 5
- 05:57 Changeset [20415] by Style breadcrumb in photo upload dialog
- 05:53 Changeset [20414] by Move everything into the 'SWFUpload v2.2.0 Core' package for convenience
- 05:50 Changeset [20413] by SWFUpload v2.2.0 Beta 2
- 05:41 Changeset [20412] by rss/updates doesn't have an item. Clean up some typos here.
- 05:40 Changeset [20411] by filesize() dies if the file doesn't exist, which can happen in the case that a gallery is slightly corrupt. In that case, just ignore the error.
- 05:28 Changeset [20410] by Make the gAddPhotosCanvas take up the entire dialog, for now.
- 05:25 Changeset [20409] by Initialize $tags properly
- 05:24 Changeset [20408] by Fix typo: $max_page -> $max_pages
- 05:20 Changeset [20407] by Switch from using SimpleUploader to using swfUpload as our flash based uploader. This is modeled on but is not yet complete. Notes: * Changed #gProgressBar to .gProgressBar to support multiple progress bars on the same page * Added a bunch of CSS to the "needs a home" section in themes/default/css/screen.css
- 05:11 Changeset [20406] by Simplify gError, gWarning, gInfo, gSuccess selectors to allow them to be used within more elements. Updated gError styles in forms.
- 05:07 Changeset [20405] by Remove mptt warning message hack
- 00:37 Changeset [20404] by Fix typo: tag_block -> tag_theme. Overlooked when I renamed this class.

03/16/09:
- 11:33 Changeset [20403] by Oops, I used the wrong resize variables in my last change.
- 11:22 Ticket #141 (Dashboard not working...) closed by - fixed: Fixed. installer.sql was missing the blocks, they're back now.
- 11:18 Ticket #145 (Search should search file name substrings) created by Add a photo called "IMG_1234.jpg" and you should be able to find it by searching for "1234".
- 11:17 Changeset [20402] by Set $item and $tag in the Theme_View so that calls like $theme->item() which fall through to calling &View::get() have an lvalue to return, else you can't return them by reference. Also, don't show sidebar blocks for pages that don't have an item so that the rss and tag modules don't break the search page.
- 09:16 Ticket #144 (Deleting a group doesn't show up in the popup window) created by To reproduce: 1) Go to admin/users 2) Create a new group (click delete now, it'll show up in a popup, hit cancel) 3) Drag a user into the group 4) Click delete. You'll get a new page. When we drag the user into the group, we're getting a new group container and its gDialogLink's aren't hooked up properly.
- 09:13 Ticket #143 ("Additional options" quick link doesn't do anything) created by -
- 09:11 Changeset [20401] by Get rid of $hidden; it was never defined
- 09:08 Changeset [20400] by Switch the locale::$locales data structure to be an array instead of a stdClass because we're not allowed to asort() stdClass objects in PHP 5.2.6.
- 09:01 Changeset [20399] by Fix indentation
- 08:59 Changeset [20398] by Fix bug, $active -> $site.
- 08:57 Ticket #142 (t2() uses singular when the count is zero) created by To reproduce, do a complete reinstall then go to admin/tags. The text there is "there is one tag" even though there are no tags. The t2 call for that is in admin_tags.html.php
- 08:55 Changeset [20397] by Remove unused orig_public_key from the form; it wasn't actually doing anything (and was causing an error).
- 08:53 Changeset [20396] by Initialize some variables
- 08:49 Changeset [20395] by Revive the install() and uninstall() functions in Scaffold_Controller because we need those to make a package. Fix the packaging code to ignore whatever prefix is being used by the developer who is doing the packaging. Update the install.sql file (there were a variety of small inconsistencies, probably from hand-editing. Don't hand-edit this file!)
- 08:48 Changeset [20394] by Set the sort_column and sort_order for the root album
- 08:34 Changeset [20393] by Oops, fix a typo.
- 08:29 Changeset [20392] by Move security into the constructor. Protecting the index() call is easily bypassed.
- 08:25 Changeset [20391] by Use a query to get the database version. Newer versions of PHP bomb if you're using mysqli, and it asks for a connection which we can't easily get from Kohana.
- 08:05 Changeset [20390] by Proxy the url through _auth() to user::get_login_form()
- 08:05 Changeset [20389] by Get rid of the extra robust code we had in here to make the scaffolding work when the Gallery wasn't installed yet. Now we force users through the installer.
- 08:02 Changeset [20388] by Get rid of obsolete/undefined $block_adder
- 08:01 Changeset [20387] by Initialize $result in get_html()
- 07:59 Changeset [20386] by Provide an empty sidebar by default
- 07:45 Changeset [20385] by Don't count on the uri having 3 components; that breaks on newer versions of PHP.
- 07:40 Changeset [20384] by Illegal use of $this in static function site(). Replace with $theme.
- 07:17 Changeset [20383] by Remove unnecessary get() function
- 07:16 Changeset [20382] by Clean up style attr
- 07:07 Changeset [20381] by Remove unnecessary slash from url::site() arg.
- 05:52 Ticket #87 (IE 6, 7, 8 issues leading up to G3 alpha 2 release) closed by - fixed: Should've been closed w/ alpha 2 release.
- 05:50 Ticket #97 (Not all buttons in dialogs are styled properly) closed by - fixed: r20380
- 05:50 Changeset [20380] by Ticket #97. Applied button css where missing. Minor form css improvements.
- 04:54 Changeset [20379] by Combined "Logged in as..." and "Modify Profile" to be just "Logged in as FullName"
- 04:42 Changeset [20378] by Missed this in the last commit
- 04:37 Ticket #122 (Quick menu icons don't display in IE6) closed by - fixed
- 04:33 Changeset [20377] by Clean up the login, maintenance login and required-top-level-login code. We now have two clear and separate login approaches: login/ajax and login/html. Choose the one that's appropriate. Totally simplified the maintenance page to be separate from the theme and dead simple, using the login/html approach there. Totally simplified the top level login (login_page.html.php) to just be a login page, not the rest of the chrome on the page, using the login/ajax approach there. Don't use access::required in albums and then catch the exception; instead use access::can and check the return code. Improve the text for maintenance mode.
- 04:30 Changeset [20376] by Stop loading jeditable -- we don't use it anymore
- 04:15 Ticket #53 (Replace default theme icon and status message styles with jQuery UI ...) closed by - wontfix: On second thought, jQuery UI's message styles aren't complete. They don't include success or warning CSS classes, just error and highlight. If they change in the future, will consider an update.
- 03:50 Changeset [20375] by Thin down the scaffolding code so that all that is there is the test data creation and the packaging code. The rest of the functionality is either no longer required, or moved to the developer module (MPTT Tree). Also provide checking for the active user to be an admin.
- 02:58 Changeset [20374] by Add a new css selector for highlighting a warning on the MPTT graphical display screen in the developer module.

03/15/09:
- 23:24 Changeset [20373] by Ignore the developer directory
- 22:54 Changeset [20372] by Remove automatic load of atom module.
- 22:45 Changeset [20371] by Move the start/stop translating menu item to the admin menu
- 22:23 Changeset [20370] by Move atom, developer, polar_rose and gmaps modules into gallery-contrib
- 22:05 Changeset [20369] by Remove profiling and debugging from the scaffold info screen.
- 20:35 Changeset [20368] by Move profiling and debugging out of the scaffolding and into the developer module.
- 19:27 Changeset [20367] by Updates to the developer tool create module. It now creates a fully functional sidebar block, a dialog pop up on the option menu for albums or photos, a dashboard block and an admin screen.
- 19:21 Ticket #141 (Dashboard not working...) created by In testing module creation I realized that 1) there is no way to add a new dashboard block, i.e. there is no menu item that calls admin/dashboard/add_block 2) If you drag the Project news from the sidebar to the center, then try to drag it back to the sidebar, you get a page not found on admin/dashboard/reorder?csrf=...
- 05:15 Changeset [20366] by Refactored the developer module. When a new module is generated, a skeleton administration page is generated as well. @todo: still generate a skeleton block and a skeleton dialog.
- 02:00 Changeset [20365] by Remove the word 'album' from phpdoc.
- 01:17 Changeset [20364] by Move references to "album" out of ORM_MPTT since it's supposed to be implementation agnostic.

03/14/09:
- 21:24 Changeset [20363] by Don't use html::image because it forces absolute urls, which we don't want.
- 18:43 Changeset [20362] by Style fixes
- 18:42 Changeset [20361] by Use relative urls for the feed links.
- 18:31 Changeset [20360] by Default thumb/resizes to relative urls.
- 02:21 Changeset [20359] by * Remove debug code
- 02:16 Changeset [20358] by * Allow module names with spaces * Remove debug code
- 01:57 Changeset [20357] by Invert the check for https vs http.

03/13/09:
- 22:40 Ticket #140 (Developer Tools) created by Create automated generation of new modules. r20356 provides the functionality to generate the module skeleton. @todo: clone a module or theme and generate basic functionality controllers, screen and menus
- 22:15 Changeset [20356] by The first incarnation of the developer tools. This allows the user to enter a module name and a description, pick the callbacks and/or events they want to support, and generate the basic module skeleton with one click. @todo: clone a module, clone a theme, generate skeleton controller, view
- 04:09 Changeset [20355] by Make the exif_key value size 1k

03/12/09:
- 23:04 Ticket #61 (README) closed by - wontfix: I'm going to close this, as we have a readme and no one seems to know what it's about
- 21:55 Ticket #139 (Update ui for deletion of items in lists) created by Use the Netflix deletion UX model in list deletion environments. Essentially this means that rather than removing a deleted item entirely, show it as grayed out with an "undo" button. Delete and undo toggle in this scenario.
This avoids the need for any sort of confirmation but allows for quick undos when a user deletes something inadvertently. Album view and comment admin are the primary UIs which need this. Might be applicable in item comment lists too. See
- 19:07 Ticket #138 (Standardize the javascript to the jquery standard.) created by 1) Use something like what is described here: 2) Create a specific gallery3 namespace for gallery3 functionality. valiant proposed the following standard: the top level namespace would be "gallery" or maybe (gallery3|g3). Core would put specific javascript in that namespace. Modules or themes would use gallery.<module-name|theme-name>.
- 18:16 Changeset [20354] by Remove event handlers that are no longer called (start_batch and end_batch)
- 16:06 Changeset [20353] by Move the setting of the page title into the controller that is creating the page. Provide for a default page title if none is set. This allows fewer changes to page.html.php as different modules want to change the page title.
- 15:40 Changeset [20352] by Rename tag.html.php to dynamic.html.php as part of ticket #115 creating Dynamic Albums. This name change reflects the usage better and allows multiple dynamic albums (including tags) to use the same page template.
- 04:46 Changeset [20351] by Lighten color of user name in login menu
- 04:24 Ticket #137 (Flash Mass Import requires lower-case JPG extension) created by A folder of .JPG files shows to be empty. Most cameras create images ending in .JPG when opened in Linux. Clicking "Chose Photos to Add" shows a file selector with a filter requiring lowercase file extensions. No "All" filter is available. Workaround: A: Manually rename each image B: Use obscure command-line compound statements to batch rename. Beginner Linux users have difficulty identifying the problem cause.
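The fix for the extension filter complained about in ticket #137 above amounts to case-insensitive matching. A minimal Python sketch of that rule (the function name and default extension list are illustrative assumptions; the real filter lives in the Flash uploader configuration):

```python
# Sketch of a case-insensitive extension filter (ticket #137): accept .JPG
# exactly like .jpg by normalizing the extension before comparison.

def matches_extension(filename, extensions=("jpg", "jpeg", "png", "gif")):
    """True if the file's extension is in the allowed set, ignoring case."""
    return filename.lower().rsplit(".", 1)[-1] in extensions
```

This avoids both suggested workarounds (manually renaming files or batch-renaming on the command line), since the uppercase extensions produced by most cameras are simply accepted.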
- 04:23 Changeset [20350] by Minor after install message edits, added updated Gallery logo alt description
- 04:00 Changeset [20349] by Handle no prefix being set when building key/value table map
- 03:54 Changeset [20348] by Strip down the login page (not sure if this is what bharat had in mind)
- 03:24 Ticket #136 (All {values} in SQL get prefixed, even if they are not table names) closed by - fixed: I think we finally got this straightened out. Basically build a key/value table that maps a table name string ("{table name}") to the prefixed value ("prefix_table_name") and then only do string replacement on these key/value pairs. r20347
- 03:20 Changeset [20347] by Attempt to reduce the chance of replacing text in sql statements that is not a table name (but contained in braces) with the database prefix by building and maintaining a cache of database tables and prefixes.

03/11/09:
- 21:07 Changeset [20346] by Bag the header("Location:", ...); exit() and replace with url::site(url::abs_file(...)). Create a login_page.html to be used when there is no guest access to the root album. It doesn't have a sidebar nor breadcrumb.
- 20:55 Changeset [20345] by Bag the header("Location:", ...); exit() and replace with url::site(url::abs_file(...))
- 20:31 Changeset [20344] by Tried various combinations of url::redirect(...) or url::redirect(url::file(...)). The only thing that works is this way.
- 15:01 Ticket #119 (Show the logged in user's name somewhere on the page) closed by - fixed: Fixed r20343
- 15:00 Changeset [20343] by Fix ticket #119. Display the full name of the user in the same block as the Modify profile and logout links.
- 14:39 Ticket #69 (Better error message when accessing /gallery3 without a database being set ...) closed by - fixed: Fixed r20342. Redirect to the installer
- 14:38 Changeset [20342] by Fix ticket #69. Rather than giving a better error message when the gallery3 database is not set up, just redirect to the installer.
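The ticket #136 / r20347 approach above — only replace `{...}` tokens that are known table names, instead of prefixing every braced value — can be sketched as follows. Python is used for illustration (the real code is PHP inside the Kohana database layer), and the table list and prefix are assumptions:

```python
# Sketch of the r20347 fix: build an explicit map from known table names to
# their prefixed form, then replace only those keys. Braced strings that are
# not table names are left untouched.

def build_prefix_map(tables, prefix):
    """Map '{table}' -> 'prefix_table' for each known table."""
    return {"{%s}" % t: prefix + t for t in tables}

def add_prefixes(sql, prefix_map):
    """Replace only known '{table}' tokens; other {values} stay as-is."""
    for braced, prefixed in prefix_map.items():
        sql = sql.replace(braced, prefixed)
    return sql
```

The key difference from a blanket regex is that an unknown token such as `{value}` in a query string survives unchanged, which is exactly the bug the ticket describes.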
- 13:59 Ticket #118 (Show a login page if everybody user can't access albums/1) closed by - fixed: Fixed r20341
- 13:58 Changeset [20341] by Force a login if everybody does not have access to the root item. Ticket #118.
- 03:29 Changeset [20340] by $task_definitions -> $task_def
- 03:14 Changeset [20339] by Refine the task api by removing the optional parameters on the task::create method call
- 02:39 Changeset [20338] by Get rid of stray 'type' argument to task::get_definitions()
- 00:41 Changeset [20337] by Fix the test failures. If albums are created manually instead of calling album::create, then the default sort column needs to be set to id.
- 00:27 Changeset [20336] by Fix the Var_Test by making sure that the cache is cleared or updated when a variable is set or cleared.
- 00:17 Changeset [20335] by Fix the test, with the addition of the additional fields required by the album sort order change.

03/10/09:
- 21:30 Ticket #132 (Change the server_add module to use the task API) closed by - fixed: Implemented r20334
- 21:30 Changeset [20334] by Refactor the server add module to make use of the task api (Ticket #125). Haven't quite figured out what to do with the errors in the context. Maybe they should show on the maintenance screen?
- 21:01 Ticket #107 (<edit photo> form may die when deal with photos with non-ascii character ...) closed by - duplicate: Duplicate of
- 20:43 Ticket #66 (Dynamic album support) closed by - duplicate: Duplicate of 115
- 20:34 Changeset [20333] by access::allow/deny/reset functions will now throw an exception if you don't pass in a Group_Model as the argument. This prevents us from setting permissions on the wrong group by accidentally passing in a User_Model.
- 13:53 Changeset [20332] by Minor change to the task api with the addition of two optional parameters. The first allows the specification of a task name. Non-maintenance tasks are not defined as part of available_tasks so we can't get the name from the task definitions.
The 2nd allows the specification of a context when the task is completed. - 06:45 Changeset [20331] by Add profiling/debugging switches in the Scaffold menu. - 03:28 Changeset [20330] by Show the album edit form for albums, not the photo edit form - 03:27 Changeset [20329] by Add info about -x flag 03/09/09: - 17:54 Ticket #136 (All {values} in SQL get prefixed, even if they are not table names) created by The regex which turns {value} into prefix_value is too expansive. We need to limit it only to table names. - 17:39 Ticket #125 (Make Gallery3 work with https) reopened by - The flash uploader dies with error 2038 if the protocol is https - 16:04 Ticket #135 (Server Add configure message is not reset.) closed by - fixed: Fixed r20328 - 16:03 Changeset [20328] by - - 15:54 Ticket #135 (Server Add configure message is not reset.) created by When the server_add module is installed a site warning is created say that the module needs to be configured... This right. When a path is set, the site warning is not removed. When the only path is removed, the site warning goes away. - 15:11 Ticket #133 (/rss/albums/1 in an empty G3 creates infinite redirect) closed by - fixed: fixed in r20327 - 15:10 Changeset [20327] by Fix for ticket #133: If $max_pages is zero don't try to redirect to max_page, just return an empty feed. - 14:38 Changeset [20326] by Override the ORM_MTTP::children and ORM_MTPP::descendants methods in the item model and always pass the orderby fields. This insures that all children or descendant calls will respect the album sort order. - 13:57 Ticket #134 (PicLens fails in FF3) created by [Exception... "Component returned failure code: 0x80040111 (NS_ERROR_NOT_AVAILABLE) [nsIChannel.contentType]" nsresult: "0x80040111 (NS_ERROR_NOT_AVAILABLE)" location: "JS frame :: :: FP_onStartRequest :: line 1440" data: no] Line 742 ? in piclens.js@723()piclens.js (line 742) ? in piclens.js@275("/gallery3/index.php/rss/albums/1", undefined)piclens.js (line 277) ? 
in piclens.js@38(undefined)piclens.js (line 77) ? in javascript:PicLensLite.start()@1()javascri...e.start() (line 1) [Break on this error] context = new ActiveXObject("PicLens?.Context"); // IE It looks like it thinks its in IE and trying to create an activex object - 13:43 Changeset [20325] by Change Item_Model::get_position to respect the sort order. This also forced the next/prev buttons in album navication to respect the sort order as well. - 13:30 Changeset [20324] by Restructure the sort order to maintain the sort column and sort order as two separate columns in the item table. - 07:54 Changeset [20323] by Don't show the description field if there's no description - 07:48 Changeset [20322] by Undo "#gProgressBar { visibility: hidden }", introduced in r20264 which caused the progress bar to be invisible for admin/maintenance tasks. - 07:45 Changeset [20321] by Add a 'cancel all' link too - 07:28 Changeset [20320] by More tasks cleanup. Don't join through to the users table; that won't work in embedded mode. Instead, add Tasks_Model::owner() that calls user::lookup() and refer to the object directly in the view. Add Admin_Maintenance:remove_finished_tasks() so that we can easily do old task cleanup. Hide Running / Finished sections if there aren't any running or finished tasks. - 07:12 Changeset [20319] by Fix the progress param to be an actual boolean to resolve a JS error. - 07:02 Changeset [20318] by Get rid of Task_Definition types: they're not necessary. This incidentally fixes the the problem that admin/maintenance tasks have been broken. - 06:59 Ticket #117 (Graphics toolkit should not upscale images) closed by - fixed: fixed in r20317 - 06:59 Changeset [20317] by Don't let graphics::resize() upscale images. Fixes ticket #117. - 06:40 Ticket #96 (Login as admin at the end of the installer) closed by - fixed - 06:38 Ticket #45 (Module administration view) closed by - fixed: Declaring this done for now until we have a clear idea of what we want to change. 
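The guard added in changeset [20333] (access::allow throws unless it is handed a Group_Model, so a User_Model can't be passed by mistake) is a plain type-check pattern. A minimal Python sketch, with stand-in class and function names rather than the real PHP code:

```python
class Group:   # stand-in for Gallery 3's Group_Model
    pass

class User:    # stand-in for Gallery 3's User_Model
    pass

def allow(group, permission, item):
    """Reject anything that isn't a Group, mirroring the r20333 change:
    passing a User where a Group belongs now fails loudly instead of
    silently granting permissions to the wrong thing."""
    if not isinstance(group, Group):
        raise TypeError("allow() requires a Group, got %s" % type(group).__name__)
    return (group, permission, item)   # the sketch just records the grant

allow(Group(), "view", "album-1")      # ok
try:
    allow(User(), "view", "album-1")   # wrong model type
except TypeError as e:
    print(e)   # allow() requires a Group, got User
```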
- 04:11 Ticket #120 (Erase files out of var/uploads after uploading) closed by - fixed: Fixed in r20316
- 04:11 Changeset [20316] by Don't forget to clean up temp files after uploading. Fixes ticket #120.
- 03:33 Changeset [20315] by On second thought, make the description column varchar(2048) instead. If I understand correctly, this is better for performance. I could be wrong here, though.
- 03:29 Changeset [20314] by Make the description a text column so that we can handle much larger descriptions.
- 02:08 Ticket #125 (Make Gallery3 work with https) closed by - fixed: Fixed in r20313
- 02:07 Changeset [20313] by Tweak abs_file() and abs_site() to generate https urls as appropriate. Fixes ticket #125
- 00:03 Changeset [20312] by Instead of putting after_install in the url, put it in the session. This helps us make sure that we only see the welcome message once.

03/08/09:

- 21:21 Changeset [20311] by Log the user in as admin after running the web installer, and give them a nice "Welcome to Gallery 3" dialog. The text in there needs a little work but it's a start. In the process, re-build the install.sql using the scaffolding code.
- 21:11 Changeset [20310] by Post-process the sql generation code to support prefixes
- 20:51 Ticket #133 (/rss/albums/1 in an empty G3 creates infinite redirect) created by Create a blank G3. Browse to /rss/albums/1. You get an infinite redirect loop of it sending you to ?page=0, which doesn't exist. I suspect this is a bug in the code for all rss feed types, too.
- 20:50 Ticket #125 (Make Gallery3 work with https) reopened by - Looking at a sandbox server, if I right-click on an image and look at its url I see "http" not "https".
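The https fix in changeset [20313] amounts to building absolute URLs from the request's own protocol instead of a hardcoded "http". A Python sketch of the idea; the real helper (url::abs_site) is PHP and reads $_SERVER, so the dict of headers here is a stand-in:

```python
def abs_site(path, server_env):
    """Build an absolute URL for a site path, matching the request's
    protocol so pages served over https don't embed http links
    (ticket #125). `server_env` stands in for PHP's $_SERVER."""
    https = server_env.get("HTTPS", "off") == "on"
    scheme = "https" if https else "http"
    host = server_env["HTTP_HOST"]
    return "%s://%s/%s" % (scheme, host, path.lstrip("/"))

print(abs_site("gallery3/index.php/albums/1",
               {"HTTPS": "on", "HTTP_HOST": "example.com"}))
# https://example.com/gallery3/index.php/albums/1
```

Deriving the scheme per request, rather than storing it in config, is what makes the same install work behind both http and https front ends.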
- 20:48 Ticket #125 (Make Gallery3 work with https) closed by - worksforme
- 19:10 Changeset [20309] by Update sort order processing per Bharat's feedback:
  * Remove mime type and type as sortable fields
  * Change the internal representation to a serialized array
  * Shorten the database field to varchar(64)
- 16:29 Ticket #127 (Implement Sort orders for albums) closed by - fixed: Implemented r20308
- 16:29 Changeset [20308] by Implement sortable albums. Current sort fields include Creation Date, Update Date, Random Key, Title, Mime Type, Item Type & Number of Views
- 07:55 Changeset [20307] by Undo... pass any additional parameters to the _edit_form method
- 07:42 Changeset [20306] by Pass any additional parameters to the _edit_form method
- 05:23 Ticket #132 (Change the server_add module to use the task API) created by This will also provide better failure feedback to the user
- 03:57 Changeset [20305] by Don't show the pager if there are no photos on the page.
- 00:23 Changeset [20304] by Pass on_success through to ajaxify_dialog, resolving a bug created in r20302.
- 00:07 Ticket #131 (Exif data should show lens info) created by From: ..>

03/07/09:

- 04:44 Changeset [20303] by Add in-request caching of vars that we've already looked up. We're still doing too many database queries, but this cuts down some dupes.
- 04:23 Ticket #50 (installer) closed by - fixed

03/06/09:

- 22:35 Ticket #130 (Dialog "cancel" link disappears if there is a validation error) closed by - fixed: Fixed r20302
- 22:35 Changeset [20302] by Fix for ticket #130: 1) Shuffled code around to create an on_form_loaded function. 2) Check for a data.reset string in the json return; if it exists and is a function, then call it. The idea is that if there is an error we might have to reset some jquery widget initialization.
- 21:24 Ticket #130 (Dialog "cancel" link disappears if there is a validation error) created by If a dialog field (i.e. form) has an error, then the custom processing that was done when the dialog was initialized is lost. Typically our processing on a form error is to replace the contents of the dialog with a new version of the form that contains the error messages. When we do this, any custom processing that we did when the dialog was initialized (i.e. adding a cancel link, hooking jquery widgets to html elements, etc.) is destroyed and no longer available.
- 18:25 Changeset [20301] by Undo local change for ticket #1170
- 18:22 Changeset [20300] by Tracked ticket #1170 was not really a problem, so remove it
- 17:54 Changeset [20299] by Track ticket #1170 in the README in the vendor branch
- 17:49 Changeset [20298] by Include the ui.tabs.css in the theme css file
- 17:48 Changeset [20297] by Local fix to address Kohana ticket #1170, where form::open doesn't allow specification of a put method. Only post and get are currently allowed.
- 17:03 Changeset [20296] by Add the tabs css to gallery3
- 16:41 Changeset [20295] by Update jquery-ui to 1.7 and include the tab library
- 16:38 Changeset [20294] by Update to jquery 1.7 and include the ui tabs
- 02:48 Ticket #129 (Update default theme's icon set) created by
  View icons: View slideshow, View full-size, View album, View comments.
  Other icons: Setup notification/subscription, Default user avatar (person with camera was what I was thinking), Various RSS (photos, comments, comments on current item).
  Admin menu: Dashboard, Settings, Modules, Content, Presentation, Users/Groups, Maintenance, Statistics.
- 02:36 Changeset [20293] by Oops. Fix accidental style change that slipped into the last commit
- 02:22 Changeset [20292] by Added json and filter as requirements
- 02:20 Ticket #128 (urlencoding ~ in urls breaks thumbnail urls when passed through file_proxy) created by Steps to reproduce: 0. Install G3 at a url that includes ~ and add some albums with images. 1. Note that thumbnails work for an album with images. 2. Set "denied" for all permissions for non-logged-in users for an album. 3. Refresh the page and note that thumbnails do not work. From access_log: "GET /%7Eckdake/g3/var/thumbs/rnd_781294370/DSC_0005.jpg HTTP/1.1" 404 3205 "GET /~ckdake/g3/var/thumbs/rnd_781294370/DSC_0005.jpg HTTP/1.1" 200 11333 The ~ gets encoded as %7E somewhere, which results in a 404.
- 02:17 Changeset [20291] by Added simplexml as a requirement

03/05/09:

- 23:45 Ticket #127 (Implement Sort orders for albums) created by Provide sorting on the objects within an album
- 17:19 Ticket #114 (Permissions seem to be ignored) closed by - fixed: Fixed with r20290
- 17:16 Changeset [20290] by Fix for ticket #114: Permissions seem to be ignored
- 06:38 Changeset [20289] by Avoid using default task types. Require task::get_definitions() to specify a single type and ask for it appropriately in admin_maintenance. Specify a type for every existing task.
- 06:28 Changeset [20288] by A little task restructuring
- 06:26 Changeset [20287] by Fix some table names
- 06:25 Changeset [20286] by Don't clean out the authorized_paths var at install time, so that uninstall/reinstall doesn't mean starting over
- 06:03 Changeset [20285] by Applied jQuery UI buttons to the quick edit pane. Not tested, but icons should display in IE6 now. Rotate icons will need to be updated later.
- 02:29 Changeset [20284] by Remove stray reference to server_add_dir_list.html.php
- 02:26 Changeset [20283] by Cleanups:
  - Show the "Server Add needs configuration" message whenever there are no paths.
  - Un-ajaxify the admin code to remove complexity and allow us to update the status message as appropriate.
  - Rename server_add_admin.html.php to admin_server_add.html.php for consistency.
  - Fix up the form to properly display error messages.
  - Get rid of server_add_dir_list.html.php now that we're non-ajaxified.
  - Change the delete <span> to an <a> for the non-ajax world.
- 01:57 Ticket #126 (L10n server side security related validation of all input) created by
  - Assert all input is valid Unicode
  - Assert there's no malicious HTML / JS (HtmlSafe or superior libs)
  - Only allow a subset of HTML tags (e.g. b, i, strong, a)
  - Flag all messages for review if they have an anchor tag
  Related: Ticket 70 (), which is for the client side of this filtering (less exhaustive / thorough)
- 01:40 Changeset [20282] by Minor cleanups.
- 01:22 Changeset [20281] by Change how the urls are built in the JavaScript
- 00:50 Changeset [20280] by Clean up the "no authorized directories" message
- 00:34 Ticket #15 (Toggle Maintenance Mode) closed by - fixed
- 00:32 Changeset [20279] by Implement a Maintenance mode as per ticket #15
- 00:05 Changeset [20278] by Correct typo

03/04/09:

- 20:59 Changeset [20277] by Remove additional options from the autocomplete call. No point in sending csrf if we are not verifying it. Remove the must-match flag so non-existent paths don't cause the input box to empty
- 20:09 Changeset [20276] by Last of the changes required from Bharat's 2nd review pass
- 19:55 Changeset [20275] by Update instructions to reflect renaming jquery-autocomplete.pack.js to jquery-autocomplete.js
- 19:50 Changeset [20274] by Continuation of the rename of jquery.autocomplete.pack.js
- 19:46 Changeset [20273] by Rename jquery.autocomplete.pack.js
- 19:32 Changeset [20272] by Add the forgotten README
- 18:58 Changeset [20271] by Annotate local fix for Kohana ticket #1156
- 18:50 Changeset [20270] by Restructure jquery-autocomplete vendor branch
- 18:32 Changeset [20269] by Add a section on open tickets with Kohana and commands for diffing the upstream code vs. what we have in Gallery3
- 18:15 Changeset [20268] by -
- 18:10 Changeset [20267] by -
- 18:09 Changeset [20266] by -
- 18:00 Changeset [20265] by Add jquery-autocomplete to the vendor branch
- 16:36 Changeset [20264] by Move server_add styles into the theme screen.css files
- 16:10 Changeset [20263] by Move the autocomplete js and css files to lib
- 16:01 Changeset [20262] by Changed $uid to $tree_id, so as not to confuse an acronym for unique identifier with user id. :-)
- 15:46 Changeset [20261] by Rename local_import module to server_add
- 14:52 Changeset [20260] by Workaround for Kohana ticket #1156
- 08:56 Changeset [20259] by Implement batch support in a simple fashion to avoid having to change the swf file for now
- 08:55 Changeset [20258] by Remove cruft from API
- 08:51 Changeset [20257] by Redefine the batch API to be very, very simple. You call batch::start() before starting a series of events, and batch::stop() when you're done. In batch mode, the notification module will store up pending notifications. When the batch job is complete, it'll send a single digested email to each user for all of her notifications. Updated the scaffold and local_import to use this. Haven't modified SimpleUploader yet.
- 06:25 Changeset [20256] by A variety of cleanups:
  * Allow for the "movie" type in all of our text
  * Try to follow the pattern of mainly only passing ORM objects to the view and letting it generate its own text (this becomes even more important when 3rd parties want to customize notification messages)
  * Rename _send_message to _notify_subscribers to be more accurate, and have it explicitly take a subject in the API
  * Use Item_Model::url() in the views instead of hand-crafting URLs
  * Reformat HTML in views
  * Use $comment->author_xxx() functions instead of replicating that code
  * Fix several places where we were encoding data by doing ucfirst($item->type); form the text properly with conditionals instead. We should *never* be showing data types to the end user! This is not localizable!
  Note that this probably breaks the existing batch processing code. I am going to redo that in a subsequent pass.
- 06:25 Changeset [20255] by Allow url() to return absolute urls
- 06:25 Changeset [20254] by Delete test(); it should never have been checked in.
- 05:22 Changeset [20253] by Simplify logic a bit and tweak the visible text.
- 04:55 Ticket #116 (Modify Notifications to not send an email with each update) closed by - fixed
- 04:55 Changeset [20252] by Indentation and whitespace tweaks.
- 04:49 Changeset [20251] by Fix indentation.
- 04:46 Changeset [20250] by Remove unnecessary render()
- 04:28 Changeset [20249] by Move <label> outside of <?= ?> block
- 03:31 Changeset [20248] by Send one "items added" notification per batch of items
- 03:08 Changeset [20247] by Forgot to remove a debugging statement
- 02:03 Changeset [20246] by Use DirectoryIterator

03/03/09:

- 23:07 Changeset [20245] by * Validate that the source path is authorized. * Add a site warning message if local_import is installed and there are no authorized directories
- 22:27 Changeset [20244] by Remove csrf verification from the autocomplete handler
- 22:25 Changeset [20243] by Only show local_import head stuff (css and js) when the local_import admin page is shown
- 22:19 Changeset [20242] by Inline the admin view creation that was in helpers/local_import.php and remove it. Clean up unused variables. Rename the method remove() to remove_path()
- 21:48 Changeset [20241] by Use the gallery root directory on the batch::operation call when generating random albums and images
- 21:43 Changeset [20240] by Improve the comment about why we skip the first path. Change to use access::required
- 21:19 Changeset [20239] by Fix indentation, remove unnecessary csrf check.
- 18:57 Changeset [20238] by Create a proxy event (gallery_event) which is called when the request is completing.
- 16:09 Changeset [20237] by Add the ability for modules to define hooks. The challenge is that when the hooks are run, we haven't added all the installed modules to the path, so if a module defines a hook it will never be run. This change runs any module-defined hooks as part of the gallery initialization.
- 06:29 Changeset [20236] by -
- 06:26 Changeset [20235] by Remove the === false and === true checks. I really mean it this time
- 06:18 Changeset [20234] by Removed the === false and === true checks
- 05:59 Changeset [20233] by Refactored the batch API: 1) Created a small batch helper class. To start a batch, call batch::operation(name, item); in the case of adding photos, name = add and item is the parent of the new items. When the operation is finished, batch::end_operation(name) is called. operation and end_operation events are fired. Handlers (i.e. item_created) can call batch::in_progress(name) to determine if a batch is being processed.
- 04:17 Changeset [20232] by If backticks (`) are used to delimit the name of a table in the database, Kohana gets confused and appends the prefix outside of the backticks
- 04:04 Ticket #125 (Make Gallery3 work with https) created by G3 tends to use a hardcoded protocol. Make it work with https too. Not a high priority since very few users will have this config.
- 04:02 Ticket #124 (Show appropriate EXIF data) created by See this forum post for details:
- 03:52 Changeset [20231] by Fix issue identified by security review... some table names were not being translated.
- 03:32 Ticket #123 (Simpleuploader reads 100% when not complete) created by I find this really confusing: when using the flash uploader, it quickly marks the file as 100%. So if you are doing multiple files, you think you are finished when it says 100%. So you press the finish button, and not all images have thumbnails. What has happened is there is another state after 100%, "complete", which shows up some time after the 100% and appears when the file has been successfully uploaded. We need to either only show 100% when it's really done, or set the progress bar to a maximum of 90% so it never reads 100%.
- 03:17 Ticket #122 (Quick menu icons don't display in IE6) created by In IE6 on WinXP, the quick menu icons don't display. Instead, one sees the correct number of small gray squares, but without the star, etc. inside them. The tool tips display correctly, and the buttons function as expected. In FF3 on WinXP and Ubuntu 8.10, the icons do display correctly.
- 03:15 Ticket #121 (New album covers don't always display correctly) created by After changing the album cover using the quick menu, the new cover doesn't always display correctly when navigating back to the parent album. In FF3 on Ubuntu 8.10, the old cover image will display, but stretched to the dimensions of the new image. After pressing control-reload, only part of the new image will display (e.g., the bottom half); the rest of the image area will be filled with the background gray. After one rolls the cursor over the image, the entire image will display. In IE6 on WinXP, the old cover image will display, but stretched to the dimensions of the new image. After forcing a reload, the new image will display correctly. In FF3 on WinXP, the new cover image displays correctly.

03/02/09:

- 04:22 Ticket #120 (Erase files out of var/uploads after uploading) created by Reports from the alpha-2 bug thread say that we're leaving files around in var/uploads. Delete 'em.
- 04:21 Ticket #119 (Show the logged in user's name somewhere on the page) created by -
- 04:19 Ticket #118 (Show a login page if everybody user can't access albums/1) created by To repeat, deny view permissions to everybody, then log out.
- 03:55 Ticket #117 (Graphics toolkit should not upscale images) created by Upload a 640x480 image and set the resize to 800px. The result will be an upscaled image. We should just copy the original to the resized in those cases.
- 01:53 Changeset [20230] by Forgot to update the install.sql when I changed the [] to {} to identify table names that need substitution.

03/01/09:

- 19:11 Changeset [20229] by Simplify the batch api by having the core event handlers for start_batch and end_batch add and remove the batch id from the session. Modules wishing to do batch processing just need to fire the start_batch and end_batch events. Other modules that need to be aware of batches (i.e. notifications) just check the session for "batch_id".

02/28/09:

- 20:13 Ticket #83 (Create a start batch/end batch events.) closed by - fixed: Implemented in r20228
- 20:12 Changeset [20228] by The scaffolding, simple_uploader and local_import now call two new events: start_add_batch and end_add_batch. The parameter is a batch id, which is generated on the first add request. The protocol is: call add_photo as many times as required, then call finish when done. Also renamed the add method in local_import to add_photo so it is consistent with simple_uploader
- 06:37 Changeset [20227] by Change the pattern to identify tables that need prefix substitution to mirror the Drupal pattern of using braces {}.
- 03:34 Changeset [20226] by Correct a typo and a missed table name
- 02:55 Ticket #116 (Modify Notifications to not send an email with each update) created by Use the system shutdown event to send any notifications such that if multiple objects are added, only one email is sent
- 02:52 Ticket #115 (Create Dynamic Albums (Recent Changes, Most Viewed)) created by Extend the basic album view used by tags to be used for other dynamic albums that are not associated with a particular item.
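Ticket #117 and changeset [20317] describe a no-upscale resize: if the image already fits within the target, keep the original dimensions (and just copy the file) instead of scaling up. The dimension calculation can be sketched like this; resize_dims is a hypothetical helper, not the PHP graphics::resize() code:

```python
def resize_dims(width, height, max_edge):
    """Target dimensions for a resize that never upscales (ticket #117):
    if the longest edge already fits, the image is left at full size."""
    longest = max(width, height)
    if longest <= max_edge:
        return width, height          # no upscaling: keep original size
    scale = max_edge / longest
    return round(width * scale), round(height * scale)

print(resize_dims(640, 480, 800))    # (640, 480): would have been upscaled
print(resize_dims(1600, 1200, 800))  # (800, 600): normal downscale
```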
02/27/09:

- 21:16 Ticket #68 (Remove all direct SQL and use the ORM classes (enable table prefixes)) closed by - fixed: Table prefix handling was implemented in r20217, r20222, r20223, r20224 and r20225
- 21:15 Changeset [20225] by This implements table prefixes for the watermark and notification modules (Ticket #68)
- 21:09 Changeset [20224] by This implements table prefixes for all the queries in the core, user, exif, tag, search, comment and notification modules (Ticket #68) (missed this one)
- 21:07 Changeset [20223] by This implements table prefixes for all the queries in the core, user, exif, tag, search, comment and notification modules (Ticket #68)
- 19:26 Changeset [20222] by Both the command-line and web installers now support creating tables with a table prefix. There are still some queries that haven't been converted, so don't start using prefixes yet. However, if you do, you can log in and modify the user profile.
- 16:29 Ticket #111 (Refactor Tasks so that they can be used outside of the admin screens) closed by - fixed: Implemented r20221
- 16:28 Changeset [20221] by * Refactor task management methods from admin_maintenance.php to task.php * Added an owner_id field to the task database * Modified the admin maintenance to show the owner of the task <<**** Requires a reinstallation of core ****>>
- 13:44 Changeset [20220] by 1) Change the image block so it has the correct url in the anchor tag. 2) Change the wrapping class to gImageBlock instead of gImage so the quick links aren't enabled
- 05:40 Changeset [20219] by Remove unneeded code.
- 05:39 Changeset [20218] by Optimize the code by ditching the count query; we don't need it.
- 03:25 Changeset [20217] by Replace the string [table_name] with {$prefix}table_name. Slowly working through setting up the database access to support table prefixes. (Ticket #68) Before going ahead, just wanted to check this approach... whatcha think?
- 02:50 Changeset [20216] by Remove commented code. Correct unbalanced brackets
- 00:34 Changeset [20215] by Continue the journey of replacing raw sql with ORM or Database method calls (Ticket #68)
- 00:19 Changeset [20214] by Continue the journey of replacing raw sql with ORM or Database method calls (Ticket #68)

02/26/09:

- 23:38 Changeset [20213] by Update to image_block based on Bharat's feedback: 1) Move the rand_key column into core. 2) Don't do a max rand; just try to get a random number less than the current random number, and if that doesn't succeed, look the other way
- 23:30 Ticket #114 (Permissions seem to be ignored) created by I created a sub-album and set its view permissions to be denied for everybody. I then added two items to the album and logged out of g3. The album showed up in the root album... I would have expected the root gallery to appear empty. And the images in the directory showed up in the image block, although the search for a random image includes a call to viewable.
- 21:23 Changeset [20212] by Replace ORM->select(count(*)) with a call to Database::count_records
- 20:43 Changeset [20211] by Removed raw update sql and replaced it with Database::update(...) calls. (Ticket #68)
- 20:23 Changeset [20210] by Remove the commented line $parent->$movie->parent(), as the $parent object was passed in as a parameter.
- 19:49 Changeset [20209] by Use the Database::update function instead of a raw SQL query
- 19:47 Ticket #113 (Using the scaffolding to reset the installation fails) created by If you try to use the scaffolding to reset the installation, it fails with "Table vars does not exist in your database."
- 19:11 Changeset [20208] by Delete unused code
- 16:59 Ticket #103 (Don't display "View More Information" if exif data doesn't exist or module ...) closed by - fixed: Fixed r20207. I'm not going to change the exif module to remove "useless" information. If anyone needs that functionality they can open a low-priority enhancement request or make a local change. Probably making a local change would be faster.
- 16:57 Changeset [20207] by Implement fix for ticket #103. If there is no exif data, don't display the "Show more Information" button.
- 16:27 Ticket #42 (local import module) closed by - fixed: Issues addressed... implemented as of r20206
- 16:27 Changeset [20206] by Fix up add from server: 1) Upload requests are serialized so we don't overload the server or get race conditions. 2) New albums are created based on the file structure of the authorized path that is the source directory.
- 16:12 Changeset [20205] by Fix thumbnail and resize generation for photos. The variable $type had never been set, so it was never equal to "photo", so no thumbnails were generated.
- 16:02 Changeset [20204] by Remove debugging statement
- 15:34 Ticket #112 (Add a admin ui to set default rss feed) created by Create an RSS administration page to set the default feed for the main page. Change the slideshow module to specify the feed to use for the slideshow.
- 15:05 Ticket #19 (Stats Collection) closed by - duplicate: Same as ticket 54.
- 15:02 Ticket #13 (Image Block) closed by - fixed: Implemented as of r20203
- 15:02 Changeset [20203] by Implement a random image block for the sidebar. Ticket #13
- 14:53 Ticket #109 (When generating thumbnails, need to check file name is unique) closed by - fixed: Fixed as of r20202
- 14:53 Changeset [20202] by Added a check to ensure that the resize or thumbs image files do not exist. As per ticket #109
- 14:31 Ticket #111 (Refactor Tasks so that they can be used outside of the admin screens) created by Currently the methods to manage tasks are tightly integrated into the maintenance screen. Break these methods out into a task helper that can be used by other modules to control long-running user interactions (i.e. Add from server).
- 03:38 Changeset [20201] by Move tag CSS into the admin theme's screen.css. Use JS to add titles to avoid repeating the same text tens of times.
- 03:05 Ticket #64 (User/group/permissions management UI) closed by - fixed: This is done, hoo-rah!
- 03:05 Changeset [20200] by Add slightly more visual feedback when you're hovering over a draggable user. Also, drag the icon and name, not just the icon.
- 02:50 Ticket #110 (Add a UI for choosing and ordering sidebar blocks) created by I'm envisioning a UI that's kind of like what we did for admin/dashboard, where you can drag and drop, etc.
- 02:48 Ticket #14 (Theme Settings) closed by - fixed: Header/footer text (which includes your own logo if you want one) is done in r20199. I'm going to close this and add a new ticket for doing sidebar block configs.
- 02:47 Changeset [20199] by Support adding custom header/footer text to themes via admin/theme_details
- 02:38 Changeset [20198] by Make theme details its own page so that we can wrap it in a div and give it a title.
- 02:30 Ticket #7 (Movie Support) closed by - fixed: I'm going to declare victory on this task. There are probably a few UI issues that we need to deal with, but we can open new tickets for them.
- 02:09 Changeset [20197] by Minor code simplification.
- 02:00 Changeset [20196] by Change quote style.
- 01:59 Changeset [20195] by Minor cleanups.
- 01:56 Changeset [20194] by Minor style changes.
- 01:50 Changeset [20193] by Make scaffold into a menu, move the translation option into it, and shorten it so that it fits on one line.
- 01:48 Changeset [20192] by "Import" -> "Add" so that the menu option fits on one line.

02/25/09:

- 16:17 Changeset [20191] by Add closing )
- 14:16 Ticket #109 (When generating thumbnails, need to check file name is unique) created by graphics::generate() needs to check that the thumbnail and resize file names are unique. For example, if a movie (a.flv) is uploaded, generate will create thumbnail a.jpg and resize a.jpg. Now if for some reason an image (a.img) is uploaded, there will be a conflict
- 08:12 Ticket #108 (Search didn't work for multi-byte character title/description) created by When searching for multi-byte characters in a title/description, you always get "No results found for xxx". Existing photo titles: äüö.jpg, 大.jpg. Search string 1: äüö; search result: No results found for äüö. Search string 2: 大; search result: No results found for 大
- 07:51 Ticket #107 (<edit photo> form may die when deal with photos with non-ascii character ...) created by Host: dreamhost. After photos with non-ascii character filenames are uploaded, their filenames are changed. German filename: äüö.JPG >>> uploads/1235545377盲眉枚.JPG thumbs/盲眉枚.JPG. Chinese filename: 大.JPG >>> uploads/1235545384澶?JPG thumbs/澶.JPG. Then when the user pops up the edit photo form and edits the title or description, there is no response to clicking the "modify" button; the user can only click "cancel" to return.
- 07:12 Ticket #106 (Uploader can't deal with photos with multibyte character filenames) created by If photos with multibyte character filenames are uploaded, the progress bar shows everything is ok, uploaded successfully. But only entries are generated: no thumbnails, no resized/full size images. Tested filenames: 1. German characters: äüö.JPG, result: OK. 2. Chinese characters: 大.JPG, result: failed.
- 06:54 Changeset [20190] by Restore sidebar_top()
- 05:27 Changeset [20189] by Add support for MP4 movies also. Flowplayer supports them and can stream them using the h264streaming plugin. Everything else is a fairly minor change.
- 04:39 Changeset [20188] by Add h264streaming module from
- 03:39 Ticket #105 (Add date / number formatting (localization)) created by The locale helper should provide date and number formatting as well. Ideally, this should also be called by I18n::translate() when interpolating number values.
- 03:30 Ticket #104 (Add "copy" button in l10n_client) created by From: * Button to copy original string to localized (in case of many variables or names). -- (there's code for this in l10n_client.php, it's been commented out because there was a forge issue) - 03:19 Ticket #103 (Don't display "View More Information" if exif data doesn't exist or module ...) created by There is a link "View More Information", click on it will bring up a window with no detail. - 02:56 Ticket #102 (Change CSS dimensions of gItem div if thumbnail is shrunk or enlarged) created by In response to "Able to customise the view (theme default). When change the size of the Thumnail, it was prompted immediately that the photos are Out of Date and prompted me to rebuilt, after which I can see the thumbnail and rebuilt. But that does not change the size of the frame on the thumbnail page." - 00:20 Ticket #101 (The I10n scanner needs to extract module descriptions) created by Each module has a description contained in its module.info file. This description needs to be extracted and available for translation. 02/24/09: - 19:52 Ticket #98 (Deleting an album removes var/albums) closed by - fixed: Fixed r20187 - 19:52 Changeset [20187] by Fix for ticket #98. The problem was that item::delete was deleting the parent album - 19:39 Ticket #100 (Show strings from dialogs (AHAH) in l10n_client) created by The l10n_client currently only shows all messages that are part of the generated HTTP response. If the view loads additional components / dialogs / widgets via XHR, these additional messages are not shown in the l10n_client yet. - 19:23 Ticket #17 (RSS Feed w/ Comments) closed by - fixed - 19:19 Changeset [20186] by Added a block to the siebar that lists the available feeds - 16:47 Changeset [20185] by Major change to local import, in that checking the high level directory will process all the files underneath w/o having to expand the tree first. 
- 16:30 Ticket #99 (Polish installer UI) created by - - 14:03 Ticket #98 (Deleting an album removes var/albums) created by If you create an album in the root album. Then delete it, it also deletes the var/albums directly. Steps to recreate: 1) Start with empty g3 2) Add an album 3) Go to the root album and delete the album with the quick link. 4) Go look in gallery3/var... the albums directory is gone - 08:16 Ticket #90 (Testing) closed by - invalid - 08:00 Milestone 3.0 Alpha 2 completed - - 07:23 Changeset [20184] by Tag Gallery 3 Alpha 2 release - 06:58 Ticket #97 (Not all buttons in dialogs are styled properly) created by Progress bar dialog Add photo "Finish" button Button styles are currently applied to submit input's with a class of submit. - 06:54 Changeset [20183] by Minor README update for alpha 2 - 06:18 Ticket #96 (Login as admin at the end of the installer) created by UX improvement for the web based installer: After the successful installation, perform a login (create session / send cookie) and show the modify user form to set a new password). - 06:10 Changeset [20182] by Fix i18n create table sql (forgot to change core_install.php) - 05:54 Changeset [20181] by File structure style fixes - 05:27 Changeset [20180] by Fix bootstrap / installation issue for unit test framework: Install user module before installing other modules. E.g. local_import's installation routine depends on the user module to be installed. - 01:49 Ticket #95 (Routing misinterprets and album named movies as a request to display movie ...) created by If you create a album called movies (I discovered this when I was testing local changes to the local_import module with a sub-directory movies). The creation works fine and the children are created properly. When you go to open the album to view the contents, the pretty url that was generated for the album is gallery3/index.php/movies. 
This url gets misinterpreted by the router and tries to load the index page of a movie item instead of the album called movie - 01:24 Changeset [20179] by remove the extension and just use the IMAGETYPE_xxx constants
http://sourceforge.net/apps/trac/gallery/timeline?from=2009-03-26T03%3A17%3A11Z%2B0000&precision=second
ARCreateSchema

Note: You can continue to use C APIs to customize your application, but C APIs are not enhanced to support new capabilities provided by Java APIs and REST APIs.

Description

Creates a new form with the indicated name on the specified server. The nine required core fields are automatically associated with the new form.

Privileges

BMC Remedy AR System administrator.

Synopsis

#include "ar.h"
#include "arerrno.h"
#include "arextern.h"
#include "arstruct.h"

int ARCreateSchema(
   ARControlStruct *control,
   ARNameType name,
   ARCompoundSchema *schema,
   ...)

name
    The name of the form to create. The names of all forms on a given server must be unique.

schema
    The type of form to create. The information contained in this definition depends on the form type that you specify.

getListFields
    A list of zero or more fields that identifies the default query list data for retrieving form entries. The list can include any data fields except diary fields and long character fields. The combined length of all specified fields, including separator characters, can be as many as 128 bytes (limited by AR_MAX_SDESC_SIZE). The query list displays the Short-Description core field if you specify NULL for this parameter (or zero fields). Specifying a getListFields argument when calling the ARGetListEntry function overrides the default query list data.

sortList
    A list of zero or more fields that identifies the default sort order for retrieving form entries. Specifying a sortList argument when calling the ARGetListEntry function overrides the default sort order.

indexList
    The set of zero or more indexes to create for the form. You can specify from 1 to 16 fields for each index (limited by AR_MAX_INDEX_FIELDS). Diary fields and character fields larger than 255 bytes cannot be indexed.

helpText
    The help text associated with the form. This text can be of any length. Specify NULL for this parameter if you do not want to associate help text with this object.

owner
    The owner of the form. The owner defaults to the user performing the operation if you specify NULL for this parameter.

changeDiary
    The initial change diary associated with the form.

See also: ARGetSchema, ARGetSchemaField, ARDeleteSchema, ARGetListEntry, ARGetListAlertUser, ARSetField, ARSetSchema. See FreeAR for: FreeARCompoundSchema, FreeAREntryListFieldList, FreeARIndexList, FreeARInternalIdList, FreeARPermissionList, FreeARPropList, FreeARSortList, FreeARStatusList.
https://docs.bmc.com/docs/ars91/en/arcreateschema-609070940.html
Constraint programming is a field about solving problems where you have to find a solution in a (usually large) search space, given specific constraints. A classic problem of this sort is the eight queens puzzle. There are quite a few good constraint programming libraries, among them the Java-based Choco and the C++-based Gecode. At first I didn't understand why such powerful libraries provide the eight queens puzzle as an example that can be solved using constraint programming, as it is trivial to generate all the solutions to the eight queens puzzle with a simple DFS. I wrote the following C program to illustrate this (it actually prints all the solutions to the n queens problem using depth-first search):

#include <stdio.h>
#include <stdlib.h>

/* Returns 1 if the queen on currentLine attacks no queen on an earlier line. */
int safe( int currentLine, int* lines )
{
    int i;
    for ( i = currentLine - 1; i > -1; i-- ) {
        if ( lines[ currentLine ] == lines[ i ] ||
             abs( lines[ currentLine ] - lines[ i ] ) == ( currentLine - i ) ) {
            return 0;
        }
    }
    return 1;
}

int main( int argc, char* argv[] )
{
    int *lines;
    int s, i, currentLine;

    if ( argc != 2 ) {
        printf( "Usage: nqueens <board-size>\n" );
        return -1;
    }
    s = atoi( argv[1] );
    if ( s < 1 ) {
        printf( "Board size must be a positive number.\n" );
        return -1;
    }

    lines = (int*) malloc( s * sizeof( int ) );
    for ( i = 0; i < s; i++ ) {
        lines[ i ] = 0;
    }

    currentLine = 0;
    /* begin the DFS */
    while ( currentLine > -1 ) {
        /* advance the queen on this line to the next safe column */
        while ( lines[ currentLine ] < s && !safe( currentLine, lines ) )
            lines[ currentLine ]++;

        if ( lines[ currentLine ] == s ) {
            /* no safe column left: reset this line and backtrack */
            lines[ currentLine ] = 0;
            currentLine--;
            if ( currentLine > -1 )
                lines[ currentLine ]++;
            continue;
        }
        if ( currentLine == s - 1 ) {
            /* all lines placed: print the solution and keep searching */
            for ( i = 0; i < s; i++ )
                printf( " %d", lines[ i ] );
            printf( "\n" );
            lines[ currentLine ]++;
            continue;
        }
        currentLine++;
    }
    free( lines );
    return 0;
}
I then tested what Choco and Gecode could do with the same problem and was amazed. Both libraries generated results in seconds for 300 queens and higher. Note that each added queen exponentially expands the search space so this is very impressive.
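For comparison, the same depth-first backtracking search is only a few lines in Python. This is my own sketch, not code from the original post; for the classic 8x8 board it finds the well-known 92 solutions.

```python
def n_queens(n):
    """Collect all solutions to the n-queens problem by depth-first search.

    A board is a list `cols` where cols[row] is the column of the queen
    placed on that row; only complete, mutually non-attacking placements
    are recorded.
    """
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # A new queen is safe if it shares no column and no diagonal
            # with any queen already placed on an earlier row.
            if all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(cols)
                cols.pop()

    place([])
    return solutions

if __name__ == "__main__":
    print(len(n_queens(8)))  # the classic puzzle has 92 solutions
```

Like the C version, this enumerates every solution rather than stopping at the first one, so its running time grows just as quickly with the number of queens.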
https://nocurve.com/2012/11/12/constraint-programming/
EIS pointing

Tools to correct the pointing of Hinode/EIS. This Python package implements the method described in the paper Pelouze et al. 2019, Sol Phys 294:59.

Usage

From the command line

This tool can be run from the command line by calling compute_eis_pointing:

usage: compute_eis_pointing [-h] [-s STEPS_FILE] [--io IO] [-c CORES]
                            [--cache-aia-data]
                            filename [filename ...]

Determine the pointing of Hinode/EIS.

positional arguments:
  filename              The names of the level 0 EIS files, eg.
                        'eis_l0_20100815_192002'.

optional arguments:
  -h, --help            show this help message and exit
  -s STEPS_FILE, --steps-file STEPS_FILE
                        Path to a yaml file containing the registration steps.
  --io IO               Directory where output files are written, default:
                        ./io.
  -c CORES, --cores CORES
                        Maximum number of cores used for parallelisation,
                        default: 4.
  --cache-aia-data      Cache the AIA data to a file. This uses a lot of
                        storage, but speeds things up when the same raster is
                        aligned for the second time.

Examples (command line):

compute_eis_pointing -c16 eis_l0_20140810_042212
compute_eis_pointing --steps-file steps/shift_only.yml eis_l0_20140810_042212

As a Python module

The tool can also be used from within a Python script, using eis_pointing.compute().

compute(*filename, steps_file=None, io='io', cores=4, cache_aia_data=False)

    Perform all computation steps to determine the optimal EIS pointing.

    Parameters
    ==========
    filename : list
        The names of the level 0 EIS files, eg. 'eis_l0_20100815_192002'.
    steps_file : str or None (default: None)
        Path to a yaml file containing the registration steps.
    io : str (default: 'io')
        Directory where output files are written.
    cores : int (default: 4)
        Maximum number of cores used for parallelisation.
    cache_aia_data : bool (default: False)
        Cache the AIA data to a file. This uses a lot of storage, but speeds
        things up when the same raster is aligned for the second time.
Examples (Python):

import eis_pointing
eis_pointing.compute('eis_l0_20140810_042212', cores=16)
eis_pointing.compute('eis_l0_20140810_042212', steps_file='steps/shift_only.yml')

Installation

Install the latest release by running pip install eis_pointing. Alternatively, the latest version can be installed from GitHub by cloning this repository with git clone, then running cd eis_pointing, and pip install .

Optional: install SolarSoft

Before computing the optimal pointing, this tool can download, prepare, and export the EIS data by calling external IDL routines from SolarSoft. For these features to be available, a functioning installation of SolarSoft containing the EIS instrument is required. Install SolarSoft, and set the environment variable $SSW to your installation path (by default, SolarSoft is assumed to be installed into /usr/local/ssw).

It is perfectly fine not to install or configure SolarSoft to run with this tool. In this case, you will need to manually download the EIS level 0 FITS, prepare them into level 1 FITS, and save a windata structure containing the Fe XII 195.119 Å line to a .sav file placed in <io directory>/windata/eis_windata_<date>.sav. See the pipeline description for details on how to do this.

Customisation

The registration steps used to find the optimal pointing can be customised in a YAML file, and passed to eis_pointing using the --steps-file parameter (see examples above). The file should have a top-level key named steps that contains a list of registration steps. Each step must specify at least a type, chosen between shift, rotshift, and slitshift. By default, EIS data are coaligned with a synthetic AIA raster. To coalign with a single AIA image, add the top-level key single_aia_frame: True; in this case, the reference AIA image is chosen at the middle of the EIS raster. See the files in steps/ for examples. When no file is specified, the default behaviour is the same as using steps/full_registration.yml.
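As a concrete illustration, a steps file following the rules above might look like this. This is a hypothetical sketch: only the keys described above (steps, type, and single_aia_frame) are taken from the documentation, and any per-step options beyond type are documented in the files shipped in steps/.

```yaml
# Hypothetical registration-steps file for eis_pointing.
# Coalign against a single AIA frame instead of a synthetic raster.
single_aia_frame: True
steps:
  - type: shift      # whole-raster translation
  - type: rotshift   # translation plus rotation
  - type: slitshift  # per-slit-position translation
```

Passing this file with --steps-file (or steps_file= in Python) replaces the default steps/full_registration.yml behaviour.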
Code structure

Pipeline

All the steps required to determine the optimal pointing data from EIS level 0 files are defined in driver.py. The appropriate functions are called by the executable compute_eis_pointing when using the tool from the CLI, or by eis_pointing.compute() when using it as a Python module.

1. Download data. Download the required EIS level 0 FITS, and place them in the EIS data files and directory structure described in EIS Software Note #18 (eg. $HINODE_DATA/eis/level0/2014/08/10/eis_l0_20140810_042212.fits).

2. Prepare data. Generate EIS level 1 FITS from level 0, and save them to the EIS data files and directory structure (eg. $HINODE_DATA/eis/level1/2014/08/10/eis_l1_20140810_042212.fits). Performed by eis_pointing/prep.pro, which calls the SolarSoft routine eis_prep.pro.

3. Export windata. Save a windata structure containing the Fe XII 195.119 Å line, obtained using the SolarSoft function eis_getwindata (see EIS Software Note #21). The structure is saved to <io>/windata/eis_windata_<date>.sav (eg. ./io/windata/windata_20140810_042212.sav). Performed by eis_pointing/export_windata.pro.

Alternative to steps 1-3 without SolarSoft. If SolarSoft is not installed or configured, you will need to separately generate a windata structure containing the Fe XII 195.119 Å line, and save it to <io>/windata/eis_windata_<date>.sav. Example (SSW):

wd = eis_getwindata('eis_l1_20140810_042212.fits', 195.119, /refill)
save, wd, filename='./io/windata/windata_20140810_042212.sav'

Once this is done, run the tool normally, either from the command line or as a Python module. It will detect the existing .sav file, and skip steps 1-3.

4. Compute the EIS emission. Generate an intensity map of the Fe XII 195.119 Å line by summing the spectra between 194.969 and 195.269 Å. Data are saved to <io>/eis_aia_emission/eis_aia_emission_<date>.fits (eg. ./io/eis_aia_emission/eis_aia_emission_20140810_042212.fits). Performed by eis_pointing.eis_aia_emission.compute().
5. Determine the optimal pointing. Determine the optimal pointing for EIS using the intensity map generated at the previous step, and AIA 193 data retrieved from Medoc as a reference. (The AIA FITS are downloaded to ./sdo/aia, or to $SDO_DATA/aia/ if the environment variable $SDO_DATA is set.) Results from the alignment (ie. new EIS coordinates) are saved to <io>/pointing/eis_pointing_<date>.fits. Diagnostics plots, correlation cubes, as well as a YAML file containing the results from the coregistration are also saved to <io>/pointing_verification/<date>/. Performed by eis_pointing.eis_aia_registration.optimal_pointing().

Coregistration functions: eis_pointing.coregister

- images contains functions to register images in translation, relatively to another image.
- rasters contains functions to register images in translation and rotation, relatively to a synthetic raster.
- slits contains functions to register slit positions (ie. vertical columns in an image) separately, relatively to a synthetic raster.
- tools contains functions shared among the previous submodules.

Functions shared by different components: eis_pointing.utils

- aia_raster: defines AIARasterGenerator, which builds synthetic rasters from AIA data. Also contains SimpleCache and FileCache.
- cli: argument parsing and output display.
- eis.py, aia.py: functions to handle native EIS and AIA data, filenames, and data queries. This does not take care of transformed data such as AIARasterGenerator.
- files: manage local filenames (ie. those in io/); canonical EIS or AIA filenames are handled in eis.py or aia.py.
- idl: run IDL or SSW code from Python, load and format data returned by IDL. Contains IDLFunction, SSWFunction and IDLStructure.
- num: tools that extend numpy or scipy.
- plots: helpers to generate plots at step 4.
- sun: generic solar computations.

Reference / License

If you use this package for a publication, please acknowledge the following paper: Pelouze et al. 2019, Sol Phys 294:59. This package is released under a MIT open source licence.
See LICENSE.txt.
https://pypi.org/project/eis-pointing/
The most important part of my job is writing AIR applications, and I've written quite a few. In addition to applications, my team and I have written a great deal of reusable ActionScript library code (Amazon S3 APIs, Exchange APIs, etc.). Everything I've written is completely free and open source. The problem is that I haven't done a very good job of keeping these applications and libraries synchronized with the publicly available version of AIR. It's not that I'm lagging behind, though. In fact, I have the opposite problem: most of this code has been updated to work with the latest internal versions of AIR so that I can keep banging on, testing, and exercising the latest and greatest runtime. I just wanted to let everyone know that I will be releasing all my apps the day we release AIR 1.0. If you can't wait that long, the source code is all available here in Google Code. In theory, all of the libraries should work with the public AIR beta (beta 3), and the applications should all work as well, as long as the application descriptor's namespace is set to 1.0.M6 rather than 1.0. Anything that doesn't work, or any bugs that you find, will be fixed in the final 1.0 release. Below is a list of almost everything I've written that's worth releasing (I've written dozens of test applications that will never see the light of day).

Applications:
- Apprise: A simple but powerful RSS aggregator.
- Lineup: An application for viewing your Exchange calendar and receiving meeting notifications.
- PixelPerfect: Measure things on your screen in pixels.
- S3E: Easily manage your Amazon S3 objects and buckets.
- SPF (Screen Protection Factor): A multi-monitor screen saver.
- ScreenBoard: Draw on your screen during presentations.
- HTMLScout: A tool for inspecting the DOM of an HTML page.
- Timeslide: A simple but useful countdown timer with good visual feedback.
Libraries:
- as3corelib: The corelib project is an ActionScript 3 library that contains a number of classes and utilities for working with ActionScript 3.
- as3fedexlib: A wrapper on top of some of the FedEx XML APIs.
- as3syndicationlib: Use the syndication library to parse Atom and all versions of RSS easily. This library hides the differences between the formats so you can parse any type of feed without having to know what kind of feed it is.
- as3awss3lib: This is an AS3 library for accessing Amazon's S3 service.
- as3notificationlib: This project makes it easy to add cross-platform notifications to your AIR application. It handles "native system notifications" like the dock icon bouncing and the taskbar icon flashing, and it allows you to easily create alert "pop-ups".
- as3exchangelib: This is an ActionScript 3 library for talking to Exchange servers. (Currently only supports retrieving meeting information.)
- as3preferenceslib: An AIR library for storing preferences. It worries about persistence and even encryption for you.
- as3nativealertlib: This project creates Flex-like alerts, but they are native windows so they can appear outside the bounds of your application.

The source code for all these applications and libraries is available here. You will need to use subversion to download the source.
http://blogs.adobe.com/cantrell/archives/2008/01
Fun talk:Zombie Disco Squad

From RationalWiki

I may come back and add to this later, as well as create articles on other musicians and bands. --Graham 18:26, 25 September 2007 (EDT)
- Can you add categories as well please. Genghis Khant 18:42, 25 September 2007 (EDT)
- Done sir --Graham 19:01, 25 September 2007 (EDT)

Fun?

Susan talk to me 20:18, 25 September 2007 (EDT)
Perhaps. If you guys don't see any point to such articles then delete them, I do not mind. --Graham 02:28, 26 September 2007 (EDT)
- It's just that we're not an encyclopædia - anything not related to our mission is put in namespace "Fun" - no offence intended. Susan talk to me 02:33, 26 September 2007 (EDT)
No offence taken. I write sincerely by default and genuinely do not care what happens to work I author on this site, considering that we operate under mob rule. --Graham 11:53, 26 September 2007 (EDT)
There's two or three with a music bias - all have been moved to Fun - don't let it stop you, will you. Susan talk to me 12:11, 26 September 2007 (EDT)
https://rationalwiki.org/wiki/Fun_talk:Zombie_Disco_Squad
vee-validate (es)

Install

Repository: CDNs: bundle.run, jsDelivr, unpkg.

Note: This module contains vee-validate in Spanish. Credit goes to the original author.

Vee-Validate

vee-validate is a lightweight plugin for Vue.js that allows you to validate input fields and display errors. What makes it different is that you don't have to do anything fancy in your app; most of the work goes into the HTML. You only need to specify for each input what kind of validators should be used when the value changes. You will then be informed of the errors for each field. Although most of the validations occur automatically, you can use the validator however you see fit. The validator object has no dependencies and is a standalone object. This plugin is built with localization in mind, and currently there are over 20 validation rules available in the plugin. Read the docs for more info. This plugin is inspired by the PHP framework Laravel's validation.

Installation

npm

npm install vee-validate --save

or if you are using Vue 2.0:

npm install vee-validate@next --save

bower

Vue 1.0: bower install vee-validate#1.0.0-beta.8 --save
Vue 2.0: bower install vee-validate#2.0.0-beta.13 --save

CDN

vee-validate is also available on the jsDelivr CDN:

Vue 1.0: <script src=""></script>
Vue 2.0: <script src=""></script>

Or select whatever version you would like to use.

Getting Started

In your script entry point:

import Vue from 'vue';
import VeeValidate from 'vee-validate';

Vue.use(VeeValidate);

Now you are all set up to use the plugin.

Usage

Just apply the v-validate directive on your input and a rules attribute, which is a list of validations separated by a pipe. For example, we will use the required and the email rules:

<input v-validate

Now every time the input changes, the validator will run the list of validations from left to right, populating the errors helper object whenever an input fails validation.
To access the errors object (in your Vue instance):

this.$validator.errorBag; // or
this.errors; // injected into $data by the plugin; you can customize the property name.

Of course there is more to it than that; refer to the documentation for more details about the rules and usage of this plugin.

Documentation

Read the documentation and demos.

Compatibility

This plugin should be compatible with the major browsers, but it requires a few polyfills to work on older ones. The polyfills are:
- Promise polyfill.
- Object.assign polyfill.

The reason they are not included is that most workflows already use polyfills within their code, so to cut down the package size the redundant polyfills were removed. You can use Polyfill.io to provide the needed polyfills for all browsers automatically.

Contributing

You are welcome to contribute to this repo with anything you think is useful. Fixes are more than welcome. However, if you are adding a new validation rule, it should have multiple uses or be as generic as possible. You can find more information in the contribution guide.
http://tahuuchi.info/vee-validate-es
Important: Please read the Qt Code of Conduct

[Solved] Scroll bar visible when scrolling

Hello - I am looking to create a scroll bar that is visible only when scrolling. I am using Flickable and have found several examples of QML code that will do this for me. What I am wondering is how to incorporate it into my C++ code. Is it as simple as just setting the style sheet in my .cpp file for the Flickable object? If not, how do I embed QML code in C++? I have looked at many different resources, but cannot find an example that fully makes sense to me. I am relatively new to this.
Thanks for your help,
Katelyn

EDIT: moved to Qt Quick forum, Gerolf

Have a look "here":

Thank you - I found that helpful. I am receiving several errors:

error: 'QDeclarativeView' was not declared in this scope
error: 'qmlScroll' was not declared in this scope
error: expected type-specifier before 'QDeclarativeView'
error: expected ';' before 'QDeclarativeView'

QDeclarativeView is a subclass of QWidget, which has been included in the file. The following is the portion of C++ code that adds the scroll bar to the UI:

QVBoxLayout *vLayout = new QVBoxLayout;
QDeclarativeView *qmlScroll = new QDeclarativeView;
qmlScroll->setSource(QUrl::fromLocalFile("ScrollBar.qml"));
vLayout->addWidget(qmlScroll);

The following is the QML code for the scroll bar:

import Qt 4.7

Rectangle {
    id: container
    color: "black"
    property variant flickableArea

    Rectangle {
        y: flickableArea.visibleArea.yPosition * container.height
        width: parent.width
        height: flickableArea.visibleArea.heightRatio * container.height
        color: "gray"
        opacity: 0.7
    }

    opacity: flickableArea.moving ? 0.7 : 0
    Behavior on opacity { NumberAnimation { duration: 400 } }
}

I am unsure as to what I am doing incorrectly. Any help would be much appreciated.
Thanks,
Katelyn

Do you have:
@
#include <QDeclarativeView>
@
in your .cpp file and
And please use Code Wrappers. (@) What is a .pro file and how does it relate to what I am trying to accomplish? Also, would it be possible to create the scroll bar in C++ to accomplish the same thing I am trying to do in QML? [quote author="kbt90" date="1307722818"]What is a .pro file and how does it relate to what I am trying to accomplish? [/quote] I was assuming you were using qmake. What build system are you using? [quote author="kbt90" date="1307722818"] Also, would it be possible to create the scroll bar in C++ to accomplish the same thing I am trying to do in QML? [/quote] It would certainly be possible, but I don't know of any existing implementation. You'd have to write it yourself. I am building in an arm environment. Well, make sure you're linking against the QtDeclarative library then (if it's available for arm - I don't know that, otherwise you'll have to rewrite it in plain C++) okay, thanks for your help!
https://forum.qt.io/topic/6530/solved-scroll-bar-visible-when-scrolling/3
GUARDIANSHIP, GUARDIANSHIP OF and 4, ibid.) MINORS, AND CUSTODY OF MINORSAND HABEAS CORPUS IN RELATION TO CUSTODY OF MINORS 1 F. Order of proceedings (Secs. 5, 7, 8, 9, 10, 11, 12, 21, 18 and 19, supra) G. Prohibited motion (Sec. 6, supra) D. Jurisdictional facts and contents of petition H. Factors to consider in determining custody (Sec. 7, ibid.) (Sec. 14, supra) E. Where filed (Sec. 3, ibid.) I. Provisional relief: F. Case study report (Sec. 9) 1. Sec. 13, supra; G. Opposition to petition; hearing (Secs. 11 2. Sec. 15, supra; and 12 [2nd and 3rd pars.], ibid.) 3. Sec. 16, supra; and, H. Who may be appointed (Secs. 6 and 12 [1st 4. Sec. 17, supra. par.], ibid.) I. Qualifications of guardians (Sec. 5, ibid.) PART III J. Service of final and executory judgment HABEAS CORPUS, WRIT OF AMPARO, AND WRIT OF HABEAS DATA (Sec. 13, ibid.) K. Posting of bond; conditions; where filed I Habeas Corpus Under Rule 102 (Secs. 14 and 15, ibid.) L. Bond posted by parents (Sec. 16, ibid.) A. To what kinds of cases extend; purpose; M. Duties of guardian (Secs. 17 and 19, ibid.) nature N. Grounds for removal or resignation of 1. Sec. 1, Rule 102 guardian (Sec. 24, ibid.) 2. Sombong vs. CA (G.R. No. 111876; Jan. 31, 1996) O. Ground for termination of guardianship 3. Caballes vs. CA (G.R. No. 163108; (Sec. 25, ibid.) Feb. 23, 2005) P. Service of final and executory judgment of B. Who may grant writ; enforceability (Sec. 2, termination (Sec. 26, ibid.) supra) Q. Distinctions between rules on guardianship C. Application for writ; requisites (Sec. 3, of incompetent and rules on guardianship supra) of minors (De Leon, pp. 246-247) D. Contents of the return (Sec. 10, supra) E. When writ not allowedIII Rule on Custody of Minors and Writ of 1. Sec. 4, supraHabeas Corpus in Relation to Custody of Minors 2. In Re Kunting (G.R. No. 167193; April(A.M. No. 03-04-04-SC) 19, 2006) 3. Ampatuan vs. Macaraig (G.R. No. A. Applicability of the rule (Sec. 1, A.M. No. 182497; June 29, 2010) 03-04-04-SC) 4. 
4. Kiani vs. BID (G.R. No. 160922; Feb. 27, 2006)
5. Go, Sr. vs. Ramos (G.R. No. 167569; Sept. 4, 2009)
6. Alejano vs. Cabuay (G.R. No. 160792; Aug. 25, 2005)
B. Date it took effect
C. Who may file
   1. Sec. 2, ibid.
   2. Salientes vs. Abanilla (G.R. No. 162734; Aug. 29, 2006)
   3. Pablo-Gualberto vs. Gualberto (G.R. No. 154994; June 28, 2005)
D. Where to file
   1. Sec. 3, supra
   2. Thornton vs. Thornton (G.R. No. 154598; Aug. 16, 2004)
E. Jurisdictional facts and contents of petition (Sec. 4, supra)
F. When writ granted and to whom directed (Secs. 5 and 6, supra)
G. Peremptory writ of habeas corpus distinguished from writ of preliminary citation
   1. De Leon, p. 399
H. Service of writ (Secs. 7 and 8, supra)
I. Formalities of writ; hearing on return (Secs. 11, 9, 13 and 12, supra)
J. When person lawfully imprisoned recommitted, when allowed to bail
   1. Sec. 14, supra
K. Habeas corpus as a post-conviction remedy
   1. Go vs. Dimagiba (G.R. No. 151876; June 21, 2005)

II Rule on the Writ of Amparo (A.M. No. 07-9-12-SC)

A. Definition; Coverage; Purpose; Nature
   1. Sec. 1
   2. Secretary of Defense vs. Manalo (G.R. No. 180906; Oct. 7, 2008)
   3. Caram vs. Segui (G.R. No. 193652; Aug. 5, 2014)
   4. Lozada vs. Arroyo (G.R. Nos. 184379-80; April 24, 2012)
   5. Navia, et al. vs. Pardico (G.R. No. 184467; June 19, 2012)
B. Who may file
   1. Sec. 2
   2. BOAC vs. Cadapan (G.R. Nos. 184461-62; May 31, 2011)
C. Where to file; Other procedural requirements
   1. Sec. 3
   2. Secs. 4 and 5
   3. Sec. 13
D. Issuance of the Writ; Nature
   1. Sec. 6
   2. De Lima vs. Gatdula (G.R. No. 204528; Feb. 19, 2013)
   3. Mamba vs. Bueno (G.R. No. 191416; Feb. 7, 2017)
E. Return; Contents; Effect of failure to file return or raise all defenses available
   1. Secs. 9 (as amended), 10 and 12
   2. De Lima vs. Gatdula (G.R. No. 204528; Feb. 19, 2013)
F. Prohibited pleadings and motions (Sec. 11)
G. Interim relief for the petitioner (Sec. 14)
H. Interim relief for the respondent (Sec. 15)
I. Contempt measure (Sec. 16)
J. Burden of proof
   1. Sec. 17
   2. Republic vs. Cayanan (G.R. No. 181796; Nov. 7, 2017)
   3. In Re Ladaga vs. Mapagu (G.R. Nos. 189689-189691; Nov. 13, 2012)
K. Judgment
   1. Sec. 18
   2. De Lima vs. Gatdula (G.R. No. 204528; Feb. 19, 2013)
L. Appeal
   1. Sec. 19
   2. Mamba vs. Bueno (G.R. No. 191416; Feb. 7, 2017)
M. Archival of cases; Institution of separate actions; Effect of filing of a criminal action
   1. Sec. 20
   2. Secs. 21, 22 and 23

III Rule on the Writ of Habeas Data (A.M. No. 08-1-16-SC)

A. Definition; Purpose; Nature
   1. Sec. 1
   2. Meralco vs. Lim (G.R. No. 18476; Oct. 5, 2010)
   3. Gamboa vs. Chan (G.R. No. 193636; July 24, 2012)
   4. Lee vs. Ilagan (G.R. No. 203254; Oct. 8, 2014)
   5. Vivares vs. St. Theresa’s College (G.R. No. 202666; Sept. 29, 2014)
   6. In Re Rodriguez vs. Arroyo (G.R. No. 193160; Nov. 15, 2011)
B. Who may file petition; Where to file; Where returnable; enforceable
   1. Sec. 2
   2. Secs. 3 and 4
C. Other procedural requirements
   1. Sec. 5
   2. Sec. 6
      a. Tapuz vs. Del Rosario (G.R. No. 182484; June 17, 2008)
   3. Sec. 15
   4. Sec. 12
D. Writ; Issuance; Refusal to serve
   1. Sec. 7
   2. Secs. 9 and 8
E. Filing of Return; Hearing on return; Failure to file
   1. Secs. 10, 18 and 14
F. Prohibited pleadings and motions (Sec. 13)
G. Contempt measure (Sec. 11)
H. Judgment; Execution of judgment
   1. Secs. 16 and 17
I. Appeal
   1. Sec. 19
J. Institution of separate action; Consolidation; Effect of filing of criminal action
   1. Secs. 19, 20, 21 and 22

PART IV
DOMESTIC AND INTER-COUNTRY ADOPTION, RECOGNITION OF ILLEGITIMATE CHILDREN, AND ABSENTEES

I Domestic Adoption (Part A of A.M. No. 02-6-02-SC)

A. Concept and nature; purpose
   1. In Re Garcia (G.R. No. 148311; March 31, 2005)
   2. Republic vs. Elepano (G.R. No. 92542; Oct. 15, 1991)
   3. Daoang vs. Municipal Judge of San Nicolas, Ilocos Norte (G.R. No. L-34568; March 28, 1988)
B. Construction of the rules on adoption (Sec. 2[a], R.A. 8552)
C. Who may adopt
   1. Sec. 4, A.M. No. 02-6-02-SC
   2. Castro vs. Gregorio (G.R. No. 188801; Oct. 15, 2014)
   3. In Re Lim (G.R. Nos. 168992-93; May 21, 2009)
D. Who may be adopted
   1. Sec. 5, supra
   2. Sec. 9, supra
E. Where filed (Sec. 6, supra)
F. Jurisdictional facts and contents of the petition
   1. Sec. 7, supra
   2. Sec. 10, supra
      a. Republic vs. Hernandez (G.R. No. 117209; Feb. 9, 1996)
   3. Sec. 11, supra
      a. Castro vs. Gregorio, supra
G. Order of proceedings
   1. Secs. 12, 13, 14, 15 and 16, supra
   2. Republic vs. Capote (G.R. No. 157043; Feb. 2, 2007)
   3. Reyes vs. Sotero (G.R. No. 167405; Feb. 16, 2006)
   4. In Re Garcia, supra
   5. In Re Lim, supra
H. Rescission of Adoption
   1. Who may file (Sec. 19 [1st par.], supra)
   2. Where and when filed; grounds (Secs. 19, 20 and 21, supra)
   3. Order to answer; judgment (Secs. 22 and 23, supra)
   4. Remedy of adopter
      a. Article 919, Civil Code

II Inter-Country Adoption (Part B of A.M. No. 02-6-02-SC)

A. Who may adopt (Secs. 26 and 30, supra)
B. Who may be adopted (Sec. 29, supra)
C. Where filed (Sec. 28, supra)
D. Contents of petition (Sec. 30)
E. Duty of court (Sec. 32, supra)

III Judicial Approval of Voluntary Recognition of Illegitimate Children (Rule 105)

A. Classification of children into legitimate and illegitimate
B. Recognition of legitimate and illegitimate filiation
   1. Articles 172 and 175, Family Code
   2. De Jesus vs. Estate of Dizon (G.R. No. 142877; Oct. 2, 2001)
   3. Gono-Javier vs. CA (G.R. No. 111994; Dec. 29, 1994)
C. Voluntary recognition distinguished from compulsory recognition
   1. Gapusan-Chua vs. CA (G.R. No. L-46746; March 15, 1990)
D. Where filed (Sec. 1, Rule 105)
E. Time to file action
   1. Articles 173 and 175, Family Code
   2. Guy vs. CA (G.R. No. 163707; Sept. 15, 2006)
F. Contents of petition (Sec. 2, supra)
G. Order of proceedings (Secs. 3, 4 and 5, supra)

IV Absentees (Rule 107)

A. Grounds for the appointment of representative of the absentee (Sec. 1, Rule 107)
B. Who may file and where to file petition (Secs. 1 and 2, ibid.)
C. Time to file petition (Sec. 2, ibid.)
D. Jurisdictional facts and contents of petition (Sec. 3, supra)
E. Order of proceedings (Secs. 5 and 6, supra)
F. Who may be appointed (Sec. 7)
G. Termination of administration (Sec. 8)

PART V
CHANGE OF NAME AND CORRECTION AND CANCELLATION OF ENTRIES

I Change of Name (Rule 103)

A. Nature of proceedings
   1. Republic vs. Gallo (G.R. No. 207074; Jan. 17, 2018)
   2. Basilio vs. Republic (G.R. No. 207107; Sept. 14, 2016)
   3. In Re Wang (G.R. No. 159966; Mar. 30, 2005)
   4. Silverio vs. Republic (G.R. No. 174689; Oct. 22, 2007)
   5. Republic vs. Cagandahan (G.R. No. 166676; Sept. 12, 2008)
B. Who may file; where filed
   1. Sec. 1, Rule 103
   2. Republic vs. Marcos (G.R. No. L-31065; Feb. 15, 1990)
C. Jurisdictional facts and contents of petition
   1. Sec. 2, supra
   2. In Re Diangkinay (G.R. No. L-29850; June 30, 1972)
D. Grounds for change of name
   1. Republic vs. Coseteng-Magpayo (G.R. No. 189476; Feb. 2, 2011)
   2. Republic vs. CA (G.R. No. 97906; May 21, 1992)
   3. Chua Tan Sang vs. Republic (G.R. No. L-15101; Sept. 30, 1960)
   4. Republic vs. Marcos, supra
   5. Yasin vs. Judge of Shari’a District Court (G.R. No. 94986; Feb. 23, 1995)
E. Order of proceedings
   1. Secs. 3, 4, 5 and 6

II R.A. 9048

A. Coverage (Sec. 1, R.A. 9048 as amended by R.A. 10172)
B. Who may file and where filed (Sec. 3, ibid.)
C. Grounds for change of first name or nickname (Sec. 4, ibid.)
D. Form and contents of petition (Sec. 5, ibid.)
E. Duties of the City or Municipal Civil Registrar or Consul General (Sec. 6, ibid.)
F. Duties and powers of the Civil Registrar General (Sec. 7, ibid.)
G. Effect of approving the petition (Rule 12, Implementing Rules and Regulations of R.A. 9048)
H. Effect of denying the petition by the City or Municipal Civil Registrar or Consul General (Rule 13, ibid.)
I. Appeal from the order of denial (Rule 14, ibid.)
J. Effect of impugning the decision of the city or municipal civil registrar or the consul general (Rule 16, ibid.)
K. Failure of the Civil Registrar General to impugn (Rule 15, ibid.)

III Rule 108 (Cancellation or Correction of Entries)

A. Substantive Basis
   1. Article 412, Civil Code
B. Coverage; entries subject to cancellation or correction
   1. Sections 1 and 2, Rule 108
   2. Onde vs. Office of the Local Civil Registrar of Las Piñas City (G.R. No. 197194; Sept. 10, 2014)
C. Who may file petition (Section 1, ibid.)
D. Nature of proceedings
   1. Republic vs. Tipay (G.R. No. 209527; Feb. 14, 2018)
   2. Republic vs. Gallo (G.R. No. 207074; Jan. 17, 2018)
   3. Republic vs. Olaybar (G.R. No. 189538; Feb. 10, 2014)
   4. Republic vs. Uy (G.R. No. 198010; Aug. 12, 2013)
   5. Republic vs. Valencia (G.R. No. L-32181; March 5, 1986)
   6. Republic vs. Kho (G.R. No. 170340; June 29, 2007)
   7. Braza vs. City Civil Registrar of Himamaylan City, Negros Occidental (G.R. No. 181174; Dec. 4, 2009)
   8. Republic vs. Mercadera (G.R. No. 186027; Dec. 8, 2010)
E. Parties to the proceedings (Sec. 3, supra)
F. Order of proceedings (Secs. 4, 5, 6 and 7, supra)

PART VI
RULE ON DECLARATION OF ABSOLUTE NULLITY OF VOID MARRIAGES AND ANNULMENT OF VOIDABLE MARRIAGES (A.M. No. 02-11-10-SC), AND RECOGNITION AND ENFORCEMENT OF FOREIGN JUDGMENT ON DIVORCE

I A.M. No. 02-11-10-SC

A. Scope (Sec. 1, A.M. No. 02-11-10-SC)
B. Petition for declaration of nullity of marriage
   1. Who may file
      a. Sec. 2(a), ibid.
      b. Fujiki vs. Marinay (G.R. No. 196049; June 26, 2013)
   2. Where to file (Sec. 2[b] and 4, supra)
   3. What to allege (Sec. 2[d], supra)
C. Petition for annulment of voidable marriages
   1. Who may file (Sec. 3[a], supra)
   2. Where to file (Sec. 3[b] and 4, supra)
D. Contents and form of petition (Sec. 5, supra)
E. Summons (Sec. 6, supra)
F. Prohibited motion (Sec. 7, supra)
G. Filing of answer (Sec. 8, supra)
H. Non-collusion investigation (Sec. 9, supra)
I. Pre-trial (Secs. 11-15, supra)
J. Decision and appeal (Secs. 19 and 20, supra)
K. Decree of nullity or annulment of marriage (Secs. 22 and 23, supra)

III Recognition and Enforcement of Foreign Judgment on Divorce

A. Article 26 (2nd par.), Family Code
B. Sec. 48, Rule 39 of the 1997 Rules of Civil Procedure
C. Secs. 24 and 25, Rule 132 of the Rules of Court, as amended by A.M. 19-08-15-SC
D. Doctrine of processual presumption
E. Cases:
   1. Van Dorn vs. Romillo (G.R. No. L-68470; Oct. 8, 1985)
   2. Pilapil vs. Ibay-Somera (G.R. No. 80116; June 30, 1989)
   3. Roehr vs. Rodriguez (G.R. No. 142820; June 20, 2003)
   4. San Luis vs. San Luis (G.R. Nos. 133743 and 134029; Feb. 6, 2007)
   5. Republic vs. Orbecido (G.R. No. 154380; Oct. 5, 2005)
   6. Corpus vs. Sto. Tomas (G.R. No. 186571; Aug. 11, 2010)
   7. Republic vs. Manalo (G.R. No. 221029; April 24, 2018)

Suggested Readings:

01. Tan, Ferdinand A. Special Proceedings: An In-Depth Study for the Bench and the Bar. Manila: Rex Bookstore, 2019.
02. De Leon, Magdangal M. and Wilwayco, Dianna Louise R. Special Proceedings: Essentials for Bench and Bar. Manila: Rex Bookstore, 2015.
https://de.scribd.com/document/446342501/PART-II-course-syllabus
CC-MAIN-2020-40
refinedweb
2,468
81.19
The work is based on labs and exercises from previous offerings of CSCI 363. In this lab you will create a pair of programs that mimic the behavior of a Linux login server and client. You will be given a set of source programs as a starting point. One of the programs, login.c, asks the user to enter a user name and a corresponding password. The program checks the validity of the user name and password against a given database. If the user name and password match properly, the program starts a user shell with which the user can do any Linux work. The other programs are a pair of remote login client and server, rshClient.c and rshd.c; some of their functions are implemented in a separate file called rsh.c. This set of programs allows a user to remotely log into a server without any authentication. Your task is to try these programs first, then read them to understand the process. Finally you will revise the programs so that the remote login server asks for a user name and password for authentication. Only users with proper credentials can log onto the remote server. First, do the following to gain some first-hand experience. Copy the files from ~cs363/Spring16/student/labs/lab06/ into your lab06 directory. Run make. A number of executables will be generated. The program mlogin allows a user to enter a user name and a password to use a shell on a local Linux system. The program rshd is a server program that allows a user to get into a remote system without a password. The program loginClient is a client program that will connect to a host that is running rshd. The program mypasswd is a program that allows you to create a password for an existing system user. You should also see two text files called passwd and shadow, which are a faked password file and a faked shadow file. The use of these files will become clear as we move on to the lab exercises. Run man -s 5 passwd if you are not sure how the file is structured.
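The passwd(5) format that the man page describes is simply a line of seven colon-separated fields, and the lab's faked passwd file follows the same layout. As a quick illustration — a Python sketch, not part of the lab's C sources:

```python
# Sketch: parse one passwd(5)-style line into its seven fields.
def parse_passwd_line(line):
    fields = line.rstrip("\n").split(":")
    if len(fields) != 7:
        raise ValueError("malformed passwd entry")
    name, pw, uid, gid, gecos, home, shell = fields
    return {
        "name": name,      # login name
        "passwd": pw,      # 'x' means the real hash lives in shadow
        "uid": int(uid),   # numeric user id
        "gid": int(gid),   # numeric group id
        "gecos": gecos,    # comment / full name field
        "dir": home,       # home directory
        "shell": shell,    # login shell
    }

entry = parse_passwd_line("jdoe:x:1001:1001:John Doe:/home/jdoe:/bin/bash")
print(entry["name"], entry["shell"])  # jdoe /bin/bash
```

The second field being "x" is the cue that the actual password hash is kept in the companion shadow file, which is exactly the split this lab's passwd and shadow files mimic.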
You then set a password for the account you just added to the passwd file with the command host% ./mypasswd user, where user is a user name that is in the passwd file. The program will ask you for a password. Note that this is a faked password; don't enter your real password. This faked password is stored in a shadow file in your current directory. We describe some basic ideas here. There are three major components in this set of programs. rshd waits on a particular port for clients to connect. Once it receives and accepts a connection request, it spawns a child, either using a thread or forking a new process, to service the client. Credentials are checked with getpwnam() and getspnam(). We created a pair of faked system calls because on our Linux systems we no longer use the passwd and shadow files to check user credentials, but the essential concepts are the same. Read the manual pages on getpwnam() and getspnam() to gain a basic understanding of this concept. You will develop a set of remote login service programs using the two existing pairs of programs. You should have a client program that takes the user name and password on the local machine. The client program then sends the user name and password pair to the remote machine (the server). The remote login service should check the validity of the user name and password combination. If the user is valid, the program provides a shell service to the remote client. If the user is invalid, the program simply ignores the request and prompts the user for the next trial if the user so chooses, just like any Linux system would do. The user credential files passwd and shadow reside on the server side.

Revise the server/client to check user name and password

Your first task is to revise the server (rshd.c and rsh.c) and client (rshClient.c) such that your client program and the server program can run on two different Linux computers with user name and password checking.
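The lab does not prescribe a wire format for sending the credentials, so you will have to pick one yourself. One simple choice — sketched below in Python as a hypothetical framing, not something mandated by the lab's C code — is newline-delimited fields:

```python
# Hypothetical framing for the credential exchange: the client sends
# "user\npassword\n" as one message; the server splits it back apart.
# Note this travels as plain text on the wire, hence it is not secure.
def pack_credentials(user, password):
    if "\n" in user or "\n" in password:
        raise ValueError("newline not allowed inside a field")
    return (user + "\n" + password + "\n").encode()

def unpack_credentials(data):
    # split at most twice: user, password, and any trailing remainder
    user, password, _ = data.decode().split("\n", 2)
    return user, password

msg = pack_credentials("jdoe", "secret")
print(unpack_credentials(msg))  # ('jdoe', 'secret')
```

Whatever framing you choose, the client and server halves must agree on it exactly, and rejecting delimiter characters inside fields (as above) avoids a malformed message splitting into the wrong pieces.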
You will have to add the part of the logic in login.c that reads the user name and password into rshClient.c in the proper place. You also need to add the part of the logic in login.c that checks the user name and password to the server, rshd.c and rsh.c, in the proper place. Note that at this point the user name and password sent to the remote server are in plain text, so the service is not secure. In order to make a secure service we would need to use the Secure Sockets Layer (SSL) protocol, which we will explore later in the semester.

Revise the server to run on your VM

After you make the programs work properly on our local Linux machines, you are asked to revise the programs to run the server program on your VM so the program can access a set of user names and passwords on the VM. You have to complete two separate sets of tasks to make the programs work. First, you need to change the firewall set-up on the VM side such that the server program can run on the VM at a particular port. Use useradd user-id to add a new user, for example useradd jdoe. Set its password with passwd, for example passwd jdoe. Disable the default firewall with chkconfig iptables off and service iptables stop. Add the line -A INPUT -p tcp -m multiport --dports 8000 -m comment --comment "700 allow port 8000" -j ACCEPT in the file /etc/sysconfig/iptables after the --dport 22 line. The number "8000" is a sample port number; please make sure to use a port number that you feel comfortable with (don't just use the one given in the example above). Then run service iptables reload. Next you need to revise the rshd program so it will use the real getspnam() function in the Linux system. Notice that in your current program the function getspnam() is custom developed to mimic the behavior of the system function with the same name. The reason for doing so is that your program usually runs from user space, which doesn't have the privilege of reading user information. But when running on your VM as the root user, the program has access to the real data. Thus you can use real system calls to access this information.
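Before wiring in the real getspnam(), it may help to see the shape of the check itself. The Python sketch below is illustrative only — the lab's programs are in C and use crypt() for hashing — but it mimics the same look-up-then-compare flow that getspnam() enables, with a salted SHA-256 digest standing in for crypt():

```python
import hashlib

# In-memory stand-in for the shadow file: user -> (salt, stored hash).
SHADOW = {}

def set_password(user, password, salt="s1"):
    # Store only a salted hash, never the plain-text password.
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    SHADOW[user] = (salt, digest)

def check_password(user, password):
    # Mimics getspnam(): look the user up, then compare hashes.
    if user not in SHADOW:        # unknown user -> reject
        return False
    salt, stored = SHADOW[user]
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return digest == stored       # match -> grant the shell

set_password("jdoe", "secret")
print(check_password("jdoe", "secret"))  # True
print(check_password("jdoe", "wrong"))   # False
```

In your C code the equivalent steps are: call getspnam() to fetch the shadow entry, hash the submitted password with the same salt, and compare the result against the stored hash.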
The following is what you need to do. Transfer the files with sftp on your Linux computer, for example sftp mynode-123-4, then cd cs363-lab06 and mput *. Change #include "shadow.h" to #include <shadow.h>, and declare extern struct spwd *getspnam(); because now you will be using the function provided by the system. Now you should be able to compile the program by simply doing a make. Fix any errors you might have. Then run rshd on the VM. With the server (rshd) running on the VM, you can compile and run your client on any other Linux computer, and the pair of programs should allow you to log into the VM from the local Linux machine and work in a way similar to ssh. When all is working well on the VM side, please copy all program files, including the Makefile, back to your Linux side. Put them in the subdirectory native. You are asked to submit this set of files as well. You are asked to commit and push all program files in your lab06 directory and its subdirectory native. In addition, create an answer.txt file. Include four sets of screen outputs, using copy-and-paste or script, and label each output with a proper title. Congratulations! You just finished this lab.
http://www.eg.bucknell.edu/~cs363/2016-spring/labs/lab06-linux-rsh.html
CC-MAIN-2017-51
refinedweb
1,282
74.39
In Machine Learning, a decision tree is a decision support tool that uses a graphical or tree model of decisions and their possible consequences, including the results of random events, resource costs, and utility. It is a way of displaying an algorithm that contains only conditional control statements. In this article, I will take you through how we can visualize a decision tree using Python. Visualizing a decision tree is very different from visualizing the data on which a decision tree algorithm was used. So, if you are not very familiar with the decision tree algorithm, I recommend you first go through the decision tree algorithm from here. Also, Read – Visualize Real-Time Stock Prices with Python. How to Visualize a Decision Tree? If you are a practitioner in machine learning, or you have applied the decision tree algorithm before in a lot of classification tasks, then you may wonder why I am stressing visualizing a decision tree. Just look at the picture down below. On the right side, we have a visualization of the output we get when we use a decision tree algorithm on data to predict the possibilities. On the left side, we have the structure that a decision tree algorithm follows to make predictions by building trees. So, I hope now you know the difference between visualizing the output of a decision tree algorithm on the data and visualizing the structure of the decision tree algorithm itself. Now let’s see how we can visualize a decision tree. Visualize a Decision Tree To explain the process of visualizing a decision tree, I will use the iris dataset, which holds petal and sepal measurements for 3 different types of iris species (Setosa, Versicolour, and Virginica), stored in a NumPy array of dimension 150×4.
Now, let’s import the necessary libraries to get started with the task of visualizing a decision tree:

import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
from sklearn import tree

Now, let’s load the iris dataset and have a quick look at the first 5 rows of the data by using the pandas head() method:

iris = load_iris()
df_iris = pd.DataFrame(iris['data'], columns=iris['feature_names'])
df_iris['target'] = iris['target']
df_iris.head()

Train a Decision Tree

For visualizing a decision tree, the first step is to train it on the data, because the visualization of a decision tree is nothing but the structure that it will use to make predictions. So, to visualize the structure of the predictions made by a decision tree, we first need to train it on the data:

clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris.data, iris.target)

Now, we can visualize the structure of the decision tree. For this, we need to use a package known as graphviz, which can be easily installed by using the pip command – pip install graphviz. Now, if you have installed this package successfully, let’s move forward with the task of visualizing the decision tree:

!pip install graphviz
import graphviz
dot_data = tree.export_graphviz(clf, out_file=None,
                                feature_names=iris.feature_names,
                                class_names=iris.target_names,
                                filled=True, rounded=True,
                                special_characters=True)
graph = graphviz.Source(dot_data)
graph

In the output, we can see the structure of the decision tree that is used in making predictions on the data. But these are numerical values, which mean a lot in machine learning; to make this task more interesting, let’s visualize the graphical representation of each step involved in the structure of the decision tree.
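A side note not covered in the original post: if installing Graphviz is inconvenient, scikit-learn itself can dump the same tree structure as plain text with sklearn.tree.export_text, with no extra packages required. A minimal sketch:

```python
from sklearn.datasets import load_iris
from sklearn import tree
from sklearn.tree import export_text

# Train the same classifier, then print its structure as indented
# text rules; each "|---" line is one split or leaf of the tree.
iris = load_iris()
clf = tree.DecisionTreeClassifier().fit(iris.data, iris.target)
report = export_text(clf, feature_names=list(iris.feature_names))
print(report[:300])
```

This text view carries the same split thresholds and leaf classes as the graphviz rendering, just without the colors and shapes, which makes it handy for logging or quick inspection in a terminal.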
Graphical Visualization of Each Step

For this task, we need to install another package known as dtreeviz, which can be easily installed by using the pip command – pip install dtreeviz. Now, if you have installed this package successfully, let’s see how we can visualize the graphical representation of each step involved in making predictions:

!pip install dtreeviz
from dtreeviz.trees import dtreeviz
viz = dtreeviz(clf, iris['data'], iris['target'],
               target_name='',
               feature_names=np.array(iris['feature_names']),
               class_names={0: 'setosa', 1: 'versicolor', 2: 'virginica'})
viz

Also, Read – Build and Deploy a Chatbot with HTML, CSS and Python. In the output above, we can see the distribution for each class at each node, you can see where the decision boundary is for each split, and you can see the sample size at each leaf as the size of the circle. I hope you liked this article on how we can visualize the structure of a decision tree. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium to learn every topic of Machine Learning.
https://thecleverprogrammer.com/2020/08/22/visualize-a-decision-tree-in-machine-learning/
CC-MAIN-2021-43
refinedweb
812
51.28
Join devRant Pipeless API From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More Search - "developer" - What a stupid configuration of firewall at my work: devrant -> blocked because of entertainment category. xvideos -> no problem at all. Conclusion: sysadmin likes watching porn.14 - - - - -.43 - - The CEO asks God: "God, how much time do you need to create the earth?" God: "uh, 10 billion years I think" CEO: " You have only 7 days. Well 6, the last one is to fix everything gone wrong after deploying" And here we are6 - - - - Friend: what do you do for living? Me: I am a developer, software engineer. Friend: Lucky you! you have a comfortable work, always in your desk. Me inside: *he doesn't have a clue about developers struggling* *dying inside*6 - - - - - - - - - This is my message to that particular developer of Microsoft who made a change in the Win32 API but was too lazy to update the MSDN doc: FUCK YOU FUCK YOU FUCK YOU. You wasted 3 days of mine and I had to find your fucking change by looking into the source code - - Progression in mindset of a developer trough professional life: 1. I'm going to make my code so efficient and beautiful that everyone will envy it! 2. I'm going to make sure I keep separation of concern. 3. I'm going to make my code at least maintainable for other developers. 4. Well shit. At least it works, for now.3 - - Me: I don't spend the prime of my life watching series, I code, I develope, I learn * Discovers Silicon Valley * 😓6 - I hired a new developer after careful screening and interviewing many candidates. First thing he's asking first day on the job - I have already booked august month for holiday, is that a problem? - I need to come 2 hours before anyone else in the morning and leave 2 hours before, everyday because I have things to do at home. - I've seen that espn.com sport news are blocked by the firewall, why is that? 
- I've installed bitTorrent on my PC but it's very slow downloading movies I hope he's good.21 - - - - - - Wait... wait....., I'm the sole developer at a company does that make me the Lead Developer and Senior Developer. TIME TO UPDATE THE CV.3 - - - - - - She asked to tell her a joke. "My life...", said I. "Error 404, joke not found..." replied she. She is also a developer.5 - - - - - - - - The time when i learned to turn on the developer options ; i felt the same as a developer who has compiled his code without a single error. 😂3 - - - - top categories of people screwing my job: - consultant - fake positive thinking attitude We just hired a fake positive thinking consultant9 - You Either Die As A Developer, Or You Live Long Enough To See Yourself Become The Tester P.S. No offence to Testers. It just i hate testers6 - - naked developer day: Today I'll work from home, sitting completely naked in front of my mac. Only keyboard, mac and me. It's a huge saving in clothes and energy to clean them23 - - Somebody told me this: You see this graph? ColdFusion is the best language ever. There are almost no questions whatsoever on stackoverflow: that means nobody has no fucking problems using it.11 - - - - - Is it only me or does anyone else think that they are a bad developer? Everytime im on devrant i think that i dont know shit.. :...25 - - - Oh I forgot. Once I got promoted with more responsibility and my pay raised, but since I just passed some tax threshold for few $ my net income was more or less 1 hundred $ lower than before the promotion.. - - What do you wanna become? / What are you? 1. PHP Developer 2. Python Developer 3. Node.Js Developer 4. JavaScript Developer 5. Java Developer 6. Android Developer 7. Other (please mention in comment)68 - - - - !student Principles of Programming Languages teacher: No one in industry uses git. The same guy who refused to take semester project submissions as github links. 
Also "Python is never pass by reference/id()"5 - person: what type of work do you do? me: I'm a developer person: oh, so can fix computers and stuff? me: you realize that you insult me, right?3 - - - - - Seaches "How to get Google - Play Developer account" - Clicks on first link - Enters details - Sees Price -$25 - Searches " How to get Google play developer account for free"2 - - Why did you choose to be a developer? For me: I always liked to know how softwares work, and watch a thing that I created running!14 - - - - Yes I am a developer and I am good at soccer, come on you naive idiot standing in front of me in complete awe, the days where these things were mutually exclusive are long gone! 🤓⚽️8 - - Cousins came over... Me: just compiling some python code, opens up jupyter notebook to take a look at some data science code Little Sis: *looks at jupnb dump on cmd* Whoa are you Hacking? Me: yeah. I got bored of whole Hacking command typing thing so I opened up my hacker console. *print("hello world")* Sis:wow! Me: you know what, typing is too tiresome, I'll connect to it with my mind *alt-tab* *cmatrix -b* *sits in yoga pose* Little Sis: Screams at the top of her lungs and runs to aunt "DAVE IS HACKING MATRIX"3 - - - Daily scrum Today, the Scrumpy Master was not here, so I leaded the daily scrum meeting, rephrasing the 3 standards questions a little bit: the results where amazing. Here my questions in case somebody want to use them: - What the fuck did you do on Friday? - What the fuck do you think you are doing today? - What is your fucking problem? We managed to keep the meeting very short and after the meeting everybody was sooooo concentrated I couldn't believe it. Beeep Beeeeeep 7:00 o'clock. Shit. I was dreaming. Must wake up and go to work. Scrum master will be there too.8 - Reinstalling Android Studio. It takes a while. So you take a rest, exercise a little. 
Sure, it will installed when you'll come back ready to throw yourself into deep work, with fresh energy. You come back. There is a pop up: Do you want to send usage data to google ? Nothing installed yet. Only Yes/No option. Where is the "Fuck you" option?12 - - - Just rememberes a collegue i had for a short period, i was remote and he was on location randomly added, I was told he was a php developer. What he did was delete ALL the whitespace in the php code and called it optimising, he told the director it’ll make the code run faster. You can imagine how fun that was...2 - - - - When you realize that "O&O ShutUp10" (AntiSpy tool for Windows) creator company is a Gold Microsoft Partner - when you are in a manual editing frenzy and you press F1, "HELP" by mistake instead of F2, "edit" And the fukken Excel stop the universe for precious seconds to give you stupid help. I want to remove fukken F-uck 1 key5 - - - half day gone try to find or remember the password of some SSL/key/encrypt/crt/shit/whatever. Blaming myself for hours, how could I not save the password somewhere? (I pressed enter, no password). it works. I love IT security - - 1. take a web application working in somebody computer since 4 years with tons of features. 2. Believe that the application is the future and solve brilliantly a general market need. 3. multiply income of current only customer by 10: you are going to be rich. 3. start to install 2, 3 customers. 4. discover the application is shit. Doesn't solve well the problem. Functionalities are different for each customer 5. discover that customers are willing to pay 1/10th of the original customer 6. quickly reingeneer the application to a multitenant cloud application, because with 3 customers and different versions you are already in deep shit 7. keep giving away the application for free to flagship customers. With a lot of customisation developed for free. 8. reach 200 customers in 5 years and still no break even, but lot of debts 9. 
resort to financial tricks to keep the company going Luckily money are not mine. They could be recovered. Unluckily the time spent was mine. It couldn't be recovered Hope that the application will finally crash so that I can move on to the next thing: retirement in a mental asylum - - - What must someone do to be called a developer? Is there some number of projects that someone should complete before they should call themself a developer?12 - - - Developer proposing a solution to architect-- Workaround😵 Architect asking a developer to use workaround-- Architect Solution 😎2 - - I'm in a company with no senior devs I can look to for mentoring. How do you go about scaling with the company without a developer more senior to guide you during development? I feel like I'm always second guessing decisions.14 - - - - - - It seems almost everyone here is a web or mobile developer of some sort. Am I the only non-student, desktop developer? I occasionally do some backend web stuff, but I just do a lot of desktop stuff (mostly C++)5 - - I don't like when depending on something, e.g. smoking or coffee, but still have to carry on dependencies in node_modules. - I proposed agile training to my company. I choose a well known coach around here, with good references. First 3 days were great. After a month he came back for another session and check progress. This time, he literally fell asleep during the workshop. Several times. He would ask questions, sit down and quietly fall asleep while waiting for our answers. We were astonished and embarrassed. He apparently had a very hard working period and could not cope with traveling and working so much. He apologized some day afterwards and didn't charge us for the day. He never came back. The team didn't take it very well and my reputation was compromised, as well as trust in the methodology I think. I kept saying that everybody can have a bad day, but it was probably just to defend myself and my fucking stupid idea of changing the world. 
- A real fucking shame. Still can't believe it when I remember this.
- Being a backend developer, the most difficult job is to write <button class="btn btn-primary">Sign up</button> 😞
- An excerpt from the encyclopedia of "Developer Confessions": at times, when I have no clue what some code does, I comment it out to see what breaks. Sometimes I just want to see the code burn.
- "...used to be a developer back in the days"? Yep, maybe this is true. But the fact is: I'm the developer and you aren't. Do your fucking work and don't bother me with stuff you don't understand.
- Seriously, when a company asks to recruit a developer, but only a female developer, for a regular post: what exactly IS the work to do? 🤔
- Hi devRanters, hi you all. I really appreciate all of you who are patiently reading, humorously or not so humorously commenting, and wisely or not so wisely giving advice on my semi-serious rants. It's a great stress relief for me in this moment to know somebody is out there listening to my stupid problems. It will probably also improve the life of people around me.
- I'm not any kind of developer (yet), but I'm learning, and this awesome site really helped me to understand more about this industry from the inside, and I want to do this even more than before! =D
- The mixed feeling when you manage to give orders without giving orders. A feeling of omnipotence: you worked behind the scenes and you got what you wanted; nobody knows, but what the hell, you are the Puppet Master, God. And sadness and loneliness: they will never learn, somebody else claims to have given the orders, nobody knows about you. God is alone after all. And you'll be killed one day.
- Fuck, really FUCK the fucking MySQLWorkbench on Mac. Useless piece of shit. I fucking touched some fucking buttons and now I can't get my view back with query editor, output results and schema view. A fucking hour wasted restarting this shit of a tool and touching things: nothing. All to execute a fucking stupid query. AHAHAHAHAHAH FUUUUUUCK. I NEED to work, not to understand how your stupid GUI works, designed by a crippled mind with poor IQ and developed by retards.
- All day long in meetings with a business consultant about the company's future, software architecture, technical debt, refactoring, resources, projects. Conclusion from the top consultant, ex country manager of a weeeell known tech company: who cares about "code" anyway? (disgusted smile)
- I think I'm reaching 40 hours in 3 days coding a function for a nasty grouping report. Now the report is ready. Testing with real data, I'm 3/4 units off. Now starts at least one full week of monster counting-debugging-fixing on hundreds of data points. If somebody gets close to me in these days, I'll cut their throat, drink their blood and eat their heart still beating, like the Aztecs. I'll have no time to cook or buy anything else to eat anyway, so it will be for survival.
- Newbie Agile Team: "Hi Scrum Coach, we studied and implemented the Scrum methodology, but we are as late as before and our software is as buggy and shitty as ever. How is that?" Agile Coach: "The Scrum methodology is easy to learn, but difficult to master!" Newbie Agile Team (chorus): "Oh coach, fuck yourself daily, with your coffee thermos, standing up and once per week retrospectively. If you come to the next review meeting, we will gangbang your ass in front of the stakeholders."
- CVE-2019-3568. Description: a buffer overflow vulnerability in the WhatsApp VOIP stack allowed remote code execution via a specially crafted series of SRTCP packets sent to a target phone number. NSO Group even sells a spyware application based on that vulnerability to governments. Listen!!!!! I'm going to the toilet with my phone!!! Listen!!!
- Hmmm, more rants about Linux than about Windows? Actually, analysing better, most "rants" about Linux are expressions of love. I can't find, however, any expression of love for Windows. So the question remains open.
- How do you guys take care of your eyes? I've been coding on this uni project for 2 weeks and my eyes have literally turned into fried nuggets. And my head hurts like shredded tacos. My ophthalmologist prescribed me mild painkillers, anti-inflammatories and lubricant eye drops. This knowledge will be useful to all :)
- Life of an Oracle Developer... Day {I've lost bloody count now}. Task: optimise a 236 line cursor consisting of 7 SQL SELECTs and unions, 39 joins and nested subqueries galore. "YAYYY" said no one ever...
- People completing Stanford and Andrew Ng's course and bragging about how they know machine learning in and out, while having no idea how to code the simplest application using the simplest libraries.
- Just read this, wanted to share it. Jesus at the Last Supper: (breaks bread and gives it to them) "Take, this is my body." (pours wine) "Drink, this is my blood." (opens a jar of mayo)... Judas: "I'm gonna stop you right there."
- Wait, if there are 3.4 billion fake Facebook users, that means there are also at least 3.4 billion fake email accounts around. Jeez. And the spam traffic estimates are at 260 billion emails per day, so 260B/3.4B = 76 emails sent by each fake email account per day. Probably much less, as there are likely even more fake email accounts. So, only 76 spam emails sent per account per day. I think there is still room for a big improvement.
- """Itty bitty frustration""" # wannabe mode on import sklearn iris_dataset = sklearn.datasets.load_ir
- One boolean can change your life 😂😂 Ever think of a life without booleans? Share your views on the same. A true developer knows it best!! 👀
- $ git commit -m "fix fucking bug again"; $ git push; fatal: right now you can only push 1 commit every 2 rants (every 1 rant for gitRant++ members) because we want to make sure everyone's commits are pushed in a relaxed state of mind! 2 rants to go until you can push another commit.
- Not a web developer, but I recently discovered that the developer tools from Mozilla are much richer in many small but useful functionalities, for example blackbox mode or the eyedropper. Plus all the other awesome stuff from Chrome.
- When you have to write a super detailed description so the offshore developer doesn't screw up the task... but better yet when the outcome is 👌🏽
- The daily/weekly/monthly/yearly meeting where everybody absolutely agrees on absolutely fucking everything and everybody shows only positive things. After that, you go back to work and it's the opposite.
- Well, my first project was to replicate something I saw somewhere: connect a pen to a potentiometer and to the serial port of an Apple II in such a way that you could replicate the movement of the pen on screen and also draw. An Apple II. Mice, touch screens, tablets, etc. didn't exist. It worked. However, apart from feeling old, I also feel stupid now, because I didn't understand at all the potential of such a tool, nor what was going to happen in a few years. I could have invented the mouse. Or the concept of a GUI. It was just in front of me. Instead, I think I just drew some tits and some dicks. So I'm here. Wondering: what is there now in front of my eyes that I don't see?
- Listening to @addlinny and @cascross123 dealing with our Apple developer account. I probably need popcorn for this!
- I don't get it. The job listing is for a developer. I applied as a developer. Why do they ask me whether I'd be willing to do tech support? What's their motive?
- Counting things and matching columns. Two easy and stupid things that make you lose it if you can't get them right.
- So I moved to a new company. When I entered on my first working day, I found one candle on my table. I suppose the former employee left it for me. Hmm, do you think the same thing I do?
- A few days back I wrote a blog post on the topic 'Be a Happy Developer'. Later on, I figured out that I am the most boring developer among the developers I know.
- "Everything fails all the time!" (Werner Vogels, CTO @ Amazon.com). "Werner Vogels, fuck you, everywhere, all the time!" (Deviloper Ranter @ devRant).
- Siri, remind me to write a new rant in 1 hour and 20 minutes. So I can keep my stress down and increase my productivity, and maybe become a devRant++ member. Am I addicted?
- Family: so now you're getting a job. Me: oh yes, I'm a developer. Family: oh great! Which real estate? Me: (...)
- 1. The brain feels in motion, without idle time; it's a very pleasurable feeling. 2. Creating things from scratch by myself. 3. Solving problems. Not all coding is like this, though.
- I'm responsible for the smooth operation of the platform, i.e. I'm responsible if something doesn't work, as I'm the technical leader, cofounder and original developer. However, I have no control over installation scheduling, feedback from customers, new feature planning, or the installation tasks performed by the team. No resources whatsoever. And everybody NEEDS me to perform even small tasks. I would delegate and automate if only I had the time to explain and develop scripts. But I have zero time. So basically everybody is counting on me working 15 hours per day to get things done. And one person is also claiming to be "in charge of operations". He is actually only in charge of me. I cannot exit this vicious circle. I'm like the house doormat.
- I hate spending more time answering questions than doing development. Pointless title of developer...
- !rant... He must be a developer. It is alright, bud. It is alright.
- new Suit() + new Developer() == interview(); The irony here is that I usually wear casual to interviews.
- When your application works everywhere, in every situation, for every user, EXCEPT on the fucking device and user of the new Board Member... fuck fuck fuck. (Especially because we have a task dedicated specifically to testing and targeting the CEO's and old board members' devices.)
- WHEN: ...when the analyst decides whether a feature is too complex to implement or not. So you don't get the requirements, because he thinks they're too complex. So you develop something that has nothing to do with the requirements. Actually something much more complex. And then, one week before deployment, the customer shows you and the analyst that what you did is fucking useless. It was much easier, or at least completely different.
- Can you share some interesting and useful topics for a web developer (PHP) who wants to become a nodejs based full stack developer? Udemy course links will be useful.
- That exciting moment when remotely connected from one job's computer to a second job's PC and doing a third job's tasks. 😏
- We go to see a customer for a small project, probably half a month of dev and a few thousand $, basically a small member page for a small local club. "We want something exactly like this," he says. He opens the browser and logs in to LinkedIn. I wait. "Something like this," he repeats. I finally understand. "Well," I say, "a login page and profile picture upload: yes, we can do it..."
- Every time I'm too lazy to type "localhost" completely and quickly hit Enter, I fall into the "Shakira - Loca" trap.
- I guess it's a well known classic stress-relief game, but just in case somebody doesn't know it and needs it:... I still play it from time to time.
- The Windows Update algorithm:

```c
#include <stdio.h>

/*
 * Windows Update Algorithm
 */
int main() {
    int percent = 1;
    while (percent <= 100) {
        printf("Working on updates\n");
        printf("%i %% complete\n", percent);
        printf("Don't turn off your computer\n\n");
        if (percent == 30) {
            printf("Restarting\n");
            break;
        }
        percent++;
    }
    return 0;
}
```

- Try playing the T-Rex game in Chrome with dark mode enabled. For me, this looks like a bug, but maybe it's a feature!
- Our Bus Factor is 0.1: if a bus, somebody or something breaks or slightly touches one of my balls, the whole company is screwed.
- I started to write an API for our application and asked everybody to use it. Everybody liked the idea, but nobody liked the API. So now we have api/v1, api/dev1, api/dev2 and api/dev3 to do the same fucking operations. When I complained about them not respecting the guidelines, devs 1, 2 and 3 told me it's my fault because I'm the director. I thought for a while about how to get rid of these APIs and I finally agreed with their view. I removed developers 1, 2 and 3, and now there is only api/v1.
- [dying] I came here with a simple dream... a dream of killing all humans. And this is how it must end? Who's the real seven billion ton robot monster here? Not I... Not I... -- Bender Rodriguez (Futurama)
- A movie about a developer would be black and white. Either black should exist or white. Both appear to coexist, but in reality maybe not.
- What kind of developer are you? What do you write first, LHS or RHS, while assigning values? Step 1: a+b, then Step 2: const a = a+b; or Step 1: const a;, then Step 2: const a = a + b;
- The Urban Dictionary definition of developer is "organism capable of turning coffee into code." You're welcome.
- The good thing about working from home all weekend is that on Monday at work you don't have that shitty "it's fucking Monday" feeling. Plus, you meet some humans after 2 days in the cave.
- Part-of-a-team... teamwork!... Always fun to watch if you know it, inspirational if you don't. And if your boss is De Niro, meditate, and better be part of the team.
https://devrant.com/search?term=developer
If you are looking for Bootstrap without jQuery, or vanilla JavaScript for Bootstrap, this is the place to get started. Filed under application tools › frameworks.

BSN stands for Native JavaScript for Bootstrap: the faster, lighter and more compact set of JavaScript components for Bootstrap 5 and Bootstrap 4, developed on modern ES6+ standards. The bootstrap.native library is available on npm and CDN, and comes packed with build tools and lots of goodies. The library is around 30Kb minified (38Kb for the V5 build) and around 10Kb gZipped. See the demo for component guidelines and examples, or the Wiki "How to use" page on how to use the library or create custom builds.

BSN Wiki

Please check the bootstrap.native Wiki pages; they're updated with almost every new commit:

- Acknowledgements: there are similarities and differences with the original jQuery plugins, good to know for maximizing your workflow.
- How to use: an in-depth guide on how to use the library.
- CDN Links: use CDN links available on jsdelivr and cdnjs.
- Locally Hosted: download and copy into your project's assets/js folder, then use proper markup to enable BSN on your pages.
- ES5 Example: basic component initialization with JavaScript ES5.
- ES6+ Example: a modern application would import BSN from "bootstrap.native".
- NPM Installation: just execute npm install bootstrap.native, or mark it as a dependency and take it from there.
- Custom Builds: use the rollup build scripts to create your own custom builds, with only the components you need.
- Dynamic Content: use the library callbacks with your turbolinks:load, mount, load and similar events.
- RequireJS/CommonJS: NodeJS applications would use let BSN = require("bootstrap.native").
- Factory Methods: for NodeJS apps you need to have document and window in scope.
- Browser support: enable legacy browser support via polyfills.
- FAQs: a short list of frequently asked questions regarding the use of the library.
- About: learn about the bootstrap.native project's inception, goals and motivations.

BSN Contributors

BSN License

The BSN library is released under the MIT license.
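To give a flavour of the workflow described in the Wiki list above, here is a minimal usage sketch. The CDN path, the markup attributes and the auto-initialization behaviour are assumptions for illustration only; check the Wiki's "CDN Links" and "How to use" pages for the real links and markup:

```html
<!-- Bootstrap 4 style markup; the idea is that bootstrap.native picks up
     components found in the DOM once the script loads, with no jQuery. -->
<button class="btn btn-primary" data-toggle="tooltip" title="Hello!">
  Hover me
</button>

<!-- Illustrative CDN path; see the Wiki "CDN Links" page for real URLs. -->
<script src="https://cdn.jsdelivr.net/npm/bootstrap.native/dist/bootstrap-native.min.js"></script>
```

Alternatively, with the npm installation you would import BSN from "bootstrap.native" and initialize components yourself, as shown on the Wiki's ES5/ES6+ example pages.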
https://www.javascripting.com/view/bootstrap-native
7.10.0 Released: Class Fields in preset-env, '#private in' checks and better React tree-shaking

We just released a new minor version of Babel! This 7.10 release includes:

- Full support for the new Stage 1 proposal: #prop in obj checks for private fields.
- @babel/preset-env now compiles ES2015-style Unicode escapes (\u{Babe1}) to the equivalent legacy syntax (\uDAAA\uDFE1).
- Two improvements to the Optional Chaining operator (?.).
- Parser support for the new Stage 1 Module Attributes proposal (import a from "./a.json" with type: "json").
- Better tree-shaking support for React code (i.e. React.memo)!
- Setting up an RFCs repo and GitHub Discussions pages!

You can read the whole changelog on GitHub.

Alongside this Babel release, we are releasing the first experimental version of our new polyfills compatibility architecture (see below for more details), thanks to Nicolò and some awesome folks in the community! We began discussions about this over a year ago in an RFC issue within the Babel repository. As an aside, we now have an official RFC process for discussing changes that significantly impact our users: please check it out over in the babel/rfcs repository! In addition, we've enabled GitHub Discussions on our repository if you have feedback or questions!

If you or your company want to support Babel and the evolution of JavaScript, but aren't sure how, you can donate to us on our Open Collective and, better yet, work with us on the implementation of new ECMAScript proposals directly! As a volunteer-driven project, we rely on the community's support to fund our efforts in supporting the wide range of JavaScript users. Reach out at team@babeljs.io if you'd like to discuss more!

New features enabled by default

Parsing for import.meta

Now that it has reached Stage 4, parsing for import.meta is enabled by default, thanks to Kiko.
Please note that @babel/preset-env doesn't have any default support for transforming it, because what that object contains is up to the engine and is not defined in the ECMAScript specification.

```js
console.log(import.meta); // { url: "" }
```

Transforming \u{...}-style Unicode escapes (#11377)

We also discovered that we didn't have support for compiling a 5-year-old ECMAScript feature: \u{...}-style Unicode escapes! Thanks to Justin, @babel/preset-env can now compile them in strings and identifiers by default.

```js
var \u{1d49c} = "\u{Babe1}";
console.log(\u{1d49c});
```

is compiled to:

```js
var _ud835_udc9c = "\uDAAA\uDFE1";
console.log(_ud835_udc9c);
```

Adding Class Properties and Private Methods to the shippedProposals option of @babel/preset-env (#11451)

Lastly, thanks to Jùnliàng we have added @babel/plugin-proposal-class-properties and @babel/plugin-proposal-private-methods to the shippedProposals option of @babel/preset-env. These proposals are not Stage 4 (i.e. part of the ECMAScript standard) yet, but they are already enabled by default in many JavaScript engines. If you aren't familiar:

```js
class Bork {
  // Public fields
  instanceProperty = "bork";
  static staticProperty = "babelIsCool";

  // Private fields
  #xValue = 0;

  a() {
    this.#xValue++;
  }

  // Private methods
  get #x() {
    return this.#xValue;
  }
  set #x(value) {
    this.#xValue = value;
  }
  #clicked() {
    this.#x++;
  }
}
```

If you missed it from the last release: in 7.9 we added a new option, "bugfixes": true, which can greatly reduce your code output.

```js
{
  "presets": [
    ["@babel/preset-env", {
      "targets": { "esmodules": true }, // Use the targets that you were already using
      "bugfixes": true // will be default in Babel 8
    }]
  ]
}
```

Improved optional chaining ?. ergonomics (#10961, #11248)

In TypeScript 3.9, the interaction between non-null assertions (postfix !) and optional chaining has been changed to make it more useful.

```js
foo?.bar!.baz
```

In TypeScript 3.8 and Babel 7.9, the above would be read as (foo?.bar)!.baz: "If foo is not nullish, get .bar from it.
Then trust that foo?.bar is never nullish and always get .baz from it". This means that when foo is nullish that code would always throw, because we are trying to get .baz from undefined.

In TypeScript 3.9 and Babel 7.10, the code behaves similarly to foo?.bar.baz: "If foo is not nullish, get .bar.baz from it, and trust me that foo?.bar isn't nullish". Thanks to Bruno for helping to implement this!

Additionally, the class fields proposal recently added support for mixing optional chaining ?. with private fields. This means that the following code is now valid:

```js
obj?.property.#priv;
obj?.#priv;
```

Note that in the second example, if obj is not nullish and does not have the #priv field, it would still throw an error (exactly as obj.#priv would throw). You can read the next section to see how to avoid it!

Private Fields in "in" (#11372)

```js
class Person {
  #name;

  hug(other) {
    if (#name in other) console.log(`${this.#name} 🤗 ${other.#name}`);
    else console.log("It's not a person!");
  }
}
```

This Stage 1 proposal allows you to statically check if a given object has a specific private field. Private fields have a built-in "brand check": if you try to access them in an object where they aren't defined, it will throw an exception. You can determine if an object has a particular private field by leveraging this behavior with a try/catch statement, but this proposal gives us a more compact and robust syntax to do so. You can read more about it in the proposal's description, and test this proposal by installing the @babel/plugin-proposal-private-property-in-object plugin and adding it to your Babel config. Thanks to Justin for the PR!

Module Attributes parser support (#10962)

The Module Attributes proposal (Stage 1) allows providing the engine, module loader or bundler some additional information about the imported file.
For example, you could explicitly specify that it should be parsed as JSON:

```js
import metadata from "./package.json" with type: "json";
```

Additionally, module attributes can also be used with dynamic import(). Note the support for trailing commas, to make it easier to add or remove the second parameter!

```js
const metadata = await import(
  "./package.json",
  { with: { type: "json" } },
);
```

Thanks to Vivek, Babel now supports parsing these attributes: you can add the @babel/plugin-syntax-module-attributes plugin to your Babel config or, if you are using @babel/parser directly, you can enable the moduleAttributes parser plugin. Currently we only accept the type attribute, but we might relax this restriction in the future, depending on how the proposal evolves.

ℹ️ Babel doesn't transform these attributes, and they should be handled directly by your bundler or a custom plugin. Currently Babel's module transformers ignore these attributes; we are discussing whether we should pass them through in the future.

Better tree-shaking for React components (#11428)

React exposes many pure functions used to annotate or wrap elements, for example React.forwardRef, React.memo or React.lazy. However, minifiers and bundlers aren't aware that these functions are pure, and thus they cannot remove them. Thanks to Devon from the Parcel team, @babel/preset-react now injects /*#__PURE__*/ annotations in those function calls to mark them as safe to be tree-shaken
We had only previously done this with JSX itself ( <a></a> => /*#__PURE__*/React.createElement("a", null)) import React from 'react'; const SomeComponent = React.lazy(() => import('./SomeComponent')); import React from 'react'; const SomeComponent = /*#__PURE__*/React.lazy(() => import('./SomeComponent')); #10008, New experimental polyfills architecture ( babel-polyfills) In the last three years, @babel/preset-env has helped users reduce bundle sizes by only transpiling the syntax features and including the core-js polyfills needed by their target environments. Currently Babel has three different ways to inject core-js polyfills in the source code: - By using @babel/preset-env's useBuiltIns: "entry"option, it is possible to inject polyfills for every ECMAScript functionality not natively supported by the target browsers; - By using useBuiltIns: "usage", Babel will only inject polyfills for unsupported ECMAScript features but only if they are actually used in the input souce code; - By using @babel/plugin-transform-runtime, Babel will inject ponyfills (which are "pure" and don't pollute the global scope) for every used ECMAScript feature supported by core-js. This is usually used by library authors. Our position in the JavaScript ecosystem allows us to push these optimizations even further. @babel/plugin-transform-runtime has big advantages for some users over useBuiltIns, but it doesn't consider target environments: it's 2020 and probably very few people need to load an Array.prototype.forEach polyfill. Additionally, why should we limit the ability to automatically inject only the necessary polyfills to core-js? There are also DOM polyfills, Intl polyfills, and polyfills for a myriad of other web platform APIs. Not everyone wants to use core-js; there are many other valid ECMAScript polyfills which have different tradeoffs (e.g. source size versus spec compliancy), and users should have the ability to use the polyfill of their choice. 
For example, we are actively working on an es-shims integration. What if the logic to inject polyfills was not tied to the actual data about the available or required polyfills, so that the two could be used and developed independently?

We are now releasing the first experimental version of four new packages:

- babel-plugin-polyfill-corejs3
- babel-plugin-polyfill-es-shims
- babel-plugin-polyfill-regenerator
- babel-plugin-polyfill-corejs2 (legacy)

These packages all support a method option for adjusting how polyfills are injected (analogous to what @babel/preset-env and @babel/plugin-transform-runtime currently offer). You can inject a polyfill into an entry point (global scope only) or based on direct usage in your code (both global scope and "pure" options). Below is a custom CodeSandbox where you can try out the differences between the polyfill options.

We are also releasing @babel/helper-define-polyfill-provider: a new helper package which makes it possible for polyfill authors and users to define their own polyfill provider plugins. Big thanks to Jordan for working with Nicolò to make it possible to build the es-shims plugin!

ℹ️ If you want to read more about these packages, and learn how to set them up, you can check out the project's README.

⚠️ These packages are still experimental. We would appreciate feedback about them, either on Twitter or on GitHub, but they are not ready for production yet. For example, we still need to wire up some polyfills, and we haven't tested the plugins in production applications yet.
https://babeljs.io/blog/2020/05/25/7.10.0