Rethinking Swing Threading

Improper Swing threading is one of the main causes of sluggish, unresponsive, and unstable Swing applications. There are many reasons for this, from developers not understanding the Swing single-threading model, to the difficulty of ensuring proper thread execution. Even when a conscious effort is made toward Swing threading, application-threading logic tends to get quite difficult to understand and maintain. This article explains how to use event-driven programming to develop Swing applications, resulting in greatly simplified development, maintenance, and flexibility.

Background

Since we are trying to simplify the threading of Swing applications, let's first take a quick look at how Swing threading works and why it's necessary. The Swing API is designed around the single-threading model. This means that Swing components must always be modified and otherwise interacted with using the same thread. There are a number of reasons for the single-thread model, including the development cost and complexity of synchronizing Swing -- an already slow API. To facilitate the single-threading model, there is a dedicated thread for interacting with Swing components. This thread is known as the Swing thread, the AWT (sometimes pronounced "ought") thread, or the event-dispatch thread. For the rest of this article, I'll refer to it as the Swing thread. Since the Swing thread is the only thread that should interact with Swing components, it has a lot of responsibilities. All painting and graphics, mouse events, component events, button events, and all other events occur in the Swing thread. Since the Swing thread is already weighted down with work, problems occur when too much other work is executed in the Swing thread. One of the most common places this can occur is in the placement of non-Swing work, like a database lookup, in an event listener method, such as an ActionListener on a JButton.
Since the ActionListener's actionPerformed() method automatically gets executed in the Swing thread, the database call is also executed in the Swing thread. This occupies the Swing thread with work, preventing it from performing its other responsibilities -- like painting, responding to mouse movements, processing button events, and application resizing. Users think the application is frozen, but it may not be. Executing the code in the appropriate thread is essential to guarantee that the system executes properly. Now that we've taken a look at why it is important to execute Swing application code in the appropriate thread, let's take a look at how threading is often implemented. We'll look at the standard mechanisms for moving code into and out of the Swing thread. In the process, I'll highlight some of the problems and difficulties with the standard approach. As we'll see, most of the problems come from attempting to implement a synchronous code model with the asynchronous Swing threading model. From there, we will see how to modify our example to be event-driven -- migrating the entire approach to an asynchronous model.

Common Swing Threading Solution

Let's start by looking at one of the most common Swing threading mistakes. We will try to fix this problem using the standard techniques. In the process, you will see the complexity and common difficulties of implementing correct Swing threading. Also, note that in the process of fixing this threading problem, many of the intermediate examples will also not work. In the example, I note where the code breaks with a code comment starting with //broken. So now, let's get to our example. Assume that we are performing book searches. We have a simple user interface with a search text field, a search button, and an output text area. This interface is shown in Figure 1. Don't hold me to the UI design. This is pretty ugly, I agree.

Figure 1.
Basic query UI

The user enters a book title, author, or other criteria, and a list of results is displayed. The following code sample shows the button's ActionListener calling the lookup() method in the same thread. For these examples, I am using a stubbed-out lookup where I call Thread.sleep() for five seconds. The result of the sleeping thread is the same as a synchronous server call that lasts five seconds.

private void searchButton_actionPerformed() {
    outputTA.setText("Searching for: " + searchTF.getText());
    //Broken!! Too much work in the Swing thread
    String[] results = lookup(searchTF.getText());
    outputTA.setText("");
    for (int i = 0; i < results.length; i++) {
        String result = results[i];
        outputTA.setText(outputTA.getText() + '\n' + result);
    }
}

If you run this code (the complete source is available for download), there are a few things that you will immediately notice are wrong. Figure 2 shows a screenshot of the search running.

Figure 2. Doing the search in the Swing thread

Notice that the Go button appears "pressed." This is because the actionPerformed method, which notifies the button to be repainted in its non-pressed look, has not returned. Also, notice that the search string "abcde" is not displayed in the text area. The first line of the searchButton_actionPerformed method sets the text area text to the search string. But remember that Swing repaints are not immediate. Rather, a repaint request is placed on the Swing event queue for the Swing thread to process. But here, we are occupying the Swing thread with our lookup, so it can't process the repaint. To fix these and other problems, let's move the lookup to a non-Swing thread. The first tendency is to have the entire method execute in a new thread. The problem with this is that the Swing components, in this case the output text area, can only be edited from the Swing thread.
Here is the modified searchButton_actionPerformed method:

private void searchButton_actionPerformed() {
    outputTA.setText("Searching for: " + searchTF.getText());
    //the String[][] is used to allow access to
    //setting the results from an inner class
    final String[][] results = new String[1][1];
    new Thread() {
        public void run() {
            results[0] = lookup(searchTF.getText());
        }
    }.start();
    outputTA.setText("");
    for (int i = 0; i < results[0].length; i++) {
        String result = results[0][i];
        outputTA.setText(outputTA.getText() + '\n' + result);
    }
}

There are a few problems with this. Notice the final String[][]. This is an unfortunate artifact involved with anonymous inner classes and scope. Basically, any local variable used in an anonymous inner class but defined in the containing method's scope needs to be declared final. You can get around this by making an array to hold the variable. This way, you can make the array final and modify the element in the array but not the array reference itself. Now that we're done with the minutiae, let's get to the real problem. Figure 3 shows what happens when this code is run:

Figure 3. Doing the search outside of the Swing thread

The display shows a null because the display code is processed before the lookup code completes! This is because the code block continues execution once the new thread is started, not when it's done executing. This is one of those strange-looking concurrent code blocks where code later in a method can actually execute before code earlier in the method. There are two methods in the SwingUtilities class that can help us out here: invokeLater() and invokeAndWait(). Each method takes a Runnable and executes it in the Swing thread. The invokeAndWait() method blocks until the Runnable completes execution, and invokeLater() executes the Runnable asynchronously. Using invokeAndWait() is generally frowned upon, since it can cause severe thread deadlocks that can wreak havoc on your application.
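As an aside, the final-array idiom described above can be shown in isolation. The following is a minimal, self-contained sketch (the class and variable names are mine, not from the article's source): the local reference must be final to be visible inside the anonymous inner class, but the array's element can still be written from the worker thread.

```java
// Demonstrates the final-array idiom: a local variable used inside an
// anonymous inner class must be final, but the elements of a final
// array can still be reassigned from inside the inner class.
public class FinalArrayDemo {
    static String capture() {
        final String[] holder = new String[1]; // final reference, mutable slot
        Thread worker = new Thread() {
            public void run() {
                holder[0] = "result"; // legal: we mutate the element, not the reference
            }
        };
        worker.start();
        try {
            worker.join(); // wait so we read the slot only after the write
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return holder[0];
    }

    public static void main(String[] args) {
        System.out.println(capture()); // prints "result"
    }
}
```

Note that the join() here is exactly the kind of blocking the article goes on to avoid; it is only used so this tiny sketch has a deterministic result.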
So let's just put it aside and use the invokeLater() method. To fix the last problem with variable scoping and order of execution, we have to move the text-area getText() and setText() calls into a Runnable that executes only when the results are returned, and executes in the Swing thread. We can do this by creating an anonymous Runnable that we pass to invokeLater(), containing the text area manipulation from the end of the new Thread's Runnable. This guarantees that the Swing code will not execute before the lookup completes. Here is the code:

private void searchButton_actionPerformed() {
    outputTA.setText("Searching for: " + searchTF.getText());
    final String[][] results = new String[1][1];
    new Thread() {
        public void run() {
            //get results
            results[0] = lookup(searchTF.getText());
            //send a Runnable to the Swing thread;
            //the Runnable is queued after the
            //results are returned
            SwingUtilities.invokeLater(
                new Runnable() {
                    public void run() {
                        //Now we're in the Swing thread
                        outputTA.setText("");
                        for (int i = 0; i < results[0].length; i++) {
                            String result = results[0][i];
                            outputTA.setText(
                                outputTA.getText() + '\n' + result);
                        }
                    }
                }
            );
        }
    }.start();
}

This will work. But it was a major headache to get here. We had to pay serious attention to the order of execution through anonymous Threads, and we had to deal with difficult scoping issues. These are not rare problems. Additionally, this is a pretty simple example, and we've already had major issues with scope, variable passing, and order of execution. Imagine more complex problems where there are several levels of nesting, with shared references and a designated order of execution. This approach quickly gets out of hand.

The Problem

We are trying to force synchronous execution through an asynchronous model -- trying to fit a square peg in a round hole. As long as we're trying to do this, we will continue to encounter these problems.
From experience, I can tell you this code will be hard to write, hard to maintain, and very error-prone. This seems like a common problem, so there must be standard ways to solve this, right? There are several frameworks, including one that I wrote but never publicly released. I called it the Chained Runnable Engine, and it suffered from similar synchronous-versus-asynchronous problems. Using this framework, you would create a collection of Runnables that would be executed by the engine. Each Runnable had an indicator telling the engine whether to execute it in the Swing thread or an alternate thread. The engine also ensured that each Runnable executed in proper order. So Runnable #2 would not be queued until Runnable #1 completed. And finally, it supported variable passing in the form of a HashMap that was passed from Runnable to Runnable. On the surface, this looks like it solves our main problems. But once you start to dig deeper, the same problems arise. Essentially, we haven't changed anything from what has been described above -- we would just be hiding some of the complexity in the engine. The code was very tedious to write and was quite complex, due to a seemingly exponential number of Runnables, which often ended up being tightly coupled. The non-typed HashMap variable passing between Runnables became hard to manage. The list goes on. After working on this framework, I realized this requires a completely different solution. This led me to reexamine the problem, look at how others are solving similar problems, and take a close look at the Swing source.

The Solution: Event-Driven Programming

All of the previous solutions share the same fatal flaw -- they try to force synchronous execution onto an asynchronous model. If we can make this truly asynchronous, we can solve our problem and simplify Swing threading tremendously. Before we go on, let's just enumerate the problems we are trying to solve:

- Execute code in the appropriate thread.
- Asynchronous execution using SwingUtilities.invokeLater().
And asynchronous execution causes the following problems:

- Coupled components.
- Difficult variable passing.
- Order of execution.

Let's think for a minute about message-based systems like the Java Message Service (JMS), since they promote loosely coupled components functioning in an asynchronous environment. Messaging systems fire asynchronous events into the system, as described at the Enterprise Integration Patterns site. Interested parties listen for that event and react to it -- usually by performing some work of their own. The result is a set of modular, loosely coupled components that can be added to and removed from the system without affecting the rest of the system. But more importantly, dependencies between components are minimized, since each component is well defined and encapsulated -- each responsible for its own work. They simply fire messages to which the other components respond, and respond to messages that have been fired. For now, let's ignore the threading issue and work on decoupling and moving to an asynchronous environment. After we've solved the asynchronous problems, we'll go back and take a look at the threading issue. As we'll see, solving it at that point will be much easier. Let's take our example from the first section and begin migrating it to an event-based model. To get started, let's abstract the lookup call into a class called LookupManager. This will enable us to move all of the database logic out of the UI class and will eventually allow us to completely decouple the two. Here is the code for the LookupManager class:

class LookupManager {
    private String[] lookup(String text) {
        String[] results = ... // database lookup code
        return results;
    }
}

Now we'll start to move towards an asynchronous model. To make this call asynchronous, we need to abstract the call from the return. In other words, methods can't return anything. We'll start by deciding what the relevant actions are that other classes might want to know about.
The obvious event in our case is the completion of the search. So let's create a listener interface reflecting these actions. The interface will have a single method called lookupCompleted(). Here is the interface:

interface LookupListener {
    public void lookupCompleted(String[] results);
}

Following the Java standard, we'll create another class called LookupEvent to contain the result String array rather than passing the String array around directly. This will also allow us flexibility down the road to pass other information without changing the LookupListener interface. For example, we could include the search string along with the results. Here is the LookupEvent class:

public class LookupEvent {
    String searchText;
    String[] results;

    public LookupEvent(String searchText) {
        this.searchText = searchText;
    }

    public LookupEvent(String searchText, String[] results) {
        this.searchText = searchText;
        this.results = results;
    }

    public String getSearchText() {
        return searchText;
    }

    public String[] getResults() {
        return results;
    }
}

Notice that the LookupEvent class is immutable. This is important, since we are unaware who will be processing these events down the road. And unless we are willing to make a defensive copy of the event that we send to each listener, we need to make the event immutable. If not, a listener could unintentionally or maliciously modify the event and break the system. Now we need to fire the lookupCompleted() event from LookupManager. We'll start by adding a collection of LookupListeners to LookupManager:

List listeners = new ArrayList();

And we'll add methods to add and remove LookupListeners from LookupManager:

public void addLookupListener(LookupListener listener) {
    listeners.add(listener);
}

public void removeLookupListener(LookupListener listener) {
    listeners.remove(listener);
}

We need to call the listeners from the code when the action occurs. In our example, we'll fire a lookupCompleted() event when the lookup returns.
This means iterating through the list of listeners and calling their lookupCompleted() methods with a LookupEvent. I like to extract this code to a separate method called fire[event-method-name] that constructs an event, iterates through the listeners, and calls the appropriate methods on the listeners. It helps to isolate the code for calling the listeners from the main logic. Here is our fireLookupCompleted method:

private void fireLookupCompleted(String searchText, String[] results) {
    LookupEvent event = new LookupEvent(searchText, results);
    Iterator iter = new ArrayList(listeners).iterator();
    while (iter.hasNext()) {
        LookupListener listener = (LookupListener) iter.next();
        listener.lookupCompleted(event);
    }
}

The second line creates a new collection, copying the collection of listeners into it. This is in case a listener decides to remove itself from the LookupManager as a result of the event. If we don't safely copy the collection, we'll get nasty errors where listeners are not called when they should be. Next, we'll call the fireLookupCompleted() helper method from the point at which the action is completed. In this case, it's the end of the lookup method when the results are returned. So we can change the lookup method to fire an event rather than return the String array itself. Here is the new lookup method:

public void lookup(String text) {
    //mimic the server call delay...
    try {
        Thread.sleep(5000);
    } catch (Exception e) {
        e.printStackTrace();
    }
    //imagine we got this from a server
    String[] results = new String[]{"Book one", "Book two", "Book three"};
    fireLookupCompleted(text, results);
}

Now let's add our listener to LookupManager. We want to update the text area when the lookup returns. Previously, we just called the setText() method directly, since the text area was local, as the database calls were being done in the UI class.
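The reason for copying the listener collection can be demonstrated in isolation. Here is a minimal sketch (the names are illustrative, and it uses generics for brevity, which the article's pre-Java 5 code does not): a listener unregisters itself while being notified, which would throw a ConcurrentModificationException if we iterated the live list instead of a snapshot.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of why fire methods iterate over a copy of the listener list:
// a listener that removes itself during notification would otherwise
// break iteration over the live list.
public class ListenerCopyDemo {
    interface Listener { void notified(); }

    static final List<Listener> listeners = new ArrayList<Listener>();

    static void fireEvent() {
        // snapshot the list first, as fireLookupCompleted does
        for (Listener l : new ArrayList<Listener>(listeners)) {
            l.notified();
        }
    }

    static int runDemo() {
        listeners.add(new Listener() {
            public void notified() {
                listeners.remove(this); // one-shot listener unregisters itself
            }
        });
        fireEvent(); // safe, because we iterate the snapshot
        return listeners.size();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints 0
    }
}
```

If the snapshot in fireEvent() were replaced with a direct iteration over `listeners`, the self-removal would fail with a ConcurrentModificationException.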
Now that we've abstracted the lookup logic out from the UI, we'll make the UI class a listener to the LookupManager to listen for lookup events and update itself accordingly. First, we'll implement the listener in the class declaration:

public class FixedFrame implements LookupListener

Then we'll implement the interface method:

public void lookupCompleted(final LookupEvent e) {
    outputTA.setText("");
    String[] results = e.getResults();
    for (int i = 0; i < results.length; i++) {
        String result = results[i];
        outputTA.setText(outputTA.getText() + "\n" + result);
    }
}

Finally, we'll register it as a listener to the LookupManager:

public FixedFrame() {
    lookupManager = new LookupManager();
    //here we register the listener
    lookupManager.addLookupListener(this);
    initComponents();
    layoutComponents();
}

For simplicity, I added it as a listener in the class constructor. This works fine for most systems. As systems get more complicated, you may want to refactor and abstract the listener registration out of constructors, allowing for greater flexibility and extensibility. Now that you can see everything connected, notice the separation of responsibilities. The user interface class is responsible for the display of information -- and only the display of information. The LookupManager class, on the other hand, is responsible for all lookup connections and logic. Additionally, LookupManager is responsible for notifying listeners when it changes -- but not for deciding what should happen when those changes occur. This allows you to connect an arbitrary set of listeners. To see how to add new events, let's go back and add an event for starting a lookup. We can add a method to our LookupListener called lookupStarted() that we will fire before the lookup is executed. Let's also create a fireLookupStarted() method that calls lookupStarted() on all of the LookupListeners. Now the lookup method looks like this:

public void lookup(String text) {
    fireLookupStarted(text);
    //mimic the server call delay...
    try {
        Thread.sleep(5000);
    } catch (Exception e) {
        e.printStackTrace();
    }
    //imagine we got this from a server
    String[] results = new String[]{"Book one", "Book two", "Book three"};
    fireLookupCompleted(text, results);
}

And we'll add the new fire method, fireLookupStarted(). This method is identical to the fireLookupCompleted() method except that we are calling the lookupStarted() method on the listener, and that the event does not have a result set yet. Here is the code:

private void fireLookupStarted(String searchText) {
    LookupEvent event = new LookupEvent(searchText);
    Iterator iter = new ArrayList(listeners).iterator();
    while (iter.hasNext()) {
        LookupListener listener = (LookupListener) iter.next();
        listener.lookupStarted(event);
    }
}

And finally, we'll implement the lookupStarted() method in the UI that will set the text area to reflect the current search string.

public void lookupStarted(final LookupEvent e) {
    outputTA.setText("Searching for: " + e.getSearchText());
}

This example shows the ease of adding new events. Now, let's look at an example that shows the flexibility of the event-driven decoupling. We'll do this by creating a logger class that prints a statement out to the command line whenever a search is started or completed. We'll call the class Logger.
Here is the code:

public class Logger implements LookupListener {
    public void lookupStarted(LookupEvent e) {
        System.out.println("Lookup started: " + e.getSearchText());
    }

    public void lookupCompleted(LookupEvent e) {
        System.out.println("Lookup completed: " + e.getSearchText() + " " + e.getResults());
    }
}

Now, we'll add the Logger as a listener to the LookupManager in the FixedFrame constructor:

public FixedFrame() {
    lookupManager = new LookupManager();
    lookupManager.addLookupListener(this);
    lookupManager.addLookupListener(new Logger());
    initComponents();
    layoutComponents();
}

Now you've seen examples of adding new events as well as creating new listeners -- showing you the flexibility and extensibility of the event-driven approach. You'll find that as you develop more with event-centered programs, you start to get a better feeling for creating generic actions that are used throughout your application. Like anything else, it just takes some time and experience. And it may seem like a lot of work up front to set up the event model, but you have to weigh it against the consequences of the alternatives. Consider the development time cost: first of all, it's a one-time cost. Once you set up your listener model and its actions, adding listeners to your applications later is trivial.

Threading

At this point, we've solved our stated asynchronous problems: decoupled components through listeners, variable passing through event objects, and order of execution through a combination of event generation and registered listeners. With that behind us, let's get back to the threading issue, since that's what brought us here in the first place. It's actually quite easy: since we have asynchronously functioning listeners, we can simply have the listeners themselves decide what thread they execute in. Think about the separation between the UI class and the LookupManager. The UI class is deciding what kind of processing to do, based on the event.
Also, that class is all Swing, whereas a logging class would not be. So it makes a lot of sense to have the UI class be responsible for which thread it executes in. So let's take a look at our UI class again. Here is the lookupCompleted() method without threading:

public void lookupCompleted(final LookupEvent e) {
    outputTA.setText("");
    String[] results = e.getResults();
    for (int i = 0; i < results.length; i++) {
        String result = results[i];
        outputTA.setText(outputTA.getText() + "\n" + result);
    }
}

We know that this is going to be called from a non-Swing thread, since the events are being fired directly from the LookupManager, which cannot be executing code in the Swing thread. Since all of the code is functioning asynchronously (we don't have to wait for the listener method to complete to invoke any other code), we can redirect the code into the Swing thread using SwingUtilities.invokeLater(). Here is the new method, passing an anonymous Runnable to SwingUtilities.invokeLater():

public void lookupCompleted(final LookupEvent e) {
    //notice the threading
    SwingUtilities.invokeLater(
        new Runnable() {
            public void run() {
                outputTA.setText("");
                String[] results = e.getResults();
                for (int i = 0; i < results.length; i++) {
                    String result = results[i];
                    outputTA.setText(outputTA.getText() + "\n" + result);
                }
            }
        }
    );
}

For any LookupListeners that do not need to execute in the Swing thread, we can simply execute the listener code in the calling thread. As a rule of thumb, we want all of the listeners to be notified quickly. So if you have a listener that is going to take a lot of time to complete its functionality, you may want to create a new Thread or send the time-consuming code off to a ThreadPool for execution. The last step is to make the LookupManager perform the lookup in a non-Swing thread. Currently, the LookupManager is being called from a Swing thread in the JButton's ActionListener.
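The thread-pool suggestion above can be sketched with java.util.concurrent's ExecutorService (added in J2SE 5.0, after this article was written; the class and method names below are illustrative, not part of the example application). The firing thread hands the slow work off and returns immediately:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Offloading a slow listener body to a thread pool so the
// event-firing thread is not held up.
public class SlowListenerDemo {
    static final ExecutorService pool = Executors.newFixedThreadPool(2);

    // The event-firing thread calls this and returns immediately;
    // the expensive work runs on a pool thread.
    static Future<String> handleLookupCompleted(final String payload) {
        return pool.submit(new Callable<String>() {
            public String call() throws Exception {
                Thread.sleep(100); // stand-in for expensive listener work
                return "processed: " + payload;
            }
        });
    }

    static String awaitResult(Future<String> f) {
        try {
            return f.get(); // only this demo blocks; the firing thread never would
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Future<String> f = handleLookupCompleted("Book one");
        System.out.println(awaitResult(f)); // prints "processed: Book one"
        pool.shutdown();
    }
}
```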
Now we have a decision to make: either we can introduce a new thread in the JButton's ActionListener, or we can have the lookup method itself guarantee that it executes in a non-Swing thread by starting a thread of its own. I prefer to manage Swing threading as close to the Swing classes as possible. This helps encapsulate all Swing logic together. If we added Swing threading logic to the LookupManager, we would be introducing a level of dependency that is not necessary. Additionally, it is completely unnecessary for the LookupManager to spawn its own thread in a non-Swing context, such as a headless (non-graphical) user interface or, in our example, the Logger. Spawning new threads unnecessarily would only hurt your application's performance, rather than help it. The lookup manager executes perfectly fine regardless of Swing threading -- so I like to keep that code out of there. Now we need to make the JButton's ActionListener execute the lookup in a non-Swing thread. We'll create an anonymous Thread with an anonymous Runnable that executes the lookup.

private void searchButton_actionPerformed() {
    new Thread() {
        public void run() {
            lookupManager.lookup(searchTF.getText());
        }
    }.start();
}

This completes our Swing threading. Simply adding the thread in the actionPerformed() method and making sure the listeners execute in the appropriate threads takes care of the whole threading issue. Notice we didn't deal with any of the problems from the first examples. By spending our time defining an event-driven architecture, we save that time and more when it comes to Swing threading.

Conclusion

If you need to execute a lot of Swing code and non-Swing code in the same method, there is likely to be some code in the wrong place. It certainly takes more effort up front to design and build an event-driven client, but over time, that up-front cost is far outweighed by the flexibility and maintainability of the resulting system.
https://today.java.net/pub/a/today/2003/10/24/swing.html
Hi folks, I have a parser problem. I have a basic calculator program (Graham Hutton's from Nottingham) which contains the following code:

-- Define a parser to handle the input
expr :: Parser Int
expr = do t <- term
          do symbol "+"
             e <- expr
             return (t + e)
           +++ return t

term :: Parser Int
term = do f <- factor
          do symbol "*"
             e <- expr
             return (f * t)
           +++ return f

factor :: Parser Int
factor = do symbol "("
            e <- expr
            symbol ")"
            return e
         +++ natural

symbol and natural are defined elsewhere and work fine, but when I compile it I get the error:

ERROR "C:/HUGS/Calculator.hs":66 - Undefined variable "t"

I suspect I'm missing something obvious, but for the life of me I can't see it. Any suggestions?

Thanks,
Nik
(Trying to keep a couple of weeks ahead of her students)

Dr Nik Freydís Whitehead
University of Akureyri, Iceland
*********************************************************************
Having the moral high ground is good. Having the moral high ground and an FGMP-15 is better.
*********************************************************************
http://www.haskell.org/pipermail/haskell-cafe/2005-March/009368.html
When discussing static analysis tools for C# projects, programmers will often deny the necessity of static analysis, arguing that most errors can be caught through unit testing. So, I decided to find out how well one of the most popular unit-testing frameworks, NUnit, was tested and see if our analyzer could find anything of interest there. NUnit is a popular unit-testing library for .NET projects ported from Java to C#. Its code is open and can be downloaded from the project website. It should be noted that JUnit - the project that NUnit was ported from - was created by such renowned programmers as Erich Gamma, a co-author of the textbook on object-oriented design patterns, and Kent Beck, the creator of the test-driven development and extreme programming methodologies. I recall reading his book Test Driven Development By Example once, where he explains test-driven development by the example of creating a unit-testing framework, like JUnit, following all of his methodologies. What I mean to say is that there is no doubt that JUnit and NUnit were developed in keeping with the best traditions of unit testing, which is also confirmed by Kent Beck's comment at the NUnit site: "... an excellent example of idiomatic design. Most folks who port xUnit just transliterate the Smalltalk or Java version. That's what we did with NUnit at first, too. This new version is NUnit as it would have been done had it been done in C# to begin with." I looked through NUnit's source files: there are piles of tests; it looks like they have tested everything that could be tested. Taking into account the project's great design and the fact that NUnit has been used by thousands of developers over a number of years, I didn't expect PVS-Studio to find a single bug there. Well, I was mistaken: it did find one bug. It triggered the V3093 diagnostic, which deals with an issue when programmers use operators & and | instead of && and ||.
This issue may cause trouble when it is critical that the right part of an expression not execute under certain conditions. Let's see what this error looks like in NUnit.

public class SubPathConstraint : PathConstraint
{
    protected override bool Matches(string actual)
    {
        return actual != null &
            IsSubPath(Canonicalize(expected), Canonicalize(actual));
    }
}

public abstract class PathConstraint : StringConstraint
{
    protected string Canonicalize(string path)
    {
        if (Path.DirectorySeparatorChar != Path.AltDirectorySeparatorChar)
            path = path.Replace(Path.AltDirectorySeparatorChar,
                                Path.DirectorySeparatorChar);
        ....
    }
}

Even if the Matches method receives the value null as the actual parameter, the right operand of the & operator will be evaluated anyway, which means that the Canonicalize method will be called, too. If you look at this method's definition, you'll see that the value of its path parameter is not tested for null and the method Replace is called on it right away - this is where NullReferenceException might be raised. Let's try to reproduce this issue. For that purpose, I wrote a simple unit test:

[Test]
public void Test1()
{
    Assert.That(@"C:\Folder1\Folder2", Is.SubPathOf(null));
}

Now let's run it and here's what we get: NUnit crashed with NullReferenceException. PVS-Studio managed to find a real bug even in such a well-tested product as NUnit. Note that it was no harder than writing a unit test: you just run project analysis from the menu and check the grid with the results. Unit testing and static analysis are not alternative, but complementary software-development strategies [1]. Download the PVS-Studio analyzer and run it on your projects to see if it can find errors that tests didn't.
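The same & versus && distinction exists in Java with identical semantics, so here is a minimal sketch of the bug pattern in Java (the method names are illustrative, not NUnit's actual code): the non-short-circuiting & evaluates its right operand even when the left one is false, so the null check fails to protect the right-hand call.

```java
// Demonstrates why "x != null & f(x)" is broken while
// "x != null && f(x)" is safe: & always evaluates both operands.
public class ShortCircuitDemo {
    static boolean isSubPath(String path) {
        // throws NullPointerException if path is null
        return path.replace('/', '\\').startsWith("C:");
    }

    static boolean matchesEager(String actual) {
        return actual != null & isSubPath(actual);  // broken: isSubPath always runs
    }

    static boolean matchesSafe(String actual) {
        return actual != null && isSubPath(actual); // safe: && short-circuits on null
    }

    public static void main(String[] args) {
        System.out.println(matchesSafe(null)); // prints false, no exception
        try {
            matchesEager(null);
        } catch (NullPointerException e) {
            System.out.println("NPE from the eager & version");
        }
    }
}
```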
https://www.viva64.com/en/b/0420/
In an earlier weblog entry, I bemoaned the lack of link typing out there. There are several link type taxonomies, but they’re like database schemas without databases: hardly anyone has actually put these taxonomies into practice, assigning the types to any realistic collection of links. So I started assigning some link types to a bunch of links. Because weblogs, as a new class of content, have inspired new linking applications such as Technorati and Weblog Bookwatch, and because weblog entries often cite each other, it seemed like a good subset of the web to use. For an even more focused subset, I just went with O’Reilly Developer weblogs. Assigning types to the links within my own weblog entries was easy; that’s what the HTML A element’s REL attribute is for. See the source of this weblog posting or my last few for examples. To assign types to links created by other O’Reilly Developer webloggers, RDF was a no-brainer—if you can’t use it to add metadata to resources that can be identified by URLs, you can’t use it for anything. (And, the necessary RDF turned out to be remarkably simple and straightforward.) For the link type values, I used the link taxonomy from Randall Trigg’s 1983 PhD thesis, augmented with more types that I came up with myself to suit the world of weblogging: Blog Link Types, or BLT. I also threw in a few of the suggested values for the REL attribute. Technorati.com and Weblog Bookwatch are only the beginning of potential new linking applications built around weblog content. Just imagine the possibilities if a large amount of the links in weblogs had link type indicators. When you know why links were created, and can look for patterns in those motivations, all kinds of interesting information can emerge. For example, according to Weblog Bookwatch, Ann Coulter’s “Treason” is the most commonly mentioned book after the new Harry Potter.
We can assume that many people admire her book and others consider it badly-documented lies; wouldn’t it be nice to know exactly how many people liked her book (indicated with a link type value of “blt:Resource-good”), how many thought her arguments were simplistic (“tt:Pt-simplistic”), based on strawman arguments (“tt:A-strawman”), based on dubious data (“tt:D-dubious”), and so forth? (I use “tt” as a namespace prefix for Trigg’s types—he has quite a rich set of negative link types to choose from.) Wouldn’t it be great to see how those numbers change from week to week? Or to have an SVG-generated pie graph next to each book on Bookwatch showing the relative proportions of types assigned to all of the links to a given book? So join me! Go to my Blog Link Types (blt) home page to learn more about what I’ve done so far. Then, add types to your own links. Add types to any links on the web that you want (to add out-of-line link typing entries to my RDF file, I have a form you can fill out), but particularly to weblog links, and especially O’Reilly Developer weblogs. Let me know the URLs of pages where you’ve added REL attributes, or the URLs of RDF files if you’ve created new files of out-of-line link typing. If enough are added to O’Reilly Developer weblogs, we’ll have the data to experiment with an interesting new class of linking applications. Have you added types to any links following the Blog Link Types guidelines? Or is my whole idea a waste of time?

Whuffie Link Attributes
There's been a debate about links as endorsement that seems to be a real need in the blog world. The idea is to add a vote attribute indicating positive or negative affect. See:

Purple Numbers
Looking forward to your thoughts on Purple Numbers

Purple Numbers
Before I posted, I e-mailed Kim to ask his opinion, and never got a reply.
I think that his purple numbers were a good idea, but that along with the a/@name attribute they are no longer necessary now that the popular browsers can address points within a document when @id values are specified instead of @name values. (For example, to point to an element in foo.html for which id="bar".) Since then, I've been adding id attributes to all the block level elements in everything I write for the web. The more documents that have this, the more granularity we can take advantage of in our link addressing. Bob
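To get a feel for what new linking applications could do with typed links, here is a small illustrative sketch (not part of the original posting): a harvesting tool could collect rel-typed links with Python's standard html.parser. The HTML fragment and class name below are invented for the example.

```python
from html.parser import HTMLParser

class TypedLinkCollector(HTMLParser):
    """Collect (href, rel) pairs from <a> elements that carry a rel attribute."""
    def __init__(self):
        super().__init__()
        self.typed_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            if "href" in d and "rel" in d:
                self.typed_links.append((d["href"], d["rel"]))

html = (
    '<p>I enjoyed <a href="http://example.com/treason" '
    'rel="blt:Resource-good">this book</a>, but the argument is '
    '<a href="http://example.com/review" rel="tt:Pt-simplistic">simplistic</a>.</p>'
)

collector = TypedLinkCollector()
collector.feed(html)
print(collector.typed_links)
```

A crawler running this over many weblogs could then aggregate the counts per rel value, which is exactly the kind of data a Bookwatch-style pie chart would need.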
http://www.oreillynet.com/xml/blog/2003/07/help_me_add_link_type_values_t.html
I'm trying to build a Python-based API application to serve HTML, XML, and JSON. Initially I found webpy, and it seemed like, as a library, it would meet my needs. However, after some more reading, it doesn't seem to support true object-oriented principles. Example: if I wanted to create a class to handle a particular part of the application (in this case an online crypto-currency wallet), I would have to create a new class for each type of POST I want to make. When interacting with the bitcoin client I am using a library called bitcoin-python (). If you review this library, there are many functions which accept different parameters depending on the action you wish to carry out. So bearing this in mind, I would normally write a class to handle all the functions based on the URL mappings for the API. I started doing this and came across what seems to be a limitation of the webpy library: each class can only have a single GET and a single POST function. This makes it quite long-winded to develop with, as I would have to create a class for each function that I wanted to carry out using the API, instead of having a single class which would allow for different POST variables per function.
You can see an example of this on their site, specifically the following code:

""" Basic blog using webpy 0.3 """
import web
import model

### Url mappings
urls = (
    '/', 'Index',
    '/view/(\d+)', 'View',
    '/new', 'New',
    '/delete/(\d+)', 'Delete',
    '/edit/(\d+)', 'Edit',
)

### Templates
t_globals = {'datestr': web.datestr}
render = web.template.render('templates', base='base', globals=t_globals)

class Index:
    def GET(self):
        """ Show page """
        posts = model.get_posts()
        return render.index(posts)

class View:
    def GET(self, id):
        post = model.get_post(int(id))
        return render.view(post)

class New:
    form = web.form.Form(
        web.form.Textbox('title', web.form.notnull, size=30, description="Post title:"),
        web.form.Textarea('content', web.form.notnull, rows=30, cols=80, description="Post content:"),
        web.form.Button('Post entry'),
    )

    def GET(self):
        form = self.form()
        return render.new(form)

    def POST(self):
        form = self.form()
        if not form.validates():
            return render.new(form)
        model.new_post(form.d.title, form.d.content)
        raise web.seeother('/')

class Delete:
    def POST(self, id):
        model.del_post(int(id))
        raise web.seeother('/')

class Edit:
    def GET(self, id):
        post = model.get_post(int(id))
        form = New.form()
        form.fill(post)
        return render.edit(post, form)

    def POST(self, id):
        form = New.form()
        post = model.get_post(int(id))
        if not form.validates():
            return render.edit(post, form)
        model.update_post(int(id), form.d.title, form.d.content)
        raise web.seeother('/')

app = web.application(urls, globals())

if __name__ == '__main__':
    app.run()

As you can see, each part of CRUD has its own class with its own POST/GET functions. However, I am trying to handle all interaction between the API and the bitcoin client within a single class.
My code so far (please forgive it, as it's at a very early stage):

import web
import bitcoinrpc
import bitcoinrpc.data

urls = (
    '/', 'index',
    '/bitcoin/UserBalance/(.+)', 'bitcoin',
    '/bitcoin/SendFromUser/(.+)', 'bitcoin',
    '/bitcoin/ListAccounts/', 'bitcoin',
)

class index:
    def GET(self):
        return "Hello, world!"

class bitcoin:
    def __init__(self):
        self.conn = bitcoinrpc.connect_to_remote('Username', 'password',
                                                 host='localhost', port=8332)

    def POST(self):
        i = web.input()
        if i.action == "GetUserBalance":
            return self.getUserBalance(i.bitcoinAccount)
        elif i.action == "SendFromUser":
            return self.sendFromUser(i.Account, i.to, i.amount)
        elif i.action == "ListAccounts":
            return self.listAccounts()

    def getUserBalance(self, bitcoinAccount):
        return self.conn.getaccountaddress(bitcoinAccount)

    def sendFromUser(self, bitcoinAccount, to, amount):
        return self.conn.sendfrom(bitcoinAccount, to, amount)

    def listAccounts(self):
        return self.conn.listaccounts()

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()

Does anyone know of a way I can keep this type of OO model with webpy, maybe by using some kind of inheritance for the POST function? Or can you suggest a better library or framework to achieve this? Many thanks in advance.
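One way to keep a single class per resource is to capture the action name from the URL and dispatch to a handler method by name. The sketch below shows just the dispatch idea without webpy; the Wallet class and its handle_* method names are invented for illustration, not part of any library's API.

```python
class ActionDispatcher:
    """Shared POST: route an action name to a method named handle_<action>."""
    def POST(self, action):
        handler = getattr(self, "handle_" + action, None)
        if handler is None:
            raise ValueError("unknown action: " + action)
        return handler()

class Wallet(ActionDispatcher):
    """One class for the whole wallet; one handler method per API action."""
    def handle_ListAccounts(self):
        return {"alice": 1.5}

    def handle_GetUserBalance(self):
        return 1.5

wallet = Wallet()
print(wallet.POST("ListAccounts"))
```

With webpy the URL mapping would capture the action segment, e.g. ('/bitcoin/(.+)', 'bitcoin'), so the captured group would arrive as the argument to POST and the class stays a single point of dispatch.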
http://www.python-forum.org/viewtopic.php?f=22&t=4256
From: "Mathew Robertson" <mathew.robertson@...>
> > provide a different template for each language

I initially went this way. But I didn't like duplicating page structure. I found that I *really* like to keep everything in Locale::Maketext lexicons. But the downsides are:

1. Locale::Maketext is geared towards one large project, not small pages. The more I moved constant/static text out of templates, the more the lexicon grew with text individual C::A modules would never need. That's a lot to load.
2. I have a lot of TMPL_VARs to replace with Locale::Maketext calls. Which leads me to want to perform an initialization of the template with replacement of text that would have more naturally been in the template if it were not for language differences.
3. As those one-time TMPL_VARs grow in number, I worry that they add to the re-evaluation of the template. Even *if* I go through an initialization process to perform all the PARAMs only once, they'll still be re-evaluated with every "->output". (Steve's suggestion will eliminate that concern.)
4. Paragraph text is a little clumsy in Locale::Maketext. I'm tempted to TMPL_INCLUDE the fragments. But I have to get creative to specify language-specific fragments because TMPL_INCLUDE doesn't handle a variable filename.

To get around #1, I use Locale::Maketext in a way not intended. Instead of putting all my language into one "en_us" file within a single directory, I've created individual subdirectories mirroring the structure of the pages. At the lowest level is the "sitewide" language file. It is loaded and used for all pages of the site. Language files for specific pages are in subdirectories for each page.
Like:

/L10N/lang/sitewide/en-us
/L10N/lang/page-a/dynamic/en-us
/L10N/lang/page-a/static/en-us
/L10N/lang/page-a/sub-page-1/dynamic/en-us
/L10N/lang/page-a/sub-page-1/static/en-us
/L10N/lang/page-b/dynamic/en-us
/L10N/lang/page-b/static/en-us

In my CGI::App "prerun" I detect that the page hasn't been loaded yet, load the Locale::Maketext for sitewide (if it's not loaded yet), and the page-specific "static" language file. I can apply the page-specific static translation and discard the handle. I load the dynamic language file and keep that handle in a hash for as long as I keep the partially-processed page in a hash. This works really well for me. I only load as much Locale::Maketext lexicon as I need for a page. I only keep as much as I need for display-by-display changes. The "static" lexicons are dropped immediately after first-time use. The "sitewide" handle can be dropped when it is determined all the pages for a CGI::App module have been loaded. Because of the nature of Locale::Maketext's lexicon objects, I can put common language structures into subroutines that can be required by all the lexicons that need them. One problem is that Locale::Maketext expects language lexicons in a single directory and uses them for determining what languages are available. So, I do my own negotiation of which language to use and call L::M's "->new" method instead of "->get_handle". I'm happier keeping the negotiation of language outside of L::M. This works really well! It's just the three things I mentioned regarding H::T:

1. It would work better if H::T had a way to perform an "internal_output" and replace its internal template with whatever the result of the "output" was (without eliminating empty tags).
2. If evaluations of TMPL_VARs could re-evaluate (maybe based upon an attribute on the VAR) so that a VAR containing a VAR could be processed.
3. If TMPL_INCLUDE were capable of resolving a filename like "my_text.<TMPL_VAR NAME=LANG>". This would let a single template be used for all languages.
A lot of heavy lifting could be done as part of the page initialization. The page could then be minimally processed for as long as it remains in the cache (either H::T's or the application's own hash of templates, like I'm doing). Steve's suggestion (to run a template through, then regex some munged tags) might be a good way to get around these things. Mark

Approach the problem from a slightly different tack. Don't try to do everything in one pass through H::T. Instead, think of it as a two-step approach. Each user belongs to a group, and the group portions of the template are the same for all members of that group. Only the user portions need to change with each visit. So, separate the group variables from the user variables. In this type of situation, I do a one-time "page generation" of the "group" template to be used for subsequent user visits. That user template is generated from a "master template" where all the "group"-level elements (like logo image, background and text colors, etc.) are normal TMPL_VARs to be handled during this page-gen step. Portions that pertain to user-specific variables are "renamed" (or purposely misnamed, if you will) TMPL_VARs so they are not recognized and handled by H::T during this pass. So, the user's first name, which should be "ignored" during the page gen, might be in a "tag" named like:

<my_special_VAR NAME=first>

In my page-gen code I take the output of H::T and rename the dummy VAR names to "proper" H::T TMPL_VARs before writing out the user version of the template. Assuming you've got an H::T object, $template, and a previously opened FileHandle, $fh, the code would look something like this:

my $output = $template->output;

## Replace temporary tags with HTML::Template tags before writing to file
$output =~ s/my_special_VAR/TMPL_VAR/g;

print $fh $output;

The template output in the output file pointed to by $fh is then your "user" version of the template.
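The two-pass renaming trick just described can be sketched outside Perl as well. This Python fragment is an illustration of the technique only, not the poster's actual code; the tag names and fill helper are invented:

```python
import re

# Master template: group-level tags use the real marker; user-level tags are
# deliberately misnamed so the first pass leaves them alone.
master = (
    "<h1><TMPL_VAR NAME=logo></h1>"
    "<p>Hello, <USER_VAR NAME=first>!</p>"
)

def fill(template, values):
    """Substitute <TMPL_VAR NAME=x> tags from a dict (a stand-in for H::T)."""
    return re.sub(r"<TMPL_VAR NAME=(\w+)>", lambda m: values[m.group(1)], template)

# Pass 1, done once per group: fill the group-level tags...
group_page = fill(master, {"logo": "Acme Corp"})
# ...then promote the dummy tags to real ones for the per-user pass.
group_page = group_page.replace("USER_VAR", "TMPL_VAR")

# Pass 2, done per request: only the cheap user-level substitutions remain.
html = fill(group_page, {"first": "Mark"})
print(html)
```

The payoff is the same as in the thread: the expensive, constant work happens once, and each request only touches the tags that genuinely vary.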
All the "group-level" elements have already been replaced with the hard-coded equivalents. Makes for a much faster end-user experience since there are fewer tags to process at that level.

Steve Ragan
Sr. Internet Developer
Harris Internet Services
2500 Westchester Ave. Purchase, NY 10577
Phone: 914-641-3948
sragan@...

-----Original Message-----
From: html-template-users-admin@... [mailto:html-template-users-admin@...] On Behalf Of Mark Fuller
Sent: Tuesday, January 18, 2005 12:26 PM
To: HTML::Template users
Subject: [htmltmpl] RFC: Persistent or 2-stage evaluation

Sam, earlier I said it would be useful if I could apply some evaluation once, then operate upon that partially evaluated template. I didn't realize H::T applies everything at output(?). If H::T applies everything at output, wouldn't it be relatively easy to accomplish this if there was a way to tell H::T to perform an "output" *but don't eliminate any H::T tags that are unevaluated*? If I could do this, I could set all my one-time page evaluation and do a "new_scalar_ref->" (using the output from my first "new->" from a file). If there were a way to tell H::T to reload it internally, there may be a way to utilize H::T's caching mechanism (instead of me keeping my H::T object in %hash_of_templates{language}). Doing a "new->" would have to tell me it reloaded the template so that I could do the one-time processing again (and output_internal-> to put it back into the state where it's mostly prepared for repeated processing). What do you think? Would it be easy to do by sub-classing? Did you eventually agree there is a legitimate case for one-time page evaluation? (One-time language replacement of title, headings, navigation, etc.) I'd like to apply some heavy replacement and do it only once. Keep the resulting text containing only the tags that are evaluated on a display-by-display basis (messages, etc.). This might have some application for select/option lists too. I won't confuse the issue with that yet.
My main concern is just to avoid repetitive replacement of text which is constant for a page. Thanks! Mark

_______________________________________________
Html-template-users mailing list
Html-template-users@...

From: "Ragan, Steve" <sragan@...>
> Portions that pertain to user-specific variables are "renamed" (or purposely misnamed, if you will) TMPL_VAR's so they are not recognized and handled by H::T during this pass.
>
> <my_special_VAR NAME=first>
>
> [then]
>
> $output =~ s/my_special_VAR/TMPL_VAR/g;

Steve, thanks. That might work for me. I'm not sure I'd want to pre-generate all the pages (and keep them in sync). Since I'm caching my own pages (by language), I could probably do the ->new, ->output, regex and ->new_scalar_ref when a page is not in the cache without too much slowdown on the first display of a page. The three things I've found that make multi-language processing difficult with H::T are:

1. One-time evaluation of a template's page-specific vars (so that subsequent displays can deal only in truly variable evaluations).
2. No way to say <tmpl_include name="constant_text_<tmpl_var name=LANG>"> to use language-specific text determined at run time.
3. No way to recurse variables which may have been evaluated with text that contains variables. =<TMPL_VAR NAME=LINK_TITLE_TEXT>" which will not be evaluated.

Feeding ->output back into new H::T objects can get around a lot of this. It would be a lot easier if those 3 features were available. They don't seem like they would require too much kludging. Thanks, Mark

> The three things I've found that make multi-language processing difficult with H::T are:
>
> 1.
One-time evaluation of a template's page-specific vars (so that subsequent displays can deal only in truly variable evaluations).
> 2. No way to say <tmpl_include name="constant_text_<tmpl_var name=LANG>"> to use language-specific text determined at run time.
> 3. No way to recurse variables which may have been evaluated with text that contains variables.
>
> =<TMPL_VAR NAME=LINK_TITLE_TEXT>" which will not be evaluated.
>
> Feeding ->output back into new H::T objects can get around a lot of this. It would be a lot easier if those 3 features were available. They don't seem like they would require too much kludging.

FWIW, the company I work for uses H::T for page generation, and we handle an arbitrary number of languages. Currently we have the product translated into 4 different languages, and we have two more on the way. H::T is used in combination with the Locale::MakePhrase package.... however, I use a modified H::T.... The reason that I made a custom package of H::T was that the only solutions I could see to make language translations work were:

a) provide a different template for each language, then use the appropriate one based on the user's language
b) use H::T::E with a callback function so that static template strings could run through the translation engine
c) modify H::T so that it supports custom TMPL_xxx tags so that static template strings could run through the translation engine

Points (a) and (b) require no changes to H::T, and I can say from experience that they work reasonably well. However, I found:

<TMPL_VAR EXPR="catalog('some static string')">

to be more nasty than:

<TMPL_CATALOG "some static string">

so I chose to modify H::T. In the process, I added some extra features to H::T. If you need more info on how I do language translations, let me know...

Mathew

Cees Hek provided me with a solution using H::T's filter. I think Sam suggested a filter first, but I didn't understand.
(I wouldn't have figured this out without seeing the sample Cees gave me.) It involves passing the language value to H::T's load_tmpl method. Then override H::T's _cache_key method to add the language value to the key H::T uses to cache the template. This lets you load a language-neutral template, update it for a language, and H::T will cache it uniquely by template name and language. Another advantage is that I can generate all my pull-down (select/option) lists *in the filter* and they become permanent parts of the cached template -- not re-evaluated with each output. When I create the string of select/option values, I add a "<TMPL_VAR NAME=form_control_name . $key>". Each time the page is re-evaluated for output, only this variable is evaluated (to replace it with "selected=selected"). This way the only variable template content is truly dynamic -- could change from display to display. It seems like this would be a significant performance boost. (I don't know.) I put most of the page-specific initialization like this into a module which is loaded when the filter fires. It gets called from the filter and performs the work. Because it's a module, variables/objects in the filter's namespace can be transferred into the module (by reference) using "$Static_page_init::{$page_name}::variable_name = $self->{my_db_handler}". References to logging subroutines can be passed in. (I do this just once after the module is loaded.) Another advantage to this modular design is that the language-translation handles, page initialization modules, etc., can be dropped after their use. Since it's one-time initialization, there's no need to keep them around (using memory). Ultimately, I end up with a single language-translation handle used for dynamic (display-to-display) message translation. A very small lexicon for the page. I'm very happy with this solution. It does exactly what I was trying to do. And it makes select/option lists more static, as I'd always thought they should be.
Below is a more detailed example with samples. It may be hard to follow since it's complicated (and involves Locale::Maketext). I could create a small working example if people thought it would be useful. Thanks to everyone for putting up with my ramblings! I hope this ends up being useful to someone else.

============== Detailed Example ============

The following describes a way to use H::T to:

1. Perform efficient language translation upon templates.
2. Minimize the number of duplicated templates which can result from supporting multiple languages (reducing the duplicated page structure and ongoing maintenance costs).
3. Cache templates uniquely by language -- not merely filename.
4. Cache templates in a manner that the bulk of language translation is cached and not re-evaluated with each display.
5. Use Locale::Maketext in a manner that lexicons are kept small and used as needed.
6. Avoid retention in memory of language-translation objects/packages if, as stated in #4, the translation is one-time.
7. Use a language negotiation method other than Locale::Maketext's (because the lexicons are used in a way that breaks Locale::Maketext's negotiation, which is based upon a file structure it expects on disk).

The example system uses CGI::App, but this is not required. My CGI::App inherits from "superclassMyApp.pm" (which itself inherits from CGI::Application) containing the following pieces of code:

1. A subclass for H::T's _cache_key method. This method retrieves the "$language" value that was passed into H::T's load_tmpl call. It uses the value (something like "en-US") to make the cached template unique by path, filename and language.

==============================>>> CUT HERE <<<==================================
#*******************************************************************************
# _cache_key
#
# subclass of H::T's method so we can cache templates by language.
#*******************************************************************************
sub _cache_key {
    my $self = shift;
    my $options = $self->{options};
    return $self->SUPER::_cache_key() . $options->{language};
}
==============================>>> CUT HERE <<<==================================

2. A method named "prepare_page_for_language" which is called from every C::A "run_mode" that displays a page. This method:

2a. Contains the following fragment of code to load the template. Notice the "language =>" parameter. That's what feeds the above-mentioned overridden _cache_key method.

==============================>>> CUT HERE <<<==================================
#-------------------------------------------------------------------------------
# Load the template with site-wide and page-specific filter.
#-------------------------------------------------------------------------------
$self->{template} = $self->load_tmpl($page . '.html',
    filter => [
        { sub => $filter, format => 'scalar' },
    ],
    cache => 1,
    double_file_cache => 1,
    file_cache_dir => '/tmp',
    language => $self->{session}->{LANGUAGE},  # pass this for the custom _cache_key method
);
==============================>>> CUT HERE <<<==================================

2b. The filter referred to by the above "load_tmpl". The filter is defined immediately prior to the above fragment. By creating the filter inside the same "prepare_page_for_language" method, the anonymous subroutine will be within scope of all the variables in the method. The goal of the filter is to initialize a template as much as possible so that needless H::T re-evaluation does not occur, and so that the cached template's dynamic content is truly part of the display-by-display state change. In the case of language translation, re-evaluating a template's static text for each display could be significant. The filter deals with three distinct preprocessing steps:

1. Language translation that occurs for all the site's pages, where the content is static (header, footer, navigation bar text, title text, etc.). Once these items are generated they will not change for as long as the page is cached and redisplayed.
2. Language translation that occurs for a specific page being loaded, where the content is static (captions, form titles, sub-area navigation link text). Similar to site-wide text, this will not change for as long as the page is cached and redisplayed. This text only exists on this specific page. The translation handle won't be used for other pages (unlike the static site-wide handle).
3. Page initialization that occurs for the specific page, but does not necessarily involve language translation. For example, a "ROBOTS" "NOINDEX" might be set using a page-specific initialization routine since, once set, it keeps this value (like language translation) for as long as the page remains cached. This routine might generate select/option list values so that the only thing that has to be re-evaluated (by H::T) is the "selected" attribute (without having to redo TMPL_LOOPs).

The goal is to eliminate (while we're going to the trouble) stuff that H::T would otherwise superfluously re-evaluate for each display. The targets of these three preprocessing steps are represented by <LANG_SITE NAME=blah>, <LANG_PAGE NAME=blah> and <INIT_PAGE NAME=blah> tags. The filter follows:

==============================>>> CUT HERE <<<==================================
#-------------------------------------------------------------------------------
# Subroutine used by H::T. Must be defined within this CGI::App method (in order
# to be within scope of the variables it accesses).
#-------------------------------------------------------------------------------
my $filter = sub {
    my $text_ref = shift;
    $filter_fired = 1; # so we know a page was loaded after tmpl_load (below)

    #---------------------------------------------------------------------------
    # Load the sitewide language-translation handle for static content if it
    # hasn't already been loaded. (After all the pages are loaded for a CGI::App
    # module this handle is deleted.)
    #---------------------------------------------------------------------------
    if (!exists($self->{LH}{'_sitewide'}{'_static'}{$self->{session}->{LANGUAGE}})) {
        $self->new_LH('_sitewide', '_static');
    }

    #---------------------------------------------------------------------------
    # Perform sitewide translation. For every LANG_SITE tag, send the value to
    # the sitewide language-translation handle (static content).
    #---------------------------------------------------------------------------
    $$text_ref =~ s#<LANG_SITE +NAME\=([^>]*)>#$self->{LH}{'_sitewide'}{'_static'}{$self->{session}->{LANGUAGE}}->maketext($1)#eg;

    #---------------------------------------------------------------------------
    # Load the page-specific language-translation handle for static content.
    # Process all "LANG_PAGE" tags. Also load the page-specific module for
    # initializing a template in ways beyond "LANG_PAGE" translation.
    #
    # (We have to test if this is not already loaded because H::T calls the
    # filter multiple times when it loads a template.)
    #---------------------------------------------------------------------------
    if (!exists($self->{LH}{$page}{'_static'}{$self->{session}->{LANGUAGE}})) {
        $self->new_LH($page, '_static');
        if ($perform_page_init) {
            my $module = 'Static_page_init::' . $page;
            eval "require $module";
        }
    }

    #---------------------------------------------------------------------------
    # Perform page-specific translation. For every LANG_PAGE tag, send the value
    # to the page-specific language-translation handle (static content).
    #---------------------------------------------------------------------------
    $$text_ref =~ s#<LANG_PAGE +NAME\=([^>]*)>#$self->{LH}{$page}{'_static'}{$self->{session}->{LANGUAGE}}->maketext($1)#eg;

    #---------------------------------------------------------------------------
    # Call the page-specific initialization routine.
    #---------------------------------------------------------------------------
    if ($perform_page_init) {
        no strict "refs";
        &{'Static_page_init::' . $page . '::set_static_values'}($text_ref,
            $self->{LH}{$page}{'_static'}{$self->{session}->{LANGUAGE}});
        use strict "refs";
    }

    #---------------------------------------------------------------------------
    # Perform common page initialization.
    # 1. Set the navigation bar's "selected".
    # 2. If the page is not "main", set the "NO" in front of "INDEX". (In the
    #    case of "main" the tag will be stripped and the page will be indexed.)
    # 3. Eliminate any unset INIT_PAGE tags.
    #---------------------------------------------------------------------------
    $$text_ref =~ s#<INIT_PAGE +NAME=$hdr_nav_selected># class="selected"#g;
    if ($page ne 'main') {
        $$text_ref =~ s#<INIT_PAGE +NAME=HDR_ROBOTS_INDEX>#NO#g;
    }
    $$text_ref =~ s#<INIT_PAGE +NAME\=[^>]+>##g;
};
==============================>>> CUT HERE <<<==================================

The filter sets a variable to let me know it was executed by H::T. It's the only way I can know, after the "tmpl_load", if H::T performed processing for a new template or reused a cached copy.

2c. The following fragment of code is placed after "tmpl_load". If the filter was executed, this piece of code will:

- Get rid of the page-specific language handle (and package) (since neither is expected to be used again unless H::T senses that it needs to load a template again to replace a cached copy).
- Add a page name to a hash so we can determine when all the pages a C::A module might display have been loaded (for a language).
- Get rid of the site-wide language handle (and package) if all the pages have been loaded.
- Load the page-specific language handle for dynamic content (msgs, etc.). This is used as long as the page remains cached.

==============================>>> CUT HERE <<<==================================
#-------------------------------------------------------------------------------
# After loading a page determine if the filter executed. If it did, perform
# post-initialization processing.
#-------------------------------------------------------------------------------
if ($filter_fired) {
    $filter_fired = 0;
    my $module = $self->{session}->{LANGUAGE};
    $module =~ s/\-/_/;
    $module = lc($module);

    #---------------------------------------------------------------------------
    # Delete the page-specific language-translation handle for static content,
    # and the module for page-specific initialization. After a page is loaded
    # and cached these aren't used any longer.
    #---------------------------------------------------------------------------
    delete($self->{LH}{$page}{'_static'}{$self->{session}->{LANGUAGE}});
    delete_package('lang::' . $page . '::_static::' . $module);
    delete_package('Static_page_init::' . $page);

    #---------------------------------------------------------------------------
    # Add the page-name to a hash of page-names known to have been loaded.
    #---------------------------------------------------------------------------
    ${$loaded_pages{$self->{session}->{LANGUAGE}}}{$page} = 1;

    #---------------------------------------------------------------------------
    # If the total number of displayable pages have been loaded, delete the
    # site-wide language-translation handle for static content. After all pages
    # are loaded and cached, this isn't needed any longer (unless H::T refreshes
    # the cache, when it will be reloaded in the filter and deleted here again).
    #---------------------------------------------------------------------------
    if (keys %{$loaded_pages{$self->{session}->{LANGUAGE}}} == $max_pages) {
        delete($self->{LH}{'_sitewide'}{'_static'}{$self->{session}->{LANGUAGE}});
        delete_package('lang::_sitewide::_static::' . $module);
    }

    #---------------------------------------------------------------------------
    # Load the page's language-translation handle for dynamic content. We keep
    # this for each redisplay. (Test if it's already loaded. The cached template
    # may have been refreshed, causing the filter to fire. No need to reload the
    # handle.)
    #---------------------------------------------------------------------------
    if (!exists($self->{LH}{$page}{'_dynamic'}{$self->{session}->{LANGUAGE}})) {
        $self->new_LH($page, '_dynamic');
    }
} # end filter fired
==============================>>> CUT HERE <<<==================================

That's all there is to it. For additional clarification:

1. The superclassMyApp.pm (which my C::A modules inherit from) contains:

- The following relevant statements:

use base 'CGI::Application';
use strict;
use Symbol qw(delete_package);

- The following "our" variables:

our ($max_pages, $perform_page_init, $hdr_nav_selected);
our (%loaded_pages);

Those variables are filled in by subclassing C::A's "cgiapp_init" and calling it with:

cgiapp_init(2, 0, 'HDR_NAV_MAIN');

- The prepare_page_for_language method, which is called from any C::A runmode as:

$self->prepare_page_for_language('main');

where 'main' is the page to load.

2. The filter checks if the language handles have been created before creating them. It may not be intuitive, but H::T executes the filter multiple times when it loads the template. Therefore, it can't be assumed that the filter is being executed for the first or last time (for a page).
   The conditional is used to determine if it's necessary to load the language
   handles. And the "filter_fired" variable is used to determine afterwards if
   anything happened (so cleanup can occur).

3. The subroutine to create language handles (referenced throughout the above
   sample code) looks like the following. It is contained within the superclass
   MyApp.pm. It loads Locale::Maketext language handles in a more granular
   manner than L::M expects. For this reason, it loads the package and performs
   a "new" instead of "get_handle" (which performs language negotiation; I use
   "I18N::AcceptLanguage" for that).

==============================>>> CUT HERE <<<==================================
#*******************************************************************************
# new_LH
#
# Common process to create a Locale::Maketext handle. We create 'sitewide',
# 'page::_static' and 'page::_dynamic'. We use Maketext's "->new" method
# because "get_handle" has an inefficient language-negotiation feature that we
# don't need here.
#*******************************************************************************
sub new_LH {
    my $self = shift;
    my ($page, $type) = @_;

    # Start: use maketext's ->new method (bypass negotiation)
    my $module = $self->{session}->{LANGUAGE};
    $module =~ s/\-/_/;
    $module = 'lang::' . $page . '::' . $type . '::' . lc($module);
    eval "require $module";
    $self->{LH}{$page}{$type}{$self->{session}->{LANGUAGE}} = $module->new();
    # End: use maketext's ->new method (bypass negotiation)

    return;
}
==============================>>> CUT HERE <<<==================================

   - The language modules (lexicons) are in a directory structure as follows:

     ~/lang/_sitewide/_static/en_us
     ~/lang/main/_static/en_us
     ~/lang/main/_dynamic/en_us
     ~/lang/main/help/_static/en_us    # examples of lexicons for sub-pages, in
     ~/lang/main/help/_dynamic/en_us   # which case $page is "main::help"

     Common translation materials can be shared by using something like this in
     a language module (lexicon):

     $shared_sidenav =
         do '/home/fm/bin/lang/profile/static/shared/sidenav/en_us.include';

     Which refers to a file containing:

==============================>>> CUT HERE <<<==================================
{
    sidenav_text => (['Area 1', 'Area 2', 'Area 3', 'Area 4', 'Area 5',
                      'Area 6', 'Area 7']),

    sidenav_link_title => (['Go to area 1.', 'Go to area 2.', 'Go to area 3.',
                            'Go to area 4.', 'Go to area 5.', 'Go to area 6.',
                            'Go to area 7.'])
}
==============================>>> CUT HERE <<<==================================

     The lexicon will then relate these two tags:

     <LANG_PAGE NAME=SIDENAV_TEXT>
     <LANG_PAGE NAME=SIDENAV_LINK_TITLE>

     to text this way:

     'SIDENAV_TEXT'       => ${$shared_sidenav}{sidenav_text},
     'SIDENAV_LINK_TITLE' => ${$shared_sidenav}{sidenav_link_title},

     And return an array ref of the same 7 items through different lexicons
     (specific to different pages where the sidenav text is needed).

4. The page-specific initialization modules are stored in file locations such
   as:

   ~/Static_page_init/main.pm
   ~/Static_page_init/main/help.pm   # an example of a sub-page

   Common processing (like generating the side-navigation links) can be shared
   by required libraries.

=== end
http://sourceforge.net/p/html-template/mailman/message/6460919/
Question: I'm trying to compile a list of features that were introduced in PHP 5.3, which I want to check out as time permits. I'd like to do this in the order of usefulness of the features. The question is subjective; that is the point. I want to end up with a list ordered by what the community liked. Such a list would hopefully be useful to many who need to do historical research in the years 2012 or 2013, and I have not been able to find one on SO so far. Please name one specific feature per answer, thanks in advance!

Solution:1

Late static binding! Finally some sensible way for "normal" inheritance (similar to C or Java). For example, I've created a base class that hides all the gory details of accessing the database, object relational mapping, caching etc., and its child classes define only:

- name of the table
- column names
- parent-child relationships

Solution:2

My favorite feature is that the magic quotes and register globals have been DEPRECATED. Now, any fool still using these will get a warning right to their face :)

Solution:3

Lambda lambda lambda! Definitely adds flexibility that was missing before.

Solution:4

Solution:5

I realize you said "one" and "likes", but sometimes a single answer doesn't cut it to put opinion into perspective. In the wild, you may not see shared hosting services or dev teams use any added features for years to come, so importance is subjective. These are picked from scanning over PHP's 5.3 changelog. I could be wrong about which version these features first appeared in, but...

- ?: Operator: Shortcut to the shortcut: $a = (($a) ? $a : $somethingelse). If $a is loosely false, just reassign it to something else: $a = $a ?: $somethingelse;. Now just waiting for $a ?= $somethingelse;. Also, it's like the "OR" operator: if ($a ?: $somethingelse) evaluates to true if either $a or $somethingelse is true. Redundant, but there.
- __callStatic(): Now that specialized Singleton class is reduced to a single universal class, probably 5 code lines long.
- Per-directory ini files: PHP's version of .htaccess files. Though I have yet to experiment with which ini values are allowed to be switched where.
- Additional file functions and DNS lookup support for WIN: at least it would be, had WIN obeyed your command to create a `symlink`/shortcut without question, since you're the user running the script.
- array_replace: Whereas $a + $b kept original values, array_replace($a, $b) replaces them.
- Mail logging: Logging of all mail() calls to check if your site has been turned into a spam bot. Though I have yet to test this in the wild to see exactly which mail functions are hooked into (exec()? imap?).

Missed Chances:

- [FIXED] Calling a method with the same name as the parent class calls the constructor: This would've been good to know before. I think it seems like a useful "feature".

Dislikes:

- Mysqli is still broken.
- WIN32api has been abandoned.
- DOTNET() never improved and is still pretty much just a fancy alias for COM(). Rumors of PHP and WIN cooperating are just rumors.

Solution:6

I can't resist: Clearly, adding GOTO is the biggest thing since sliced bread.

Solution:7

PHP's DateTime classes for timezone-aware timestamps. They existed before but were improved greatly in 5.3.

Solution:8

In my opinion, late static binding is the feature that I will use the most. With this, it will now be possible to get the maximum out of inheritance.

Solution:9

At first I was happy about lambdas in PHP 5.3, but now, after several months of developing with 5.3 in my day to day work, I found that I rarely use lambdas in PHP. Unlike JavaScript, where I use closures ALL THE TIME. The really most useful feature for me in 5.3 is late static binding. Almost every time I have to develop something in 5.2, I really miss it. And just to make it complete: The worst idea for 5.3 is GOTO. 'Nuff said.
Solution:10

I think PHAR, Lambda, and namespaces. These features seem interesting. It's hard to answer right now because we didn't use them on a whole project yet, and we already found some strange behavior. I think the next version of PDT will help programming with PHP 5.3.
http://www.toontricks.com/2019/05/tutorial-what-was-your-favorite-feature.html
Parse suboptions from a string

#include <stdlib.h>

int getsubopt( char** optionp,
               char* const* tokens,
               char** valuep );

libc

Use the -l c option to qcc to link against this library. This library is
usually included automatically.

The getsubopt() function parses suboptions from a string:

char *myopts[] = {
#define READONLY  0
    "ro",
#define READWRITE 1
    "rw",
#define WRITESIZE 2
    "wsize",
#define READSIZE  3
    "rsize",
    NULL
};

main(int argc, char **argv)
{
    int sc, c, errflag;
    char *options, *value;
    extern char *optarg;
    extern int optind;
    . . .
    while ((c = getopt(argc, argv, "abf:o:")) != -1) {
        switch (c) {
        case 'a':
            /* process a option */
            break;
        case 'b':
            /* process b option */
            break;
        case 'f':
            ofile = optarg;
            break;
        case '?':
            errflag++;
            break;
        case 'o':
            if ((optarg = strdup(optarg)) == NULL) {
                error_no_memory();
                errflag++;
                break;
            }
            options = optarg;
            while (*options != '\0') {
                switch (getsubopt(&options, myopts, &value)) {
                case READONLY:
                    /* process ro option */
                    break;
                case READWRITE:
                    /* process rw option */
                    break;
                case WRITESIZE:
                    /* process wsize option */
                    if (value == NULL) {
                        error_no_arg();
                        errflag++;
                    } else
                        write_size = atoi(value);
                    break;
                case READSIZE:
                    /* process rsize option */
                    if (value == NULL) {
                        error_no_arg();
                        errflag++;
                    } else
                        read_size = atoi(value);
                    break;
                default:
                    /* process unknown token */
                    error_bad_token(value);
                    errflag++;
                    break;
                }
            }
            free(optarg);
            break;
        }
    }
    if (errflag) {
        /* print usage instructions etc. */
    }
    for (; optind < argc; optind++) {
        /* process remaining arguments */
    }
    ...
}

During parsing, commas in the option input string are changed to null characters.
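The example above is a fragment. The following is a minimal self-contained sketch of the same calling pattern; the `mount_tokens` table and the `parse_mount_opts()` helper are illustrative names invented here, not part of the QNX documentation:

```c
#define _XOPEN_SOURCE 700   /* getsubopt() is an XSI interface */
#include <assert.h>
#include <stdlib.h>

enum { OPT_RO, OPT_RW, OPT_WSIZE };

/* Token table; the terminating NULL is required by getsubopt(). */
static char *const mount_tokens[] = { "ro", "rw", "wsize", NULL };

/* Parse a comma-separated suboption string such as "ro,wsize=1024".
 * getsubopt() writes into the string (commas become NULs), so the
 * caller must pass writable storage, not a string literal.
 * Returns 0 on success, -1 on an unrecognized token. */
static int parse_mount_opts(char *opts, int *readonly, int *wsize)
{
    char *subopts = opts;
    char *value;

    while (*subopts != '\0') {
        switch (getsubopt(&subopts, mount_tokens, &value)) {
        case OPT_RO:    *readonly = 1; break;
        case OPT_RW:    *readonly = 0; break;
        case OPT_WSIZE: *wsize = value ? atoi(value) : 0; break;
        default:        return -1;   /* unknown token */
        }
    }
    return 0;
}
```

Because the parse is destructive, the input can be inspected afterwards to observe the behavior described in the last sentence above: the comma separating the suboptions has been overwritten with a null character.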
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.lib_ref/topic/g/getsubopt.html
IRC log of rif on 2008-03-11

Timestamps are in UTC.

14:36:16 [RRSAgent] RRSAgent has joined #rif
14:36:17 [RRSAgent] logging to
14:36:32 [ChrisW] zakim, this will be rif
14:36:32 [Zakim] ok, ChrisW; I see SW_RIF()11:00AM scheduled to start in 24 minutes
14:36:53 [ChrisW] Meeting: RIF Telecon 11 March 2008
14:37:41 [ChrisW] Chair: Chris Welty
14:37:59 [ChrisW] Agenda:
14:38:12 [ChrisW] ChrisW has changed the topic to: 11 March RIF Telecon Agenda
14:38:29 [ChrisW] rrsagent, make minutes
14:38:29 [RRSAgent] I have made the request to generate ChrisW
14:38:35 [ChrisW] zakim, clear agenda
14:38:35 [Zakim] agenda cleared
14:38:46 [ChrisW] agenda+ Admin
14:38:52 [ChrisW] agenda+ F2F10
14:38:58 [ChrisW] agenda+ Action Review
14:39:04 [ChrisW] agenda+ Liason
14:39:13 [ChrisW] agenda+ Issue 40 (Builtins)
14:39:24 [ChrisW] agenda+ Lists
14:39:36 [ChrisW] agenda+ BLD syntax
14:39:44 [ChrisW] agenda+ Publication Plan
14:39:49 [ChrisW] agenda+ AOB
14:39:56 [ChrisW] rrsagent, make logs public
14:41:11 [ChrisW] zakim, next item
14:41:11 [Zakim] agendum 1. "Admin" taken up [from ChrisW]
14:54:08 [Harold] Harold has joined #rif
14:56:22 [csma] csma has joined #rif
14:58:53 [Hassan] Hassan has joined #rif
14:59:37 [Zakim] SW_RIF()11:00AM has now started
14:59:57 [Zakim] +Hassan_Ait-Kaci
14:59:58 [Zakim] +Sandro
14:59:59 [josb] josb has joined #rif
15:00:02 [mdean] mdean has joined #rif
15:00:39 [Zakim] +Mike_Dean
15:00:57 [StellaMitchell] StellaMitchell has joined #rif
15:01:03 [Zakim] +josb
15:02:09 [Zakim] +[IBM]
15:02:22 [ChrisW] zakim, ibm is temporarily me
15:02:22 [Zakim] +ChrisW; got it
15:02:24 [Zakim] +[NRCC]
15:02:56 [Harold] zakim, [NRCC] is me
15:03:00 [Zakim] +Harold; got it
15:03:03 [ChrisW] Stella, can you scribe today?
15:03:05 [StellaMitchell] yes
15:03:06 [Zakim] +[IBM]
15:03:11 [StellaMitchell] zakim, [ibm] is temporarily me
15:03:14 [DougL] DougL has joined #rif
15:03:17 [ChrisW] Scribe: StellaMitchell
15:03:22 [Zakim] +StellaMitchell; got it
15:03:55 [Zakim] +DougL
15:03:56 [Zakim] + +1.703.418.aaaa
15:04:12 [csma] zakim, aaaa is me
15:04:12 [Zakim] +csma; got it
15:04:14 [Harold] Hi Dough, Should we refer to CycL?
15:04:28 [LeoraMorgenstern] LeoraMorgenstern has joined #rif
15:04:29 [DougL] Hi, sure.
15:04:32 [ChrisW]
15:04:42 [csma] zakim, mute me
15:04:42 [Zakim] csma should now be muted
15:04:58 [ChrisW] RESOLVED: accept F2F9 Minutes
15:05:03 [StellaMitchell] Chris: any objections to accepting minutes from F2F9? ... none
15:05:08 [csma] no
15:05:27 [Zakim] +LeoraMorgenstern
15:05:33 [Harold] Doug how? (I found something online, but maybe you have more precise ref)
15:05:43 [StellaMitchell] Chris: no minutes from March 4th yet
15:06:01 [StellaMitchell] Leora: I just sent out the minutes from March 4th
15:06:07 [ChrisW] zakim, next item
15:06:07 [Zakim] agendum 2. "F2F10" taken up [from ChrisW]
15:06:10 [csma] zakim, unmute me
15:06:10 [Zakim] csma should no longer be muted
15:06:14 [StellaMitchell] Chris: any adjenda ammendments? ... none
15:06:37 [StellaMitchell] csma: Jos also wanted to discuss appendix of swc doc
15:06:38 [ChrisW] zakim, list agenda
15:06:38 [Zakim] I see 7 items remaining on the agenda:
15:06:39 [Zakim] 2. F2F10 [from ChrisW]
15:06:39 [Zakim] 3. Action Review [from ChrisW]
15:06:40 [Zakim] 4. Liason [from ChrisW]
15:06:40 [Zakim] 5. Issue 40 (Builtins) [from ChrisW]
15:06:41 [Zakim] 6. Lists [from ChrisW]
15:06:41 [Zakim] 7. BLD syntax [from ChrisW]
15:06:43 [Zakim] 9. AOB [from ChrisW]
15:06:47 [StellaMitchell] chris: we will talk about that during the publication plan
15:07:04 [DougL] The wikipedia page for CycL references the CycL syntax document (near the bottom)
15:07:26 [Harold] OK.
15:07:27 [StellaMitchell] Chris: any news on F2F10? Axel (host) is not here
15:07:37 [csma] zakim, mute me
15:07:37 [Zakim] csma should now be muted
15:07:52 [csma] yes
15:08:01 [StellaMitchell] Chris: f2f10 will be in deri Galway on May 26-28
15:08:22 [csma] ACTION: Axel to update the F2F10 wiki page
15:08:22 [trackbot-ng] Created ACTION-443 - Update the F2F10 wiki page [on Axel Polleres - due 2008-03-18].
15:08:36 [StellaMitchell] Chris: (a 3 day meeting)
15:08:41 [ChrisW] zakim, next item
15:08:41 [Zakim] agendum 3. "Action Review" taken up [from ChrisW]
15:09:00 [StellaMitchell] Chris: Action review:
15:09:11 [IgorMozetic] IgorMozetic has joined #rif
15:10:13 [StellaMitchell] cw: action-423 is pending discussion
15:10:14 [Zakim] +??P53
15:10:16 [csma] zakim, unmute me
15:10:16 [Zakim] csma should no longer be muted
15:10:20 [Harold] ACTION-423:
15:10:24 [StellaMitchell] harold: the rest of my actions are continued
15:10:28 [IgorMozetic] zakim, ??P53 is me
15:10:28 [Zakim] +IgorMozetic; got it
15:10:32 [IgorMozetic] zakim, mute me
15:10:32 [Zakim] IgorMozetic should now be muted
15:11:57 [StellaMitchell] sandro: action-435 (request namespace for functions and operators)
15:12:29 [StellaMitchell] ... it's turning out to be harder than expected. I need help from the working group
15:12:46 [StellaMitchell] ...I have been in touch with ??
15:13:34 [StellaMitchell] csma: action-434, change due date to March 21st
15:14:21 [ChrisW] zakim, next item
15:14:21 [Zakim] agendum 4. "Liason" taken up [from ChrisW]
15:14:38 [sandro] s/??/xquery+xpath WGs/
15:14:39 [StellaMitchell] cw: csma, any news from the OMG meeting?
15:15:21 [StellaMitchell] csma: the only thing that might be of interest to this group is that there is request for proposals on svbr vocab on date and time that is aligned with owl and uml
15:16:01 [josb] no
15:16:10 [StellaMitchell] cw: jos, mike, what news from owl task force?
15:16:18 [josb] ack me
15:16:24 [StellaMitchell] miked: no news
15:17:23 [StellaMitchell] cw: I understand that there is work going on in owl wg to consider a blessed (recommended) fragment of owl for bld
15:17:31 [csma] zakim, mute me
15:17:31 [Zakim] csma should now be muted
15:17:45 [Zakim] +Gary_Hallmark
15:17:52 [Harold] DLP is the intersection of Horn logic and Description Logic.
15:17:53 [StellaMitchell] s/bld/??/
15:18:04 [StellaMitchell] s /??/dlp/
15:18:12 [josb] s/bld/DLP/
15:18:21 [sandro] Zhe (Alan) Wu, at Oracle
15:19:03 [StellaMitchell] cw: Gary, do you know about this?
15:19:08 [StellaMitchell] Gary: no
15:19:34 [StellaMitchell] miked: I will attend the owled workshop in early april
15:20:14 [ChrisW] zakim, next item
15:20:14 [Zakim] agendum 5. "Issue 40 (Builtins)" taken up [from ChrisW]
15:20:17 [StellaMitchell] cw: please bring the swc doc to their attention and solicit feedback
15:20:49 [StellaMitchell] cw: at f2f10 we pretty much agreed on builtins
15:21:17 [StellaMitchell] ... but in the documented issue there is one item left open, about order of the arguments
15:21:21 [csma] PROPOSED: BLD builtins are not sensitive to order as they are in query
15:21:21 [csma] languages and production rules (closing issue-40).
15:21:31 [ChrisW] PROPOSED: BLD builtins are not sensitive to order as they are in query languages and production rules (closing issue-40).
15:21:32 [csma] q+
15:21:47 [csma] ack csma
15:21:53 [MichaelKifer] MichaelKifer has joined #rif
15:22:12 [StellaMitchell] csma: I have no objection to that resolution, but I wonder what it means that they are sensitive to order
15:22:34 [ChrisW] PROPOSED: BLD builtins are not sensitive to order
15:23:31 [StellaMitchell] harold: if you call a builtin before all arguments are bound, you can have a problem in some implentations
15:23:47 [StellaMitchell] csma: in rif all bindings are done outside of the rule, so we would not have this problem
15:23:49 [Harold] PROPOSED: BLD builtin calls are not sensitive to order of conjunctions
15:24:29 [Zakim] +MichaelKifer
15:24:31 [StellaMitchell] harold: is the above wording ok with you, csma?
15:24:59 [StellaMitchell] csma: yes, even the original wording was fine, but just might be a little confusing
15:25:01 [ChrisW] PROPOSED: BLD builtins are not sensitive to order of evaluation
15:25:06 [sandro] +1
15:25:12 [MichaelKifer] zakim, mute me
15:25:12 [Zakim] MichaelKifer should now be muted
15:25:18 [csma] zakim, mute me
15:25:18 [Zakim] csma should now be muted
15:25:24 [MichaelKifer] -1
15:25:25 [StellaMitchell] cw :any objections to the above proposal? ... none
15:25:31 [ChrisW] PROPOSED: BLD builtins are not sensitive to order of evaluation
15:25:35 [csma] zakim, unmute me
15:25:35 [Zakim] csma should no longer be muted
15:25:37 [MichaelKifer] +1
15:25:42 [DougL] +1
15:25:48 [josb] +1
15:25:53 [Harold] +1
15:25:53 [Hassan] 0
15:25:54 [IgorMozetic] +1
15:25:57 [csma] zakim, mute me
15:25:57 [Zakim] csma should now be muted
15:26:12 [sandro] Chris: I think Michael was saying "-1" on IRC to "does anyone object?"
15:26:22 [LeoraMorgenstern] +1
15:26:42 [ChrisW] RESOLVED: BLD builtins are not sensitive to order of evaluation (closing issue 40)
15:26:46 [csma] do you have some wine to celebrate?
15:27:23 [ChrisW] zakim, next item
15:27:23 [Zakim] agendum 6. "Lists" taken up [from ChrisW]
15:27:26 [csma] action: ChrisW to close issue 40
15:27:26 [trackbot-ng] Sorry, couldn't find user - ChrisW
15:27:44 [csma] action: cwelty to close issue 40
15:27:44 [trackbot-ng] Created ACTION-444 - Close issue 40 [on Christopher Welty - due 2008-03-18].
15:28:17 [ChrisW]
15:28:21 [StellaMitchell] cw: we agreed on syntax, but not on semantics yet
15:28:49 [Harold]
15:28:58 [StellaMitchell] cw: above, are links to 2 proposals for semantics
15:29:11 [csma] PROPOSED: Approve Michael's alternative proposal on lists [6] and
15:29:11 [csma] update FLD+BLD syntax/semantics accordingly to reflect that and the
15:29:11 [csma] previous resolution on lists
15:29:37 [StellaMitchell] harold: I have no preference between the two. I think we should use the "alternative" proposal
15:30:33 [StellaMitchell] harold: I think on one level the semantics interpretation is more complicated in mk's (alternative) proposal
15:30:43 [StellaMitchell] ... it is kind of unusual, but it seems to work
15:31:01 [StellaMitchell] cw: can you clarify?
15:31:11 [Harold] These functions are required to satisfy the following: Itail(a1, ..., ak, Iseq(ak+1, ..., ak+m)) = Iseq(a1, ..., ak, ak+1, ..., ak+m).
15:31:57 [StellaMitchell] harold: this leads us into the realm of semantic description that is more expressive than the original
15:32:27 [josb] yes
15:32:29 [StellaMitchell] cw: any other discussion on this? are people ready to accept this semantics?
15:33:00 [LeoraMorgenstern] So, we are voting for one of the two pages?
15:33:01 [Hassan] Why not use the standard free algebra style of semantics?
15:33:03 [StellaMitchell] cw: does anyone feel uncomfortable accepting the semantics of the "alternative" proposal?
15:33:06 [ChrisW] PROPOSED: Approve Michael's alternative proposal on lists and update FLD+BLD syntax/semantics accordingly to reflect that and the previous resolution on lists
15:33:26 [StellaMitchell] cw: does anyone object to the above resolution?
15:33:34 [LeoraMorgenstern] I'm confused. Which wiki page are we voting for?
15:33:43 [StellaMitchell] hak: I think it is overly complicated
15:34:08 [StellaMitchell] ...there are standard semantics for lists everwhere, why are we reinventing the wheel
15:34:17 [StellaMitchell] hb: to keep it n-ary
15:34:23 [StellaMitchell] hak: that is just syntax
15:34:50 [csma] csma has joined #rif
15:35:05 [StellaMitchell] hb: first step was to eliminate pairs from the syntax, and then we eliminated pairs from the semantics too
15:35:05 [MichaelKifer] zakim, unmute me
15:35:05 [Zakim] MichaelKifer should no longer be muted
15:35:19 [StellaMitchell] hb: and how would you deal with rest variables?
15:35:29 [csma] q+
15:35:32 [csma] q-
15:35:32 [Harold] Itail deals with rest variables.
15:35:33 [StellaMitchell] hak: just a logic variable
15:36:00 [StellaMitchell] mk: we have a model theory so when we introduce a new kind of term we have to define the interpretation of this new kind of term in the model theory
15:36:10 [StellaMitchell] ...you have to be specific about your proposal
15:36:26 [Harold] Direct treatment of 'Seq(' TERM+ ` | ` TERM ')'.
15:37:10 [StellaMitchell] hak: use standard semantics and syntactic sugar transformation
15:37:25 [Harold] In particular 'Seq(' TERM+ ` | ` Var ')'.
15:37:25 [StellaMitchell] hak: I don't object, I am just saying my opinion
15:37:36 [StellaMitchell] cw: any other comments?
15:38:02 [StellaMitchell] cw: sequence semantics in the alternatives and pairs semantics was the original
15:38:07 [csma] zakim, unmute me
15:38:07 [Zakim] csma should no longer be muted
15:38:41 [Harold] Michael, Pair is a function symbol, so I eliminated that from the syntax, moving it to the semantics.
15:38:41 [StellaMitchell] mk: if you don't have function symbols, you cannot treat it as syntactic sugar
15:38:47 [csma] zakim, mute me
15:38:47 [Zakim] csma should now be muted
15:39:03 [StellaMitchell] cw: so advantage is you can handle lists without requiring functions
15:40:11 [StellaMitchell] gary: it is good to decouple them (lists and function symbols) for production systems
15:40:16 [Hassan] fine
15:40:39 [Hassan] ???
15:40:47 [ChrisW] PROPOSED: Approve Michael's alternative proposal on lists and update FLD+BLD syntax/semantics accordingly to reflect that and the previous resolution on lists
15:40:53 [StellaMitchell] cw: any objections to above?
15:41:04 [StellaMitchell] ...none
15:41:06 [sandro] +1
15:41:13 [DougL] +1
15:41:13 [Hassan] 0
15:41:15 [Harold] +1
15:41:16 [IgorMozetic] +1
15:41:16 [LeoraMorgenstern] +1
15:41:19 [MichaelKifer] +1
15:41:27 [mdean] +1
15:41:39 [sandro] Gary on phone: +1
15:41:41 [josb] +1
15:41:42 [ChrisW] RESOLVED: Approve Michael's alternative proposal on lists and update FLD+BLD syntax/semantics accordingly to reflect that and the previous resolution on lists
15:41:52 [ChrisW] zakim, next item
15:41:52 [Zakim] agendum 7. "BLD syntax" taken up [from ChrisW]
15:42:29 [StellaMitchell] hb: can you give an update on this discussion
15:42:38 [StellaMitchell] s/ hb:/cw: hb,/
15:43:01 [StellaMitchell] hb: we agreed at previous meeting to remove reification from bld
15:43:28 [MichaelKifer] zakim, mute me
15:43:28 [Zakim] MichaelKifer should now be muted
15:43:57 [StellaMitchell] ...we also discussed at f2f10 about going back to making a distiction inthe grammar between terms and predicates
15:44:17 [StellaMitchell] ...and also bring in syntax for builtins
15:44:59 [StellaMitchell] cw: and also Jos had an action to add metadata and iris to the syntax
15:45:06 [josb] q+
15:45:15 [josb] ack me
15:46:00 [StellaMitchell] cw: people have agreed to remove reificaiton and to add metadata and iris
15:46:24 [StellaMitchell] ...so the remaining issue is whether to distinguish between functions and predicates in the grammar
15:47:30 [StellaMitchell] hb: mk said it is a good idea to keep uniterm
15:47:55 [StellaMitchell] cw: we are not proposing to remove uniterms...just in how they are used in the grammar
15:47:57 [Zakim] -Gary_Hallmark
15:48:12 [StellaMitchell] s/inthe/in the/
15:48:28 [StellaMitchell] s/between terms and predicates/between functions and predicates/
15:49:16 [Zakim] +Gary_Hallmark
15:49:19 [StellaMitchell] cw: yes, it changes the markup by distinguising functions from predicates
15:49:34 [StellaMitchell] ...but still they will have the same syntax
15:49:43 [csma] q+
15:50:03 [josb] the grammar:
15:50:21 [StellaMitchell] hb: we want to handle future ilog extensions
15:50:27 [csma] q-
15:50:32 [StellaMitchell] s/ilog/hilog/
15:50:49 [MichaelKifer] zakim, unmute me
15:50:49 [Zakim] MichaelKifer should no longer be muted
15:51:15 [StellaMitchell] cw: mk, where do you stand on this issue? does distinguishing functions and predicates in the syntax make it more difficult to do hilog extensions?
15:51:27 [StellaMitchell] mk: no, I don't think it does
15:52:05 [StellaMitchell] mk: that's why I wanted to make bld grammar a specialization of fld grammar
15:52:38 [StellaMitchell] ...(so that it can be extended in a compatible way)
15:52:59 [StellaMitchell] hb: I'm not convinced this will work
15:53:27 [StellaMitchell] hb: yes, hilog would be generalization of bld
15:53:55 [csma] q+
15:54:05 [StellaMitchell] jos: I proposed 2 grammars: fld and bld. the fld one contains hilog
15:54:08 [josb] I give up....
15:54:08 [csma] zakim, unmute me
15:54:08 [Zakim] csma should no longer be muted
15:54:22 [MichaelKifer] zakim, mute me
15:54:22 [Zakim] MichaelKifer should now be muted
15:54:22 [csma] q-
15:54:28 [sandro] josb, is your BLD grammar a subset of your FLD grammar?
15:54:35 [josb] Yes
15:54:47 [StellaMitchell] csma: I don't understand the current discussion
15:54:48 [josb] the grammar:
15:54:54 [sandro] q?
15:55:25 [StellaMitchell] csma: ..fld and bld are the same in the area of subject of predicates and functions
15:55:38 [josb] I showed that you CAN!
15:55:43 [josb]
15:55:49 [StellaMitchell] sandro: I think harold is saying that if you split uniterm into functions and predicates in fld then you can't extend to hilog
15:56:16 [josb] right
15:56:18 [Harold] We want to read BLD documents (with BLD facts and rules) into future HLD (HiLog) documents.
15:56:22 [StellaMitchell] csma: but hilog distinguishes between predicates and functions
15:56:42 [MichaelKifer] zakim, unmute me
15:56:42 [Zakim] MichaelKifer should no longer be muted
15:56:48 [Harold] Therefore BLD documents should not separate oreds and funcs.
15:57:01 [Harold] Therefore BLD documents should not separate preds and funcs.
15:57:04 [StellaMitchell] cw: mk, you made a proposal for the grammars for fld and bld. Can you summarize
15:57:15 [josb] Harold, just read the grammars I proposed...................
15:57:29 [StellaMitchell] mk: I proposed a framework to use around the grammars that jos had proposed
15:57:55 [StellaMitchell] hb: I explained my point above in the irc
15:58:28 [StellaMitchell] mk: I understand that you are saying we need to also consider how it will look in xml, and not just in bnf
15:58:47 [GaryHallmark] GaryHallmark has joined #rif
15:58:59 [StellaMitchell] ...I think it would be possible to accomplish the extensible design in xml
15:59:38 [StellaMitchell] ...I wanted to show the concept in bnf, but intended that it would carry over to xml
16:00:21 [StellaMitchell] ...I didn't think hard about this yet, so can't say for sure whether it is possible
16:00:40 [StellaMitchell] cw: this should be ok in xml
16:00:50 [StellaMitchell] mk: it has to be checked
16:01:07 [StellaMitchell] cw: how will we go about checking this?
16:01:15 [Harold] E.g., the BLD XML-like Atom(a Fun(f c d) e) cannot be importet unchanged in HLD.
16:01:34 [Harold] E.g., the BLD XML-like Uniterm(a Uniterm(f c d) e) cannot be importet unchanged in HLD.
16:01:56 [StellaMitchell] sandro: why can it not be imported?
16:02:06 [csma] zakim, mute me
16:02:06 [Zakim] csma should now be muted
16:02:19 [StellaMitchell] cw: someone has to demonstrate that there is an xml syntax that can be specialized from hilog to bld
16:02:23 [Harold] E.g., the BLD XML-like Atom(a Fun(f c d) e) cannot be importet unchanged in HLD.
16:02:23 [Harold] <Harold> E.g., the BLD XML-like Uniterm(a Uniterm(f c d) e) cannot be importet unchanged in HLD.
16:02:36 [Harold] E.g., the BLD XML-like Uniterm(a Uniterm(f c d) e) CAN be importet unchanged in HLD.
16:02:36 [StellaMitchell] sandro: jos says he has done this
16:02:51 [StellaMitchell] mk: jos hasn't done it for hilog yet, so he would have to do that
16:03:32 [csma] Fallbacks!
16:03:56 [StellaMitchell] cw: rif is an interchange syntax, we would not break hilog by requiring they use this format
16:04:05 [josb] FLD subsumes hilog
16:04:15 [josb] so, I did it for hilog
16:04:23 [StellaMitchell] cw: hilog requires functions to be allowed in places where they are not conventioally used in other languages
16:04:45 [StellaMitchell] ...it doesn't require that you don't distinguish between them
16:05:01 [Harold] And ( ?x = Uniterm(f c d) Uniterm(a ?x e) )
16:05:21 [csma] q+
16:05:28 [josb] q+
16:05:29 [Harold] And ( ?x = Uniterm(f c d) ?x(a ?x e) )
16:06:02 [StellaMitchell] hb: in above example, ?x occurs in 2 places... at the top level it is an atom
16:06:26 [StellaMitchell] ...the other occurance is not
16:07:09 [csma] q?
16:08:17 [StellaMitchell] cw: the distinction is there is what you typed, why is it a problem to call it out syntactically
16:08:32 [josb] q-
16:08:33 [csma] ack csma
16:08:36 [ChrisW] ack csma
16:08:37 [StellaMitchell] s/occurance/occurrence/
16:08:54 [StellaMitchell] sandro: (something about parse trees)
16:09:00 [StellaMitchell] csma: I agree with what sandro said
16:09:54 [Harold] At the time you write ?x = Uniterm(f c d) you don't need to say how it's going to be used: So both ?x occurrences in ?x(a ?x e) are fine.
16:09:55 [StellaMitchell] ...problem may occur when using a bld doc in hilog dialect
16:09:56 .
16:10:09 [josb] right
16:10:23 [josb] +1 to Sandro
16:11:28 [csma] zakim, mute me
16:11:28 [Zakim] csma should now be muted
16:11:36 [csma] q?
16:11:52 [Harold] And ( ?x = Uniterm(f c d) Pre(?x)(a Fun(?x) e) )
16:12:05 [StellaMitchell] hb: is the above what you mean, mk?
16:12:07 [StellaMitchell] mk: no
16:12:10 [Harold] And ( ?x = Uniterm(f c d) Pred(?x)(a Fun(?x) e) )
16:12:46 [StellaMitchell] mk: I am not proposing to mark it up. The basic difference between your grammar and jos's is just at the top level
16:12:59 [Harold] And ( ?x = Uniterm(f c d) ?x(a ?x e) ?x )
16:13:12 [StellaMitchell] hb: what about the above? is this possible?
16:13:45 [StellaMitchell] mk: yes, the x's will be marked as atom, but inside they will all be uniterms
16:13:54 [StellaMitchell] cw: let's move this discussion to email
16:14:15 [ChrisW] zakim, next item
16:14:15 [Zakim] agendum 9. "AOB" taken up [from ChrisW]
16:14:29 [csma] q+
16:14:35 [ChrisW] TOPIC: Publication plan
16:14:37 [csma] ack csma
16:14:51 [sandro] ACTION: Harold to make the case, in e-mail, based on examples in 11 March meeting, for keeping Uniterm in the XML
16:14:51 [trackbot-ng] Created ACTION-445 - Make the case, in e-mail, based on examples in 11 March meeting, for keeping Uniterm in the XML [on Harold Boley - due 2008-03-18].
16:15:14 [StellaMitchell] csma: we didn't discuss the orthogonal item of having the syntax (presentation and xml) distinguish between logical and builtin functions and predicates
16:15:23 [StellaMitchell] sandro: we decided that already
16:16:21 [StellaMitchell] csma: one proposal distinguishings builtins from logical and one distinguishes functions and predicates, but neither does both
16:16:49 [Harold] For reference, I talked about Hterms (Uniterm) in the W3C Submission of SWSL-Rules:
16:16:56 [StellaMitchell] jos: it is still not clear how the xml syntax will be defined
16:17:30 [StellaMitchell] ... i.e. how it relates to presenation syntax
16:18:11 [StellaMitchell] cw: we agreed that the mapping would be in a table, but that the xml syntax would be as close as possible to presentation, so that the mapping woujld be trivial
16:18:36 [MichaelKifer] zakim, mute me
16:18:36 [Zakim] MichaelKifer should now be muted
16:18:55 [Harold] For instance, the HiLog term ?Z(?X,a)(b,?X(?Y)(d)) is serialized as shown below:
16:18:56 [StellaMitchell] csma: for the predicate production you would need to have 2 entries in the table
16:18:58 [Harold] <Hterm>
16:18:58 [Harold] <op>
16:18:58 [Harold] <Hterm>
16:18:58 [Harold] <op><Var>Z</Var></op>
16:18:58 [Harold] <Var>X</Var>
16:18:59 [Harold] <Con>a</Con>
16:19:01 [Harold] </Hterm>
16:19:03 [Harold] </op>
16:19:05 [Harold] <Con>b</Con>
16:19:07 [Harold] <Hterm>
16:19:09 [Harold] <op>
16:19:11 [Harold] <Hterm>
16:19:13 [Harold] <op><Var>X</Var></op>
16:19:15 [Harold] <Var>Y</Var>
16:19:17 [Harold] </Hterm>
16:19:19 [Harold] </op>
16:19:19 [StellaMitchell] jos: the table is to translate the syntax, it does not care about bnf or schema, just about syntax
16:19:21 [Harold] <Con>d</Con>
16:19:23 [Harold] </Hterm>
16:19:25 [Harold] </Hterm>
16:19:59 [StellaMitchell] jos: I need to see how the xml can be derived from the bnf - I am skeptical
16:20:50 [StellaMitchell] hak: I think it can be derived, I have been working on a tool that can do this
16:21:14 [StellaMitchell] csma: if we allow metadata inside uniterms for roundtripping purposes...
16:21:20 [StellaMitchell] hak: you need to annotate the bnf 16:21:56 [StellaMitchell] csma: we may want to have things in the xml syntax that we don't have to reflect in the presenation syntax 16:22:01 [sandro] hak: you want a forgetful homomorphism 16:22:36 [StellaMitchell] cw: csma, please put your point in an email, with an example 16:23:14 [StellaMitchell] cw: I don't think we should publish next working draft without having syntactic issues revolved 16:23:20 [csma] action: csma to write an email with an example of XML that should not be derived from the BNF of the prez syntax 16:23:20 [trackbot-ng] Sorry, couldn't find user - csma 16:23:25 [MichaelKifer] zakim, unmute me 16:23:25 [Zakim] MichaelKifer should no longer be muted 16:23:40 [StellaMitchell] cw:: we can dedicate next week's telcon to all these syntactic issues 16:24:07 [StellaMitchell] sandro: and I have two syntactic issues, which I will describe in email 16:24:07 [csma] action: christian to write an example of XML that should not be derived from the BNF of the prez syntax 16:24:08 [trackbot-ng] Created ACTION-446 - Write an example of XML that should not be derived from the BNF of the prez syntax [on Christian de Sainte Marie - due 2008-03-18]. 16:24:34 [StellaMitchell] cw: are fld/ bld ready to be reviewed? 
16:24:46 [StellaMitchell] mk: there are some outstanding issues, I sent an email about it 16:25:38 [StellaMitchell] mk: I will not be at next week's telecon 16:26:02 [StellaMitchell] ...I will plan to make all my changes by saturday 16:26:02 [csma] +1 to postpone 16:26:15 [StellaMitchell] cw: I think we need to postpone our schedule by one week 16:26:32 [csma] ack csma 16:26:34 [StellaMitchell] ...and then reevaluate where we are with syntactic issues 16:26:58 [StellaMitchell] ...actions assigned today are critical, so that we can resolved syntactic issues at next week's telecon 16:27:18 [MichaelKifer] zakim, mute me 16:27:18 [Zakim] MichaelKifer should now be muted 16:27:18 [StellaMitchell] csma: can we talk about jos's issue about appendix? 16:27:24 [josb] 16:27:52 [StellaMitchell] jos: in the current swc document, the appendix describes embedding, but this is really more of an implementatin hint 16:28:35 [StellaMitchell] ...so it shouldn't really be part of swc doc, it should ideally be in another document, so I'd like to move it to another doc that can be published as a working group note 16:28:47 [StellaMitchell] cw: you don't like it in appendix because it makes the document longer? 16:29:13 [StellaMitchell] jos: no, because it doesn't belong there, because it's a different topic from the main document 16:29:24 [Harold] Jos, Sandro, I think a Working Note is too level a document to be referred to from a Proposed Recommendation. 
16:29:31 [MichaelKifer] zakim, unmute me 16:29:31 [Zakim] MichaelKifer should no longer be muted 16:29:38 [StellaMitchell] sandro: I think people would want it in the same document...it is ok to have non normative parts of the document 16:29:42 [StellaMitchell] cw: agree 16:29:44 [IgorMozetic] I'm in favor in keeping it in 16:29:53 [MichaelKifer] zakim, mute me 16:29:53 [StellaMitchell] jos: I don't object to leaving it as a non normative appendix 16:29:53 [Zakim] MichaelKifer should now be muted 16:29:57 [StellaMitchell] mk: I don't object either 16:30:07 [StellaMitchell] jos: ok, agreed 16:30:28 [Zakim] -MichaelKifer 16:30:29 [Zakim] -Gary_Hallmark 16:30:31 [Zakim] -Harold 16:30:33 [ChrisW] rrsagent, make minutes 16:30:33 [RRSAgent] I have made the request to generate ChrisW 16:30:34 [Zakim] -DougL 16:30:37 [Zakim] -Hassan_Ait-Kaci 16:30:47 [ChrisW] Regrets: DaveReynolds AxelPolleres 16:30:54 [Zakim] -Mike_Dean 16:30:55 [ChrisW] zakim, list attendees 16:30:55 [Zakim] -IgorMozetic 16:30:57 [Zakim] As of this point the attendees have been Hassan_Ait-Kaci, Sandro, Mike_Dean, josb, ChrisW, Harold, StellaMitchell, DougL, +1.703.418.aaaa, csma, LeoraMorgenstern, IgorMozetic, 16:30:59 [Zakim] ... Gary_Hallmark, MichaelKifer 16:31:00 [ChrisW] rrsagent, make minutes 16:31:00 [RRSAgent] I have made the request to generate ChrisW 16:31:09 [Zakim] -LeoraMorgenstern 16:31:18 [Zakim] -StellaMitchell 16:31:19 [ChrisW] zakim, who is on the phone? 16:31:20 [Zakim] On the phone I see Sandro, josb (muted), ChrisW, csma 16:31:38 [Zakim] -josb 16:33:53 [sandro] 16:35:01 [Zakim] -ChrisW 16:35:02 [Zakim] -Sandro 16:35:04 [Zakim] -csma 16:35:05 [Zakim] SW_RIF()11:00AM has ended 16:35:06 [Zakim] Attendees were Hassan_Ait-Kaci, Sandro, Mike_Dean, josb, ChrisW, Harold, StellaMitchell, DougL, +1.703.418.aaaa, csma, LeoraMorgenstern, IgorMozetic, Gary_Hallmark, MichaelKifer 16:51:24 [csma] csma has left #rif
http://www.w3.org/2008/03/11-rif-irc
Magellan SAC Meeting: 21-22 March 2009 Minutes

SAC Members: Edo Berger (Harvard, Secretary), Laird Close (Univ. of Arizona), Mario Mateo (Univ. of Michigan), Paul Schechter (MIT), Andrew Szentgyorgyi (CfA, Chair), Ian Thompson (OCIW)

Also present: David Osip (LCO), Frank Perez (LCO), Mark Phillips (LCO), Steve Shectman (OCIW), Alan Uomoto (OCIW), Povilas Palunas (LCO, telecon)

Alan Dressler was commended for keeping minutes during the 2008 meeting. Minutes from 2008 adopted. The SAC congratulates Jorge Estrada for receiving his bachelor's degree.

Associate Director's Report (Mark Phillips)

No personnel changes are reported since the previous SAC meeting. Frank Perez has moved back to the US, but he is still commuting back to LCO every month. This is an interim solution; however, a full-time on-site engineer is required. A search is underway for an on-site Magellan Telescope Engineer. The search is taking place in Chile, mostly for ease of commute. Ideally, this position will be filled by July 2009. Current candidates are from ESO / La Silla.

The Magellan Fellow program (Australia) has been renewed. Offers were made to two candidates for next year's position, and 1 alternate exists; 2 are likely to accept. The interviews were held by phone. The current fellows are staying on through August 12, 2009 (David Floyd) and late 2009 (Ricardo Covarrubias), followed by a third year in Australia.

Mateo: Has the program worked well overall?
Phillips: The overall sentiment is that the program has been successful. This year there were only 6 applicants. We think that this is a reflection of the requirement to spend 2 years in Chile, the large fraction of time spent on the mountain (2/3 service time versus 1/3 for research effort), and no guarantee of telescope time.
Berger: Should this be open to PhD holders only?
Phillips: Australia wants to hire postdocs. However, down the road it would be interesting to explore an equivalent position for Masters degrees.
A PhD may be an over-qualification for the telescope work aspect of the program.

Instrumentation usage statistics: Usage has been relatively steady for the past year. There was a drop in the use of MagIC from 2008A to 2008B (20% to 4%). Dave Osip notes that about 1/3 of the nights use more than 1 instrument, and this may affect the statistics since only the primary instrument is listed. This issue will be tracked with new observing reports. Paul notes that his program to track seeing should also provide "open shutter time" statistics. ~10% is lost to weather, and ~1% to instrument problems. It is possible the upcoming proliferation of instruments will lead to increased downtime.

MagIC: The frame-transfer E2V CCD is not ready to be commissioned as a facility instrument (it will be a PI instrument in 2009B). Based on the overall telescope instrument deployment timeline, it seems likely that MagIC may not become a facility instrument at any point. The SITe CCD is nearly ready to be commissioned. It takes 5 minutes to switch between the two MagIC modes.

MagE: Now operating routinely. Waiting on Council to vote on commissioning (Uomoto: the Council is waiting for a written instrument report).

PANIC: Will be removed when 4Star goes on the telescope. Will not be used as a backup instrument, unless there are serious problems with 4Star commissioning, in which case PANIC would be a stop-gap instrument.

LDSS3: Has been retired as a facility instrument. Still operated as a PI instrument by OCIW and Harvard.

New instruments: 4 new facility and 2 new PI instruments will be deployed in the near future. Schechter stresses that there should be uniform rules for instrument commissioning. Phillips notes that there are rules in place, and that they have not always been followed in the past. In particular, a pre-ship review should be scheduled before the instrument shipping is scheduled! Mateo asked for specific instruments that bypassed the rules.
Phillips notes that IMACS was shipped with image quality problems; Szentgyorgyi notes that GISMO was shipped before fixing problems that existed pre-shipping. It was decided to clarify pre-ship procedures to the instrument teams and to enforce these rules.

Port plan: The current basis for discussion is 6 ports. The SAC would like to see an increase in this number to facilitate a more flexible instrument plan.

Magellan Technical Manager's report (Alan Uomoto)

The LDSS focus encoder has been replaced, and a temperature sensor was installed to replace the use of "dome air temperature". Szentgyorgyi suggests adding this information to the FITS headers. Osip notes that this is work in progress.

The F/5 secondary arrived at LCO. Baffles will be optimized for MegaCam since wider field instruments are not envisioned at the present. A new design with smaller baffles allows them to be inserted from the back side of the primary mirror. A clean room is being constructed on site near the support building for instrument work.

Changes in export requirements for Chile: Individuals planning to ship materials to LCO should check with Earl Harris (OCIW). Ian notes that taking material out of Chile now requires stricter import documentation. In addition, people are discouraged from hand-carrying materials. A new website with shipping instructions is available at: - clay.org . It is the responsibility of the SAC to disseminate this information to the partner institutions.

Bibliography: ~80 peer-reviewed papers per year in 2006-2008. The list is not complete. Mateo suggests that the list should be circulated to the partners to allow corrections. The SAC notes that Dennis Crabtree's recent compilation of papers per telescope shows that Magellan is last among the 6-10 m telescopes. By instrument, MIKE, IMACS, and LDSS account for the largest share (in this order).
Site manager's report (David Osip)

Osip notes that while the number of papers for Magellan is not large, their impact (# of citations per paper) is actually higher than for other facilities. Throughput on Baade improved by 13-37% (z to g band) after washing and aluminizing in Jan 2009. CO2 mirror cleaning takes place on a regular basis. On Clay, there was a 19-23% (z to g band) improvement. Following cleaning, engineering time is required for telescope collimation. In case this doesn't fit within the scheduled engineering time, the first observers may lose part of the night. This should be communicated to the observers. The collimation is the same for all ports on Clay, but not on Baade.

Pointing has improved to ~2" rms (compared to 5" previously). This translates to time savings in image acquisition. Actuator #35 problems are continuing on Clay. Despite extensive testing and part replacement the problem has not been isolated. The new guiders are still being worked out (there are some bugs and design flaws). Some unresolved issues with the Baade NASE probe #2 and Baade AUX2 probe #2. They still function properly for SH corrections. Laird asks that AO people be kept in the loop about guider issues.

Seismic accelerometers have been installed in equipment rooms. Have been triggered by humans several times. Laser mask cutting is working optimally. Mask costs do not include cutter amortization. A dust monitor was installed to allow for real-time monitoring and dome closure decisions. Software work continues on mirror control, guiders, TCS. F/5 October engineering fit tests were satisfactory. April 2009 will be first light for the F/5 secondary and the wide field corrector. A new latch on IMACS results in a more rigid placement of the IFU, masks, GISMO. A new IDL quicklook tool and pipeline reduction scripts were installed for MagE. The new blue-side CCD on MIKE works properly. New filters and dark slides were installed for PANIC.
A new problem reporting software system, JIRA, was installed. It allows reporting and tracking of problems, as well as tracking of work efficiency. Staffing is matched to the current 6-port plan. The Magellan fellows are essential for telescope operations. It was suggested that some specialized tasks be contracted out rather than handled by full-time personnel. Patricia Villar no longer works at LCO, and has been replaced by Pamela Rojas. Francisco Figueroa is now in charge of mountain operations (rooms, transport). Both Pamela and Francisco are fluent in English and work Mon-Fri.

MagIC (Paul Schechter)

The E2V chip is configured for fast frame-transfer operation, for high cadence observations and precise timing, especially for planet transits and solar system occultations. The SITe chip is currently used primarily for planet transit observations. New software has been installed (LOIS runs the CCD; LOUI is the user interface software and allows frame stare mode in which no time is lost to readout). LOUI is not ready for general use due to its complex user interface and bugs. E2V is considered a PI system. Users should contact Paul Schechter and will need an expert user on-site from MIT. SITe may be commissioned as a facility instrument. However, it is unlikely that E2V will be commissioned as a facility instrument before MagIC is permanently consigned to PI status with the arrival of FIRE.

AO (Laird Close)

The AO system will deliver an F/16.16 diffraction-limited beam to optical and mid-IR cameras. The optical camera will have a fast-readout E2V detector that can provide visible AO. The adaptive secondary mirror (ASM) will use the F/11 mount points, and will be mounted and dismounted with a jib crane. It has a triangular strut system to reduce the emissivity at 10 microns. The Nasmyth port assembly includes a peripheral wave-front sensor (PWFS), an optical AO CCD, and MIRAC4 (10 micron imaging spectrograph).
A dichroic will send <1.2 micron light to the visible CCD, and >1.2 micron to MIRAC. The visible AO and MIRAC can operate at the same time. The clearance of the instrument from the Nasmyth platform is only 3 inches.

Progress since the last 2008 SAC meeting includes two site visits to Magellan by the AO group, a site visit to Italy to work out contracts with Arcetri Observatory, Microgate, and ADS, a successful PDR held at Steward in Dec. 2008 (electrical, mechanical, software, and optical interfaces to telescope approved at PDR level), commissioning of the 10 micron spectrograph on MIRAC4 at MMT, Magellan Board approval of continued support for the project, and TSIP approval of Year 2 funding. The NSF MRI no-cost extension was approved; contracts for the PWFS, E2V CCDs and Scimeasure controllers, and the remaining ASM electronics are in the signature stage; designs for all the big aluminum structures are finalized; the design for the f/16.16 SH guide probe is 90% finished; and the design for the calibration optics is finished.

Pre-ship review planned for May 2011; shipping to Magellan in July 2011; first light August 2011; commissioning in Jan 2012. The ASM has to be continuously powered to avoid dust contamination. Observing is envisioned as campaign mode since it takes ~1 day to replace the secondary mirror.

Szentgyorgyi: Can you add a NIR camera to the Nasmyth assembly?
Close: There's not enough space to replace the visible AO, but MIRAC can be replaced with a NIR imager with a 30" FOV; the ASM can give a 5' FOV.
Mateo: What is the expected competition in 2011?
Close: GPI @ Gemini and SPHERE @ VLT are expected to provide >90% Strehl in the NIR, but they do not provide optical AO. Keck is considering optical AO in the long-range plan. Palomar will likely have a system online before 2011.

M2FS (Mario Mateo)

An MRI instrument development proposal was submitted to the NSF. Total projected cost is $2.02M (70% NSF, 30% cost sharing).
M2FS is optimized for fibers rather than piggy-backing on MIKE capabilities. It will be simpler to operate than the current MMFS. The key science goal is synergy with SkyMapper for dwarf galaxies, star formation, etc. The design has an E2V 4kx4k CCD, optimized for 390-900 nm, with a range of high resolution modes (~20000, using echelle grating + prism) but also a low-resolution mode (~1000, using a standard grating). The two systems sit on a sliding table and can be switched rapidly. The cross-dispersion will be optimized for fibers rather than the current system, which is optimized for the MIKE 5" slit. At high resolution one can observe 48 targets x 3 orders, or 3-4 targets with full order coverage. The pair of spectrographs allows both to be used in high-resolution or a combination of low- and high-resolution. The overall design is similar to PFS, but is about twice as large. Funding (if approved) will start in Sep 2009. Commissioning is envisioned for Dec 2011 (an aggressive schedule). MRI allows a no-cost extension available until 2013, providing a schedule cushion.

Berger: What is the limit from fiber collisions? Throughput?
Mateo: 14" separation; 20-30%, which is better than MMFS.
Schechter: Is Mario going to supply personnel for commissioning since this is a PI instrument? There need to be guidelines in place for support of PI instruments.
Mateo: We will provide an expert observer from the team for every observing run.

PISCO (Tony Stark)

PISCO is on track for completion in Oct 2009. The new dichroic cubes are now complete to spec. All glass blanks have been delivered; the RFQ for lens grinding is in progress. Electronics and CCDs are finished and tested. Dewar and instrument mechanics are in progress (some parts are already made). Successful data reduction pipeline testing on LDSS3 data. Due to the passband of the dichroics, the net filter transmission is somewhat different from SDSS griz. There are still issues with instrument sagging.
The SPT is going well with ~50 clusters found so far. Expectation is ~1 cluster per deg^2.

PFS (Steve Shectman)

PFS will deliver R~120,000 spectra (0.2" slit; 3.7" length). Test spectra look very good. The thermal control system has not been tested yet. Slit rotation was miscalculated, resulting in placement of the grating on the wrong side (it will have to be moved). Optical alignment is still on-going. The spectral range is fixed at 3900-6200A.

Osip: Will there be support for external observers?
Shectman: Only in collaboration with the instrument team.
Phillips: How often do you need PFS on the telescope?
Shectman: More than once per lunation (likely ~twice). There are overheads like ion pump cooling which may take ~1 day.
Szentgyorgyi: Does the instrument need to be thermally controlled all the time?
Shectman: Ideally yes, but I cannot commit to this answer right now.
Close: Is PFS going to be a heat source if it is cooled all the time?
Shectman: There are some issues with heat leakage from hose connectors, but this is being worked on. The instrument should be stored on the dome floor.

FourStar (Eric Persson, presented by Alan Uomoto)

FourStar is a 4k x 4k JHK band imager. Internals are 95% complete and camera/window optics are mounted in cells. The optics are ready to install. 12 of 16 temperature sensors are installed. About half the controller system is installed. All focal plane mechanisms have been installed and tested warm. There is room for 3 additional filters beyond JHK; it is not clear if other filters have been requested by the community (e.g. narrow band filters). Electronics racks are complete; the process controller is yet to be finished. Data computers will use Blu-ray to record engineering data and USB disks to record science data. Control software is 95% done. Data acquisition and pipeline are in the "Dan Kelson is working on it" mode.
To be done: First cooldown in late March 2009; a new postdoc arrives in June 2009; several cold alignment cooldowns (each takes ~1 month); pre-ship review in Dec 2009; ship to LCO in 2010A.

Berger: How much data is generated per night? Is it backed up on the mountain?
Osip: ~10 Gb per night; backup on the mountain for ~1 month.
Berger: Do we need a data backup system on the mountain for the new large format instruments?

A serious issue is that 3 of the 4 detectors miss the read noise specs (30 electrons vs. 20 electrons). The team is currently working with Teledyne to resolve this issue.

Berger: Would the instrument be shipped with the noisy detectors?
Uomoto: Ideally no – we'll know more about this issue soon.
Szentgyorgyi: Should the SAC vote on the detector issue?
Close: This is a specification for the instrument so it has to be met; 20 electrons is not even aggressive.
Mateo: This will impact the pre-ship review.
Close: I suspect that this noise level will kill any narrow-band applications.

F/5 (Andrew Szentgyorgyi)

F/5 secondary end-to-end tests will take place April 5-13, 2009. Megacam ships to LCO after May 24, 2009 and will be commissioned Sep/Oct 2009 (August 2009 is no longer viable). Science operations will start in 2010A (no re-commissioning in Dec 2009). Pre-ship review scheduled for April 28, 2009. MMIRS is optically ready; it ships to MMT on April 22, 2009 with a 7-night run in May and 9 nights in June. September run at MMT and then ships to Magellan in Dec/Jan 2010 for commissioning. A 2-month run will take place in 2010A. We are in the process of hiring a ½ FTE electrical engineer, shared with Andres Jordan for HAT South support. Will also contribute to LDSS3 support. Cadence of F/5 campaigns is still TBD. It will depend on the difficulty of shipping between MMT and Magellan, demand at each observatory, and instrument crowding at each observatory. Andy thinks F/5 instruments are likely to stay at Magellan for 1 year.
Mateo: Will you be using MMT or Magellan WFS? Is the wave-front sensing continuous?
Szentgyorgyi: Not continuous – do it once at the beginning of the night and then use guiders to maintain focus.
Shectman: The system can potentially correct for coma and astigmatism.
Mateo: Why are you planning for a 2-month run in 2010A? Shouldn't this be decided by demand?
Szentgyorgyi: I expect that demand will be heavy and justify 2 months.
Thompson: This information should be disseminated to the user community ASAP.
Mateo: How do you choose which 2 months? Would not like to lose Feb/March.
Thompson: The best approach may be to wait until proposals are in before scheduling the 2-month block.
Schechter: The consortium should ask for "letters of intent".
Mateo: Is LCO ready for making MMIRS masks?
Phillips/Uomoto: We are working on it. Dave Osip will discuss requirements with Brian McLeod.

FIRE (Rob Simcoe)

FIRE will be capable of doing 0.8-2.5 micron spectroscopy with R=6000 (0.6" slit; 7" length) or R~1000 (30" length limited by pre-slit optics). IR slit viewer with fixed J+H filter; can be used for imaging with a 50" FOV. FIRE will be installed on Baade FP2. FIRE is targeted for completion in 2009B. The project is moving fast; there is considerable pressure from VLT/X-shooter and key science targets in the spring. The goal is rapid commissioning, following the example of MagE. All optics are in fabrication (<10 weeks to delivery) or delivered. Major mechanical components are either delivered or in final fabrication. Opto-mechanics are in fabrication (<8 weeks) or delivered. The science detector has QE ~75-90% in YJHK. It has 5 electron readout noise in Fowler 16 mode (echelle); ~1 hour before dark current starts dominating (0.008 electrons/sec). Software is fully functional; the IDL data reduction package, based on MagE, is designed to extract ABBA and non-ABBA data. The website is up to date with instrument information.
Coming up: April 1 cooldown tests to certify cryostats. Delivery of all remaining components by June 1. 3-5 months of optical alignment in Fall 2009. IDL pipeline released immediately. Pre-ship in Fall 2009. Some assembly work will be required at LCO. The cooling hoses will be shipped before the instrument.

Mateo: The assembly of the camera at the vendor is unusual for Magellan instruments. Has it been cost effective?
Simcoe: Yes – we did not have in-house expertise.
Szentgyorgyi: Using experts was the right approach.
Szentgyorgyi: 10 weeks for the camera assembly sounds very optimistic.
Simcoe: That's the schedule from the vendors, and the optics are relatively small so this seems like a reasonable schedule. The consensus is that Coastal Optical is a reliable vendor and 10 weeks should be okay.
Thompson: Why are you using so many different vendors?
Simcoe: Price, spreading the load; a lot of places are specialized so we had to go with several contractors.
Laird: Are you using the same detectors as FourStar?
Alan: No. This is a later batch.
Rob: This is the same batch as the FourStar replacement detector, which is the one that's up to spec.
Laird: What would you recommend for FourStar?
Rob: Replace the 3 noisy detectors with a batch from after summer 2008.
Mario: What is the planned instrument status for 2010A? Need to communicate this to potential observers.
Paul: In commissioning. Rob may be a "service observer".
Rob: Unlikely that I will be able to sign off completely in 2010A, but the instrument should be functional.
Paul/Rob: Should be used as a PI instrument in 2010A.
Edo: Will FIRE be straightforward to use for target-of-opportunity?
Rob: The instrument is somewhat more complicated to use than MagE and may take longer to set up, but it will be kept cold and ready to go.

MagE Commissioning (Ian Thompson)

Mountain support level is ~2 hours per MagE run (checking noise, etc).
The manual needs to be completed, particularly with information about flatfield requirements. The corrector is uncoated and has a crack. A replacement is in hand and will be installed in Fall 2009. Preference is not to handle CaF optics in cold weather (Szentgyorgyi: this is a myth; CaF is robust and can be handled in bad weather; not as fragile as advertised). New coating will add ~2% in throughput.

Szentgyorgyi: Is MagE essentially used in commissioned mode?
Thompson: Yes.
Szentgyorgyi: What's the probability that the Council will accept MagE as a commissioned instrument?
Schechter: 3/9.

Instrument Arrival Schedule (Alan Uomoto)

See schedule below (Megacam delayed to Oct 2009). This is overall a somewhat optimistic schedule.

Szentgyorgyi: Is FIRE being shipped by airplane?
Uomoto: Yes.
Mateo: Is the need to travel to pre-ship reviews too stressful on the mountain staff? According to Magellan regulations they have to be present. This is too much overhead.
Schechter: There should be an objective person in each pre-ship review. An outsider.
Phillips: LCO staff involved include Osip, Perez. I cannot attend 4 pre-ship reviews in a few months.
Szentgyorgyi: Why not do this by telecon?
Schechter: Some pre-ship reviews are easier (e.g. MegaCam), but others would need to be in-person.
Szentgyorgyi: Should we have a "Pre-Ship-Lite" review process for PI instruments?
Uomoto/Phillips: No! There should be a uniform process across the board.
Schechter: Should the pre-ship review include a demonstration of data-taking?
Szentgyorgyi: No.
Thompson: We've never done this before.
Thompson: We need to flesh out what the pre-ship review actually consists of. For example, for FourStar data acquisition is an issue so perhaps it should be reviewed. Complex instruments should require a more thorough review.
Mateo: We need to create a well-defined checklist.
Phillips: We need to see the functionality of the instruments.
The instrument should be set up, demonstrated to work, take an exposure, move the filters, gratings, etc.
Szentgyorgyi: We need to trust the instrument teams to be honest – no need for people to travel to a pre-ship review just to see an exposure being taken.
Schechter: At least one person should participate in a site visit.
Mario: Does the observatory have a veto power if instrument commissioning becomes too time consuming?
Phillips: We cannot commission more than 2 instruments in a semester. 2010A also has mirror aluminizing scheduled.
Mateo: Should the SAC make recommendations about instrument "collisions"?
Schechter: Yes, if it is a science issue. Otherwise LCO staff should decide.
Thompson: If FIRE ships on time it will collide with MagIC.
Schechter: MagIC is a facility instrument and therefore takes preference over FIRE when it is in commissioning.
Berger: MagIC will be shifting to PI mode and E2V is still not commissioned, so it is unclear that it should have preference over FIRE.
Berger: The problem is that PI status is not well-defined. What is the designation as an instrument migrates from facility to PI (e.g. LDSS3, MagIC)?
Schechter: Proposing that MagIC will become a PI instrument when FIRE becomes a facility instrument.
Szentgyorgyi: What is Carnegie's long-term interest in LDSS3?
Phillips/Thompson: We would like to finish on-going programs. It will be easiest if LDSS3 is removed in Dec 2009.
Schechter: This issue was discussed at the Council and Wendy stated that LDSS3 will be supported as long as it does not interfere with new instruments.
https://www.techylib.com/en/view/unkindnesskind/magellan_sac_meeting_21-22_march_2009_minutes
Download Project (Console) v1.0 - 25 KB
Download Project (MFC) v1.0 - 144 KB

Most software systems now have to support parallelism/concurrency, for speed, throughput, efficiency, and robustness. To that end, a running process has to have multiple threads - some are spawned on demand, some are waiting for a message before doing work, some are just waiting for other threads/processes (or other kernel objects), and some are simply "on the bench", part of the process's thread pool. With advancements in multiprocessor and multicore technologies, the software programmer's challenge has grown manifold. Programmers need to design systems that efficiently support a multithreaded environment, and their software must be able to exploit the latest multiprocessing advancements. While there are many libraries available for concurrency/parallelism in multiple development environments, I will only cover the Visual C++ Concurrency Runtime. Conceptually, concurrency and parallelism are two different terms; in this article I will use them interchangeably. I assume the reader has fair knowledge about:

The programs given here will only compile with the Visual C++ 2010 compiler.

Let's move straight to the first parallel C++ program that uses the Concurrency Runtime (from now on, CR). The following code calculates the sum of all numbers in the range 1 to 100,000:

int nSum = 0;
for (int nNumber = 1; nNumber <= 100000; ++nNumber)
    nSum += nNumber;

The parallel form of this code can be:

int nSum = 0;
parallel_for(1, 100001, [&](int n)
{
    nSum += n;
});

A few points about this code: parallel_for lives in the Concurrency namespace and is declared in <ppl.h>; its last argument is a lambda that receives the loop index and captures nSum by reference. The complete code would be:

#include <iostream>
#include <ppl.h>

int main()
{
    using namespace Concurrency;

    int nSum = 0;
    parallel_for(1, 100001, [&](int n)
    {
        nSum += n;
    });
    std::wcout << "Sum: " << nSum;
}

What about the library and DLL? Well, most of the CR code is in templates and gets compiled whenever a template class/function is instantiated.
The rest of the code is in MSVCR100.DLL, which is the standard VC++ runtime DLL for VC10; thus you need not link with any library to use CR.

Operating System Requirements

The minimum operating system required for the Concurrency Runtime is Windows XP SP3. It will run on both 32-bit and 64-bit versions of Windows XP SP3, Windows Vista SP2, Windows 7, Windows Server 2003 SP2, Windows Server 2008 SP2, and higher versions. On Windows 7 and Windows Server 2008 R2, CR uses the latest OS advancements for scheduling; this is discussed later in this article. For distribution of the Visual C++ 2010 runtime, you may download the redistributable (32-bit/64-bit) from the Microsoft Download Center and distribute it to your clients.

Does parallel_for spawn threads?

The simple answer would be yes. But in general, it doesn't create threads by itself; it utilizes CR's facilities for parallelizing the task. The CR scheduler is involved in the creation and management of threads. We will look into the Runtime scheduler later. For now, all I can say is: parallel algorithms, tasks, containers, agents, etc. finish the task concurrently - which may need only one thread, a number of threads equal to the logical-processor count, or even more than the logical-processor count. The decision is made by the scheduler and by the concurrent routine you are calling. On my quad-core CPU, with 4 logical processors, before the code enters the parallel_for function I see only 1 thread for the process. As soon as the code enters parallel_for, I see the number of threads rise to seven! But it utilizes only 4 threads for the current task, as we can verify using the GetCurrentThreadId function. The other threads created by CR are for scheduling purposes. As you see more of CR's features later, you will not complain about the extra threads.

Well, the sum may be calculated incorrectly! Absolutely!
Since more than one thread is modifying the same nSum variable, the accumulation is expected to be wrong (if you aren't sure why, recall that nSum is captured by reference). For example, if I calculate the sum between 1 and 100, the correct sum would be 5050, but the following code produces different results on each run:

```cpp
parallel_for(1, 101, [&](int n)
{
    nSum += n;
});
// nSum may be 4968, 4839, 5050, 4216 or any random value
```

The simple solution is to use InterlockedExchangeAdd:

```cpp
LONG nSum = 0; // not int, Interlocked requires LONG
parallel_for(1, 101, [&](int n)
{
    InterlockedExchangeAdd(&nSum, n);
});
```

Obviously, this defeats the whole purpose of putting the accumulation into concurrent execution. CR provides an efficient solution for this, but I will discuss it later.

With this start-up program I have briefly introduced CR. Let me now discuss the Concurrency Runtime in a more structured manner. The Concurrency Runtime classifies a set of core components, along with a set of related concepts. Do not get confused over the terms; I will explicate them!

The following diagram depicts how the Runtime and its different components fit in between the operating system and the applications. The upper layer is in purple, which includes the important and most-used programming elements of the Concurrency Runtime - the Agents Library and the Parallel Patterns Library; both are detailed in this article. The orange components can be classified as the lower-layer elements of the Runtime; only the Task Scheduler is explained in this article. The purple-to-orange shaded component forms the Synchronization Data Structures of the Concurrency Runtime, and as you can see, it plays a vital role in both the upper and lower layers of CR. Synchronization primitives are also elaborated in this article.

The parallel_for function I discussed above falls into this category (the Parallel Patterns Library).
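For readers who want to experiment outside Windows, the same race and its fix can be sketched in standard C++, with std::atomic playing the role of InterlockedExchangeAdd. This is my own portable sketch, not CR code - the AtomicSum name, the default thread count, and the work split are all assumptions made for illustration:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Sums 1..last across nThreads threads. Because fetch_add is atomic,
// the result is correct despite concurrent updates - the portable
// equivalent of the InterlockedExchangeAdd fix shown above.
long AtomicSum(int last, int nThreads = 4)
{
    std::atomic<long> nSum{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < nThreads; ++t)
    {
        workers.emplace_back([&nSum, t, last, nThreads]
        {
            // Each thread takes every nThreads-th number.
            for (int n = t + 1; n <= last; n += nThreads)
                nSum.fetch_add(n, std::memory_order_relaxed);
        });
    }
    for (auto& w : workers) w.join();
    return nSum.load();
}
```

Replacing fetch_add with a plain `nSum = nSum + n` on a non-atomic long reproduces the random results described above.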
The PPL has the following classifications:

- Parallel algorithms: parallel_for, parallel_for_each, parallel_invoke
- Parallel containers: concurrent_vector, concurrent_queue
- The combinable class

The three algorithms perform the given work in parallel, and wait for all tasks executing in parallel to finish. It is important to note that CR may schedule any task in any sequence, and thus there is no guarantee which task (or part of it) will execute before or after another task. Also, one or more tasks may execute in the same call-stack, in the same thread (i.e. 2 or more tasks in the same thread, which may or may not be the caller's call-stack/thread), or may execute inline. Each algorithm may take a function pointer, function object or a lambda expression (termed a 'function' in this article). The return/argument types depend on the algorithm and how it is called, as explained below.

parallel_for

This is very similar to the for construct. It executes the specified 'function' in parallel. It is not an absolute replacement for the for-loop. The simplified syntax of this function is:

```cpp
void parallel_for(IndexType first, IndexType last, Function func);
void parallel_for(IndexType first, IndexType last, IndexType step, Function func);
```

Since this is a template function (not shown in the syntax above), IndexType can be any integral data type. The first version of parallel_for starts from first, increments by 1, and loops until last - 1, calling the specified function at each iteration, in parallel. The current index is passed to the specified function as an argument.

The second overload behaves in a similar fashion, taking one extra argument: step, the value by which each iteration should be incremented. This must be a non-zero positive value, otherwise CR throws an invalid_argument exception.

The degree of parallelism is determined by the runtime. The following example counts the number of prime numbers in a given range. For now, I am using the InterlockedIncrement API; the combinable class provides a better solution, which I will discuss later.
```cpp
LONG nPrimeCount = 0;
parallel_for(2, 50001, [&](int n)
{
    bool bIsPrime = true;
    for(int i = 2; i <= (int)sqrt((float)n); ++i)
    {
        if(n % i == 0) { bIsPrime = false; break; }
    }
    if(bIsPrime)
        InterlockedIncrement(&nPrimeCount);
});
wcout << "Total prime numbers: " << nPrimeCount << endl;
```

Let's ignore 1 and 2, and write a more efficient routine to count this, passing 2 as the step value:

```cpp
LONG nPrimeCount = 0;
parallel_for(3, 50001, 2, [&](int n) // Step by 2
{
    bool bIsPrime = true;
    for(int i = 3; i <= (int)sqrt((float)n); i += 2) // Start with 3, increment by 2
    {
        if(n % i == 0) { bIsPrime = false; break; }
    }
    if(bIsPrime)
        InterlockedIncrement(&nPrimeCount);
});
```

parallel_for_each

This function is semantically equivalent, and syntactically similar, to STL's for_each function. It iterates through a collection in parallel, and as with parallel_for, the order of execution is unspecified. This algorithm, by itself, is not thread-safe. That is, any change to the collection is not thread-safe; only reading the function argument is safe.

Simplified syntax:

```cpp
parallel_for_each(Iterator first, Iterator last, Function func);
```

The function must take an argument of the container's underlying type. For example, if an integer vector is iterated over, the argument type would be int. For the caller, this function has only one signature, but internally it has two versions: one for random-access iterators (like vector, array or a native array) and one for forward iterators. It performs best with random-access iterators. (With the help of iterator traits, it finds which overload to call, at compile time.) The following code counts the number of even and odd numbers in an array:

```cpp
const int ArraySize = 1000;
int Array[ArraySize];

// Generate random array
std::generate(Array, Array+ArraySize, rand);

long nEvenCount = 0, nOddCount = 0;
parallel_for_each(Array, Array+ArraySize,
    [&nEvenCount, &nOddCount](int nNumber)
{
    if (nNumber % 2 == 0)
        InterlockedIncrement(&nEvenCount);
    else
        InterlockedIncrement(&nOddCount);
});
```

As you can see, it is mostly the same as parallel_for. The only difference is that the argument to the function (lambda) comes from the container (the integer array).
Reiterating: the supplied function may be called in any sequence, so the received nNumber may not arrive in the same sequence as in the original container. But all items (the 1000 array elements in this case) will be iterated by this parallel routine.

We can also call this routine with STL's vector. I am ignoring the vector initialization here (assume there are elements in it):

```cpp
vector<int> IntVector;
// copy(Array, Array+ArraySize, back_inserter(IntVector));

nEvenCount = 0, nOddCount = 0;
parallel_for_each(IntVector.begin(), IntVector.end(),
    [&nEvenCount, &nOddCount](int n) { ... });
```

Similarly, non-random-access containers may also be used:

```cpp
list<int> IntList;
parallel_for_each(IntList.begin(), IntList.end(), [&](int n) { ... });

map<int, double> IntDoubleMap;
parallel_for_each(IntDoubleMap.begin(), IntDoubleMap.end(),
    [&](const std::pair<int, double>& element)
{
    // Use 'element'
});
```

It is important to note that, for parallel_for_each, random-access containers work more efficiently than non-random-access containers.

parallel_invoke

This function executes a set of tasks in parallel. I will discuss later what a task is, in the CR realm. In simple terms, a task is a function, function object or a lambda. parallel_invoke calls the supplied functions in parallel, and waits until all of the given functions (tasks) are finished. This function is overloaded to take 2 to 10 functions as its tasks. Like the other parallel algorithms, there is no guarantee in what sequence the set of functions will be called. Neither does it guarantee how many threads will be put to work to complete the given tasks.
Simplified signature (for the 2-function overload):

```cpp
void parallel_invoke(Function1 _Func1, Function2 _Func2);
// template <typename _Function1, typename _Function2>
// void parallel_invoke(const _Function1& _Func1, const _Function2& _Func2);
```

Using parallel_invoke is similar to creating a set of threads and calling WaitForMultipleObjects on them, but it simplifies the task, as you need not care about thread creation, termination, etc. The following example calls two functions in parallel, which compute the sums of the evens and odds in the given range:

```cpp
void AccumulateEvens()
{
    long nSum = 0;
    for (int n = 2; n < 50000; n += 2) { nSum += n; }
    wcout << "Sum of evens: " << nSum << std::endl;
}

void AccumulateOdds()
{
    long nSum = 0;
    for (int n = 1; n < 50000; n += 2) { nSum += n; }
    wcout << "Sum of odds: " << nSum << std::endl;
}

int main()
{
    parallel_invoke(&AccumulateEvens, &AccumulateOdds);
    return 0;
}
```

In the example above, I have passed addresses of functions. The same can be done more elegantly with lambdas:

```cpp
parallel_invoke([]
{
    long nSum = 0;
    for (int n = 2; n < 50000; n += 2) { nSum += n; }
    wcout << "Sum of evens: " << nSum << std::endl;
},
[]
{
    long nSum = 0;
    for (int n = 1; n < 50000; n += 2) { nSum += n; }
    wcout << "Sum of odds: " << nSum << std::endl;
});
```

The choice of lambda and/or function pointer/function object depends on the programmer, the programming style and, most importantly, on the task(s) being performed in parallel. Of course, you can use both of them:

```cpp
parallel_invoke([]
{
    long nSum = 0;
    for (int n = 2; n < 50000; n += 2) { nSum += n; }
    wcout << "Sum of evens: " << nSum << std::endl;
},
&AccumulateOdds);
```

As mentioned before, parallel_invoke takes 2 to 10 parameters as its tasks - any given task may be a function pointer, lambda or a function object. While I will give more examples later, when I cover more topics on CR, let me draw up one more example for clarity's sake.
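To make the "threads plus WaitForMultipleObjects" comparison concrete, here is a portable sketch of what a 2-function parallel_invoke boils down to, using std::async from standard C++. InvokeBoth, SumEvens and SumOdds are my own names, not CR or standard functions:

```cpp
#include <future>

// Run two callables concurrently and wait for both - the essence of
// parallel_invoke(f1, f2). f2 runs inline on the caller's thread,
// mirroring the article's note that CR may execute a task inline.
template <typename F1, typename F2>
void InvokeBoth(F1 f1, F2 f2)
{
    auto fut = std::async(std::launch::async, f1); // f1 on another thread
    f2();                                          // f2 inline
    fut.wait();                                    // block until both finish
}

long SumEvens(int last)
{
    long s = 0;
    for (int n = 2; n < last; n += 2) s += n;
    return s;
}

long SumOdds(int last)
{
    long s = 0;
    for (int n = 1; n < last; n += 2) s += n;
    return s;
}
```

Unlike raw thread handles, the future's destructor and wait() handle the join for you, which is the convenience parallel_invoke provides on a larger scale.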
Let's calculate the factorial of a few numbers in parallel, using this function:

```cpp
long Factorial(int nFactOf)
{
    long nFact = 1;
    for(int n = 1; n <= nFactOf; ++n)
        nFact *= n;
    return nFact;
}
```

When you call parallel_invoke:

```cpp
parallel_invoke(&Factorial, &Factorial);
```

you will be bombarded by compiler errors. Why? Well, parallel_invoke requires tasks to take no arguments and (nominally) return void. Though the return type can be non-void (contrary to the documentation), the argument count must be zero! We need to (and can) use lambdas:

```cpp
long f1, f2, f3;
parallel_invoke(
    [&f1] { f1 = Factorial(12); },
    [&f2] { f2 = Factorial(18); },
    [&f3] { f3 = Factorial(24); });
```

Note that each lambda is of type void(void), and captures the required variable by reference. Please read about lambdas if this is unfamiliar.

The Concurrency Runtime provides concurrent container classes for vector and queue, named concurrent_vector and concurrent_queue, respectively. Another class in the parallel-classes arena is combinable. The concurrent container classes provide thread-safe access and modification of their elements, except for a few operations.

concurrent_vector

The concurrent_vector class is similar to the std::vector class. It allows random access to its elements, but unlike vector, the elements are not stored in contiguous memory. Access to elements, along with insertion (push_back), is thread-safe. Iterator access (normal, const, reverse, reverse-const) is also thread-safe. Since the elements are not stored contiguously, you cannot use pointer arithmetic to directly access another element from some element (i.e. (&v[2] + 4) would be invalid). As acknowledged by Microsoft (in the concurrent_vector.h header), this class is based on the original implementation by Intel for TBB's concurrent_vector.
For the examples, I could have used standard thread-creation functions, but I prefer to use a PPL construct to illustrate concurrent_vector:

```cpp
concurrent_vector<int> IntCV;

parallel_invoke([&IntCV] // Logical thread 1
{
    for(int n = 0; n < 10000; ++n)
        IntCV.push_back(n);
},
[&IntCV] // Logical thread 2
{
    for(int n = 0; n < 1000; n++)
        IntCV.push_back(n * 2);

    // Get the size, thread-safe. Would give different results on different runs,
    // as another task (thread) might be inserting more items.
    wcout << "Vector size: " << IntCV.size() << endl;
});
```

Here, two threads are in action, both of them inserting numbers into the concurrent_vector. The push_back method is thread-safe, thus no synchronization primitive is required for the vector modification. The size method is also thread-safe. At this point, if you couldn't understand any part of the above code, I would urge you to read up on STL's vector and C++ lambdas, and/or revisit parallel_invoke!

concurrent_queue

Yes, you got it right. It is the concurrent version of STL's queue class. Like concurrent_vector, the important operations of concurrent_queue are thread-safe, except a few. The important operations include enqueue (insert), dequeue (pop) and the empty method. For inserting elements, we use the push method. For the dequeue operation, we use the try_pop method. It is important to note that concurrent_queue does not provide concurrency-safe iterator support!

(Aug-09: Allow me some time to put together an example for concurrent_queue, though it would be almost the same as concurrent_vector. For both of these concurrent classes, I will also list the methods that are and aren't concurrency-safe, and compare them with the equivalent STL containers. Till then, enjoy reading the new stuff below!)

combinable

TLS, or thread-local storage, would conceptually be the nearest competitor of this class. The combinable class facilitates each running task or thread having its own local copy for modification.
When all tasks are finished, those thread-specific local copies can be combined. For example, in the parallel_for example above, we calculated the sum of all numbers in the range 1 to 100 - for thread safety we had to use InterlockedExchangeAdd. But we know that all threads would be contending for the same variable to update - thus defeating the positive purpose of parallelism. With a combinable object, we let each thread have its own copy of the variable to update. Let's see this in action:

```cpp
combinable<int> Count;
int nSum = 0;

parallel_for(1, 101, [&](int n)
{
    int& ref_local = Count.local();
    ref_local += n;
    // InterlockedExchangeAdd(&nSum, n);
});

nSum = Count.combine(AddThem);
```

The combinable<int>::local method returns a reference (int&) to the calling thread's local copy. There is no contention for the reference variable, as each thread gets its own copy; the runtime does it for you. Finally, we call the combinable::combine method to combine the parts. The combine method iterates through all thread-specific variables, calling the specified function, and then returns the final result. I have used a user-defined function, AddThem, which is this simple:

```cpp
int AddThem(int n1, int n2)
{
    return n1 + n2;
}
```

combine accepts any function, functor or lambda having the signature T f(T, T), where T is the template argument type. The arguments can, of course, be const. Thus, the combine call can also be:

```cpp
nSum = Count.combine([](int n1, int n2) // Automatic return type deduction
{
    return n1 + n2;
});
```

And this also:

```cpp
plus<int> add_them;
nSum = Count.combine(add_them);
```

To keep things simple, I declared a local variable of type plus<int> and passed it to the combine method. For those who don't know, plus is an STL class - a binary_function with the () operator overloaded, which just calls operator + on its arguments. Thus, we can also use plus directly, instead of creating a named object.
The following also illustrates how other operations can easily be performed by combine (though not all of them make sense):

```cpp
combinable<int> Sum, Product, Division, Minus;

parallel_for(1, 11, [&](int n)
{
    Sum.local()      += n;
    Product.local()  *= n;
    Division.local() /= n;
    Minus.local()    -= n;
});

wcout << "\nSum: "          << Sum.combine(plus<int>());
wcout << "\nProduct: "      << Product.combine(multiplies<int>());
wcout << "\nDivision (?): " << Division.combine(divides<int>());
wcout << "\nMinus result: " << Minus.combine(minus<int>());
```

I have directly used combinable::local to modify the thread-local variable, then used different function-object classes to produce the final results.

Now... something goes weird, and interesting too. It's not in CR's parlance, but I discovered it, so I must share it! The results of the multiplication and division would be zero. Don't ask why. Here is a quick, non-efficient solution:

```cpp
int& prod = Product.local();
if(prod == 0)
    prod = 1;
else
    prod *= n;
```

Similarly for the division operation. The reason: when CR allocates a thread-specific variable for the first time, it calls its default constructor (or the one provided in combinable's constructor) - and default construction of an int here sets it to zero! The code change made above is not the optimal solution. We need to pass a constructor, and one approach is:

```cpp
combinable<int> Product([] { return 1; }); // Lambda as int's "constructor"!
```

When CR needs to allocate a new thread-local variable, it calls this so-called constructor, which returns 1. The documentation of combinable calls it an initializer. Of course, this initializer can be a regular function or a functor, returning the same type and taking zero arguments (i.e. T f()). Remember that the initializer is called only when a new thread-specific local variable has to be allocated by the runtime.
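What combinable buys you can be sketched in portable C++: give each worker its own slot (no contention, no interlocked operations), then reduce the slots at the end. PartitionedSum is my own name, and the fixed vector of per-thread slots is a simplification - the real combinable also does the thread-local lookup and lazy initialization for you:

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Sums 1..last using one private accumulator per worker thread,
// then combines the partial results - the combinable pattern.
long PartitionedSum(int last, int nThreads = 4)
{
    std::vector<long> locals(nThreads, 0);   // one "local()" per worker
    std::vector<std::thread> workers;
    for (int t = 0; t < nThreads; ++t)
    {
        workers.emplace_back([&locals, t, last, nThreads]
        {
            for (int n = t + 1; n <= last; n += nThreads)
                locals[t] += n;              // no lock, no contention
        });
    }
    for (auto& w : workers) w.join();

    // The "combine(plus<long>())" step:
    return std::accumulate(locals.begin(), locals.end(), 0L);
}
```

Note that each slot starts at 0, which works for a sum; for a product you would initialize the slots to 1 - the same pitfall the article describes with combinable's default initializer.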
When you use combinable with user-defined classes, your default constructor would be called anyway; also, your combine operation would probably not be this trivial.

Tasks and task groups

As noted before, a task is, conceptually, nothing but a thread. The CR may execute a given task in the same thread as the task initiator, or may use only one thread to execute more than one of the given tasks. Thus, a task is not exactly the same as a thread. A task can be expressed as a functional division of work. For example, in previous examples we used parallel algorithms to functionally divide the summation of numbers in a range, or to find some elements in a container. We would mostly not make several tasks where one of them waits for user input, another responds to a client request via sockets, and a third reads a file as an overlapped operation. All these operations are dissimilar, and do not form a logical task group. Unless they form a pipeline for performing work in a logically connected manner, they cannot be classified as a task group. Reading from a file, then counting the number of lines/words, vowels, spelling mistakes, etc., can be classified as different tasks that together form a task group.

Before I elaborate on the different types of task groups, let me present an example:

```cpp
void DisplayEvens()
{
    for (int n = 0; n < 1000; n += 2)
        wcout << n << "\t";
}

void DisplayOdds()
{
    for (int n = 1; n < 1000; n += 2)
        wcout << n << "\t";
}

int main()
{
    task_group tg;
    tg.run(&DisplayEvens);
    tg.run(&DisplayOdds);
    tg.wait();
}
```

Albeit a bad example, I must exhibit it to keep things simple. The Concurrency::task_group class manages a set of tasks: it executes them and waits for them to finish. The task_group::run method schedules the given task and returns immediately. In the sample above, run has been called twice to run different tasks. Both of the tasks would, presumably, run in parallel in different threads. Finally, the task_group::wait call is made to wait for the tasks to finish.
wait blocks until all scheduled tasks are finished (or an exception or cancellation happens in any of the tasks - we will see that later). Another method, run_and_wait, may be called to combine the last two calls made. That is, it schedules the task to run, and then waits for all tasks to finish. Of course, a lambda may also be used to represent a task:

```cpp
tg.run([]
{
    for (int n = 0; n < 1000; n++)
    {
        wcout << n << "\t";
    }
});

// Schedule this task, and wait for all to complete
tg.run_and_wait(&DisplayOdds);
```

There is no need to immediately wait for tasks to finish; the program may continue doing something else and wait later (if required). Also, more tasks may be scheduled later, when needed. The template class task_handle can be used to represent a task, and can hold a function pointer, lambda or a functor. For example:

```cpp
task_handle<function<void(void)>> task = &DisplayOdds;
```

Although you may not need to use the task_handle class in your code, you can simplify the declaration using the function make_task:

```cpp
auto task = make_task(&DisplayOdds);
```

Before similar kinds of alien stuff go over your head, let me move ahead to things that, I believe, are more understandable!

There are two types of task groups: structured and unstructured. In PPL, a structured task group is represented by the structured_task_group class. For a small set of tasks, it performs better than an unstructured task group. The algorithm parallel_invoke uses structured_task_group internally to schedule the specified set of tasks. Unlike with an unstructured task group, you cannot wait on another thread for the completion of tasks; more comparison below.

The class named task_group represents an unstructured task group. The class is mostly similar for spawning tasks, waiting for them, etc. An unstructured task group allows you to create tasks in different thread(s) and wait for them in another thread.
In short, an unstructured task group gives you more flexibility than a structured task group, and it pays a performance penalty for the same. Remarkable differences between structured and unstructured task groups are:

- A structured task group schedules only tasks wrapped in task_handle objects; task_group can take a function, functor or lambda directly.
- Tasks scheduled on a structured task group from thread T1 must also be waited on from T1; with task_group you can schedule from T1 and wait from another thread, T2.
- Nested structured task groups must be destroyed in FILO order:

```cpp
structured_task_group tg1;
structured_task_group tg2;
// The DTOR of tg2 must run before the DTOR of tg1.
// In this case, it would obviously happen in FILO pattern.
// But when you allocate them dynamically,
// you need to ensure they get destroyed in FILO sequence.
```

```cpp
structured_task_group outer_tg;

auto task = make_task([]
{
    structured_task_group inner_tg;
    // Assume it schedules and waits for tasks.
});

// Schedule the outer task:
outer_tg.run_and_wait(task);
```

Agreed, STGs are not easy to use - so don't use them unless necessary. Use a UTG (task_group), or use parallel_invoke. For more than 10 tasks, you can nest parallel_invoke itself. As a final note, the operations of an STG are not thread-safe, except the operations that involve task cancellation. The method for requesting task-group cancellation is structured_task_group::cancel, which attempts to cancel the tasks running under the group. The method structured_task_group::is_canceling can be used to check whether the TG is being canceled. We will look into cancellation in PPL later.

The Asynchronous Agents Library

When you need different threads to communicate with each other, you can use the Asynchronous Agents Library (Agents Library) - more elegantly than with the standard synchronization mechanisms (locks and events), message-passing APIs with a hand-written message loop, or something similar. You pass a message to another thread/task using a message-passing function. The message-passing functions utilize agents to send and receive messages between different parts of a program. A few points before we begin: the relevant classes and functions are declared in <agents.h>, and the two components - message blocks and message-passing functions - are tightly coupled with each other, so it is not possible to elaborate on them one by one.
Message blocks are a set of classes used to store and retrieve messages. Message-passing functions facilitate passing messages to and from the message-block classes. First, an example:

```cpp
#include <agents.h>

int main()
{
    single_assignment<long> AssignSum;
    task_group tg;

    tg.run([&AssignSum]
    {
        long nSum = 0;
        for (int n = 1; n < 50000; n++)
        {
            nSum += n;
        }

        // Just for illustration: send is not the WinSock send!
        Concurrency::send(AssignSum, nSum);
    });

    wcout << "Waiting for single_assignment...\n";
    receive(AssignSum);
    wcout << "Received.\n" << AssignSum.value();
}
```

In a nutshell, the above program calculates a sum in a different thread, and waits for its completion in another (the main thread). It then displays the calculated value. I have used a UTG instead of an STG to keep this program short; for an STG, we would have to use a task_handle as a separate local variable.

single_assignment is one of the Agents Library's template classes used for dataflow. Since I am calculating a sum, I instantiated it for long. This class is a write-once messaging-block class. Multiple readers can read from it. Multiple writers can also write to it, but it will only obey the message of the first sender. When you learn about more of the dataflow messaging-block classes, you will understand it better.

Concurrency::send and Concurrency::receive are the Agents functions used for message transfer. They take a message block and the message, and send or receive the data. With the statement

```cpp
receive(AssignSum);
```

we are actually waiting for the message to arrive on the AssignSum message block. Yes, you sensed it right: this function blocks until the message arrives on the specified block. Then we use the single_assignment::value method to retrieve what was received. The return value is of long type, since we instantiated the class for long.
Alternatively, we can call the value method straight away, which would also wait and block if a message hadn't yet been received on this message block. The colleague function, Concurrency::send, writes the message to the specified block; this wakes up receive. For the single_assignment class, send fails if called more than once (from any thread). In Agents Library terms, the single_assignment declines the second message, and thus send returns false. As you read more below, you will understand more. (Though Concurrency::send and WinSock's send share a name, they have different signatures - it is safe to call them without namespace qualification. But to prevent possible bugs/errors, you should use fully qualified names.)

Where can or should you use this message-block type? Wherever a one-time information update is required from one or more threads, and one or more threads are willing to read the desired information.

Before I explore more of the Agents classes, a few points to impart:

- Source blocks implement the Concurrency::ISource interface; target blocks implement Concurrency::ITarget. Some blocks implement both.
- Besides send and receive, there are non-blocking counterparts: asend (asynchronous send) and try_receive.

Enough of theory! Let's have some more action.

overwrite_buffer

The overwrite_buffer template class is similar to the single_assignment class, with one difference - it accepts more than one message. Like single_assignment, it holds only one message: the last sent message is the latest one. If multiple threads send messages, the scheduling and timing of message reception determine what the latest message will be. If no message has been sent, the receive function or the value method blocks.
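The write-once, block-until-available semantics of single_assignment have a close portable cousin in standard C++: std::promise/std::future. This is a sketch under that analogy, not CR code - ComputeSumOnce is my own name, and unlike single_assignment, a second set_value on a promise throws rather than being silently declined:

```cpp
#include <future>
#include <thread>

// One thread computes a sum and "sends" it exactly once; the caller
// "receives" by blocking on the future - mirroring the article's
// single_assignment<long> example.
long ComputeSumOnce()
{
    std::promise<long> assign;                   // plays single_assignment<long>
    std::future<long> result = assign.get_future();

    std::thread worker([&assign]
    {
        long nSum = 0;
        for (int n = 1; n < 50000; ++n) nSum += n;
        assign.set_value(nSum);                  // the "send"
    });

    long v = result.get();                       // the blocking "receive"
    worker.join();
    return v;
}
```

A future's get() can be called only once, whereas single_assignment::value can be read repeatedly by many readers; std::shared_future would be the closer match for the multi-reader case.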
Example code:

```cpp
int main()
{
    overwrite_buffer<float> MaxValue;
    task_group tg;

    tg.run([&MaxValue]
    {
        vector<float> LongVector;
        LongVector.resize(50000);

        float nMax = 1;
        int nIterations = 1;

        // Generate incrementing numbers ( lambda modifies nMax )
        generate(LongVector.begin(), LongVector.end(),
                 [&nMax] { return (nMax++) * 3.14159f; });

        nMax = -1.0f; // Reset nMax to find the actual maximum number
        for(auto iter = LongVector.cbegin(); iter != LongVector.cend();
            ++iter, ++nIterations)
        {
            if( *iter > nMax)
                nMax = *iter;

            // Update the MaxValue overwrite_buffer on each 100th iteration,
            // and deliberately sleep for a while
            if(nIterations % 100 == 0)
            {
                send(MaxValue, nMax);
                Concurrency::wait(40); // Sleep for 40 ms
            }
        }
    });

    tg.run_and_wait([&MaxValue]
    {
        int nUpdates = 50; // Show only 50 updates
        while(nUpdates > 0)
        {
            wcout << "\nLatest maximum number found: " << MaxValue.value();
            wait(500); // Wait 500 ms before reading the next update
            nUpdates--;
        }
    });
}
```

What the code does:

Task 1: generates 50000 float numbers in ascending order and puts them into a vector. It then finds the maximum number in that vector, updating the nMax variable. On every 100th iteration over the vector, it updates the overwrite_buffer to reflect the latest value.

Task 2: lists 50 updates from the overwrite_buffer, which essentially means it shows the user the latest maximum value found in the vector. It waits slightly longer before issuing a refresh. Since task 1 finishes before task 2 lists all 50 updates, the last few maximums are repeated in the display - that is, in fact, the highest number found. The output would be like:

...
Latest maximum number found: 75084
Latest maximum number found: 79168.1
Latest maximum number found: 82938
Latest maximum number found: 94876
Latest maximum number found: 98645.9
...
Latest maximum number found: 153938
Latest maximum number found: 157080
Latest maximum number found: 157080
Latest maximum number found: 157080
Latest maximum number found: 157080
...

The code says it all; I guess I need not explain things. But in short: the payload type is float, and the wait for messages is in another task, unlike the single_assignment example, where the main function (main thread) was waiting. The Concurrency::wait function is self-explanatory; it takes a timeout in milliseconds.

Where can or should you use this message-block type? Whenever one or more threads update some shared information, and one or more threads are willing to get the latest information. A "refresh" issued by the user is a good example.

unbounded_buffer

Multithreading programmers who need the producer/consumer pattern - where one thread generates a message and puts it into some message queue, and another thread reads those messages in FIFO manner - will appreciate this class. Most of us have used the PostThreadMessage function or implemented a custom, concurrency-aware message-queue class; we also needed events and locks for the same. Now here is the boon!

As the name suggests, unbounded_buffer does not have any bounds - it can store any number of messages. The sending and receiving of messages is done in FIFO manner. When a message is received, it is removed from the internal queue, thus the same message cannot be received twice. This also means that if multiple threads (targets) are reading from the same unbounded_buffer block, a given message goes to only one of them: if a message is received by target A, target B will not be able to receive it. The receive call blocks if there are no pending messages to receive. Multiple sources can send messages, and their order (among different sources) is not deterministic.
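The "last write wins, readers always see the newest value" behavior of overwrite_buffer can be modeled for a trivially copyable payload with a std::atomic in portable C++. This is an assumption-laden sketch (LatestMax is my own name, and for brevity the producer is joined before the read, so it demonstrates the overwrite semantics rather than concurrent polling):

```cpp
#include <atomic>
#include <thread>

// The producer stores many values into one slot; each store overwrites
// the previous one, and a reader gets whatever is newest - never a queue.
float LatestMax(int count)
{
    std::atomic<float> maxSoFar{0.0f};

    std::thread producer([&maxSoFar, count]
    {
        for (int n = 1; n <= count; ++n)
            maxSoFar.store(n * 3.14159f);   // overwrite, never enqueue
    });

    producer.join();
    return maxSoFar.load();                 // the latest value wins
}
```

A polling reader on another thread would simply call load() periodically, just as the article's second task calls MaxValue.value() every 500 ms.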
A simple example:

int main()
{
    unbounded_buffer<int> Numbers;

    auto PrimesGenerator = [&Numbers]
    {
        for (int nPrime = 3; nPrime < 10000; nPrime += 2)
        {
            bool bIsPrime = true;
            for (int nStep = 3; nStep <= sqrtl(nPrime); nStep += 2)
            {
                if (nPrime % nStep == 0)
                {
                    bIsPrime = false;
                    break;
                }
            }
            if (bIsPrime)
                send(Numbers, nPrime);
        }

        wcout << "\n**Prime number generation finished**";
        send(Numbers, 0); // Send 0 to indicate the end
    };

    auto DisplaySums = [&Numbers]
    {
        int nPrimeNumber, nSumOfPrime;
        for (;;)
        {
            nPrimeNumber = receive(Numbers);
            if (nPrimeNumber == 0) // End of list?
                break;             // Or return

            wcout << "\nNumber received: " << nPrimeNumber;

            // Calculate the sum 1..n
            nSumOfPrime = 0;
            for (int nStep = 1; nStep <= nPrimeNumber; ++nStep)
                nSumOfPrime += nStep;
            wcout << "\t\tAnd the sum is: " << nSumOfPrime;
        }
        wcout << "\n** Received zero **";
    };

    parallel_invoke(PrimesGenerator, DisplaySums);
}

(Note that the inner loop uses nStep <= sqrtl(nPrime); with a strict less-than, squares of primes such as 9 and 25 would slip through as "primes".)

Recollect parallel_invoke, which executes a set of tasks in an STG? I stored two lambdas in variables, viz. PrimesGenerator and DisplaySums, then invoked both of them in parallel to do the work.

As you can easily see, the generator lambda finds prime numbers and puts them into the unbounded_buffer message block, which is inherently a message queue. Unlike the previous two message-buffer classes, this class does not replace the contents with each new message; it appends messages to its queue. That is why the generator is able to pile them up in the Numbers message block. No message is lost or removed unless it is received by some target.

The target, in this case, is DisplaySums, which receives numbers from the same message block. receive returns the first message inserted (i.e. FIFO) and removes that message; it blocks if no messages are pending in the queue. This program is designed so that a zero (0) value indicates the end of messages.
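The Agents Library is Windows-specific, but the semantics unbounded_buffer provides — FIFO order, a blocking receive, and each message delivered to exactly one receiver — can be sketched with standard C++ primitives. The following portable approximation is mine, not part of the library; the class name and interface are only meant to mirror the ConcRT calls described above:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal portable analogue of Concurrency::unbounded_buffer<T>:
// unbounded FIFO queue, blocking receive, one receiver per message.
template <typename T>
class simple_unbounded_buffer
{
    std::queue<T> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cond;

public:
    // Like send() - enqueue a message (never blocks for space).
    void send(const T& value)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(value);
        }
        m_cond.notify_one(); // Wake one waiting receiver
    }

    // Like receive() - block until a message is available, then
    // remove and return it (so no message is received twice).
    T receive()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_queue.empty(); });
        T value = m_queue.front();
        m_queue.pop();
        return value;
    }

    // Like try_receive() - non-blocking variant.
    bool try_receive(T& value)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty())
            return false;
        value = m_queue.front();
        m_queue.pop();
        return true;
    }
};
```

A producer thread calls send and a consumer loops on receive until it sees a sentinel value — exactly the role the 0 plays in the prime-number example above.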
In case you just need to check whether a message is available, you can use the try_receive function. try_receive can be used with other source-oriented message blocks too, not just unbounded_buffer (i.e. any message block inheriting from ISource).

In simple terms, the call class acts like a function pointer. It is a target-only block; that means it cannot be used with the receive function, only with the send function. When you use send, you are actually calling the function pointer (or, say, using something like the SendMessage API). First, a simple example:

int main()
{
    call<int> Display([](int n)
    {
        wcout << "The number is: " << n;
    });

    int nInput;
    do
    {
        wcout << "Enter a number: ";
        wcin >> nInput;
        send(Display, nInput);
    } while (nInput != 0);
}

It takes a number from the user and invokes the Display call block. Note that send blocks until Display finishes, since Display is the only target — unlike other target blocks, where the runtime just queues the data for them to process later (via the receive function).

Note also that call is a multi-source-oriented message block, meaning it can be called (via send or asend) from multiple tasks/threads. The runtime queues the messages just like unbounded_buffer. Since send is the synchronous function for delivering a message to a message block, it waits until delivery finishes; using send with a call block is very similar to the SendMessage API.

To send a message asynchronously, we can use the Concurrency::asend function. It only schedules the message for delivery to the given target block; the message is eventually delivered as per the runtime's scheduling. Using asend with call is similar to using the PostMessage API.

Since call is a target-only message block, using receive or try_receive with it simply results in compiler errors.
The reason is simple: call does not inherit from the ISource interface, and the receive functions expect message blocks that inherit from ISource. No, you need not know about these interfaces (at least not now) — I mention them only for your knowledge.

To see the difference between send and asend, just change the lambda as follows:

call<int> Display([](int n)
{
    wcout << "\nThe number is: " << n;
    wait(2000); // Suspend for 2 seconds
});

If you call it with send, it simply blocks your input (since send won't return for 2 seconds!). Now change send to asend and enter a few numbers in quick succession. You'll see those numbers are eventually received by the call message block. Your input is not blocked, and no messages are lost either! You can, of course, use asend with other target-oriented message blocks. Any proficient programmer can tell you that you cannot use asend on every occasion — find out the reason, if you don't know!

The transformer class is both a source- and target-oriented message block. And, as the name suggests, it transforms one piece of data into another. It takes two template parameters, which makes it possible to transform one data type into another. Since it logically transforms one value into another, it needs a function/lambda/functor, just like the call class. First, just a basic declaration:

transformer<int, string> Int2String([](int nInput) -> string
{
    char sOutput[32];
    sprintf(sOutput, "%d", nInput); // or itoa
    return sOutput;
});

I assume you understand the -> (trailing return type) syntax in the lambda! Everything else is implicit, since you are reading and understanding some good C++.

As for the transformer's construction, the first template parameter is its input type and the second is its output type. You can see that it is almost the same as call; the only difference is that it returns something.
Now let's transform something:

send(Int2String, 128);
auto str = receive(Int2String); // string str;

I just sent an integer (128) to the transformer message block, and then retrieved the transformed message from it (in string form). One interesting thing you may have noticed is the smart use of the auto keyword: Int2String is templatized, and the receive function takes an ISource<datatype> reference and returns datatype — thus the type of str is deduced automatically!

Clearly, you would not use the transformer class just to convert one datatype to another. It can be used effectively as a pipeline to transfer data between components of an application. Note that the unbounded_buffer class can also be used to transfer data from one component to another, but it acts only as a message queue and must be backed by some code. The call class can be used as a message buffer where the code is the target (the receiver); the runtime and the call class itself manage multiple senders and queue their messages internally (in FIFO order). But unlike unbounded_buffer, call cannot be used with receive (i.e. it is single-target, multiple-source). Finally, call cannot, by itself, forward a message to another component.

NOTE: Remember that all these classes have internal message queues. If the target doesn't receive or process a message, it is kept in the queue; no message is lost. The send function puts the message into the queue synchronously, and doesn't return until the message is accepted, acknowledged for reception, or declined by the message block. Similarly, asend puts the message into the message block's queue asynchronously. All this pushing and popping happens in a thread-safe manner.

The transformer class can be used to form a message pipeline, since it takes input and emits output. We need to link multiple transformer objects to form a pipeline.
The output of one object (say, Left) becomes the input of another object (say, Right); the output datatype of Left must match the input datatype of Right. Not only transformer message blocks, but other input/output message blocks can be linked as well.

How do we link message blocks? We use the link_target method to link the target (the right-side object). This gets somewhat complicated, but I must mention it: link_target is actually a method of the source_block abstract class, which inherits from the ISource interface. Therefore, all source message-block classes actually inherit from source_block.

Sigh! Too much text to read before actually seeing code? Now it's time to see some! First, let's rewrite the Int2String lambda:

transformer<int, string> Int2String([](int nInput) -> string
{
    string sDigits[] = {"Zero ", "One ", "Two ", "Three ", "Four ",
                        "Five ", "Six ", "Seven ", "Eight ", "Nine "};
    string sOutput;
    do
    {
        sOutput += sDigits[nInput % 10];
        nInput /= 10;
    } while (nInput > 0);
    return sOutput;
});

It simply transforms 128 into "Eight Two One". To properly convert a number into a user-displayable string, see this article.

Let's write another transformer, which counts the vowels and consonants in this string and returns the counts as a std::pair:

transformer<string, pair<int,int>> StringCount(
    [](const string& sInput) -> pair<int,int>
{
    pair<int,int> Count;
    for (size_t nPos = 0; nPos < sInput.size(); ++nPos)
    {
        char cChar = toupper(sInput[nPos]);
        if (cChar == 'A' || cChar == 'E' || cChar == 'I' ||
            cChar == 'O' || cChar == 'U')
        {
            Count.first++;
        }
        else
        {
            Count.second++;
        }
    }
    return Count;
});

For some readers, using pair in the code above might make it slightly hard to read; but believe me, it's not that hard. The first component of the pair holds the vowel count, and the second holds the count of consonants (and other characters).
Now let's implement the final message block in this data pipeline. Since it ends the pipeline, it need not be a transformer; I am using call to display the pair:

call<pair<int,int>> DisplayCount([](const pair<int,int>& Count)
{
    wcout << "\nVowels: " << Count.first
          << "  Consonants: " << Count.second;
});

The DisplayCount message-block object displays the pair. Hint: you can make the code simpler to read by typedefing the pair:

typedef pair<int,int> CountPair;
// Replace pair<int,int> with CountPair in the code.

Finally, we connect the pipeline:

// Int2String sends to StringCount
Int2String.link_target(&StringCount);

// StringCount sends to DisplayCount
StringCount.link_target(&DisplayCount);

Now send a message to the first message block in the pipeline, and wait for the entire pipeline to finish:

send(Int2String, 12345);
wait(500);

The wait at the end ensures that the pipeline finishes before the main function exits; otherwise one or more pipeline message blocks won't get executed. In a real program, we could synchronize with a single_assignment, standard Windows events, or the CR events we will learn about later. It all depends on how the pipelining is implemented and how the pipeline should terminate.

If you paste the entire code into main, starting from the Int2String declaration, and run it, it displays the following:

Vowels: 9  Consonants: 15

You may like to debug the program yourself by placing breakpoints — they will be needed, since the flow isn't simple and sequential. Or you may put console outputs in the lambdas.

The Agents classes I have discussed so far are based on the dataflow model. In the dataflow model, the various components of a program communicate with each other by sending and receiving messages; processing happens when data becomes available on a message block, and the data gets consumed. In the control-flow model, by contrast, we design program components to wait for events to occur on one or more message blocks.
For control flow, the dataflow components (i.e. message blocks) are still used, but we don't read the data itself — only the event that some data has arrived on a message block. Thus, we define an event for one or more message blocks on which data will arrive.

In many cases, the single_assignment, overwrite_buffer, or unbounded_buffer classes may also be used for control flow (ignoring the actual data). But they only allow us to wait on a single data-arrived event, and they aren't designed for the control-flow mechanism. The three classes mentioned below are designed for control flow: they can wait on the data-arrived events of multiple message blocks. Conceptually, they resemble the WaitForMultipleObjects API.

The choice class can wait on 2 to 10 message blocks, and returns the index of the first message block on which a message is available. The given message blocks can all be of different types, including the message block's underlying type (the template argument of the message block). We generally don't declare a choice object directly (it's awfully unwieldy!). Instead, we use the helper function make_choice:

single_assignment<int> si;
overwrite_buffer<float> ob;

auto my_choice = make_choice(&si, &ob);

Here the choice object, my_choice, is created to wait on one of the two given message blocks. The make_choice helper function (hold your breath!) instantiates the following type of object — with the help of the auto keyword, you are spared from typing this stuff:

choice<tuple<single_assignment<int>*, overwrite_buffer<float>*>>       // Type
    my_choice(tuple<single_assignment<int>*, overwrite_buffer<float>*> // Variable
    (&si, &ob));                                                       // CTOR call

A word about the tuple class: it is similar to std::pair, but can take 2 to 10 template arguments and form a tuple of those data types. This class is new in VC++ 10 (and in C++0x); its fully qualified name is std::tr1::tuple. This is the reason why choice can wait on 2 to 10 message blocks.
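Since tuple may be new to many readers, here is a tiny standalone illustration. I use the plain std:: names below; in VC++ 10 the same type is also reachable as std::tr1::tuple, and the function names are mine, made up just for the demo:

```cpp
#include <string>
#include <tuple>

// A tuple groups several values of arbitrary types - here three.
std::tuple<int, float, std::string> make_record()
{
    return std::make_tuple(42, 3.14f, std::string("hello"));
}

// Elements are accessed by zero-based index with std::get<N>.
int first_of(const std::tuple<int, float, std::string>& t)
{
    return std::get<0>(t);
}

std::string last_of(const std::tuple<int, float, std::string>& t)
{
    return std::get<2>(t);
}
```

This index-based access is exactly why a choice over a tuple of heterogeneous message blocks can report "which block fired" as a plain zero-based index.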
I will soon write an article on the new features of STL!

Let's move ahead with the choice class. The following code is the simplest example of choice:

send(ob, 10.5f);
send(si, 20);

size_t nIndex = receive(my_choice);

We sent messages to both message blocks, and then waited on the my_choice object. As mentioned previously, the choice class returns the zero-based index of the message block in which data is available; therefore, the underlying type for reception from a choice is size_t.

For the code above, nIndex would be 0, since choice finds the message in the first message block. It all depends on how the choice object was constructed — the order in which the message blocks were passed to the constructor of choice. If we comment out the second line above, the return value would be 1. It does not matter which message block received a message first; only the order in which the choice was initialized matters. If we comment out both send calls, receive blocks.

Thus, we see that the choice class resembles the WaitForMultipleObjects API with the bWaitAll parameter set to false: it returns the index of whichever message block a message is found in (as per the sequence passed to the constructor).

Instead of using the receive function, we can also use the choice::index method to determine which message block has a message. The method choice::value, which needs a template argument, returns the message that was consumed (i.e. the message that triggered the choice). It is interesting to note that all three approaches (receive, index, and value) keep referring to that same consumed message.

Please note that once one reception is done, using any of these approaches, the choice object cannot be reused. The consumed message stays consumed (logically — not all message-block classes actually remove the message), and the same message/message block is always referred to. No further send call will convince the choice to receive another message from the same or another message block!
The following code snippet illustrates this (the comment at the end is the output):

// Send to the second message block
send(ob, 10.5f);

size_t nIndex = receive(my_choice);
wcout << "\nIndex : " << nIndex;

// Now send to the first message block
send(si, 20);
nIndex = my_choice.index();
wcout << "\nIndex (after resending) : " << nIndex;

// Display
float nData = my_choice.value<float>(); // Template argument is mandatory
wcout << "\nData : " << nData;

/* Output:
   Index : 1
   Index (after resending) : 1
   Data : 10.5
*/

Conceptually, the join class is very similar to the choice class. The essential difference is that it resembles WaitForMultipleObjects with the bWaitAll parameter set to true. Something in action is pasted below:

int main()
{
    join<int> join_numbers(4);
    bool bExit = false;

    single_assignment<int> si_even, si_odd, si_div5;
    overwrite_buffer<int> si_div7; // Just for illustration

    si_even.link_target(&join_numbers);
    si_odd.link_target(&join_numbers);
    si_div5.link_target(&join_numbers);
    si_div7.link_target(&join_numbers);

    task_group tg;
    tg.run([&] // Capture all
    {
        int n;
        while (!bExit)
        {
            wcout << "Enter number: ";
            wcin >> n;

            if (n % 2 == 0)
                send(si_even, n);
            else
                send(si_odd, n);

            if (n % 5 == 0)
                send(si_div5, n);
            if (n % 7 == 0)
                send(si_div7, n);
        }
    });

    receive(join_numbers);
    wcout << "\n**All number types received";
    bExit = true;
    tg.wait();
}

About the code: join_numbers waits until all four linked ITarget<int> sources have produced a message; once receive(join_numbers) returns, the bExit flag tells the input task to stop.

Thus you see that with the choice and join classes, and the receive function, you achieve the same functionality facilitated by WaitForMultipleObjects — the only difference between the two classes is selecting all versus selecting any. join has some deviations, though, which I describe shortly.

What about a time-out parameter? So far we have been using the receive function passing only the message block as an argument; that way, the function blocks indefinitely. The receive function takes one more argument — a timeout parameter — and it is a default argument.
The default timeout is infinite (defined as COOPERATIVE_TIMEOUT_INFINITE). The receive function itself is overloaded, but all versions take a timeout, and in all versions it is the last, defaulted argument. The timeout is in milliseconds. So, to wait two minutes for all number types to be received in the previous example, we can change the receive call to:

receive(join_numbers, 2 * 60 * 1000);

As you can implicitly understand, the receive function with a timeout parameter can be used with all message-block types. What if receive times out? It throws a Concurrency::operation_timed_out exception, which is derived from std::exception. I will elaborate on exceptions in the Concurrency Runtime later.

You might have noticed that the choice class allows any type of message block, with differing underlying types, whereas the join class accepts message blocks of only one underlying type. Well, choice operates on top of a tuple, whereas join works on top of a vector. For the same reason, choice is limited to 10 message blocks to wait upon, but join can take any number of message blocks. I have not tested join up to its highest range, but it definitely goes beyond WaitForMultipleObjects' 64-handle limit! (If you can/have tested it, let me know.)

NOTE: None of the Agents Library classes/functions uses the Windows synchronization primitives (mutexes, critical sections, events, etc.). It does, however, use timers, timer queues, and interlocked functions.

Joins can be greedy or non-greedy, which is explained under the multitype_join class.

Yes, as the name suggests, multitype_join can wait on message blocks of multiple types (including differing underlying types). Since it takes multiple types, you either fiddle with templates and tuple to instantiate a multitype_join object, or use the make_join helper function.
Functionally, it is the same as the join class, except that it limits the source message blocks to 10. Example of creating a multitype_join object:

single_assignment<int> si_even, si_odd;
overwrite_buffer<double> si_negative_double; // double

auto join_multiple = make_join(&si_even, &si_odd, &si_negative_double);

// Wait
receive(join_multiple);

Greediness of joins: both join types can be greedy or non-greedy. Their greediness takes effect when you attempt to receive information from them. While creating the object, we specify the mode with greedy or non_greedy. We create greedy and non-greedy join objects as:

// Greedy join
join<int, greedy> join_numbers(4);

// Non-greedy join (DEFAULT)
join<int, non_greedy> join_numbers(4);

The default creation mode for a join object is non-greedy. We can use make_join to make a non-greedy multitype_join, and make_greedy_join to create a greedy multitype_join object.

The timer class produces a given message at a given interval. It is a source-only message block, meaning you cannot use send with it — only the receive function. The "send" functionality is the timer itself, which fires at regular intervals; the firing can be one-shot or repeating. In general, you would not call receive yourself, but attach another message block as the target of the timer. For example:

int main()
{
    // call object as the target of the timer
    call<int> timer_target([](int n)
    {
        wcout << "\nMessage : " << n << " received...";
    });

    timer<int> the_timer(2000,          // Interval in ms
                         50,            // Message to send
                         &timer_target, // The target
                         true);         // Repeating: Yes

    // Start the timer
    the_timer.start();

    wcout << " ** Press ENTER to stop timer** ";
    wcin.ignore();

    the_timer.stop();
    wcout << "\n\n** Timer stopped**";
}

The constructor of timer is self-explanatory.
The following methods are important with timer: timer::start, timer::stop, and timer::pause.

The target of a timer can be any target message block. For example, if we change the type of timer_target as follows:

unbounded_buffer<int> timer_target;

it will simply fill up this unbounded_buffer object until the timer is stopped!

Here we come to the end of the message-block classes. To summarize the classes discussed: single_assignment, overwrite_buffer, unbounded_buffer, and transformer are both source and target (they can be receiver as well as sender); unbounded_buffer holds an unlimited number of messages, each delivered to only one receiver; call is target-only; timer is source-only; and choice and multitype_join are limited to 10 source blocks, whereas join has no such limit. (The original article presents this as a summary table.)

An Asynchronous Agent (or simply Agent) can be used to specialize a task in a more structured and object-oriented manner. It has a set of states (a life cycle): created, runnable, started, done, and canceled. An agent runs asynchronously with respect to other tasks/threads. In simple terms, an agent lets you write a thread as a separate class (MFC programmers may liken it to CWinThread).

The Agents Library has an abstract base class, Concurrency::agent. It has one pure virtual function named run, which you implement in your derived class. You instantiate your class and then call the agent::start method. That's the correct guess — the runtime calls (schedules) your run method. Sample derivation:

class my_agent : public agent
{
    void run()
    {
        wcout << "This executes as a separate task.";
        done();
    }
};

Since run is a virtual function called by the CR, it doesn't matter whether you put your implementation in the public or private section. And here is the code to make it work:

int main()
{
    my_agent the_agent;
    the_agent.start();
    agent::wait(&the_agent);
}

A brief description: my_agent derives from agent and overrides void run(void), calling agent::done when its work is finished. In main, the_agent is started with agent::start, and agent::wait blocks until the agent reports completion.

Life cycle of an agent: an agent moves from its initial state to a terminal state, as illustrated in the following diagram (taken from MSDN). As you can see, an agent has five stages.
The solid lines and the function names represent the programmer's calls, and the dotted lines represent calls made by the runtime. An agent need not pass through all of these life-cycle stages, as it may terminate at an earlier stage. The following members of the agent_status enum describe the stages of an agent: agent_created, agent_runnable, agent_started, agent_done, and agent_canceled (reached via agent::cancel). It is important to note that once an agent enters the started stage, it cannot be canceled — it will run!

You might wonder why you need to explicitly call the agent::done method. Isn't it sufficient that the run function returns, so the runtime knows of its completion? Well, the run override is just the method where the agent starts its own work. The agent may have one or more message blocks to send to, receive from, and wait upon. For example, a call message block may be scheduled to take input from other sources and call done (or call it conditionally). It is not mandatory to call done from within the run method — you call it whenever the agent has actually finished. agent::wait (and the other wait functions) return only when done has been called on the awaited agent. The following example illustrates this:

class generator_agent : public agent
{
    call<int> finish_routine;

public:
    single_assignment<int> finish_assignment;

    generator_agent() : // agent's constructor
        finish_routine([this](int nValue) // Sets the call object with a lambda, captures 'this'
        {
            wcout << "\nSum is: " << nValue;

            // Here we call agent::done
            done(); // this->done();
        })
    {
    }

    void run()
    {
        wcout << "\nGenerator agent started...\n";
        finish_assignment.link_target(&finish_routine);
    }
};

Albeit a slightly complicated implementation of an agent, it exemplifies how to hide the internal details of an agent and still communicate with other agents/tasks in the program. This example is not ideal, and is put here only for illustration. Here the call is responsible for calling done; the single_assignment sets the call as its target via the link_target function.
Below I mention why I put finish_assignment in the public area.

class processor_agent : public agent
{
    ITarget<int>& m_target;

public:
    processor_agent(ITarget<int>& target) : m_target(target) {}

    void run()
    {
        wcout << "\nEnter three numbers:\n";
        int a, b, c;
        wcin >> a >> b >> c;

        send(m_target, a + b + c);
        done();
    }
};

The processor_agent takes an object of type ITarget<int> by reference. The run method is straightforward: it sends a message to the specified target (set up at construction). The processor_agent has no knowledge of the other agent (or target); it just needs an ITarget with subtype int. To accept any subtype for ITarget, you could make processor_agent a templatized class.

Below is the main function, which lets both agents communicate and then uses the agent::wait_for_all function to wait for both agents to finish. The constructor of processor_agent needs an ITarget<int> object, so generator.finish_assignment is passed to it.

int main()
{
    generator_agent generator;
    processor_agent processor(generator.finish_assignment);

    // Start
    generator.start();
    processor.start();

    // Wait
    agent* pAgents[2] = {&generator, &processor};
    agent::wait_for_all(2, pAgents); // 2 is the number of agents.
}

This paragraph lets you know that the discussion of the upper layer of the Concurrency Runtime is finished; it comprises the Parallel Patterns Library and the Asynchronous Agents Library. The lower layer of the CR contains the Task Scheduler and the Resource Manager; I will only discuss the Task Scheduler. One more component that fits in separately is the Synchronization Data Structures, which I discuss below, before elaborating on the Task Scheduler.

The Concurrency Runtime facilitates the following three synchronization primitives, which are concurrency-aware: they work in alliance with the cooperative task scheduler of the CR. I will discuss the Task Scheduler after this section.
In short, cooperative scheduling does not give away the computing resource (i.e. CPU cycles) to other threads in the system, but uses them for other tasks in the Scheduler. The following types are exposed for data synchronization in the CR: critical_section, reader_writer_lock, and event. Header file: concrt.h.

Unlike the standard Windows synchronization primitives, the critical section and reader/writer locks are not reentrant. This means that if a thread already owns a lock on an object, an attempt to re-lock the same object raises an exception of type improper_lock.

critical_section represents the concurrency-aware critical-section object. Since you are reading this content, I do believe you know what a critical section is. Being non-preemptive, it yields processing resources to other tasks instead of preempting them. The critical_section class does not use the CRITICAL_SECTION Windows datatype. The following are the methods of the critical_section class:

- lock — acquires the critical section, blocking until it is obtained.
- try_lock — attempts to lock the critical section without blocking; returns true on success. It does not raise an exception even if the CS is already locked by the same thread.
- unlock — releases the critical section.

Since the methods above are not safe when a function has multiple return points, raises an exception, or the programmer forgets to unlock the critical section, critical_section embodies a nested class named critical_section::scoped_lock. The scoped_lock class is nothing but an RAII wrapper around its parent class, critical_section; it has nothing except a constructor and a destructor. The following sample illustrates it.

The unreliable scheme:

critical_section cs; // Assume defined in a class, or somewhere

void ModifyOrAccessData()
{
    cs.lock();

    // Do processing. We have the lock;
    // no other threads are using the data.
    // This function, however, may have multiple return statements,
    // and putting unlock everywhere is cumbersome - mistakes
    // may happen. Exceptions also pose problems...

    cs.unlock();
}

The RAII scheme:

void ModifyOrAccessData()
{
    critical_section::scoped_lock s_lock(cs); // CTOR locks it.

    // Do processing. Return from anywhere, however you like.
    // Forget about unlocking the critical section:
    // the DTOR of scoped_lock will do it for us, even
    // when exceptions occur.
}

It should be noted that if the lock is already held by the same thread/task and a scoped_lock is then applied to it, improper_lock is raised. Similarly, if a scoped_lock has already acquired the lock and you explicitly unlock it, the destructor of scoped_lock raises an improper_unlock exception. The following code shows both:

void InvalidLock()
{
    cs.lock(); // Lock it
    ...
    critical_section::scoped_lock s_lock(cs); // EXCEPTION!
}

void InvalidUnlock()
{
    critical_section::scoped_lock s_lock(cs);
    ...
    cs.unlock();
    // The DTOR of scoped_lock, called as soon as this function
    // unwinds, attempts to unlock the critical section -
    // and that raises an exception!
}

As said before, a blocking lock does not preempt the processing resource, but attempts to give it to other tasks. More on this later.

Suppose you have data or a data structure that is accessed and modified by multiple threads in your program — an array of some arbitrary datatype, for example. You can certainly control access to that shared data using a critical section or another synchronization primitive. But if that shared data is mostly read and rarely updated/written, an exclusive lock for reading is mostly wasted. The reader_writer_lock class allows multiple readers to read simultaneously, without blocking other threads/tasks attempting to read the shared data; the write lock, however, is granted to only one thread/task at a time.

NOTE: If the shared data involves substantially more frequent writes than reads, it is recommended that you use another locking mechanism.
This class gives optimum performance in read-mostly contexts. The reader_writer_lock does not use the Slim Reader/Writer (SRW) locks available in Windows Vista and higher versions; SRW locks and reader_writer_lock have some notable differences.

Methods exhibited by the reader_writer_lock class:

- lock — acquires the writer (i.e. reader+writer) lock; blocks until the writer lock is gained.
- try_lock — attempts to acquire the writer lock without blocking.
- lock_read — acquires a reader lock; blocks while a writer holds the lock.
- try_lock_read — attempts to acquire a reader lock without blocking.
- unlock — releases the lock; raises improper_unlock if no lock is held.

The writer-lock requests are chained: the runtime immediately chooses the next pending writer-lock request and unblocks it.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/80825/Concurrency-Runtime-in-Visual-C-2010?fid=1571913&df=90&mpp=25&sort=Position&spc=None&noise=3&prof=True&view=None
Enable capturing of events streaming through Azure Event Hubs

Azure Event Hubs Capture enables you to automatically deliver the streaming data in Event Hubs to an Azure Blob storage or Azure Data Lake Storage Gen 1 or Gen 2 account of your choice. You can configure Capture at event hub creation time using the Azure portal. You can either capture the data to an Azure Blob storage container, or to an Azure Data Lake Storage Gen 1 or Gen 2 account. For more information, see the Event Hubs Capture overview.

Capture data to Azure Storage

When you create an event hub, you can enable Capture by clicking the On button in the Create Event Hub portal screen. You then specify a Storage Account and container by clicking Azure Storage in the Capture Provider box. Because Event Hubs Capture uses service-to-service authentication with storage, you do not need to specify a storage connection string. The resource picker selects the resource URI for your storage account automatically. If you use Azure Resource Manager, you must supply this URI explicitly as a string. The default time window is 5 minutes; the minimum value is 1 minute and the maximum is 15 minutes. The Size window has a range of 10-500 MB.

Note: You can enable or disable emitting empty files when no events occur during the Capture window.

Capture data to Azure Data Lake Storage Gen 2

Follow the Create a storage account article to create an Azure Storage account. Set Hierarchical namespace to Enabled on the Advanced tab to make it an Azure Data Lake Storage Gen 2 account. When creating an event hub, do the following steps:

1. Select On for Capture.
2. Select Azure Storage as the capture provider. The Azure Data Lake Store option you see for the Capture provider is for Gen 1 of Azure Data Lake Storage. To use Gen 2 of Azure Data Lake Storage, you select Azure Storage.
3. Select the Select Container button.
4. Select the Azure Data Lake Storage Gen 2 account from the list.
5. Select the container (file system in Data Lake Storage Gen 2).
On the Create Event Hub page, select Create.

Note: The container you create in an Azure Data Lake Storage Gen 2 account using this user interface (UI) is shown under File systems in Storage Explorer. Similarly, the file system you create in a Data Lake Storage Gen 2 account shows up as a container in this UI.

Capture data to Azure Data Lake Storage Gen 1

To capture data to Azure Data Lake Storage Gen 1, you create a Data Lake Storage Gen 1 account and an event hub:

Create an Azure Data Lake Storage Gen 1 account and folders
- Create a Data Lake Storage account, following the instructions in Get started with Azure Data Lake Storage Gen 1 using the Azure portal.
- Follow the instructions in the Assign permissions to Event Hubs section to create a folder within the Data Lake Storage Gen 1 account in which you want to capture the data from Event Hubs, and assign permissions to Event Hubs so that it can write data into your Data Lake Storage Gen 1 account.

Create an event hub
- The event hub must be in the same Azure subscription as the Azure Data Lake Storage Gen 1 account you created. Create the event hub, clicking the On button under Capture in the Create Event Hub portal page.
- In the Create Event Hub portal page, select Azure Data Lake Store from the Capture Provider box.
- In Select Store next to the Data Lake Store drop-down list, specify the Data Lake Storage Gen 1 account you created previously, and in the Data Lake Path field, enter the path to the data folder you created.

Add or configure Capture on an existing event hub

You can configure Capture on existing event hubs that are in Event Hubs namespaces. To enable Capture on an existing event hub, or to change your Capture settings, click the namespace to load the overview screen, then click the event hub for which you want to enable or change the Capture setting.
Finally, click the Capture option on the left side of the open page and then edit the settings, as shown in the following figures (Azure Blob Storage, Azure Data Lake Storage Gen 2, and Azure Data Lake Storage Gen 1).

Next steps
- Learn more about Event Hubs Capture by reading the Event Hubs Capture overview.
- You can also configure Event Hubs Capture using Azure Resource Manager templates. For more information, see Enable Capture using an Azure Resource Manager template.
- Learn how to create an Azure Event Grid subscription with an Event Hubs namespace as its source.
- Get started with Azure Data Lake Store using the Azure portal.
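When enabling Capture through an Azure Resource Manager template rather than the portal, you supply the storage resource URI and capture settings explicitly. The following is a rough sketch of the relevant fragment of an event hub resource; the property names follow the Microsoft.EventHub ARM schema as I understand it, and all names, IDs, and the API version shown here are placeholders to verify against the ARM template reference:

```json
{
  "type": "Microsoft.EventHub/namespaces/eventhubs",
  "apiVersion": "2017-04-01",
  "name": "mynamespace/myeventhub",
  "properties": {
    "messageRetentionInDays": 1,
    "partitionCount": 2,
    "captureDescription": {
      "enabled": true,
      "encoding": "Avro",
      "intervalInSeconds": 300,
      "sizeLimitInBytes": 104857600,
      "skipEmptyArchives": true,
      "destination": {
        "name": "EventHubArchive.AzureBlockBlob",
        "properties": {
          "storageAccountResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
          "blobContainer": "mycontainer",
          "archiveNameFormat": "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}"
        }
      }
    }
  }
}
```

Here intervalInSeconds corresponds to the time window (300 seconds is the 5-minute default) and sizeLimitInBytes to the size window, while skipEmptyArchives controls whether empty files are emitted.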
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-capture-enable-through-portal
public class StateOfTurtle {
    private float x;
    private float y;
    private float angle;

    public StateOfTurtle(float x, float y, float angle) {
        setX(x);
        setY(y);
        setAngle(angle);
    }

    public void setX(float x2) { x = x2; }
    public float getX() { return x; }
    public void setY(float y2) { y = y2; }
    public float getY() { return y; }
    public void setAngle(float angle2) { angle = angle2; }
    public float getAngle() { return angle; }
}

public class TurtleCommands {
    private StateOfTurtle sot;
    private MyStack<StateOfTurtle> stack;

    public TurtleCommands(float x, float y, float angle) {
        stack = new MyStack<StateOfTurtle>();
        sot = new StateOfTurtle(x, y, angle);
    }

    public void turn(float angle) {
        sot.setAngle(sot.getAngle() + angle);
    }
}

import java.util.*;
import java.io.*;

public class MyStack<T> {
    private DoublyLinkedList<T> dLL;

    public MyStack() {
        dLL = new DoublyLinkedList<T>();
    }

    public void pop() { dLL.remove(0); }
    public void push(T data) { dLL.addFirst(data); }
    public T top() { return dLL.get(0); }
}

import java.util.*;
import java.io.*;

public class MyQueue<T> {
    private DoublyLinkedList<T> dLL;

    public MyQueue() {
        dLL = new DoublyLinkedList<T>();
    }

    public void push(T data) { dLL.addLast(data); }
    public void pop() { dLL.remove(0); }
    public T front() { return dLL.get(0); }
    public T back() { return dLL.get(dLL.Size() - 1); }
    public boolean empty() { return dLL.Size() == 0; }
}

package Assignment4;

import javax.swing.Icon;
import javax.swing.ImageIcon;
import javax.swing.JOptionPane;
import java.util.*;
import java.io.*;

// Paul Adcock
// Assignment 4
// Last Worked On: 10/12/2010
// This class is the doubly linked list class. It has a Node that holds a reference to some data of type T and has a reference
// to the next Node and to the previous Node.
public class DoublyLinkedList<T> {

    private class Node<T> {
        private T data;
        private Node<T> next;
        private Node<T> previous;

        public Node(T data, Node<T> next, Node<T> previous) {
            this.data = data;
            this.next = next;
            this.previous = previous;
        }

        public T getData() { return data; }
        public Node<T> getNext() { return next; }
        public Node<T> getPrevious() { return previous; }
        public void setNext(Node<T> next) { this.next = next; }
        public void setPrevious(Node<T> previous) { this.previous = previous; }
    }

    private Node<T> head; // head of the linked list
    private Node<T> tail; // tail of the linked list
    private int size;
    private ImageIcon icon;
    private Icon icon2;

    public DoublyLinkedList() {
        head = null;
        tail = null;
        size = 0;
        icon = new ImageIcon("doh3.jpg");
    }

    // Returns a String of all the items in the linked list.
    public String toString() {
        String str = "[";
        Node<T> curr;
        for (curr = head; curr != null; curr = curr.getNext()) {
            str = str + curr.getData();
            if (curr.getNext() != null)
                str = str + " ";
        }
        str = str + "]";
        return str;
    }

    // Adds the data as the first element. If the list is empty, the new node also becomes the tail.
    // Otherwise the old head becomes the second element and the new node becomes the new head.
    public void addFirst(T data) {
        // Since this is the first node, previous should be null.
        Node<T> newNode = new Node<T>(data, head, null);
        // We know that if head is null, the list is empty.
        if (head == null) {
            // If the list is empty, tail will be newNode.
            tail = newNode;
        }
        if (head != null)
            head.setPrevious(newNode);
        // We want head to be newNode;
        // if the list was empty before, both head and tail will be set to newNode.
        head = newNode;
        size++;
    }

    public void removeFirst() {
        if (size == 0) {
            JOptionPane pane = new JOptionPane();
            pane.setIcon(icon);
            pane.showMessageDialog(null, "Cannot remove from an empty list!",
                    "Invalid removal", JOptionPane.ERROR_MESSAGE);
            return;
        }
        Node<T> current = head; // creates a Node called current and sets it to head
        head = head.getNext();  // move head to the next element
        current.setNext(null);
        if (head != null)
            head.setPrevious(null); // the new head has no previous element
        else
            tail = null;            // the list is now empty
        size--;
    }

    public void addLast(T data) {
        // If there are no elements, use the addFirst method.
        if (tail == null) {
            addFirst(data);
            return;
        }
        /* Create the new Node from the data. Set next to null
         * because this will be the last element and will not
         * have a next. Set previous to tail because tail has
         * not been changed yet and is currently referencing
         * the element that will be directly before this element.
         */
        Node<T> newNode = new Node<T>(data, null, tail);
        /* Since the tail variable still references the element
         * directly before the new element, we can set that node's
         * next to our new element.
         */
        tail.setNext(newNode);
        // Set tail to our new node.
        tail = newNode;
        size++;
    }

    public int Size() {
        return size;
    }

    public void add(int index, T data) {
        if (index > size || index < 0) {
            JOptionPane.showMessageDialog(null, "Cannot add out of bounds!",
                    "Invalid command", JOptionPane.ERROR_MESSAGE);
            return;
        }
        if (index == 0 || head == null) {
            addFirst(data);
            return;
        }
        if (index == size) {
            addLast(data);
            return;
        }
        // Step 1: walk to the node immediately before the insertion point.
        Node<T> current = head;
        for (int i = 0; i < index - 1; i++) {
            current = current.getNext();
        }
        // Step 2: the new node sits between current and current's old next.
        Node<T> newnode = new Node<T>(data, current.getNext(), current);
        // Step 3: link both neighbours to the new node.
        current.getNext().setPrevious(newnode);
        current.setNext(newnode);
        size++;
    }

    public void remove(int index) {
        if (index < 0 || index >= size) {
            JOptionPane.showMessageDialog(null, "You cannot remove an out-of-bounds value!",
                    "Invalid removal", JOptionPane.ERROR_MESSAGE);
            return;
        }
        if (index == 0) {
            removeFirst();
            return;
        }
        // Walk to the node that should be removed.
        Node<T> nodeToRemove = head;
        for (int i = 0; i < index; i++) {
            nodeToRemove = nodeToRemove.getNext();
        }
        Node<T> before = nodeToRemove.getPrevious();
        Node<T> after = nodeToRemove.getNext();
        // Unlink the node: its neighbours now point at each other.
        before.setNext(after);
        if (after != null)
            after.setPrevious(before);
        else
            tail = before; // removed the last element, so the tail moves back
        size--;
    }

    public T get(int i) {
        if (i < 0 || i >= size)
            return null;
        if (i == 0)
            return head.getData();
        if (i == size - 1)
            return tail.getData();
        Node<T> specialNode = head;
        for (int x = 0; x < i; x++) {
            specialNode = specialNode.getNext();
        }
        return specialNode.getData();
    }

    // Calls the get method with the first index.
    public T front() {
        if (head == null)
            return null;
        return get(0);
    }

    // Calls the get method with the last index.
    public T back() {
        if (tail == null)
            return null;
        return get(size - 1);
    }

    public void removeLast() {
        if (head == null) {
            JOptionPane.showMessageDialog(null, "Cannot remove from an empty list!",
                    "Invalid removal", JOptionPane.ERROR_MESSAGE);
            return;
        }
        remove(Size() - 1);
    }

    /* Returns a string with the data alternating from the front and the back
     * of the list (assume for this example that the list stores String objects).
     * If the list is currently [amit is now here], it returns "[amit here is now]".
     * If the list is currently [amit is now currently here], it returns "[amit here is currently now]".
     */
    public String printAlternate() {
        String str = "[";
        for (int v = 0; v < (size + 1) / 2; v++) {
            str = str + get(v);
            int mirror = size - (v + 1);
            if (mirror != v)
                str = str + " " + get(mirror);
            if (v < (size + 1) / 2 - 1)
                str = str + " ";
        }
        str = str + "]";
        return str;
    }
}

This is the assignment. The important part is to figure out how to get the code to work.

Assignment 6: Turtles
Out: 21st October 2010
Type: Large
Due: 3rd November 2010 at 11:59pm
Note: You must complete this assignment on Unix.
You are not allowed to use Eclipse for any part of this assignment. You must use the command prompt to compile and run all your programs. How to approach this assignment: Read the assignment carefully and the helpful hints at the end. In this assignment you will implement 2 separate programs. A program that draws graphical primitives has been provided to you. You have to concentrate on generating the commands for this program. The idea behind this program is called “Turtle graphics”. Turtle graphics is based on the notion of a turtle moving on screen, drawing as it moves. At any given time, the turtle’s “state” is described by a position (x,y) and a direction in which it is pointing, in the form of an angle Ѳ with the +X axis, as shown below. The turtle can walk only in the direction that it is pointing (i.e. along a straight line), and can draw only as it walks. Through standard commands, you can make the turtle simply walk, walk and draw, and change its direction. Basically the turtle draws only straight lines. Part 1: Trying out the provided program A jar file called “TurtleGraphics.jar” has been provided to you. This program first takes the minimum and maximum values of x and y as integers within which all the lines will lie, in the order xmin xmax ymin ymax. It then takes floating-point numbers from the keyboard, four at a time. Each set of four numbers (x1,y1,x2,y2) draws a line from the point (x1,y1) to (x2,y2). Two sample input files “simple-lines.txt” and “gandhi-lines.txt” have been provided to you. In Unix, copy the jar file and these text files to the same directory, and run the program as follows: java –jar TurtleGraphics.jar < simple-lines.txt You can run the second file similarly. You should see the following pictures: simple-lines.txt gandhi-lines.txt If you see these pictures, you’re done with part 1. Part 2: Converting turtle commands to lines In this program you must write a program that converts standard turtle commands into lines. 
Your program must produce the output in the same format that the program in part 1 is accepting its input in. That is, the output of your program of part 2 should start with xmin, xmax, ymin, ymax and then four numbers per line to be drawn. The turtle commands are as follows: 1. turn a: This command turns the turtle by the specified angle ‘a’ in degrees without changing its position. 2. move r: This command moves the turtle by the specified distance ‘r’ along the current angle of the turtle, without drawing. If the current state of the turtle is (x,y,Ѳ), then the turtle will move to (x+r*cosѲ,y+r*sinѲ). The cosine and sine methods can be found in the Math class. Remember that the angle Ѳ is in degrees, while the cosine and sine methods accept angles in radians. The Math class has a method to do the necessary conversion. 3. trace r: This command moves the turtle by the specified distance ‘r’ along the current angle of the turtle, drawing a line between its old and new positions. Thus, the new state of the turtle will be (x+r*cosѲ,y+r*sinѲ,Ѳ) and a line will be drawn from (x,y) to (x+r*cosѲ,y+r*sinѲ). 4. push: This command pushes a copy of the current state of the turtle onto a stack. 5. pop: This command makes the state of the turtle to whatever is on top of the stack, and then pops from it. What to do: 1. Write a class to represent the state of a turtle in terms of its position (x,y) and the angle Ѳ. All are floats. 2. Write a class MyStack that behaves like a stack. You must use your own DoublyLinkedList class to write this stack. You are not allowed to use ArrayList or java’s Stack class for this purpose! 3. Write a program that accepts input from the keyboard in the following order: a. xmin, xmax, ymin, ymax as four integers. b. A sequence of the above turtle commands. You can assume that the input will always be correct. That is, if you read “move” you can expect to read a single floating point number after it. 
Your program should continue reading until there is nothing more to be read. 4. Your program must then take the necessary action as explained above for each turtle command. As soon as you read xmin, xmax, ymin, ymax, print them out. When you encounter the command “trace”, you must print out the four numbers corresponding to the line to be drawn using System.out.println. How to test your program: 1. Reproduce the same pictures you got in Step 1. All the provided .tur files are text files, you can open them in any text editor. a. Run this program by redirecting input from the provided file “simple.tur” and output to the file “generated-simple-lines.txt” as follows: java program2-main < simple.tur >generated-simple-lines.txt b. Now run the program in step 1 using generated-simple-lines.txt as input. You should see the same screen shot as “simple-lines.txt” created before. Thus, simple.tur stores the turtle commands to draw the shape seen in the screen shot. The file “tree.tur” contains the most complete set of turtle commands, so make sure you test for that! The pictures are provided below: simple.tur levy.tur gandhi.tur tree.tur 2. Pipe the two programs together using the other turtle commands files provided. If the turtle file you are using is “tree.tur”, run the two programs together as follows to get the same picture above: java program2-main < tree.tur | java –jar TurtleGraphics.jar Thus your program of part 2 is printing the numbers for the lines that the program of part 1 is accepting as input. This will work only if the output of one program exactly matches what the other program is expecting from its input. If “generated-simple-lines.txt” has everything in the same order as “simple-lines.txt”, piping should work successfully. Try the other files! Part 3: Generating Turtle Commands Many of the above files for turtle commands have been generated algorithmically. This is done by using “productions”. 
A production is like a Math equation, with a left side and a right side. An example of a production is: F = F+F-F where ‘F’, ‘+’, ‘-‘ are simply symbols (you are not performing any addition or subtraction). No two productions have the same symbol on their left side. A starting sequence of symbols is provided to you, say “++F--”. One can expand this sequence by substituting for every “F” the right side of the above production. Thus after one iteration, the sequence “++F--” becomes “++F+F-F--“. In the second iteration, one can again expand the result of the previous iteration, thus making the resulting sequence longer and longer with every iteration. When a set number of iterations is reached, the resulting sequence is converted into turtle commands. Keep in mind that there may be many productions, each with a different symbol on its left side. The starting sequence may also contain many symbols. Symbols in the sequence that are not on the left side of any production are left untouched (for example, the symbols ‘+’ and ‘-‘ above). Some of the above pictures have been generated this way. In part 3, you have to write a program that reads productions and symbols from the keyboard, iterates over a given sequence, and convert the resulting sequence into turtle commands. What to do: 1. Write a class MyQueue that behaves like a queue. You must use your own DoublyLinkedList class to write this queue. You are not allowed to use ArrayList or java’s LinkedList class for this purpose! 2. Write a program that reads input from the keyboard in the following order (follow along using one of the .pro files provided for this part): a. Read in four integers xmin, xmax, ymin, ymax. b. Read a single float number which is the “step size” of the trace command. c. Read a single float number which is the angle you turn by using the turn command. d. Read the number of productions (an integer). e. Read each production as follows: i. A single character that is the left side of the production. 
ii. A string that is the right side of the production.
f. Read the starting sequence as a string of characters.
g. Read the number of iterations, which is an integer.

3. Create two queue objects, and put the starting sequence in the first one, one character at a time. This is the "from" queue, and the other is the "to" queue.
4. For each iteration, remove the characters from the "from" queue one at a time, pushing either the right side of the matching production or the character itself (if no production matches) onto the "to" queue, and then swap the roles of the two queues.
5. Print the four integers xmin xmax ymin ymax.
6. For whichever queue has the final sequence, interpret the characters and print out the corresponding results using System.out.println:
a. "F": This corresponds to the "trace" command. Print "trace" followed by the step size from 2(b) above.
b. "+": This is a counter-clockwise "turn". Print "turn" followed by the angle from 2(c) above.
c. "-": This is a clockwise "turn". Print "turn" followed by the negative of the angle from 2(c) above.
d. "[": This is a "push". Print "push".
e. "]": This is a "pop". Print "pop".

How to test your program:
2. If your program 3 works correctly, it will output the turtle commands in exactly the same order that program 2 is expecting as input. All the provided .pro files are text files; you can open them in any text editor.
a. Run your program 3 by redirecting input and output as follows: java program3-main < generator-koch-smallest.pro > koch-smallest.tur
b. Now run program 2 using "koch-smallest.tur" that you created in step 1. Try this for all the files provided for part 3.
3. Pipe all three programs together! For example, to see the picture for the tree (assuming all your .class files, the TurtleGraphics.jar file and the input files are in the same folder), run them as follows: java program3-main < generator-levy.pro | java program2-main | java -jar TurtleGraphics.jar
In this case, your program3-main is generating turtle commands from productions that your program 2 is reading and creating lines, which the TurtleGraphics.jar is drawing.
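The queue-based expansion described in steps 3 to 6 above can be sketched in Java as follows. This is a minimal illustration only: it uses java.util.ArrayDeque in place of the required MyQueue class, and the class and method names (Expander, expand) are my own, not the assignment's.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class Expander {
    // Expand the starting sequence the given number of times.
    // Characters with no matching production are copied through untouched.
    public static String expand(String start, Map<Character, String> productions, int iterations) {
        Queue<Character> from = new ArrayDeque<>();
        Queue<Character> to = new ArrayDeque<>();
        for (char c : start.toCharArray()) {
            from.add(c);
        }
        for (int i = 0; i < iterations; i++) {
            while (!from.isEmpty()) {
                char c = from.remove();
                String rhs = productions.get(c);
                if (rhs == null) {
                    to.add(c); // no production: copy the symbol through
                } else {
                    for (char r : rhs.toCharArray()) {
                        to.add(r);
                    }
                }
            }
            // Swap the "from" and "to" queues for the next iteration.
            Queue<Character> tmp = from;
            from = to;
            to = tmp;
        }
        StringBuilder sb = new StringBuilder();
        for (char c : from) {
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<Character, String> prods = new HashMap<>();
        prods.put('F', "F+F-F"); // the example production F = F+F-F
        System.out.println(expand("++F--", prods, 1)); // ++F+F-F--
    }
}
```

With the production F = F+F-F and the starting sequence "++F--", one iteration yields "++F+F-F--", matching the example in the assignment text.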
The results for generator-levy.pro should be the same as levy.txt above, and that for generator-tree.pro should be the same as tree.txt above. The other results are: generator-koch-smallest.pro generator-koch-smaller.pro generator-koch.pro Note: If you look at the above three files, they have the same productions and starting sequence, just different number of iterations. They should help you debug your program. Helpful Hints 1. Start early and pace yourself. You will not be able to complete this assignment in 2 days, so don’t expect to! 2. Take it one part at a time. Part 1 does not require you to write any code, just verify that you can run your program and see the correct picture. 3. For parts 2 and 3: if you are not confident about your DoublyLinkedList class, start by using the java classes Stack and LinkedList. Once you have the entire assignment done, replace them with your own implementations. Do not waste time in the beginning in debugging your DoublyLinkedList class. 4. At every step you will know if your program is working correctly by looking at the correct picture. Use this as a debugging tool! Expectations from a perfect program: 1. Part 2 works correctly on all input files. It uses your own Stack implementation. 2. Part 3 works correctly on all input files. It uses your own Queue implementation. 3. All the parts can be piped together and work correctly. 4. The source code should be suitably commented. a. Inside every class should be a short explanation of what the class represents and what it is used for. b. Just before every method signature, there should be information about what the method does, what every parameter means and what the method returns, if anything. c. Just inside every method there should be information about what exactly the method does (in steps). He's saved the text files in .tur format for some reason. Windows has a hard time opening those, but UNIX can. Assume that the queue is first in, first out. 
The queue is not double-ended, i.e. you can't add or remove from either end, just one. The queue behaves like a line of people. When the first person at the front is seated, that person exits or is removed. When someone joins the line, they join at the end (no cutting!) and gradually move up as more people are added and removed till they reach the front. With the stack, you can add or remove only at the top. The stack either changes the head to the element being pushed, pushing all the other elements down, or it is popped, removing the top element and making the element that was below it the new head, or top. Also, he said that (sorry about the formatting, but I've copied this part from an email): Part 3: Generating Turtle Commands. Many of the above files for turtle commands have been generated algorithmically. This is done by using "productions". A production is like a Math equation, with a left side and a right side. An example of a production is: F = F+F-F where 'F', '+', '-' are simply symbols (you are not performing any addition or subtraction). So the equation F = F + F - F is a production. In a single file, there cannot be two such productions whose left side has the same symbol. That is, there can be no two productions whose left side has the symbol "F". You can expand a start sequence using a production. For example, if the starting sequence is "++F--", you can expand the "F" in it to "F+F-F" according to the above production. Thus after one iteration of expansion, the starting sequence "++F--" and the production "F=F+F-F" have PRODUCED "++F+F-F--". In part 3 you're essentially supposed to read in the details from the keyboard, carry out this expansion, and convert the resulting sequence into turtle commands. 1. All of them. They are all example input files. 2. Part 2 you have to do on your own. There is no "it". 3. Your program should read in everything from the keyboard.
When you run the program, you will redirect input from one of the .tur files instead of typing them using the keyboard. 4. Part 2 is doing turtle commands --> lines. Part 3 will do productions --> turtle commands, so that when piped together, they will all do productions --> turtle commands --> lines. Note, the jar file is doing the actual drawing. The .tur file is not a Java file, so a drawLine command will not make any sense there. The .tur file is simply a text file that has some turtle commands. Your part 2 program should read in this file, read in every command. It will maintain a turtle state consisting of its position and angle. In response to every command that is read, it will update the turtle state. When moving, it will move the turtle position. When tracing, in addition to moving, it will print out the resulting line exactly how TurtleGraphics.jar is expecting it. In the entire assignment I do not expect you to do any graphics yourself. TurtleGraphics.jar is supposed to do ALL the drawing. You have to worry only about giving it the correct coordinates. How do I fix the errors in my remove(int index) method in my DoublyLinkedList<T> class? That class is used for the MyQueue and MyStack classes, which means I really need to fix that error. Yes, this is the same DoublyLinkedList<T> from the program 4 that I did earlier. Are my MyQueue and MyStack classes doing what they're supposed to, i.e., how I described them earlier? UNIX is redirecting the input from the keyboard to instead be taken from the files. No FileInputStream needed. A Scanner is needed and I'm told that I'm supposed to have, for some part, static Scanner console = new Scanner(System.in); // not sure about System.in but I think that's right. while (console.hasNext()) { ..... read in data }
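For part 2, the state update behind the move and trace commands is just the trigonometry described in the assignment. The following is a minimal sketch of such an interpreter; the class name, the use of java.util.ArrayDeque instead of the required MyStack, and the float[] state triples are all my illustrative choices, not part of the assignment.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Scanner;

public class TurtleInterp {
    // Turtle state: position (x, y) and heading angle in degrees.
    static float x, y, angle;

    // Advance the turtle by distance r along its current heading.
    static void advance(float r) {
        double rad = Math.toRadians(angle); // degrees -> radians, as Math.cos/sin expect
        x += (float) (r * Math.cos(rad));
        y += (float) (r * Math.sin(rad));
    }

    public static void main(String[] args) {
        Scanner console = new Scanner(System.in);
        Deque<float[]> stack = new ArrayDeque<>(); // saved {x, y, angle} states

        // Echo the bounds first: xmin xmax ymin ymax.
        for (int i = 0; i < 4 && console.hasNextInt(); i++) {
            System.out.print(console.nextInt() + (i < 3 ? " " : "\n"));
        }
        // Read commands until there is nothing more to read.
        while (console.hasNext()) {
            String cmd = console.next();
            if (cmd.equals("turn")) {
                angle += console.nextFloat();
            } else if (cmd.equals("move")) {
                advance(console.nextFloat());
            } else if (cmd.equals("trace")) {
                float ox = x, oy = y;
                advance(console.nextFloat());
                // Print the line from the old position to the new one.
                System.out.println(ox + " " + oy + " " + x + " " + y);
            } else if (cmd.equals("push")) {
                stack.push(new float[] { x, y, angle });
            } else if (cmd.equals("pop")) {
                float[] s = stack.pop();
                x = s[0];
                y = s[1];
                angle = s[2];
            }
        }
    }
}
```

Running something like java TurtleInterp < simple.tur > generated-simple-lines.txt should then produce output in the bounds-then-four-numbers-per-line format that TurtleGraphics.jar accepts.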
http://www.javaprogrammingforums.com/whats-wrong-my-code/5804-very-complex-project-its-hard-explain.html
In this podcast, Asim Aslam, founder and CEO of Micro, sat down with InfoQ podcast co-host Daniel Bryant. Topics discussed included: microservices vs functions; the go-micro and micro frameworks; and the evolution of PaaS and how the new M3O platform fits into the landscape. Key Takeaways - Go Micro is a standard library for creating microservices using Go. It provides the core requirements for distributed systems development including RPC and event-driven communication. - A related project, Micro, is a CLI-based cloud native development framework. The Micro philosophy is “sane defaults with a pluggable architecture”. - The concept of cloud platforms and specifically “Platform as a Service (PaaS)” has gone through several iterations over the past two decades. Finding a “sweet spot” for developer productivity is challenging. Although both platforms provide a lot of value, it can be argued that “Heroku took too much away and AWS gives too much back.” - “PaaS 3.0” focuses on taking the modern approach of using containers, cloud, and (CNCF) open source delivery technologies, and making this easier to use for developers that want to focus on writing application code and delivering this to production. - M3O is a cloud platform for microservices development. It is a fully-managed service and has been built as “micro as a service.” Subscribe on: Transcript 00:04 Daniel Bryant: Hello, and welcome to the InfoQ podcast. I'm Daniel Bryant, news manager here at InfoQ and product architect at Ambassador Labs. I recently had the pleasure of sitting down with Asim Aslam, founder and CEO of Micro. I've followed Asim's work for a number of years via the London tech scene and I've recently been hearing increasing buzz about two projects he's been working on: first, the Micro and Go Micro projects for building Go-based microservices, and secondly, the M3O platform, new platform as a service offering that he's been working on that built on top of Micro. 
00:34 Daniel Bryant: I was keen to pick his brains about the future of cloud infrastructure, continuous delivery, and what he's referring to as the rebirth of PaaS. I also wanted to discuss the topic of developer experience and learn more about what Asim thinks will be the optimal way for developers to interact with the platform. I believe that we should strive to make it easy for developers to do the right thing and I was keen to explore how Asim thinks developers should approach the implementation of cross-cutting concerns like scalability, resilience, and security across the platform. 01:00 Introductions 01:00 Daniel Bryant: Hello, Asim. Welcome to the InfoQ podcast. Thanks for joining us today. 01:03 Asim Aslam: Thanks very much for having me. 01:04 Daniel Bryant: Could you briefly introduce yourself please and share a bit about your background with the listeners? 01:08 Asim Aslam: Sure. My name is Asim Aslam. As you mentioned, I'm working on a product and a company called Micro Services, and I'm the creator of an open source framework called Micro as well. And previously worked at a startup called Hailo and did a brief stint at Google as well. 01:24 Could you introduce go-micro and talk about the motivations for building such a framework, please? 01:24 Daniel Bryant: That's what I was going to mention. Listeners may know you from your work on the Micro and the Go Micro frameworks. Could you introduce this and talk about the motivations for building such a framework please? 01:33 Asim Aslam: Yes. Sure. So the framework is effectively in the space of Rails or Spring, Rails for Ruby and Spring for Java. And the assumptions I was making was back in 2015, there was really no development model for the cloud. And what I found was everyone was focusing on microservices. It was a hot topic back then. People wanted to write it in Go, and Go didn't really have a framework surrounding it in the way that Rails and Spring had done for those previous languages. 
So I set out to build something based on my experiences at Hailo and Google. You can see now that Google released GRPC based on their experiences writing Stubby. And at Hailo, we had also built a framework. So I took those ideas and I basically codified them into something I thought were primitives that developers needed for building distributed systems. And here we are five years later kind of getting to version three. These things take a long time to work, really. 02:28 Did Hailo pioneer a lot of interesting microservices technology and platforms? 02:28 Daniel Bryant: Yeah, yeah. I remember when I was working at a company called OpenCredo in London around the time. Hailo was just around the corner, funnily enough, actually, from the offices. And we were hearing lots of good stuff about what you were doing. It was kind of like an Uber type, Lyft type company, if I remember. And it was very sort of pioneering in that space, I think, wasn't it? Lots of interesting tech? 02:44 Asim Aslam: Yeah, that's right. So Hailo was a company that was founded in 2011. It was a ride hailing company focused on taxis and the regulated market. And it was a very dominant service in London and Ireland in Europe, but had a harder time in the US competing against Uber and Lyft, but effectively in the same category. And Hailo had gone through this interesting journey where they had built a monolithic application to serve a mobile app, right? PHP API, Java backend, and they had to try to scale that out over the next few years and moved into SOA and everything started to become a bit fragmented. And as the company had scaled up, raised over a hundred million dollars in funding, they knew that they needed to replatform to scale to compete against Uber and go global. 03:26 Asim Aslam: And that's when some of us sort of banded together to build this platform. That's when I came in and we effectively built it on the Netflix OSS kind of model.
Netflix back then in 2013 were blogging about microservices, the pioneers in some ways, we could say. That platform that they built ended up becoming Spring Cloud, evangelized by Pivotal. And we had built something similar based on those premises. So we said, hey, developers need to be super productive. We need to build a cloud native platform, which abstracts away all of the details of distributed systems and allows these teams, these product and feature teams, to kind of build and scale independently. So it was driven largely by an organizational need, and then secondly, a technical need. It was interesting technology, it sparked a lot of ideas for me, but unfortunately the company never really worked out, but a framework came out of it, basically. 04:21 Daniel Bryant: Yeah. That's good stuff, like technical innovation paid for on someone else's dime, then you can go and do your own thing, right? 04:26 Asim Aslam: Oh yeah. I actually think a lot of these companies are the leapfrogging moment for a lot of things that come out. Right? It's not necessarily that those companies end up being the thing that dominates, but actually stuff comes out of it. And you can see all sorts of innovation coming out of different places, which it's almost like it was an R&D lab of sorts. 04:44 Can you explain the relationship between go-micro and the micro CLI toolkit? 04:44 Daniel Bryant: So there's Go Micro, which is the sort of underlying Spring-like or Rails-like framework. There's also Micro, which I understand is sort of like a CLI toolkit? 04:53 Asim Aslam: Yeah. So when I started out, I was looking at what is the most minimal tool that I can give a developer that they can just pick up and use? And the idea was like, how do you get many, many companies to adopt something? And I should scale back and say the thing that made us productive at Hailo was not really a framework, but it was a platform. We had a platform tied to a development model and that's what worked really well.
But as you're trying to launch something, you have to start with the smallest piece. And I thought the thing that you put in the hands of the developer is a framework. And so I wrote this thing, I called it Go Micro. Wasn't really very inventive with the name. It was Go, prefixed to Micro, which is supposed to stand for microservices. 05:31 Asim Aslam: And I just wrote the basic building blocks you would need to do something to effectively write microservices, which is service discovery, RPC communication, maybe some sort of pub/sub messaging and some form of key value storage. And I modeled it on Go's interfaces so that those underlying pieces could be pluggable, because every company is adopting different types of infrastructure and that would kind of allow that to scale. Over time, you came across these next-level primitives, which were required, which is, hey, I'm going to build a platform and I need to access these services through a CLI, a web dashboard, an API gateway, all this kind of stuff. So that's what Micro became. 06:06 Asim Aslam: In version three, we actually decided to consolidate all of this. So we found that there was a lot of complexity between the two components and Go Micro had gotten to a level where it was very hard to maintain. This is something I hadn't really experienced before, because I'd never worked on software long enough to see such a problem being uncovered. But once you hit a level of a number of years and amount of complexity, you have to reframe how you're solving a problem. So Go Micro for us is now a standard library for microservices development. We effectively define some building blocks, we make them pluggable, but we've moved the framework to be Micro itself, the all encompassing thing. 06:44 Asim Aslam: It looks a lot like Rails. So you have a microserver, it spins up the kind of primitives that you need, like authentication, configuration, key value storage, service discovery, everything like that.
And it also includes a service library, which is effectively part of that framework, which you can use to write your services and then you just type "Micro run, hello world," and it will effectively run your service. And that's a long-winded way of saying Micro is effectively now the framework. 07:12 How does go-micro interface with infrastructure, such as VMs, Kubernetes, or things like a service mesh? 07:12 Daniel Bryant: So you mentioned a number of things there, which I'm definitely hearing are sort of hot topics at the moment, right? Service discovery, service mesh this, service mesh that. Or I'm guilty as charged. I'm banging that drum. How does this framework platform run in terms of things like Kubernetes and service mesh, VMs? How does it interface with those sort of computing primitives, if you like? 07:31 Asim Aslam: So I guess the idea is to look at this divide we have between development and operations and effectively business logic and infrastructure. And you can see that service mesh is the latest point at which we're trying to create that division in terms of what the infrastructure should do and what the developer should do. And so for us, what we really want to say is, hey, the developer wants to write business logic. They need a framework that enables them to write that business logic, but they need to be able to access the underlying infrastructure in a way that they can leverage it. Right? 08:03 Asim Aslam: So the example would be, hey, I need to be able to call other services. And a lot of people in the kind of service mesh space, they're like, well, this is transparent service mesh. You can just do whatever you want and it works. But it doesn't actually inform the developer of the design pattern of microservices, right? It doesn't actually codify that in a framework that enables them to do it. The strongest equivalent I can give you is something like Elasticsearch, right?
If you imagine standing up search infrastructure and then just telling the developer, "Hey, you can do search. Like, you can just figure it out." They'd be like, "Well, I don't really know what you mean by that. I don't understand this." But if you then give the developer a library and you put that into their hands and then go, "Hey, use this library. Here are the five calls that you need to make. Here are the methods that you need to use to store a record into the search index. Here's how to retrieve it or to query it," it starts to make more sense. 08:57 Asim Aslam: So for us, it's the same thing as saying, "Hey, okay, service mesh is great, but actually what the developer needs is this understanding of what service discovery is, how to call the services, how to encode these things." And if we can just provide it in a very simple interface for them that's what they need. And so, what you run beneath the covers, you choose an Envoy or something else. It doesn't matter to us. We would love the infrastructure to take care of those details. But from our standpoint, the developer needs certain parameters as well. 09:25 How popular do you think Platform-as-a-Service (PaaS) is now? 09:25 Daniel Bryant: I like it. I like it. And I think this is moving nicely into the topic that you and I discussed off-mic around the rebirth of PaaS, which I like that phrase. So I've used things like Heroku, Cloud Foundry over my career. Hugely productive. I could get stuff out really quickly. I definitely know since my consulting days that a lot of developers pushed back against the notion of PaaS, this one size fits all was the kind of common thing I heard against it. What's your thoughts on that now? 09:52 Asim Aslam: Yeah. I think having done this long enough now, probably as long as you and the listeners here, we've gone through many, many different iterations of it. I'm calling this sort of PaaS 3.0 now. 
That's what I think of it and I think it's down to the fact that we've had five years of complexity and containers and container orchestration, and we have CNCF with a landscape of over a thousand tools to figure out how to build for this new wave. The key thing for me is I think Heroku solved the problem for web 2.0 development back in the day, right? 2010 Heroku was the epitome of a PaaS that you needed. Then we started to kind of move on and realize, hey, actually what I need is multi-service development and a platform that provides me more than just the ability to run a Rails application. 10:39 Asim Aslam: Heroku tried to scale. The cost was pretty high. The performance was not great. And again, what had worked for them initially, they effectively let go of. So what worked was, "Hey, I have this great development model, Rails, tied to this platform as a service. And I'm also using Git to basically just version control everything. So it's great." Git push. I write my stuff with Rails. Everything works really great. Once we went multi-language, it stopped having opinions. And I think this is actually a bad thing, right? So I think they kind of came up with this platform where it's like, ah, well, I could run anything, but that's not really beneficial. What really would have been an advantage was if it had given you microservices development or multi-application development, which really didn't happen. 11:22 Asim Aslam: I think the term I use is Heroku took too much away and AWS gives too much back. And I think we didn't find a happy medium. And so we go through these kind of cycles of PaaS or PaaS and infrastructure, whatever you want to call it, where we get to a point where we find the opinionated platform works really well. Every company actually ends up giving their developers an opinionated platform. Then we say we need more, and we just go through these cycles of innovation. And I think, yeah, you've seen it, right?
And now Docker came about, it redefined what it meant to build infrastructure, and now we're building all the building blocks again. And so I think we're at a phase where we're like, "Hey, I've pieced together enough things. I need a PaaS 3.0 for a container and Kubernetes world that actually builds a development model into whatever this is." 12:10 Asim Aslam: And the industry hasn't come up with that model yet. But I see us converging. If you look at every tech company, they're all using Go-based microservices development in the cloud on the backend and they're using the Jamstack for the front end. And so if we start to uncover that, you feel like, "Hey, there's something coming." We're going to abstract away Docker, we're going to abstract away Kubernetes. We're going to come up with the development model and we're going to start this thing that we thought was a hot topic in 2014, microservices. Actually, we're going to revisit this as the cloud services development model. 12:44 What should the developer experience look like in “PaaS 3.0”? 12:44 Daniel Bryant: Very interesting. So a couple of things popped out there. I'd love to pick your brains on more. As in the first is, what do you think the developer experience can look like in this new world? I've heard you sort of use similar terms there several times, which is great to hear, because I think it's so important and it's very tempting as an engineer to not think about that stuff. I'm just building the cool Docker stuff. But the developer experience, the dev workflow, is fundamental, as you said, for getting value to customers. What do you think that's going to look like in PaaS 3.0? 13:08 Asim Aslam: I think the key thing to really understand is that this is an evolution of platforms, right? So PaaS 3.0 is going to serve cloud services development, which means I am building a service that exists in the cloud. It is consumed as an API from multiple clients. So the client might be a web app.
It might be a mobile app. It might be your Alexa. It might be some AR device and that can only be consumed through an API. And so I think we actually moved to this world where PaaS 2.0 was, "Hey, I'm building the whole stack." It's MVC, it's Rails. I build the web front end as well. It's all monolithic. And I think we moved to, all right, well, it's the Jamstack for the front end. It's iOS or Android for the mobile side. And the backend is a cloud-based API which is headless. And I have a PaaS that allows me to deliver that not in a monolithic way, but actually in a multi-service way, in a microservices way. 14:07 Asim Aslam: And I think Kubernetes and containers all facilitate the desire to build distributed applications. That's another way to phrase it. And we're just kind of getting to the point where it's like, okay, we have Kubernetes, we can deploy distributed applications. We have these different ingress controllers and service meshes that let us break this stuff down so that we can use GRPC to call things and a single API can be broken down per path to different apps. And so it's just a case of saying, okay, the next PaaS will pull together all of the building blocks that I need, which is authentication, configuration, key value storage, API gateway, secrets, everything else into one form and allow me to focus on writing the APIs. 14:51 What’s the difference between a microservice and function (as-a-service)? 14:51 Daniel Bryant: One interesting thing I wanted to dive in a bit deeper there was, how I'm hearing you talk, there's almost not much difference between microservices and what we're calling function as a service. Would that be a correct interpretation? 15:02 Asim Aslam: I think there is actually a distinction between the two. So the way I see the breakdown is we had monolithic services, we had SOA, we have microservices, and then we have functions. I think microservices is just a domain, so it serves a specific domain.
And I think through that, you have certain actions which you can operate on in that domain, which is it's not one thing, it's many different things. So an example would be a customer service. It probably has a CRUD interface along with some other functions. And I think functions really went to the extreme, which is to say, hey, it's one function. It does one thing. And while there are frameworks of grouping them together, the function as we know it as developers is one function. It does one thing and it usually acts on events. 15:47 Asim Aslam: And so I actually think the FaaS model is an event driven model that is for inputs, from external things, whatever it might be. And yeah, there are singular functions. And I think it's a complementary thing to microservices. I actually think on the backend in the cloud, you want to build microservices. In fact, you want to start with a monolithic app, right? You don't want to start with microservices. You want to start with a monolithic app. But eventually you move to SOA, then microservices. And then I think there are things that need to be event driven that you don't need a long-lived service for. And that's what FaaS is for. 16:19 Can you explain what the typical workflow would look like in a modern PaaS? 16:19 Daniel Bryant: What does a typical developer workflow look like in this new world? Would I be purely on the terminal, just doing my "git push heroku master" equivalent? Would I be perhaps using a web interface to do things or maybe combination? What do you think? 16:32 Asim Aslam: I think there will be no one way. There will be no one solution. But everyone will have opinions on what they want to use, right? So you are seeing these low-code, no-code movements that is a graphical user interface and you're pulling together things. I think it has its merits, but maybe it won't replace what is the traditional backend development.
If we stick to what it is, which is the evolution of backend service development as we know it, there is an element of CLI. There is an element of UIs, right? So I think you're not going to get rid of the CLI. I actually think that is the most important bit, because you don't want the context switch, right? You're doing everything in the terminal. You're maybe using something else. I mean, if you're using an IDE or whatever else, that's fine, but mostly you're in the terminal. So you want to be able to write code there, push things from there, inspect problems from there: logs, stats, everything else. 17:24 Asim Aslam: And then I think the UIs are a nice to have. I think the key thing to understand there is what problem is the UI solving that the CLI can not solve? Observability is probably the biggest one. Right? And so I think when you get down to observability, you need a graph somewhere. You need a chart somewhere that's kind of useful. I don't think they're mutually exclusive. In my case, I'm starting purely in a CLI-driven manner just based on some experiences of failed dashboard development. But I think one thing to highlight there is the world as we know it, in terms of open source, is a bunch of unique snowflake dashboards and there's nothing that has really tied it all together. So I think PaaS 3.0 brings together a single consolidated experience on the CLI and on the web. 18:06 Could you introduce your latest project, M3O, please? 18:06 Daniel Bryant: Ooh, that seems a very interesting concept, which is actually a perfect segue to what you're working on now, M3O. Could you introduce for folks what M3O is and how it relates to Go Micro and Micro? 18:17 Asim Aslam: Sure. So M3O is a cloud platform for microservices development. It's a fully-managed service and it's built as micro as a service. So our open source tool is a Go-based development framework.
And what we're doing is effectively taking it, hosting it as a service, and just providing you this fully-managed, vertically integrated experience for what we're calling PaaS 3.0. 18:43 Daniel Bryant: So again, as a developer, maybe I don't have to worry about the infrastructure here, but will I still build out like CI and CD pipelines? How do I get my idea to code to value? 18:52 Asim Aslam: You will not have to worry about CI or CD. So I think the best way to think about this problem is imagine you are a developer at your current company or a past company or any other kind of technology company. You likely have a team who manages your platform and your entire pipeline. So you, as a developer, you're writing code, you push to GitHub, it goes to some CI/CD system. You see the build succeed. You have the option of pushing that manually, or maybe you're using some form of GitOps that automatically deploys that. And then you go look at the logs or look at a dashboard and see the health of that thing and you start to play around with it. 19:27 Asim Aslam: And in the same framing, we're just saying, "Hey, we want to take away all of these things that you're doing so you can just write your code, push it to the platform, see that it's working and start to query it." On our homepage, we have 10 commands from install to deploying and querying your service, including signup, and that's it. And so from my experience, 10 years ago, I was absolutely tired of the state of the world. I looked at all this complexity coming my way and I thought, why can't I just write some code and deploy it? Where is this source to running experience that I'd been promised? 20:03 Asim Aslam: And the next wave of innovation didn't actually solve that problem. It just made it worse. Now I'm sort of expected to go to some AWS dashboard of a thousand services. I'd spin up my Kubernetes cluster and then download my config and my keys. And at what point do I get from running my code to shipping it?
Because even then, they're asking me to build a container. And so I just wanted to remove all of that from the developer's experience and just say, "Hey, write this code. And not just any code. Write a microservice, defined with this framework, type this command 'Micro run,' your GitHub repository. We will pull, build and run it for you. And you can query it through an HTTP gateway or via GRPC. And that's it." 20:47 Daniel Bryant: Super interesting. So you do things like buildpacks in the backend, I guess? 20:50 Asim Aslam: To start with, we literally just took the source code and we ran it. And I took source to running quite literally, but we realized that's not a real product. So we're actually building containers transparently for you. So we pull your source code, we build a container for you. And most people have done this before. They use some sort of scratch image or something like that. They build a Go binary, they put it in there, and they run it. So we're doing the same thing. We push it to a private registry for you and we run that thing. And that's it. 21:16 How should engineers think about things like security and performance in M3O? 21:16 Daniel Bryant: Nice. How do folks think about things like security and performance? Because there's always that trade off of making it easy to do the right thing, but not hiding some of this stuff. So developers, we atrophy on our security skills, for example. What's your opinion around that stuff? 21:29 Asim Aslam: It's quite a nuanced discussion and problem there, which is you want the developer to understand that security is a concern and these things need to be secure, but at the same time, you don't want them to have to be responsible for everything. So our goal is to really say, "Hey, look, the framework and the underlying platform will take care of a lot of the security and isolation requirements for you."
So we will define all the network policies, we will use service mesh, we will use namespaces for isolation, we will use some other form of container isolation for you, like Firecracker or something else. We will define the resource limits and then we will use Let's Encrypt for the API and provide you with JWT and stuff like this. 22:09 Asim Aslam: And all we have to do is explain to the developer, "Hey, here's the feature set that you get. Here's the secure kind of aspects of it. Here are the things that you need to take into consideration." And they should just be able to get on with things. And so the hope is that the developer doesn't actually have to think about security beyond their own code, even to the point of the data storage. Right? So if the storage system that you're using actually understands multi-tenancy, then you as a developer just have to pass through the user's credentials to ensure that the data is actually secure. And so this is the kind of approach that we're taking. We're trying to codify in the development model and it's still early days, but hopefully that works. 22:50 Daniel Bryant: If folks want to get involved in any of this kind of stuff, what's the best way for them to do it? Pop it on the website? You've got a Slack channel or something? 22:56 Asim Aslam: Yeah. So go to m3o.com. There's a link to the Slack channel. Come talk to us about it. The project Micro is totally open source. The M3O stuff is currently in private beta, but we're happy to invite anyone who wants to test it. Yeah. And that's how you can get involved. 23:11 How can folks contribute to M3O, and what is the future direction of the project? 23:11 Daniel Bryant: So if folks are listening, Asim, and they're thinking, maybe I can write docs or maybe I want to do some coding, maybe I want to do some infrastructure stuff, what are you and the team, what's your vision? Where are you headed at the moment?
23:21 Asim Aslam: Yeah, right now, our goal is to effectively build a production PaaS for people. And so the focus is on driving that forward. I think at the moment, we're in V3B on the open source framework side. And so what we're really looking for is people who want to, as you say, help out with documentation. That's one thing we've had a lot of comments on, people really want stellar docs, so we would love help there. I think the next thing is really kicking the tires on all of this stuff to understand, hey, is the developer experience right? Are we headed in the right direction? And then the next thing is, play a role in that. If you have ideas about where this should go, we are totally open to that. Because I think when you have a community telling you, that's effectively like customer-driven development. And so happy to hear any of it. 24:08 What’s the best way to get in contact with you? 24:08 Daniel Bryant: Super. Super. So what's the best way to get in contact with you online, Asim? 24:11 Asim Aslam: You can join the Slack channel. You can ping me there. I'm no longer on Twitter. I don't frequent that channel anymore. But otherwise you can email me at asim@m3o.com as well. 24:20 Daniel Bryant: Awesome. Thanks for your time today. 24:22 Asim Aslam: No worries. Appreciate it.
Underscore 0.0.4
==========

Obfuscating code by changing the variable names to underscores

## Example

###### Input
```python
# fib.py

from operator import add


class Fibber(object):

    @staticmethod
    def fib(n):
        a, b = 0, 1
        for i in xrange(n):
            a, b = b, add(a, b)
        return b

print Fibber().fib(10)
```

###### Output
```python
# _fib.py

(___________, ____________, _____________) = (0, 1, 10)
(________, _________, __________) = (object, xrange, staticmethod)
from operator import add as _


class __(________):

    @__________
    def ___(____):
        (_____, ______) = (___________, ____________)
        for _______ in _________(____):
            (_____, ______) = (______, _(_____, ______))
        return ______
    (fib,) = (___,)

print __().fib(_____________)
(Fibber, add) = (__, _)
```

## Installation
```
pip install underscore
```

## Usage
```
$ _ file.py > _file.py
```

You can also compile through python

```python
from underscore import _
_(filename, output_filename)
```

## Tests

There are three flavors of tests, all driven by the `nosetests` framework; to add a test simply add a python file into the `examples` folder. When running the test command, `nosetests`, each test will run for each file in the `examples` folder.

* `tests/diff_test.py`
  * This test makes sure that the output of the original file matches the output of the compiled file when ran.
* `tests/empty_test.py`
  * This test makes sure that there are not any empty files in the example folder.
* `tests/keyword_test.py`
  * This test makes sure that we are only using keywords and not using non `underscore` variables where possible
* `tests/meta_test.py` (Not ready yet)
  * This test will turn the source code into underscored code, then with the underscored code we will turn the source code into underscored code again.. and check that the `source` and `output` are the same.. I know mind blowing..

## Roadmap

This project was started on Aug 28th, 2012. And is still under development.
There are lots of things to do... here is a `TODO` list for myself

* ~~Refactor~~
* ~~Handle attributes~~
* ~~Handle with statements~~
* ~~Handle exception statements~~
* ~~Handle decorators~~
* ~~Handle class methods~~
* ~~Handle the case where input has underscored variables~~
* Give out warnings if users are using `exec` as this may lead to incorrect behavior.
* Turn the source into obfuscated code, and make sure it executes with the same behavior.

- Author: Huan Do
- Package Index Owner: doboy
- DOAP record: Underscore-0.0.4.xml
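The core transformation the package performs can be sketched with Python's standard `ast` module. This is a deliberately simplified, hypothetical reimplementation, not the package's actual code: the real tool also aliases builtins into underscored names and handles attributes, decorators, and the other cases listed in the roadmap.

```python
import ast

def obfuscate(source):
    """Rename every name bound by the module to a run of underscores."""
    tree = ast.parse(source)

    # Pass 1: collect the names the module itself binds (assignment targets).
    bound = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            if node.id not in bound:
                bound.append(node.id)
    # The first bound name becomes "_", the second "__", and so on.
    mapping = {name: "_" * (i + 1) for i, name in enumerate(bound)}

    # Pass 2: rewrite every use of those names.
    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id in mapping:
                node.id = mapping[node.id]
            return node

    return ast.unparse(Renamer().visit(tree))

print(obfuscate("a = 1\nb = a + 2\nprint(b)"))
```

Here `a` and `b` become `_` and `__`, while the unbound name `print` is left alone. Note that `ast.unparse` requires Python 3.9 or newer.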
Difference between revisions of "SMILA/Documentation/Xml Storage::Implementation::Berkley XML DB"

Revision as of 05:53, 13 November 2008

Berkeley DB XML. The attached docs from Oracle can be found [| here]

Key Features and Limitations

- Replication/Clustering
  - good for horizontal scaling with a single read/write master and many read-slave nodes
  - limited to 60/1000 replication nodes on Windows/Unix respectively
  - when the master dies, the election of a new master can be done automatically or by client code
- XML Encodings
- XML Capabilities
  - it is possible to store XML documents by whatever characteristic into one or more containers, e.g. no requirement to store only docs of the same namespace in one container
  - it is possible to XQuery over more than one container at the same time
  - storage of documents on node or whole-doc level
  - in-place modification of XML nodes with XQuery
  - validation of XML docs is possible, configurable on container level
- DB Capabilities
  - transactional
  - locks are fine grained and very configurable

Implementation Ideas

Scenario 1: Parallel Access from diff. Clients on same Host

It is possible to configure BDB such that several client processes can access the underlying data concurrently by sharing the underlying database files. This is called environment sharing. For this to work, BDB needs to be configured to activate its transaction control as described in [| BerkeleyDBXML-Txn-JAVA.pdf]. However, this approach is limited to different clients on the same host. Placing the files on a shared resource such as a SAN or NFS is explicitly not a valid solution (see) for accessing the same data from different hosts. Given the targeted environment of a distributed system, this scenario seems an unlikely use case for SMILA.
The only situation where this scenario could make sense nonetheless is if all of the following conditions are met (IMHO this is unlikely to be the case):

- the pre-processing overhead of the client for storing the XML data is relatively large
- that execution time occurs before transactional synchronization
- parallelization with VM threads is less efficient than with native processes

Known Problem of Environment Sharing on same Host

Ralf encountered a problem while testing this, which has been resolved via forum support from BDB. It seems that setRegister(true) must be turned on. This signals the environment to have only the first process start the DB with recovery and not any subsequent processes.

Scenario 2: Parallel Access from diff. Clients on diff. Hosts

BDBX is an embedded DB and as such it runs inside the process of a single application. This, however, is hardly the use case of the DB in SMILA, which needs to be accessible by several modules in a distributed environment, i.e. the clients reside on different hosts. One way of handling access from different clients to the same data is to put all requests into a message queue (MQ). Its content is polled by a client app (listener) wrapping the access to the data. Under high load this scenario must be scaled horizontally, such that the listener does not become the bottleneck. To this end, BDB allows replication to other nodes with one master (read/write) and many read-only clients. In such a replicated scenario, 2 variants are possible:

- 1. Replication is transparent to the clients and they just see the MQ, which handles the routing to the r/o nodes.
- 2. The clients become the nodes themselves to which the data is replicated and thus don't have to share access to the data with any other application.

This latter approach is especially good for clients that only read from but not write to the DB.
If they need write access, the code must be configured such that it is possible to define/discern the master node (dynamically). If two clients reside on the same host but in different VMs, it might not be possible to share the replicated DB between these processes when replication is turned on. This is a limitation of the BDB replication framework, if I understand it correctly (see Replication-Java-GSG.pdf p. 20).

Deadlock Resolver

The current implementation solves the deadlock issues by synchronizing the Oracle Berkeley DB XML container operations. This will be changed in the near future by implementing the proposed solution (Oracle BDB XML forum).
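The interim deadlock workaround described above, serializing every container operation through a single lock, can be sketched language-neutrally. The container below is a plain dictionary standing in for an Oracle BDB XML container; the class and method names are illustrative, not SMILA's actual API:

```python
import threading

class SynchronizedContainer:
    """Stand-in for a BDB XML container in which every operation is
    serialized through one lock. This avoids deadlocks at the cost of
    concurrency, matching the interim approach described above."""

    def __init__(self):
        self._docs = {}                # document name -> XML string
        self._lock = threading.Lock()  # one lock guards every operation

    def put_document(self, name, xml):
        with self._lock:
            self._docs[name] = xml

    def get_document(self, name):
        with self._lock:
            return self._docs.get(name)

container = SynchronizedContainer()

# Several writer threads hammer the container concurrently; the lock
# guarantees they never interleave inside an operation.
def writer(i):
    container.put_document("doc%d" % i, "<doc id='%d'/>" % i)

threads = [threading.Thread(target=writer, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The proposed forum solution would replace this coarse lock with finer-grained deadlock detection and retry, restoring concurrency.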
http://wiki.eclipse.org/index.php?title=SMILA/Documentation/Xml_Storage::Implementation::Berkley_XML_DB&diff=128311
One of the first Seam 3 modules to appear is the Seam Faces module, which provides additional functionality to JSF. While there aren't many pain points left in JSF, one of the biggest is the issue of data converters for entity objects. This article will take a look at how Seam Faces takes the pain out of writing JSF converters.

Download
Download the Seam Faces Demo Source for Maven

Typically, JSF converters come in two different types. The first converts strings to other strings, numbers or dates and vice-versa. Examples might be dates, formatted phone numbers or social security numbers. These converters are usually isolated and rely on internal logic defined in the converter. The second kind, which is far more problematic, involves using information outside the converter to perform the conversion. The simplest case is an entity lookup which is represented in the view layer by the primary key value and, upon posting back to the server, converted to the entity based on that id. These converters require some mechanism to get the entity from the database based on the id. JSF doesn't allow any kind of resource injection in converters, so it was always a problem. Seam 2 used an s:convertEntity tag just for the purpose of loading the entity from the database. With Seam 3, however, you'll be able to write your own converter using standard JSF rather than using specialized tags. The Seam 3 module provides features that augment the existing JSF feature set and provides, among other things, converters that can have beans injected into them as if they were CDI beans. With this, we can implement converters that take the id and load the entity from the database. Here's some quick demo code to demonstrate the process. The application uses the jee6-sandbox Knappsack archetype and requires either Glassfish or JBoss AS 6 to run. I created a new project and in the DataFactory.java class I added a producer method to return a list of teachers.
@Produces
@Named("teachers")
@ApplicationScoped
public List<Teacher> getTeachers() {
    List<Teacher> teachers = entityManager.createQuery(
            "select t from Teacher t")
            .getResultList();
    return teachers;
}

This makes an application-scoped list of teachers available in our application, where it will be used to provide a lookup list. Now we'll create a backing bean that has a teacher attribute that we want to set via the lookup list.

package org.knappsack.demo.seam.seamfaces.bean;

import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

import org.knappsack.demo.seam.seamfaces.model.Teacher;

@Named
@RequestScoped
public class PageBean {

    private Teacher selected;

    public Teacher getSelected() {
        return selected;
    }

    public void setSelected(Teacher selected) {
        this.selected = selected;
    }
}

Now we'll write our converter, which implements the javax.faces.convert.Converter interface.

@SessionScoped
@FacesConverter("convert.teacher")
public class TeacherConverter implements Converter, Serializable {

    @Inject
    @DataRepository
    private EntityManager entityManager;

    @Override
    public Object getAsObject(FacesContext context, UIComponent component, String value) {
        Long id = Long.valueOf(value);
        return entityManager.find(Teacher.class, id);
    }

    @Override
    public String getAsString(FacesContext context, UIComponent component, Object value) {
        return ((Teacher) value).getId().toString();
    }
}

The @FacesConverter annotation marks this class as a converter when it is processed by CDI and gives it a converter id for use in a JSF page. To convert the object to text, we just return the id of the object. To convert from text to an object (a Teacher instance) we load the instance from the database using the injected entity manager.
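The conversion contract itself (entity to key string, key string back to the same entity) is easy to see outside of JSF. A minimal Python sketch, with a plain dictionary standing in for the injected entity manager; names here are illustrative, not part of the Seam API:

```python
class TeacherConverter:
    """Sketch of the JSF Converter contract: get_as_string emits the
    primary key, get_as_object looks the entity back up by that key.
    The 'repository' dict stands in for an injected EntityManager."""

    def __init__(self, repository):
        self.repository = repository  # id -> entity

    def get_as_string(self, teacher):
        # Object -> text: render only the primary key into the page.
        return str(teacher["id"])

    def get_as_object(self, value):
        # Text -> object: look the entity up by its key on postback.
        return self.repository[int(value)]

teachers = {1: {"id": 1, "name": "Ms. Smith"},
            2: {"id": 2, "name": "Mr. Jones"}}
converter = TeacherConverter(teachers)

# Round trip: the string rendered into the page maps back to the entity.
rendered = converter.get_as_string(teachers[2])
```

The round trip get_as_object(get_as_string(t)) must return the original entity; that invariant is exactly what lets JSF post the selected Teacher back into the backing bean.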
To use this converter we just add this content to the home.xhtml page:

<ui:define name="content">
  <h:form>
    <h:outputText value="Selected teacher : #{pageBean.selected}" />
    <br />
    <h:selectOneListbox value="#{pageBean.selected}" converter="convert.teacher">
      <f:selectItems value="#{teachers}" />
    </h:selectOneListbox>
    <h:commandButton value="Update" />
  </h:form>
</ui:define>

This page presents a list of teachers in which you can select one, click the update button, and the selected teacher message at the top will change. The only additional code we have is the converter, which is a very simple class to write and can be re-used everywhere we have a teacher entity by just specifying the converter name. You could even create a generic entity converter to be used for all your entities. Injection is not just limited to entity managers; we can inject any kind of bean in there, so our options are limitless. One good use case is the injection and re-use of application-scoped lists of cached data instead of re-hitting the database all the time.

Download the Seam Faces Demo Source for Maven, unzip it and see the readme.txt file for deployment instructions.

It is very disappointing that JSF 2.0 doesn't support this out of the box. I'm having trouble getting Seam 3 working, probably because it is still alpha code. I'm looking into an alternative. Also, when I download the zip file for this project and try to build it with Maven (mvn clean package) I get the following:

[INFO] Failed to resolve artifact.
GroupId: org.jboss.spec
ArtifactId: jboss-javaee-6.0
Version: 1.0.0.Beta4
Reason: Unable to download the artifact from any repository
org.jboss.spec:jboss-javaee-6.0:pom:1.0.0.Beta4
from the specified remote repositories: central, oss.sonatype.org/jboss-snapshots

Ryan, yes, I agree, it is a shame that this isn't part of the spec. I can see why, though, to a degree: CDI is new and JSF shouldn't have a dependency on it. However, I'm fairly confident that this kind of thing will make it into the next version in some shape or form. Thanks for the heads up on the download problem.
One downside of Maven is that if you have the jar locally, it won't tell you if there is a problem. I'll take a look and get back to you. Cheers, Andy

Ok, I found the problem: in the pom.xml file, change the JBoss repository URL to point to the new location. I need to update the Knappsack archetypes to use that repository instead. I'll also fix it in the downloadable project source. Cheers, Andy Gibson

Andy, Thanks for looking into it. However, I grabbed your latest version and I still get the same error. I finally got it to build by upgrading from Maven 2.2.1 to Maven 3.0. Is it possible that this project requires Maven 3? I thought I saw on the Seam site that Seam requires Maven 3. Ryan

Hey Ryan, Umm, interesting; yes, Seam 3 does require Maven 3, but I wouldn't have thought that it would make my project dependent on Maven 3. I personally use Maven 3, but I'm wondering if some of the pom syntax in the Seam 3 pom requires Maven 3. I guess that might be the case. I'll have a look at it and see if that is the case. Cheers, Andy Gibson

Since this article is about making entity converters a breeze, it is probably worth mentioning in your article that you don't even have to specify a converter name in your facelets pages if you use the forClass attribute instead of the value attribute in the @FacesConverter annotation.
http://www.andygibson.net/blog/article/seam-faces-makes-jsf-conversion-a-breeze/
And then the dream came true, the moment arrived, someone made the announcement. Who was he? I do not remember. All I heard was that someone announced that Qasim Saidhi, Rally No. 102, won the D Category of the Jhal Magsi Desert Challenge 2009. Several clapped and patted my back. I know almost all participants by name, but who were near me at that moment, who patted my back and cheered, I can't remember. All I remember is that I grabbed the hand of my navigator Usman standing beside me and rushed onto the stage. I do not remember who handed me the trophy; while on stage I shook several hands. Who were they? I do not remember. Several took photographs. Who were they? I do not remember.

Usman was equally excited. He has been taking almost all the pictures that have been posted here, but Usman forgot to hand over the camera to anyone to take our photo; resultantly, we have none while receiving the trophy.

The trance broke when Ahsan Ijaz came forward and hugged me and Usman as tightly as he could. Usman and I screamed and danced. Almost all around us were happy for my first ever win. PMRC members hugged both of us for bringing in our only trophy this time. I do not remember how long this insanity went on, but it was one of the sweetest times of my life. I immediately left the ceremony to call my parents and wife and came back a few minutes later for this only photograph with the trophy.

Amongst several other participants/fellow rallyists, M. Ashraf also came to Team Jimny's camp to congratulate. You can rightly imagine that the party at our camp continued till late.

During this time, my mechanic Naveed took the opportunity to request one of the Magsi guards for this pic.

Excited but tired from the 20-hour journey to Lahore, Team Jimny decided to stay for the night in Jhal and leave for the journey next morning.
Other PMRC members, except Mujahid Zafar, who was to drive his BJ 60 to Bahawalpur the next day, left Jhal at 4:00 in the morning to catch the 9:00 am flight for Lahore.

21/12/2009: Early morning, while the Rally Village was being dismantled, we packed up and left. Even after the grueling 190 km of rally, the Jimny was in fine working order, so it was driven a further 230 km to Sukkur.

Picture of the Defender with empty trolley, and Usman trying to take his own.
Nadir Magsi's trophy-winning vehicle being loaded onto his trolley for transportation to KHI.
Qubo Saeed Khan, a small town 60 km from Jhal on the route to the Indus as well as the National Highway.
MCP vehicles being loaded onto their car carrier.
Crossing Shahdad Kot.
Tea break outside Shikarpur.
Entering Sukkur.
1:00 pm, reached Al-Habib Petroleum at Sukkur Bypass to deliver the Jimny to the waiting car carrier of PMRC for transporting its vehicles to Lahore.
Catching up on well-earned sleep in Mujahid Zafar's BJ.

After lunch at Taj Hotel Sukkur and a tea break at a driver hotel near Zahir Pir, Team Jimny on the Defender and Mujahid Sb on the BJ departed near BWP bypass. We later had dinner at a driver hotel in Kacha Khu and reached home/Lahore around 5:00 in the morning of 23/12/2009. The journey to Jhal is over but the sweet memories of this triumph will remain with me/us forever.

many congrats, excellent pics and commentary. If any property or person is recognizable, the photographer needs model releases signed before using/selling the pictures for any commercial purpose. Law exists, u just need to enforce it and ask for damages. or u can sit back and enjoy

Very nice coverage saidhi, really liked going through the thread.. What is CoG?

Copy of the results and a couple of videos that Usman made while navigating during the rally will either be uploaded tonight or tomorrow.

congrats Qasim bhai .... i didnt stop smiling at least 3 hours after your excited phone call informing me of your winning ....
I have missed an extraordinary event and an opportunity of a lifetime ... hope it comes again and at a more suitable time. what are the celebration plans ???? lets do a small weekend event and bar-b-q party ....

Saidhi Sb, congratulations :). I am one of the silent admirers of your 4x4 feats posted here on Pakwheels from time to time. I am really happy for you and wish you success in the future, insha Allah. Regards, Abu Jamal

My dear, I am in, as participant as well as host for the BBQ; just let me know the time and venue.
https://www.pakwheels.com/forums/t/jhal-magsi-rally-pics-test-run-dec-13th-09/129116?page=14
This version of Win32::GUI includes the modules Win32::GUI::AxWindow, Win32::GUI::DIBitmap, Win32::GUI::Grid, and Win32::GUI::Scintilla (originally by Laurent Rocher). Please uninstall any previous versions of these modules that you may have installed before installing this version of Win32::GUI.

<TAB> can now be used to move out of a multi-line Textfield when using the -dialogui option on a Window. A -wantreturn option has been added to stop the <RETURN> key firing the default Click event for a multi-line Textfield when using the -dialogui option on a Window. This replaces the previous use of -addstyle => ES_WANTRETURN.

A new script win32-gui-demos will be installed in your perl bin directory. You should be able to get a full list of the sample code distributed with Win32::GUI, view the source and run the demos by typing win32-gui-demos at your command prompt.

The Win32::GUI::Splitter implementation has been re-written to provide more robust operation. The splitter bar can no longer be moved outside the parent window, which used to result in drawing problems, and the bar itself is now more nicely drawn. (Tracker:1363141) The -background option now works for splitter windows.

The Win32::GUI::Tooltip implementation should be compatible with what was there before, but read the documentation to find out about all the new features you can use. The constructor has some new options (-nofade, -noanimate) and the -balloon option is documented. The -balloon option, along with the new SetTitle method, allows you to make use of balloon tooltips. The events (NeedText, Pop, Show) now have a second parameter allowing you to correctly determine if the first parameter is a window handle or a tool id.

This section documents features that have been deprecated in this release, or in recent releases, and features that will be deprecated in up-coming releases.
The introduction of Win32::GUI::Constants means that we now have access to a very large number of constants, so the current behaviour of Win32::GUI, which exports all constants to the calling namespace by default, is no longer appropriate. So, a bare use Win32::GUI; now generates a warning that the old default behaviour will be deprecated, although the export behaviour itself is unchanged in this release.

For at least the last 6 years the Win32::GUI namespace has been aliased to the GUI namespace for backwards compatibility with very early scripts. This aliasing has been removed, and any remaining scripts will need updating.
http://search.cpan.org/~robertmay/Win32-GUI/Win32-GUI-ReleaseNotes/RN_1_04.pod
The SHT31D is a temperature and humidity sensor with a built-in I2C interface. The sensor has a typical accuracy of +/- 2% relative humidity and +/- 0.3°C.

Hardware

The SHT31D breakout board from Adafruit is supplied with pull-up resistors installed on the SCL and SDA lines. The ADR line is tied low, giving an I2C address of 0x44. This address line can also be tied high, and in this case the I2C address is 0x45.

Purchasing

The SHT31D temperature and humidity sensor is available on a breakout board from Adafruit.

Software

using Microsoft.SPOT;
using Netduino.Foundation.Sensors.Atmospheric;
using System.Threading;

namespace SHT31DTest
{
    public class Program
    {
        public static void Main()
        {
            SHT31D sht31d = new SHT31D();
            Debug.Print("SHT31D Temperature / Humidity Test");
            while (true)
            {
                sht31d.Read();
                Debug.Print("Temperature: " + sht31d.Temperature.ToString("f2") +
                            ", Humidity: " + sht31d.Humidity.ToString("f2"));
                Thread.Sleep(1000);
            }
        }
    }
}

API

Constructor

SHT31D(byte address = 0x44, ushort speed = 100)

Create a new SHT31D temperature and humidity sensor object. The address defaults to 0x44 and the speed of the I2C bus to 100 kHz.

Properties

float Temperature
Last temperature reading made when the Read method was called.

float Humidity
Last humidity reading made when the Read method was called.

Methods

void Read()
The Read method forces a temperature and humidity reading from the SHT31D temperature and humidity sensor. The reading is made using high repeatability mode.
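For background, the physical values exposed by Temperature and Humidity are derived from the sensor's raw 16-bit words using the linear formulas in the Sensirion SHT3x datasheet. A sketch of those conversions (the formulas come from the datasheet, not from this library's source):

```python
def sht31_temperature_c(raw):
    """Convert a raw 16-bit temperature word to degrees Celsius.
    SHT3x datasheet: T = -45 + 175 * S_T / (2^16 - 1)."""
    return -45.0 + 175.0 * raw / 65535.0

def sht31_humidity_pct(raw):
    """Convert a raw 16-bit humidity word to percent relative humidity.
    SHT3x datasheet: RH = 100 * S_RH / (2^16 - 1)."""
    return 100.0 * raw / 65535.0
```

The full raw range thus maps temperature onto -45°C to 130°C and humidity onto 0% to 100%, comfortably covering the sensor's rated operating range.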
http://netduino.foundation/Library/Sensors/Atmospheric/SHT31D/
This class provides bitwise access to the byte data. More...

#include <CBitMemoryReadArchive.h>

This class provides bitwise access to the byte data.

Definition at line 21 of file CBitMemoryReadArchive.h.
Definition at line 24 of file CBitMemoryReadArchive.h.

Constructs a bit stream from a vector of bytes.

Returns true if the archive is in a valid state and the internal position cursor has not reached the end of the archive data. Reimplemented from iser::CMemoryReadArchive.

Process binary data block. Reimplemented from iser::CReadArchiveBase.

Process binary data block. Reimplemented from iser::CMemoryReadArchive.

Gets the value of the next bits in the stream.

Seeks the internal cursor to the beginning of the data. Reimplemented from iser::CMemoryReadArchive.

© 2007-2017 Witold Gantzke and Kirill Lepskiy
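The bitwise access this class provides can be illustrated with a small stand-alone sketch. This is not the actual CBitMemoryReadArchive implementation; the MSB-first bit order and the method names are assumptions made for illustration:

```python
class BitReader:
    """Minimal bit-stream reader over a byte buffer, reading bits
    most-significant-bit first. Illustrative only; the real
    CBitMemoryReadArchive API and bit order may differ."""

    def __init__(self, data):
        self.data = data
        self.pos = 0  # bit position from the start of the buffer

    def is_valid(self):
        # True while the internal cursor has not reached the end of data.
        return self.pos < 8 * len(self.data)

    def read_bits(self, count):
        # Gets the value of the next 'count' bits in the stream.
        value = 0
        for _ in range(count):
            byte = self.data[self.pos // 8]
            bit = (byte >> (7 - self.pos % 8)) & 1
            value = (value << 1) | bit
            self.pos += 1
        return value

    def reset(self):
        # Seeks the internal cursor back to the beginning of the data.
        self.pos = 0

reader = BitReader(bytes([0b10110100]))
first3 = reader.read_bits(3)  # the top three bits: 0b101
```

A reader like this underlies codecs and serializers that pack fields tighter than byte boundaries.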
http://ilena.org/TechnicalDocs/Acf/classiser_1_1_c_bit_memory_read_archive.html
Exuberant Ctags generates an index (or tag) file of source language objects in source files that allows these items to be quickly and easily located by a text editor or other utility. Alternatively, it can generate a cross reference file which lists, in human-readable form, information about the various objects found in a set of source code files. Supported languages include: Assembler, ASP, AWK, BETA, C, C++, C#, COBOL, Eiffel, Fortran, HTML, Java, Javascript, Lisp, Lua, Make, Pascal, Perl, PHP, PL/SQL, Python, REXX, Ruby, S-Lang, Scheme, Shell (Bourne/Korn/Z), Standard ML, Tcl, Vera, Verilog, Vim and Yacc.

To install the port: cd /usr/ports/devel/ctags/ && make install clean
To add the package: pkg install devel/ctags

No options to configure

Number of commits found: 40

Rename devel/ patch-xy patches to reflect the files they modify. Remove indefinite articles and trailing periods from COMMENT, plus minor COMMENT typos and surrounding whitespace fixes. Categories D-F. CR: D196 Approved by: portmgr (bapt)
- Stage support: Add NO_STAGE all over the place in preparation for the staging support (cat: devel part 1)
- Remove MAKE_JOBS_SAFE variable. Approved by: portmgr (bdrewery)
Change maintainer address to my FreeBSD.org mail address. Approved by: kwm (mentor)
Update maintainer email address. PR: ports/164211. Submitted by: Niclas Zeising <zeising@daemonic.se> (maintainer)
- remove MD5
- Pass maintainership to submitter. PR: ports/146574. Submitted by: Niclas Zeising <niclas.zeising@gmail.com>
- Reset some more
- Update to 5.8
- Mark most of my ports MAKE_JOBS_SAFE=yes. Tested by: several builds in P6 TB
- Use SUB_FILES for pkg-message
- Use SF Macro
- Adopt
Reset maintainership. Upgrade to v5.7. Upstream changes include:
o Changes to language parsing:
+ Basic Support for "DIM AS" statements, BlitzBasic, PureBasic and FreeBasic.
+ C support for forward variable declarations.
+ C# support for verbatim string literals, multiple namespace declarations.
+ C++ support for non-extern, non-static functions returning wchar_t, optional tags for forward variable declarations.
+ Eiffel support for the convert keyword.
+ Java support for enums.
+ Perl support for the 'package' keyword, for multi-line subroutine, package and constant definitions, for optional subroutine declarations and formats. Comments mixed into definitions and declarations are now ignored.
+ PHP support for interfaces and static/public/protected/private functions.
+ Python support for arbitrary nesting depth.
o Support added for numerous revision control systems.
o Many bug fixes.
- Upgrade to v5.6. Upstream changes include:
+ Support for ASP constants.
+ Support for GNU Make extensions.
+ ".mk" is now recognized as a Make language file.
+ Extensions to the Ruby language parser.
+ Many bug fixes.
- Use the --with-readlib configuration option instead of homebrewing our own install process. This change installs 'readtags.[oh]' under ${PREFIX}/lib and ${PREFIX}/include respectively.
- Update the list of supported languages in the port's description.
Add SHA256 checksums.
Prefer MASTER_SITE_SOURCEFORGE to MASTER_SITE_LOCAL. Prodded by: fenner@'s distfile checking script.
Make this port 'portupgrade' friendly. Noticed by: Andreas Kasparz <andy@interface-projects.de>
Add an install time message pointing users to this port's installed executable. Requested by: Marc Rene Arns <privat@marcrenearns.de>
Upgrade to v5.5.4.
Upgrade to v5.5.3.
Upgrade to v5.5.2.
Upgrade to v5.5.1.
Upgrade to v5.5.
Clear moonlight beckons. Requiem mors pacem pkg-comment, And be calm ports tree. E Nomini Patri, E Fili, E Spiritu Sancti.
Upgrade to v5.4.
Upgrade to v5.3.
Upgrade to v5.2.3.
Upgrade to v5.2.1.
Update to v5.2.
Update to v5.1.
Update MASTER_SITE to use MASTER_SITE_SOURCEFORGE.
Remove MASTER_SITE_SUNSITE as the current exctags distribution does not seem to be present on the few Sunsite mirrors that I checked.
Update to v5.0.1.
Upgrade to v5.0.
Upgrade to v4.0.3.
Convert category devel to new layout.
Upgrade to v4.0.2.
Move the stragler's distfiles to the offical MASTER_SITE_LOCAL site.
Actually look for ctags in a subdirectory in MASTER_SITE_SUNSITE. This MASTER_SITE has been broken for 1 year and 1 day, ever since rev 1.3.
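As the port description above explains, ctags emits a plain, sorted, tab-separated index file. A toy Python sketch of producing lines in that general shape (tagname, file, search pattern) for C-style function definitions; the naive regex below is a simplified illustration, not ctags' real multi-language parser:

```python
import re

def make_tags(filename, source):
    """Emit tag-file style lines (name<TAB>file<TAB>/^pattern$/) for
    C-style function definitions, sorted by tag name. A toy regex
    stands in for ctags' real parsers."""
    entries = []
    for line in source.splitlines():
        # Match lines like "int add(int a, int b) {" and capture the
        # identifier immediately before the parameter list.
        m = re.match(r'\s*\w[\w\s\*]*?(\w+)\s*\([^;]*\)\s*\{?\s*$', line)
        if m:
            entries.append("%s\t%s\t/^%s$/" % (m.group(1), filename, line))
    return sorted(entries)

src = "int add(int a, int b) {\nreturn a + b;\n}\nvoid greet(void) {\n}\n"
tags = make_tags("demo.c", src)
```

Editors then binary-search the sorted file by tag name and jump to the stored pattern, which is what makes lookup fast even in large trees.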
http://www.freshports.org/devel/ctags/
Frank Kieviet
How to fix the dreaded "java.lang.OutOfMemoryError: PermGen space" exception (classloader leaks)

Clicking on the classloader link brings up the following screen. Scrolling down, I see Reference Chains from Rootset / Exclude weak refs. Clicking on this link invokes the code that I modified; the following screen comes up. [screenshot]

Posted at 04:10PM Oct 19, 2006 by Frank Kieviet in Sun | Permalink

Posted by Matthias on October 20, 2006 at 06:30 AM PDT #
Posted by Kelly O'Hair on October 20, 2006 at 09:40 AM PDT #
Posted by Frank Kieviet on October 20, 2006 at 11:47 AM PDT #
Posted by Nick Stephen's blog on October 31, 2006 at 02:45 AM PST #
Posted by Ortwin Escher on November 03, 2006 at 05:10 AM PST #
Posted by Mickael on November 14, 2006 at 08:02 AM PST #

Hi Mickael, Can you send me an email (frank dot kieviet at sun dot com)? I don't know your email address. The email you may have typed in the comments box is invisible to me: it's only known to the system so that it can send you an update if this thread is updated. Frank
Posted by Frank Kieviet on November 14, 2006 at 09:48 AM PST #
Posted by Frank Kieviet on November 15, 2006 at 10:31 PM PST #
Posted by Sebastien Chausson on November 17, 2006 at 03:38 AM PST #

Hi Sebastien, In the case of the problem with the Level class, you could change your application code so that it does not use a new Level subclass. It's a workaround for a problem in code that you have no control over (i.e. the Level class), and as such it fixes your problem. That's a common approach: often you cannot fix the problem properly (e.g. no control over the code, proper fix is too laborious/expensive) so you have to find a workaround. Frank
Posted by Frank Kieviet on November 17, 2006 at 04:21 PM PST #
Posted by Matthias on December 02, 2006 at 02:54 AM PST #

> The fix has made it into JDK 7b3. Thanks!
Excellent! No more need for a patched jhat!
Frank Posted by Frank Kieviet on December 04, 2006 at 10:10 AM PST # Detlef Posted by Detlef Kraska on December 14, 2006 at 04:01 AM PST # Hi Detlef, Working with EJBs or Servlets shouldn't make a difference. Did you contact our support department about this? I'm interested in what references you found. Can you send me an email? (frank dot kieviet at sun dot com). Frank Posted by Frank Kieviet on December 14, 2006 at 04:34 PM PST # Posted by Chris James on May 24, 2007 at 07:15 AM PDT # Frank, I am trying to debug PermGen space OOM error and to get memory dump I added -Xrunhprof:heap=all,format=b,depth=4,file=data18 on my weblogic java options and later undeployed my application and triggered dump using ctrl+break on my weblogic console (on windows XP), weblogic is terminating the dump by throwing ClassNotFoundException, I increased the depth to 70 now I got the big dump file but Jhat on java1.6 is not showing me the classloader info. I am getting following on prompt Snapshot read, resolving... Resolving 0 objects... WARNING: hprof file does not include java.lang.Class! WARNING: hprof file does not include java.lang.String! WARNING: hprof file does not include java.lang.ClassLoader! Chasing references, expect 0 dots Appreciate if you could point out the missing step. Thanks Kumar Posted by Kumar Kartikeya on September 18, 2007 at 06:07 PM PDT # Re Kumar: I don't think you need to specify heap=all; I think the stacks are not dumped if you choose heap=dump which will make the dump a lot smaller. On which VM are you running? I have only tested the Sun VM so far. If you're running on a different one, perhaps hprof doesn't work as advertised on that VM. You could also try to specify to dump when an OOM occurs ( -XX:+HeapDumpOnOutOfMemoryError) in which case you don't need to specify hprof on the command line. Lastly, did you try to use jmap? 
Frank Posted by Frank Kieviet on September 24, 2007 at 11:20 PM PDT # Frank, Thanks for the response, I am using Java 5, so could not use -XX:+HeapDumpOnOutOfMemoryError or jmap directly also looks like weblogic does not support Java 6, so I ported my application on Java 6 and used tomcat to produce dump, and after running jhat, I found that two of my classes (generated classes using wsdl2java) are held by classloader that also loads Apache Axis classes, particularly XMLUtils.java, I looked at source code and found that It is using ThreadLocal and looks like reference in thread local is leaking, here I might be wrong. I am using axis 1.4 but Apache Axis2 also has same issue where if you redeploy the any sample application on any server (tomcat or weblogic) perm gen space keep on increasing. Are you or anyone aware of this issue with Apache Axis? Thanks for your time. Kumar Posted by Kumar Kartikeya on September 27, 2007 at 11:00 AM PDT # The "exhaust PermGen space to force GC" technique works very nicely - except if you have the hprof agent loaded, which seems to be stopping classes from unloading. Posted by Max Bowsher on October 27, 2007 at 05:25 PM PDT # Are you available to help identify my memory leaks on my webserver? Posted by Del Rundell on November 06, 2007 at 09:53 AM PST # Hi, I think this article is very interesting. I'm working with Oracle App Server 10.1.3.1.0 with Solaris SPARC 10 and this is not certified by working with JDK6 I would apreciate so much if you could send me the modified jhat version. Anyway, I'm trying to download JDK6 for solaris 10 (SPARC). Im thinking by replacing jhat included in jdk6 in my current jdk5 installation, I think the probability of failures is so high. If you can help me I'd really apreciate it. I've been working in this issue by 2 or 3 weeks and this would help me so much. Thank's. 
Posted by Hugo Martinez on November 14, 2007 at 11:22 AM PST # Thanks so much for putting this together, it helped me identify two problems with my app, and one with a third party (Axis). Unfortunately no fix for the Axis issue, but at least there's a bug for it in that project. Problem is there for Axis 1.4 and 2. My app issues were threads that run in infinite loops, that were not interrupted when the app undeployed. Still need to find how to setup a listener to determine when the app is being undeployed and interrupt those threads. Posted by Alex Quezada on December 05, 2007 at 11:22 AM PST # I am interested to know how to fix the problem in Leak class. Thanks! Posted by bob on March 11, 2008 at 04:47 AM PDT # This is a good article. But what really is the overall solution?? Without the existance of this article or just the average java developer pursuing through the JDK source, how in the heck would anyone know about this Level class issue? Secondly what other hidden gems like this exist out there in the JDK or the multitude of other java libraries out there? auughh!! Posted by borfnorton on March 18, 2008 at 05:43 PM PDT # I read both of your postings and I am still having trouble figuring out how to find the perm gen leak and how to fix it. My web application has many classes left in memory after the undeploy. This includes many t3rd party jars. When I go into the dump using jhat, I follow your instructions. One of the last things you say is "And there's the link to java.util.Logging.Level that we've been looking for!" What if I don't know what to look for? The last step in your process is: "7. inspect the chains, locate the accidental reference, and fix the code" How do I know which reference is the accidental reference? Posted by Richard on March 27, 2008 at 05:59 AM PDT # Re Richard: The idea is that there should be no links whatsoever from any of the undeployed classes to a root object. 
So, when looking at the from one of your application classes, there are still references after undeployment, these are leaks. In the case of the Level class, I wasn't looking for the Level class, but for any remaining references. The Level class was holding a reference and hence was a leak. To figure out which one is the accidental reference will require some work and insight into the code. For each of the links in the reference chain you would have to look at the source code and try to judge if that reference constitutes a memory leak. Frank Posted by Frank Kieviet on March 27, 2008 at 03:38 PM PDT # I too am looking into web app memory leaks and the use of Enums. using jmap and jhat, I am seeing my servlet class still in permgen space, after I had undeployed it. However, no rootset reference appears to point outside of my webapp class loader, unlike the mentioned example with the Logger. maybe i am using jmap/jhat incorrectly? Apologies for the vagueness of the post, I am trying to figure out if i really do have a leak or not. Posted by Stuart Maclean on April 01, 2008 at 01:05 PM PDT # Just to follow up, I have a simple servlet class S, loaded on startup of the webapp. An enum E is declared in S, and used in S.init. I am perplexed as why I see mention of java.lang.reflect.Field and org.apache.catalina.loader.ResourceEntry in the list of 'references to this object' for the class object for E. Is the use of enum somehow requiring some reflection? I have looked at the 1.6 source of Enum.java and don't see any static list storage issues like that of the logging example. Stu Posted by Stuart Maclean on April 01, 2008 at 01:16 PM PDT # Great paper ! I really enjoyed figuring out these tricky points. But btw, in the 7th point of your summary, you explained "inspect the chains, locate the accidental reference, and fix the code". How the hell could we fix such a problem, as it seems that the "non freed" reference comes from outside of 'our' code ? 
Of course we could start changing the Level implementation, but I feel uncomfortable with that ;) ? Thanks for any useful information Posted by Dual Action Cleanse on April 26, 2008 at 11:05 PM PDT # Re Stu: I haven't looked into this particular problem with Tomcat myself, but we did run into an issue where there is a bean utility in Apache that caches the accessors of Java beans. Since this cache is in the system classloader, it's a source of classloader leaks. Fortunately there's something like a flush() method on this class, so you could potentially try to call this after undeploying. I'm not aware of anything special about enums. Frank Posted by Frank Kieviet on May 02, 2008 at 09:47 AM PDT # Re Dual Action Cleanse: Indeed, if you don't own the code you do have a problem. You could raise the issue with the owner of the code, try to fix it yourself, or try to find a workaround. In case of the Level class, Sun JDK team is aware of the problem and it will hopefully be fixed in a future release; until then I'd simply recommend not using custom log levels. Frank Posted by Frank Kieviet on May 02, 2008 at 09:51 AM PDT # Thanks for your blog. IMHO I found an issue in the JSF Reference Implementation following your instructions: Unfortunatly this is not the only class leak we experience, have to hunt down the other ones. Posted by Olaf Flebbe on June 11, 2008 at 06:27 AM PDT # Frank,thanks for your insights. but this memory leak maybe happen not beacuse of AppClassLoader. LeakServlet1&1's classloader is not AppClassLoader(webApp classload),it is the AppClassLoader's parent. So LeakServlet1&1(class) not refer to AppClassLoader. it means AppClassLoader can be GCed. I has investigated this on Tomcat 6.0.13(java1.5) AppClassLoader is GCed Posted by fangsoft on June 12, 2008 at 01:32 AM PDT # Re fangsoft: I'm confused with what your statement that the servlet's classloader is different from the web classloader. How can that be? 
Frank

Posted by Frank Kieviet on June 12, 2008 at 02:33 PM PDT #

Frank, the servlet's classloader is the same as the web classloader, but LeakServlet1$1's classloader is not the web classloader but its parent, according to the web specification. So LeakServlet1$1 may not reference the web classloader.

Posted by fangsoft on June 13, 2008 at 06:05 PM PDT #

Frank, I think the application server used by your demo is implemented poorly. It can prevent the AppClassLoader from referencing LeakServlet.class. Please refer to org.apache.catalina.loader.WebappClassLoader in Tomcat 6.0.14. It has a stop method to release loaded classes; it is a robust webapp classloader implementation.

Posted by fangsoft on June 16, 2008 at 02:35 AM PDT #

Hi there! Really nice post and quite helpful. However, we're using Axis2 and a lot of references are shown. I think there's a general problem with Axis (1 & 2) as proved by Kumar and Alex. Hope to see improvement really soon! :)

Posted by Alejandro Andres on June 24, 2008 at 05:00 AM PDT #

I mentioned earlier that my Tomcat was dying with Permgen errors, and that my web app used enums. Well, I recently ran javap on a very simple class file, generated from enum E { E1 }; E becomes a class E, with superclass java.lang.Enum (JDK 5 or 6, I can't recall now). Further, the static init of class E builds E1, using a private constructor. This calls super. Now, I do not have the src of Enum.java to hand, but I wonder if somehow java.lang.Enum is keeping a reference to E1, thus exhibiting the problem highlighted in this blog.

Posted by stuart maclean on August 07, 2008 at 09:33 PM PDT #

OK, so now I am hunting in the code of java.lang.Enum. I notice that the method valueOf (which my web app DID use) references the Class object for my enum E, calling enumConstantDirectory. In Class.java, a member variable enumConstants is built and populated, using reflection, making a call to the static E.values (built by javac) and storing the result array, which in my case was just E1.
So now I have java.lang.Class holding on to an instance of my enum E, surely a class loader leak. Of course this has nothing to do with Tomcat per se; any Java app using Enum.valueOf would have this leak, though of course it only arises on web app reloads. Or maybe this is all a red herring and the Enum/Class issue is solved ;)

Posted by stuart maclean on August 07, 2008 at 10:07 PM PDT #

I think the key point here is that it's not java.lang.Class (statically) that's holding on to an instance of your enum - rather it's _an instance of_ java.lang.Class. I think this means that the scenario outlined in this article doesn't apply, and there is no leak.

Posted by Max Bowsher on August 08, 2008 at 07:28 AM PDT #

Hi Max, I think you are right. It was late and I didn't think it all through. I still have a hunch that my app is failing due to enum gremlins though ;)

Posted by Stuart Maclean on August 08, 2008 at 10:04 AM PDT #

Hold on though: the diagram above shows an instance of LeakServlet with a reference to its class. So if all objects point to their own class object - and surely they have to, else how does O.getClass() work? - then my argument of a chain from some system CL to my enum does exist?

Posted by Stuart Maclean on August 08, 2008 at 10:21 AM PDT #

Well, I tested this idea using Tomcat 6 on Sun's JDK 1.6. I have a single servlet, set to load on startup. Its init method does this:

    E e2 = Enum.valueOf( E.class, "E1" );
    System.out.println( e2 );

I then undeploy the app using Tomcat's manager app html page. I then use jmap to grab a heap dump and run jhat on that. I see both my servlet class and my enum listed. Yet if I go through the steps above, tracing from Rootsets, I cannot see anything awry. Hence I see my classes held onto, but I (still!) cannot figure out why ;(

Posted by s on August 08, 2008 at 10:42 AM PDT #

Kumar/Fangsoft, I am getting into the issue you guys described here. I am using the apache axis2 web archive on jboss 4.2.2.GA. JDK version is 1.5.0_14.
Our server is throwing permgen errors after a few days of running. Is there any fix on axis2? I am using axis2 1.3. Appreciate your help.

Posted by sai on August 24, 2008 at 02:04 PM PDT #

Posted by 72.163.216.217 on September 10, 2008 at 10:07 PM PDT #

I'm using the build 10.0-b23; do you know if jhat is fixed in that version? emerson

Posted by Emerson Cargnin on September 19, 2008 at 11:37 AM PDT #

Hi 22, 2008 at 05:58 AM PDT #

I've tried to reproduce your example of classloader leaks but it seems not to happen. My LeakClass looks like this:

    import java.util.*;

    public class LeakClass {
        private final static ArrayList objectList;
        static {
            objectList = new ArrayList();
        }
        public LeakClass() {
            super();
            synchronized (LeakClass.class) {
                objectList.add(this);
            }
        }
        public final static String s0 = "000...";
        public final static String s1 = "111...";
        public final static String s2 = "222...";
        ...
        public final static String sA = "AAA...";
        public final static String sB = "BBB...";
        public final static String sC = "CCC...";
        ...
        public final static String sa = "aaa...";
        public final static String sb = "bbb...";
        public final static String sc = "ccc...";
        ...
        public final static String sz = "zzz...";
    }

Here the static Strings are to make my LeakClass "fat". The length of each String is 64K - 16 characters (i.e. 65520 characters). Here is my TestServlet:

    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class TestServlet extends HttpServlet {
        private final static LeakClass leakClass = new LeakClass() {};

        public TestServlet() {
            super();
        }

        protected void doGet( HttpServletRequest request, HttpServletResponse response )
                throws ServletException, IOException {
            response.setContentType("text/html;charset=UTF-8");
            PrintWriter printWriter = response.getWriter();
            printWriter.println("<html>");
            printWriter.println("  <head>");
            printWriter.println("    <title>Test Servlet</title>");
            printWriter.println("  </head>");
            printWriter.println("");
            printWriter.println("  <body>");
            printWriter.println("    <h1>Test Servlet</h1>");
            printWriter.println("  </body>");
            printWriter.println("</html>");
            printWriter.println("");
            printWriter.close();
        }
    }

After compilation I've got LeakClass.class, TestServlet.class, TestServlet$1.class. I've packaged them into a war. I use Sun JDK 1.6.0_07 and SJSAS 9.1 update 2. I've added these JVM options to my application server configuration:

    -Dcom.sun.enterprise.server.ss.ASQuickStartup=false
    -XX:+CMSClassUnloadingEnabled
    -XX:+CMSPermGenSweepingEnabled

I've set the PermGen size to 70 MB: -XX:MaxPermSize=70m

When doing experiments I've been looking at the Memory tab with the PermGen chart of jconsole connected to my application server instance. The experiments and the results are as follows:

1. Deploy application - OK
2. Request the TestServlet via web browser - OK (PermGen is nearly exhausted)
3. Change the static Strings in the LeakClass (replace, say, the first symbol of each String with "="), recompile sources, repackage and redeploy the war - OK
4. Request the TestServlet via web browser - error: 500, exception: javax.servlet.ServletException: PWC1392: Error instantiating servlet class TestServlet, root cause: java.lang.OutOfMemoryError: PermGen space
5.
Change the static Strings in the LeakClass (replace, say, the third symbol of each String with "="), recompile sources, repackage and redeploy the war - OK
6. Request the TestServlet via web browser - OK! (GC collects something - it is quite clear from the chart - then PermGen is occupied by the new classes and is nearly exhausted)

You can repeat steps 3, 4, 5, 6 and get the same results all the time. The PermGen chart in jconsole will look like this (here the numbers refer to the steps described above):

    2__4__6__4__6__4__6...
    /---------V--------V---------V

So if there were a real leak in PermGen, then it would be impossible to successfully deploy something after the "PermGen space" error, or to successfully access the newly deployed application. Besides, I have a question about the class diagram of your example. Why is there no reference from the AppClassloader to the Level class? Who loads the Level class into memory? When I looked at my dumps I found that there are references to my Leak class from the classloader that also references my TestServlet and TestServlet$1 classes!

Posted by John Headlong on September 22, 2008 at 10:16 AM PDT #

After reading this post I have to admit that I will never be able to get rid of this PermGen issue. But I wonder if there is any real life scenario where it can create problems in a production environment. As I see it, this problem happens only during redeployment, and that is done much less frequently in production than during development. In most cases I can restart the server in production for redeployment.

Posted by Alexey on September 24, 2008 at 03:50 AM PDT #

Hi, I have this exception in a Java application without using any servers. I am using the NetBeans IDE; please suggest solutions for the NetBeans IDE. Thanks, bharath

Posted by Bharath on January 16, 2009 at 06:35 AM PST #

hi

Posted by 125.17.25.8 on March 11, 2009 at 05:14 AM PDT #

This can definitely happen without an application server, and without redeployment.
There is only so much space, and once that's exhausted you get this error.

Posted by Greg Bishop on May 28, 2009 at 05:37 AM PDT #

Thanks. When we added HAT (as jhat) to JDK 6 we knew it had some problems, but it was such a valuable tool in conjunction with jmap that we felt we needed to include it anyway. Sundar has done a fantastic job fixing some of the problems, and we definitely will accept this fix too.

Posted by club penguin on June 06, 2009 at 06:31 PM PDT #

Hi! We are running into the PermGen OOM problem with BEA Weblogic 10. BUT: we got this problem in production, where there is no redeployment at all! Does anybody know if Weblogic does something like redeployment automatically, maybe to pick up the latest application files? We already increased the PermGen size up to 384MB, but some time (up to some weeks) after heavy load, the application runs into OOM PermGen. The server runs in production mode with nostage. I also tried to find the class responsible for this class leak, but jhat shows hundreds of static hard references to the class loader, mostly 3rd party. But anyway, without redeploy this problem shouldn't occur?

Posted by Hans Auer on June 12, 2009 at 05:31 AM PDT #

Re Hans: I think it's unlikely that WebLogic redeploys classes. Something you could look at:
- Does the number of classes actually increase when you look with JHat?
- String.intern() also uses perm gen memory. Perhaps there's an application that does this a lot and keeps references to these interned strings so that they cannot be GC-ed?
- Are you running the latest version of the JRE with the latest patches? Ditto for the OS?

HTH, Frank

Posted by Frank Kieviet on June 15, 2009 at 12:25 PM PDT #

Thank you for your good information. It has helped me, but I still have an issue that I have not been able to solve: my ClassLoader is sometimes not being garbage collected, eventually resulting in the OOM PermGen. When looking at it with jhat it was showing 100s of static references.
Each one I looked at was a Class loaded by this ClassLoader. I finally modified jhat so that, when finding the chains to the rootset for a ClassLoader, it does not show the static references from Classes loaded by that same ClassLoader. My list then went empty, suggesting to me that it was ready to be garbage collected. (I did verify that it was properly showing static references in other cases.) Then, opening the hprof in the NetBeans heap walker, it showed that the GC root for the ClassLoader was Finalizer.unfinalized. I thought this also meant it could be garbage collected? At this point I don't know where to look, and am wondering if you have any further suggestions or if you notice some failed logic in my testing. Thank you!

Posted by Clint Bennion on July 03, 2009 at 05:01 PM PDT #
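The custom-Level leak that recurs throughout this thread follows one general shape: a class that lives in the system classloader keeps a static registry of instances, and any instance created from webapp code pins the webapp's classloader through the instance's reference to its own class. A minimal, self-contained sketch of the pattern (the Registry and WebappThing names are hypothetical stand-ins, not the actual java.util.logging code, which keeps a similar static list of known levels):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a class loaded by the system classloader that keeps a
// static registry of instances (as java.util.logging.Level does).
class Registry {
    static final List<Object> KNOWN = new ArrayList<Object>();
    static void register(Object o) { KNOWN.add(o); }  // entries are never removed
}

// Stand-in for a class loaded by the webapp classloader,
// e.g. a custom Level subclass.
class WebappThing {
    WebappThing() { Registry.register(this); }
}

public class LeakDemo {
    public static void main(String[] args) {
        new WebappThing();
        // The registry now pins the instance; via
        // instance -> class -> classloader, the webapp's classloader
        // (and every class it loaded) can never be collected.
        System.out.println(Registry.KNOWN.size());  // 1
    }
}
```

This is exactly the chain jhat shows: a rootset reference through the static field, then through the instance to its Class object, and from there to the whole classloader.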
http://blogs.sun.com/fkieviet/entry/how_to_fix_the_dreaded
calloc

Allocates memory for an array

    #include <stdlib.h>
    void *calloc( size_t n, size_t size );

The calloc( ) function obtains a block of memory from the operating system that is large enough to hold an array of n elements of size size. If successful, calloc( ) returns a void pointer to the beginning of the memory block obtained. void pointers are converted automatically to another pointer type on assignment, so you do not need to use an explicit cast, although you may want to do so for the sake of clarity. If no memory block of the requested size is available, the function returns a null pointer. Unlike malloc( ), calloc( ) initializes every byte of the allocated block with the value 0.

Example

    size_t n;
    int *p;
    printf("\nHow many integers do you want to enter? ");
    scanf("%u", &n);
    p = (int *)calloc(n, sizeof(int)); /* Allocate some memory */
    if (p == NULL)
        printf("\nInsufficient memory.");
    else
        /* read integers into array elements ... */

See also

malloc( ), realloc( ), free( ), memset( )
http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-22.html
README for Karnickel - AST Macros for Python

"it's no ordinary rabbit..."

What is it?

Karnickel is a small library that allows you to use macros (similar to those found in Lisp) in Python code. In a nutshell, macros allow you to insert code (the macro definition) at a different point in the code (the macro call). It is different from calling functions in that the code is inserted before it is even compiled. ("Karnickel" is German for "rabbit", and there's a vicious killer rabbit in "Monty Python and the Holy Grail" that is best left alone...)

Using

Use Python 2.6+. You can put macros in any module. Macro definitions are Python functions, like this:

    from karnickel import macro

    @macro
    def macroname(arg1, arg2):
        ... macro contents ...

Optional arguments are not supported. If the contents are a single expression (no return), the macro is an expression macro. Otherwise, it is a block macro. If it contains a statement consisting of only __body__, it is a block macro with body.

For using the macros, you must install the import hook:

    import karnickel
    karnickel.install_hook()

Then, you can import modules that use macros like this:

    from module.__macros__ import macro1, macro2

That is, append .__macros__ to the name of the module that contains the macros. Only from-imports are supported.

Usage depends on the macro type:

- Expression macros can be used everywhere as expressions. Arguments are put into the places of macro arguments.
- Block macros without body can only be used as an expression statement -- i.e.:

    macroname(arg1, arg2)

- Block macros with body must be used with a with statement:

    with macroname(arg1, arg2):
        body

  Arguments are put into the places of macro arguments, and the body is put into the place of __body__ in the macro definition.

Proper docs may follow as soon as I can find a decent documentation tool.

Why?

Why not?
Seriously, this is a demonstration of what you can do with the Python AST, especially the standard ast module, and import hooks. Besides, it's been fun.

Installing

Use setup.py:

    sudo python setup.py install
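The core trick the README describes - rewriting the AST so that code is inserted before it is even compiled - can be demonstrated with nothing but the standard ast module. This sketch is not Karnickel's actual implementation; it expands a hypothetical expression macro twice(x) into (x + x) at compile time:

```python
import ast

class ExpandTwice(ast.NodeTransformer):
    """Replace calls to the 'macro' twice(arg) with (arg + arg) in the AST."""
    def visit_Call(self, node):
        self.generic_visit(node)  # expand macro calls inside the arguments first
        if isinstance(node.func, ast.Name) and node.func.id == "twice":
            arg = node.args[0]
            return ast.BinOp(left=arg, op=ast.Add(), right=arg)
        return node

def expand_and_run(src):
    """Parse, expand the macro in the AST, then compile and evaluate."""
    tree = ast.parse(src, mode="eval")
    tree = ast.fix_missing_locations(ExpandTwice().visit(tree))
    return eval(compile(tree, "<macro>", "eval"))

print(expand_and_run("twice(21)"))    # 42 - no function named twice ever exists
print(expand_and_run("twice('ab')"))  # abab
```

Karnickel's import hook does the same kind of transformation on whole modules as they are imported, which is why the macro call disappears before the bytecode is ever generated.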
https://bitbucket.org/birkenfeld/karnickel
The QAbstractItemView class provides the basic functionality for item view classes. More...

#include <QAbstractItemView>

Inherits QAbstractScrollArea. Inherited by QHeaderView, QListView, QTableView, and QTreeView.

The view keeps track of model changes through slots such as dataChanged(), rowsInserted(), rowsAboutToBeRemoved(), columnsInserted(), and columnsRemoved(); scrolling is configured with setHorizontalFactor() and setVerticalFactor(). Several other functions are concerned with selection control; for example setSelectionMode() and setSelectionBehavior(). This class provides a default selection model to work with (selectionModel()), but this can be replaced by using setSelectionModel() with an instance of QItemSelectionModel.

When implementing a view that will have scrollbars, you want to reimplement resizeEvent() to set the scrollbars' range so they will turn on and off; for example: horizontalScrollBar()->setRange(0, realWidth - width());

See also Model/View Programming and QAbstractItemModel.

This enum describes the different ways to navigate between items. See also moveCursor().

This enum describes actions which will initiate item editing. The EditTriggers type is a typedef for QFlags<EditTrigger>. It stores an OR combination of EditTrigger values.

This enum indicates how the view responds to user selections. In other words, SingleSelection is a real single-selection list view, MultiSelection a real multi-selection list view, ExtendedSelection is a list view in which users can select multiple items, but usually want to select either just one or a range of contiguous items, and NoSelection is a list view where the user can navigate without selecting items.

Describes the different states the view can be in. This is usually only interesting when reimplementing your own view.

This property holds whether autoscrolling in drag move events is enabled. If this property is set to true (the default), the QAbstractItemView automatically scrolls the contents of the view if the user drags within 16 pixels of the viewport edge. This only works if the viewport accepts drops.
Autoscroll is switched off by setting this property to false. Access functions:

This property holds whether the view supports dragging of its own items. Access functions:

This property holds which actions will initiate item editing. This property is a selection of flags defined by EditTrigger, combined using the OR operator. The view will only initiate the editing of an item if the action performed is set in this property. Access functions:

This property holds the size of items. Setting this property when the view is visible will cause the items to be laid out again. Access functions:

This property holds which selection behavior the view uses, that is, whether selections are done in terms of single items, rows or columns. Access functions: See also SelectionMode and SelectionBehavior.

This property holds which selection mode the view operates in. This property controls whether the user can select one or many items and, in many-item selections, whether the selection must be a continuous range of items. Access functions: See also SelectionMode and SelectionBehavior.

This property holds whether the drop indicator is shown when dragging items and dropping. Access functions:

This property holds whether item navigation with tab and backtab is enabled. Access functions:

This property holds the position of the "..." in elided text. Access functions:

Constructs an abstract item view with the given parent.

Destroys the view.

This signal is emitted when the item specified by index is activated by the user (e.g., by single- or double-clicking the item, depending on the platform). See also clicked(), doubleClicked(), entered(), and pressed().

Clears all selected items.

This signal is emitted when a mouse button is clicked. The item the mouse was clicked on is specified by index (which may be invalid if the mouse was not clicked on an item). See also activated(), doubleClicked(), entered(), and pressed().

Closes the given editor, and releases it.
The hint is used to specify how the view should respond to the end of the editing operation. For example, the hint may indicate that the next item in the view should be opened for editing. See also edit().

Closes the persistent editor for the item at the given index.

Commit the data in the editor to the model. See also closeEditor().

This slot is called when a new item becomes the current item. The previous current item is specified by the previous index, and the new item by the current index. If you want to know about changes to items see the dataChanged() signal.

Returns the model index of the current item. See also setCurrentIndex().

This slot is called when items are changed in the model. The changed items are those from topLeft to bottomRight inclusive. If just one item is changed, topLeft == bottomRight.

Returns the offset of the dirty regions in the view. If you use scrollDirtyRegion() and implement a paintEvent() in a subclass of QAbstractItemView, you should translate the area given by the paint event with the offset returned from this function. See also scrollDirtyRegion().

This signal is emitted when a mouse button is double-clicked. The item the mouse was double-clicked on is specified by index (which may be invalid if the mouse was not double-clicked on an item).

This function is called with the given event when a drag and drop operation enters the widget. If the drag is over a valid dropping place (e.g. over an item that accepts drops), the event is accepted; otherwise it is ignored. Reimplemented from QWidget. See also dropEvent() and startDrag().

This function is called when the item being dragged leaves the view. The event describes the state of the drag and drop operation. Reimplemented from QWidget.

Reimplemented from QWidget. See also dropEvent() and startDrag().

This function is called with the given event when a drop event occurs over the widget.
If there's a valid item under the mouse pointer when the drop occurs, the drop event is accepted; otherwise it is ignored. Reimplemented from QWidget. See also startDrag().

Starts editing the item item at index if it is editable.

This is an overloaded member function, provided for convenience. It behaves essentially like the above function. Starts editing the item at index, creating an editor if necessary, and returns true if the view's State is now EditingState; otherwise returns false. The action that caused the editing process is described by trigger, and the associated event is specified by event. See also closeEditor().

Remove the editor editor from the map.

This signal is emitted when the mouse cursor enters the item specified by index. See also viewportEntered(), activated(), clicked(), doubleClicked(), and pressed().

Executes the scheduled layouts without waiting for the event processing to begin. See also scheduleDelayedItemsLayout().

This function is called with the given event when the widget obtains the focus. By default, the event is ignored. Reimplemented from QWidget. See also setFocus() and focusOutEvent().

This function is called with the given event when the widget loses the focus. By default, the event is ignored. Reimplemented from QWidget. See also clearFocus() and focusInEvent().

Returns the horizontal offset of the view. In the base class this is a pure virtual function.

Returns the horizontal scrollbar's steps per item. See also setHorizontalStepsPerItem() and verticalStepsPerItem().

Returns the model index of the item at point p. In the base class this is a pure virtual function.

Returns true if the item referred to by the given index is hidden, otherwise returns false. In the base class this is a pure virtual function.

Returns the item delegate used by this view and model. This is either one set with setItemDelegate(), or the default one. See also setItemDelegate().
This function is called with the given event when a key event is sent to the widget. The default implementation handles basic cursor movement, e.g. Up, Down, Left, Right, Home, PageUp, and PageDown, and emits the returnPressed(), spacePressed(), and deletePressed() signals if the associated key is pressed. This function is also where editing is initiated by key press, e.g. if F2 is pressed. Reimplemented from QWidget. See also edit().

Moves to and selects the item best matching the string search. If no item is found, nothing happens.

Returns the model that this view is presenting. See also setModel().

This function is called with the given event when a mouse button is double clicked inside the widget. If the double-click is on a valid item, it emits the doubleClicked() signal and calls edit() on the item. Reimplemented from QWidget.

This function is called with the given event when a mouse move event is sent to the widget. If a selection is in progress and new items are moved over, the selection is extended; if a drag is in progress, it is continued. Reimplemented from QWidget.

This function is called with the given event when a mouse button is pressed while the cursor is inside the widget. If a valid item is pressed on, it is made into the current item. This function emits the pressed() signal. Reimplemented from QWidget.

This function is called with the given event when a mouse button is released while the cursor is inside the widget. It will emit the clicked() signal if an item was being pressed. Reimplemented from QWidget.

Moves the cursor in the view according to the given cursorAction and keyboard modifiers specified by modifiers.

Opens a persistent editor on the item at the given index. If no editor exists, the delegate will create a new editor.

This signal is emitted when a mouse button is pressed. The item the mouse was pressed on is specified by index (which may be invalid if the mouse was not pressed on an item).
See also activated(), clicked(), doubleClicked(), and entered().

Reset the internal state of the view.

This function is called with the given event when a resize event is sent to the widget. Reimplemented from QWidget. See also QWidget::resizeEvent().

Returns the model index of the model's root item. The root item is the parent item of the view's top-level items. The root can be invalid. See also setRootIndex().

This slot is called when rows are about to be removed. The deleted rows are those under the given parent from start to end inclusive. The base class implementation does nothing. See also rowsInserted().

This slot is called when rows are inserted. The new rows are those under the given parent from start to end inclusive. The base class implementation calls fetchMore() on the model to check for more data. See also rowsAboutToBeRemoved().

Schedules a layout of the items in the view to be executed when the event processing starts. Even if scheduleDelayedItemsLayout() is called multiple times before events are processed, the view will only do the layout once. See also executeDelayedItemsLayout().

See also scrollDirtyRegion() and dirtyRegionOffset().

Scrolls the view if necessary to ensure that the item at index is visible. The view will try to position the item according to the given hint. In the base class this is a pure virtual function.

Selects all non-hidden items.

This convenience function returns a list of all selected and non-hidden item indexes in the view. The list contains no duplicates, and is not sorted. The default implementation does nothing.

This slot is called when the selection is changed. The previous selection (which may be empty) is specified by deselected, and the new selection by selected.

Returns the SelectionFlags to be used when updating a selection to include the specified index. The event is a user input event, such as a mouse or keyboard event. Reimplement this function to define your own selection behavior.

Returns the current selection.
See also setSelectionModel() and clearSelection().

Sets the current item to be the item at index. See also currentIndex().

Sets the horizontal scrollbar's steps per item to steps. This is the number of steps used by the horizontal scrollbar to represent the width of an item. See also horizontalStepsPerItem() and setVerticalStepsPerItem().

Sets the item delegate for this view and its model to delegate. This is useful if you want complete control over the editing and display of items. See also itemDelegate().

Sets the model for the view to present. See also model().

Sets the root item to the item at index. See also root().

Sets the current selection to the given selectionModel. See also selectionModel() and clearSelection().

Sets the item view's state to the given state. See also state().

Sets the vertical scrollbar's steps per item to steps. This is the number of steps used by the vertical scrollbar to represent the height of an item. See also verticalStepsPerItem() and setHorizontalStepsPerItem().

Returns the width size hint for the specified column, or -1 if there is no model.

Returns the size hint for the item with the specified index, or an invalid size for invalid indexes.

Returns the height size hint for the specified row, or -1 if there is no model.

Starts a drag by calling drag->start() using the given supportedActions.

Returns the item view's state. See also setState().

This function is called with the given event when a timer event is sent to the widget. Reimplemented from QObject. See also QObject::timerEvent().

Returns the vertical offset of the view. In the base class this is a pure virtual function.

Returns the vertical scrollbar's steps per item. See also setVerticalStepsPerItem() and horizontalStepsPerItem().

Returns a QStyleOptionViewItem structure populated with the view's palette, font, state, alignments etc.

This signal is emitted when the mouse cursor enters the viewport. See also entered().
This function is used to handle tool tips, status tips, and What's This? mode, if the given event is a QEvent::ToolTip, a QEvent::WhatsThis, or a QEvent::StatusTip. It passes all other events on to its base class viewportEvent() handler. Reimplemented from QAbstractScrollArea.

Returns the region from the viewport of the items in the given selection.
http://doc.trolltech.com/4.0/qabstractitemview.html
jemalloc is currently disabled on MacOS 10.7 for reasons that I don't understand at the moment. But as more Mac users transition to the new version, enabling jemalloc there becomes increasingly important. bug 670175 is the problem. Thanks. I resolved that bug so we can track turning it on (and fixing that hang) here. Steven, we need some mac-fu here to understand the hangs in bug 670175. Can you help? I'm completely swamped, so the earliest I'll have time for this is sometime next week. Furthermore I know very little about jemalloc, so I'll probably have to use brute force on this bug -- keep turning off bits of jemalloc until it stops happening, then see if the guilty "bit" can somehow be implemented differently. That's likely to be very time consuming, and may be something you guys can do as well as I can :-) Is it possible to dual-boot with the final version of 10.7? If so, I might be able to poke this and at least get to the point that I need your help. I sort of understand jemalloc, although jemalloc on mac is a complete mystery to me. :) > Is it possible to dual-boot with the final version of 10.7? Sure. You need to create extra partitions on your hard drive (or add another hard drive on which you can create additional partitions). Then do a conventional install into one of those partitions. Then you can use Startup Disk in System Preferences to choose which partition to boot from. No, I don't know how to non-destructively add new partitions to an existing hard drive. But ask Marcia Knous -- she may know how, or know who to ask. > No, I don't know how to non-destructively add new partitions to an existing hard drive. I think bootcamp can do this? I'll see what I can figure out. Note that your Lion partition should be at least 30GB in order for XCode to fit into it. 40GB would probably be better. (I use 30GB, and it can be a tight fit.) 
(In reply to Steven Michaud from comment #6) > No, I don't know how to non-destructively add new partitions to an existing > hard drive. But ask Marcia Knous -- she may know how, or know who to ask. I believe the stock version of Disk Utility does this just fine (unless your drive is very fragmented) — give it a try. It's very explicit about which partitions will lose data and which will remain intact, so just read the description before confirming the operation. I had to run disk utilities repair on my partition (perhaps because I'd previously had a bootcamp partition), but then it worked, and I got Lion installed. Thanks! I can reproduce using the steps in bug 670175 comment 17, about half the time. Firefox beachballs for about 5s. From the hang stack in attachment 544925 [details], it looks like jemalloc is taking a long time in mmap. (See the stacktrace for thread 1, at the bottom.) Maybe 10.7 doesn't like the hack we use to force mmap'ed pages down towards the bottom of the address space. I thought that trace indicated thread contention -- between the main thread and threads 10, 12, 13, and 23. But that's more or less a guess. Hence my hunch that we're going to need to brute-force this. > Maybe 10.7 doesn't like the hack we use to force mmap'ed pages down > towards the bottom of the address space. Can we stop doing this, and see if that makes the problem go away? I seem to be able to reproduce bug 670175 comment 17 consistently if I issue the |purge| command immediately before starting Firefox. (In reply to Steven Michaud from comment #13) > Can we stop doing this, and see if that makes the problem go away? Yep, trying... Interestingly, when I kill FF while it's hung, the whole system is unresponsive for a few seconds. Typing and switching tabs in the terminal is very slow, for example. > I thought that trace indicated thread contention -- between the main thread and threads 10, 12, 13, > and 23. But that's more or less a guess. Yes. 
The fact that four threads are all blocked on the main thread's arena lock indicates that the main thread has probably been sitting there for a while. So that leads me to suspect that mmap may be hanging... > Interestingly, when I kill FF while it's hung, the whole system is > unresponsive for a few seconds. Typing and switching tabs in the > terminal is very slow, for example. This is an old problem with event taps -- a buggy system resource that we're forced to use. See bug 393664 and bug 611068. Yikes, I think I understand what's happening. Jemalloc uses a loop when it tries to do an aligned mmap. The claim is that you only loop in the event of a race condition, where two threads call mmap simultaneously. But we go around and around this loop for a long time. The code assumes that if I mmap a 2mb chunk, unmap it, and then immediately try to map somewhere inside where that chunk used to be (so I get an aligned mapping), the final mmap operation will succeed in the absence of races. It appears that this is not the case on 10.7. The more sane (and loop-free) thing to do would be to map 2mb and then unmap any excess. That's what jemalloc tip does. We can't do that on Windows, because apparently Windows demands a 1:1 mapping between maps and unmaps. But we ought to be able to backport this change for Linux and Mac. Awesome, jsgcchunk is copy/pasted from jemalloc (or is it the other way around?). So whatever problem jemalloc has here, presumably the js gc has too. (In reply to Justin Lebar [:jlebar] from comment #20) > Awesome, jsgcchunk is copy/pasted from jemalloc (or is it the other way > around?). So whatever problem jemalloc has here, presumably the js gc has > too. ...although maybe not, because js uses vm_allocate instead of mmap. Anyone know why js doesn't use mmap? The code in jsgcchunk is super ugly and I spent quite of a bit of time looking at it today for 670596. 
(In reply to Justin Lebar [:jlebar] from comment #20) > Awesome, jsgcchunk is copy/pasted from jemalloc (or is it the other way > around?). So whatever problem jemalloc has here, presumably the js gc has > too. Igor created jsgcchunk as a copy from jemalloc about a year and a half ago in 553812. (In reply to Justin Lebar [:jlebar] from comment #19) > The more sane (and loop-free) thing to do would be to map 2mb and then unmap > any excess. That's what jemalloc tip does. Agreed. This should work well on mac and linux, assuming that we can't find a way to do aligned allocations directly. > We can't do that on Windows, because apparently Windows demands a 1:1 > mapping between maps and unmaps. But we ought to be able to backport this > change for Linux and Mac. In Windows, we can reserve address space independently of committing. We should be able to make things better on Windows by grabbing the 2MiB of address space as a reservation and only committing when we grab the 1MiB slice afterwards. I think what we need to do in jsgcchunk is push the high level AllocGCChunk interface into each OS's #define and just hyper-optimize for each OS individually. We also need to add new per-OS functions for madvise/MEM_RESET functionality. It would be nice if we could share this work with jemalloc. > assuming that we can't find a way to do aligned allocations directly. I've got to hope that if there were a way, upstream jemalloc would be using it. >. (In reply to Justin Lebar [:jlebar] from comment #23) > > assuming that we can't find a way to do aligned allocations directly. > > I've got to hope that if there were a way, upstream jemalloc would be using > it. Probably, but the linux kernel moves fast and doesn't communicate userland changes well. > >. I think I must have explained very poorly what I meant to do. I am not suggesting changing the algorithm or numbers we use when allocating memory at all, just unsetting the MEM_COMMIT bit until we actually make our 1MiB allocation. 
> I think I must have explained very poorly what I meant to do. I am not suggesting changing the > algorithm or numbers we use when allocating memory at all, just unsetting the MEM_COMMIT bit until > we actually make our 1MiB allocation. Ah, that might work! Can we do this in another bug, please? (In reply to Justin Lebar [:jlebar] from comment #25) > Ah, that might work! Can we do this in another bug, please? Filed under 696119. Created attachment 568432 [details] [diff] [review] Patch v1 Seems to work on my 10.7 and Linux-64 machines. Pushed to try. Aside from a mysterious linux64 make check error, which is probably nothing, this looks good on try. Steven, would you mind trying this build out on your 10.7 machine and seeing if you can get it to hang? It doesn't hang for me on my 10.7 machine. If you feel like a slightly larger population of guinea pigs to test it on, feel free to land it on the UX branch. We have about 2K ADUs, a large percentage of which I would guess are on OS X 10.7. :) I pushed this to UX: Comment on attachment 568432 [details] [diff] [review] Patch v1 This patch doesn't seem to have fixed the problem. I still crash, though now the STR is now different. Here are the crashes I've had. They seem to be the same crash, but I include them all for the sake of completeness. I'll try to get some gdb crash stacks, and to translate the symbols in the Breakpad crash stacks: bp-faaba078-ca18-4b50-a52e-ea4432111021 bp-e9738d44-e0b9-4dd0-b60f-fd1ac2111021 bp-b8a8cae1-4a37-43c3-94df-4b76a2111021 bp-a8476906-3a59-4d59-a8f8-b1e8f2111021 bp-6845a0ff-f298-41b9-8b65-e9b532111021 Here's my STR. It's still rather difficult and pretty finicky. I may be able to improve it over time. 1) Restart your computer. 2) Run the test build, and then a few seconds (5-15?) after it's started, move the mouse up to "Nightly" in the main menu and click on it to open the menu. I'll usually crash here, or sometimes even before the mouse reaches the main menu. 
Sometimes subsequent attempts to run the test build will crash even before the browser window has finished opening. But as soon as you fail to crash even once, the crashes become almost impossibly difficult to reproduce. Running "purge" beforehand doesn't seem to help (to make the test build more crashy). My STR above doesn't crash on recent nightlies. Even when I just turn jemalloc on for OS X 10.7, without making any of the rest of your patch's changes, my STR from bug 670175 comment #17 no longer "works" for me -- the I-beam cursor in the Google search box never stops flashing. For this test, I just applied the following part of your patch:

- osx_use_jemalloc = (default_zone->version == SNOW_LEOPARD_MALLOC_ZONE_T_VERSION);
+ osx_use_jemalloc = (default_zone->version == SNOW_LEOPARD_MALLOC_ZONE_T_VERSION ||
+                     default_zone->version == LION_MALLOC_ZONE_T_VERSION);

You'll notice that none of my Breakpad crash reports from comment #31 have symbols even for Mozilla code. This has already come up in bug 693451, and a Breakpad bug on the issue has been opened upstream at. Yeah, symbols of any kind would be very helpful... Backed out from UX: It occurs to me that jemalloc might have (at least) two different problems on OS X 10.7 -- a hang bug (which your patch may have fixed) and a crash bug. I think the patch did fix a hang I was observing. At least, there was a loop which I observed spinning, and now the loop is gone! Created attachment 568724 [details] Translation of crash-stack symbols from comment #31 Here are translations for (some of) the symbols for my crash stacks from comment #31. I used atos to do the translations. There were a few symbols it (apparently) couldn't translate. I think this pins the blame pretty well on jemalloc. Though it doesn't come near to providing enough information to fix the bug that causes the crashes :-( Note that one of the symbols is for malloc_printf, which indicates there may have been something interesting in the system log.
I'll reproduce a few more crashes, and see what I find. [0x0-0x13013].org.mozilla.nightly: firefox(193,0x7fff79a83960) malloc: *** malloc_default_scalable_zone() failed to find 'DefaultMallocZone' That's weird...it looks like this is the default allocator, not jemalloc. Then I think jemalloc must be doing something to break the default allocator. I'm still working on getting a gdb stack trace of the crash, with symbols and all threads. Created attachment 568746 [details] Gdb stack trace of crash, with symbols and all threads Here it is! This is from a non-debug custom build (with this bug's patch applied) with optimization turned off and no symbols stripped. It's based on trunk code pulled at the beginning of this week (Monday). Note that it looks a lot like my gdb crash traces from bug 670175. This is a shot in the dark, but what if you change jemalloc_zone->zone_name = "jemalloc_zone"; to jemalloc_zone->zone_name = "DefaultMallocZone"; ? Created attachment 568750 [details] Another gdb stack trace of crash (In reply to comment #44) I'm trying it. I'll let you know my results when the build finishes. btw, I find that to rebuild jemalloc, I need to do rm -f browser/app/nsBrowserApp.o && make -C memory && make -C browser && make -C toolkit/library if you don't remove nsBrowserApp.o, the jemalloc changes might not be reflected, even if you do a top-level rebuild. (In reply to Justin Lebar [:jlebar] from comment #47) > if you don't remove nsBrowserApp.o, the jemalloc changes might not be > reflected, even if you do a top-level rebuild. That doesn't make sense. jemalloc is a shared library on mac. Rebuilding libmozutils.dylib should be enough. Created attachment 568760 [details] Gdb trace of crash after following suggestion from comment #44 Following your suggestion from comment #44 didn't help, though it did change the stack trace slightly, and got rid of the malloc_printf error message. 
I think the problem reported by malloc_printf must be a symptom, and not the cause. By the way, I didn't need to delete browser/app/nsBrowserApp.o. I just changed the source and did a top-level rebuild. (I always do that -- doing otherwise just wastes all the time I might have saved doing non-top-level builds, or more than the time, on fixing the problems it causes.) > Following your suggestion from comment #44 didn't help, though it did > change the stack trace slightly, and got rid of the malloc_printf > error message. Okay, that's pretty good! Maybe it's then trying to call one of the zone functions which is null. You could replace the NULLs in create_zone and szone2ozone with 0x1, 0x2, 0x3, etc. Maybe that would tell us which function it's trying to call. > You could replace the NULLs in create_zone and szone2ozone with 0x1, > 0x2, 0x3, etc. Maybe that would tell us which function it's trying > to call. I did that, without backing out your suggestion from comment #44, and it made no difference -- I still get exactly the same null dereference, and no output from malloc_printf. Next I'll try backing out your suggestion from comment #44, to see if that makes any difference. > Next I'll try backing out your suggestion from comment #44, to see > if that makes any difference. This just restored the original error message from malloc_printf and the original crash at 0x10f0. I've got a few minutes left at the end of the day, that I can't really spend doing anything else, so I'm going to try a few of my own random shots in the dark. I just tried putting #define NO_TLS at the top of jemalloc.c -- that doesn't help. I'm pretty sure mac doesn't build without NO_TLS. I think we've made some progress here. I'll try to get things set up so I can debug this locally. Now I've found something that *does* help: The crashes disappear if you never call szone2ozone()! I tested twice (with my STR from comment #31) just to be sure. 
> I'll try to get things set up so I can debug this locally. That's probably a good idea :-) Ah, so AIUI, szone2ozone sets us up so that certain system-level allocations go through jemalloc rather than the default allocator. It might not like the zone name... > It might not like the zone name... I suspect the name isn't the problem ... though I've no idea what the problem actually is. In any case, though, szone2ozone is a lot smaller target to brute-force than the whole of jemalloc.c. (Following up comment #57) >> It might not like the zone name... > > I suspect the name isn't the problem ... though I've no idea what > the problem actually is. Actually you were right (and I was wrong). Making the following change in szone2ozone is enough to make my crashes go away:

- default_zone->zone_name = "jemalloc_ozone";
+ if (default_zone->version < LION_MALLOC_ZONE_T_VERSION) {
+     default_zone->zone_name = "jemalloc_ozone";
+ } else {
+     default_zone->zone_name = "DefaultMallocZone";
+ }

I still don't know *why* this change is necessary on Lion. As best I can tell, "DefaultMallocZone" has been this zone's name for years. But maybe it's only with Lion that the OS actually starts checking the name. It's also puzzling why we don't *always* crash on Lion, if the default zone's name is "wrong". But I think it's worthwhile landing this change on the trunk, to see what happens. Once jemalloc is re-enabled on Lion, we should (of course) check that it really does give us performance gains and/or a reduction in memory usage. And note that the default zone's name is wrong (on Lion) even when we don't call szone2ozone, and we still don't crash. So (sigh) I've changed my mind. I no longer think we should land my change from comment #58 without understanding better what's going on (why that change "works", or appears to "work"). I'll continue to brute-force szone2ozone for a while longer, to see if I find anything interesting.
You may want to disassemble the calling stackframes (or use gdb watchpoints) to try to see where default_zone->zone_name is being read. I've been reading machine code, but I haven't seen it yet... So I did

+ default_zone->zone_name = (char*)0xdeadbeef;

and then waited for it to crash. It crashes in:

strcmp()
malloc_default_purgable_zone()
ImageIO_Malloc()

malloc_default_purgable_zone appears to be in a loop looking for something. Perhaps the string "DefaultMallocZone". :) > malloc_default_purgable_zone appears to be in a loop looking for > something. Perhaps the string "DefaultMallocZone". :) Nope, it's the string "DefaultPurgeableMallocZone" :-) See. Which shows that you're debugging in libc code, and that Apple's libc's source is available at. That may make your life easier :-) OS X 10.7.2's malloc.c is at: You can also download the source. See and start browsing under "Mac OS X". Ah, that's much nicer than reading asm. There's this, which explains comment 38:

static inline malloc_zone_t *
inline_malloc_default_scalable_zone(void)
{
    unsigned index;

    if (malloc_def_zone_state < 2)
        _malloc_initialize();
    // _malloc_printf(ASL_LEVEL_INFO, "In inline_malloc_default_scalable_zone with %d %d\n", malloc_num_zones, malloc_has_debug_zone);

    MALLOC_LOCK();
    for (index = 0; index < malloc_num_zones; ++index) {
        malloc_zone_t *z = malloc_zones[index];
        if (z->zone_name && strcmp(z->zone_name, "DefaultMallocZone") == 0) {
            MALLOC_UNLOCK();
            return z;
        }
    }
    MALLOC_UNLOCK();

    malloc_printf("*** malloc_default_scalable_zone() failed to find 'DefaultMallocZone'\n");
    return NULL; // FIXME: abort() instead?
}

Another small piece of the puzzle: My crashes stop happening if you don't change the zone_name in szone2ozone -- if you comment out the following line:

default_zone->zone_name = "jemalloc_ozone";

So the zone name doesn't have to be "DefaultMallocZone" (without the above change it's "jemalloc_zone").
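A toy model of the zone lookup in that libc source makes the failure mode concrete: libc walks its registered zones comparing zone_name strings, so renaming the default zone makes the by-name lookup come up empty. (This is an illustration with made-up types, not Apple's code.)

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for malloc_zone_t: only the name matters here. */
typedef struct {
    const char *zone_name;
} toy_zone_t;

static toy_zone_t *toy_zones[8];
static unsigned toy_num_zones;

/* Mirrors the strcmp loop in inline_malloc_default_scalable_zone:
 * returns NULL -- the "failed to find 'DefaultMallocZone'" case --
 * when no registered zone carries the expected name. */
static toy_zone_t *
find_zone_by_name(const char *name)
{
    for (unsigned i = 0; i < toy_num_zones; i++) {
        toy_zone_t *z = toy_zones[i];
        if (z->zone_name && strcmp(z->zone_name, name) == 0)
            return z;
    }
    return NULL;
}
```

Rename the only zone and the lookup fails -- which is just what szone2ozone's `default_zone->zone_name = "jemalloc_ozone"` assignment did to the real default zone.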
> So the zone name doesn't have to be "DefaultMallocZone" (without the above change it's > "jemalloc_zone"). But in create_zone, don't we need to change the zone name from "jemalloc_zone" to "DefaultMallocZone"? > But in create_zone, don't we need to change the zone name from > "jemalloc_zone" to "DefaultMallocZone"? What I reported in comment #67 tells me "no". But I'm not yet sure that's the end of the story ... I'm closing in on the real problem, I think: I can also fix my crashes by adding the following (an extra call to malloc_zone_register()) just after the single call to szone2ozone(): malloc_zone_register(default_zone); Apparently Lion is fussy about changes made to the default zone after it's been "registered". But I suspect we should only call malloc_zone_register() once, and that the single call should be the one after szone2ozone(). Just a sec and I'll try that out. Why do we have to register the jemalloc zone to begin with? Registering the zone doesn't make it the default one, afaict. It looks like only the szone overlay actually does something. > But I suspect we should only call malloc_zone_register() once, and > that the single call should be the one after szone2ozone(). If registering the zone *is* important, then this could be dangerous, because after szone2ozone, the system might be able to start using the jemalloc zone (effectively, not the zone itself, but its functions) before we've registered it. (?) (Following up comment #70) Oops, I got it wrong. Adding the following after the call to szone2ozone() doesn't help: malloc_zone_register(default_zone); Likewise with making it the only call to malloc_zone_register(). I was confused about default_zone (returned by a call to malloc_default_zone()) and the "custom zone" created by create_zone(). default_zone is the default malloc zone created (and registered) by the OS. It's not a copy of the "custom zone", and doesn't become one. 
In fact I don't know what the "custom zone" is for, and why (or even if) it's needed. We get the default zone from the OS. And though we modify it, we intend the OS to continue using it as if it had been unchanged (we in fact hook it). We also never check for any zone by name. So we have no reason to change the default zone's zone_name in szone2ozone(), on any version of OS X. In fact we're lucky we got away with this on versions of OS X prior to Lion. So the fix for the crash bug is very simple. Just get rid of the following line in szone2ozone(): default_zone->zone_name = "jemalloc_ozone"; > Why do we have to register the jemalloc zone to begin with? > Registering the zone doesn't make it the default one, afaict. It > looks like only the szone overlay actually does something. I assume that by "the jemalloc zone" you mean what I've been calling the "custom zone". If so, then I agree that we (probably) don't need to either create it or register it -- as far as I can tell it's not used for anything. But I don't believe that creating and registering the "custom zone" does any harm. It'd be interesting to know the history of this code, and whether or not the "custom zone" once served some purpose. But I'll save that for later, because right now I'm too tired to pursue it :-) By the way, "registering" a zone doesn't make it the default. Created attachment 569277 [details] [diff] [review] Patch v2 How about we get rid of this zone registration altogether? This seems to work for me, but knowing you, you'll find some way to crash it. :) As far as I can tell, allocations are still going through jemalloc with this patch. Or at least, the jemalloc heap size isn't significantly smaller with this patch than without. Comment on attachment 569277 [details] [diff] [review] Patch v2 - jemalloc_zone->zone_name = "jemalloc_zone"; + jemalloc_zone->zone_name = "DefaultMallocZone"; We don't want this. We don't want anything besides the real default zone to have that name. 
I'll do the rest of my review tomorrow. Created attachment 569278 [details] [diff] [review] Patch v3 Comment on attachment 569278 [details] [diff] [review] Patch v3 Really get rid of create_zone(). > In fact I don't know what the "custom zone" is for, and why (or even > if) it's needed. We midaired each other and independently arrived at the same conclusion. Cool. :) We should check whether we need to somehow lock around the modifications to default_zone. If we're not the only thread alive at that point, we might be better off reordering the modifications so that free is set before malloc and friends. On my 10.7 machine, szone2ozone is called when only one thread is alive. Comment on attachment 569278 [details] [diff] [review] Patch v3 This looks fine to me. And I no longer crash or hang :-) Though I do have a couple of nits: + /* Don't modify default_zone->zone_name; Mac libc may rely on the name + * being uncahnged. */ It's "unchanged". And you might want to reference this bug (bug 694896). /* Likewise for l_zone_introspect. */ static lion_malloc_introspection l_zone_introspect, l_ozone_introspect; static malloc_introspection_t * const zone_introspect = (malloc_introspection_t*)(&l_zone_introspect); static malloc_introspection_t * const ozone_introspect = (malloc_introspection_t*)(&l_ozone_introspect); We can get rid of l_zone_introspect and zone_introspect. We can also get rid of the following, which are all now dead code: zone_size() zone_free() zone_realloc() zone_force_lock() zone_force_unlock() zone_free_definite_size() > On my 10.7 machine, szone2ozone is called when only one thread is > alive. 
On the Mac szone2ozone is ultimately called from the following, which is called by the OS when the module containing jemalloc.c is loaded: __attribute__((constructor)) void jemalloc_darwin_init(void); For good measure I double-checked that the structures in osx_zone_types.h still match their version-specific counterparts in malloc.h on SnowLeopard and Lion -- they do. I also checked that our use of the malloc_zone_t pointers in szone2ozone() is sane -- it is. Specifically, we don't NULL out any pointers that aren't documented to sometimes be NULL, or are in malloc_introspection_t (which it seems can all safely be NULL). I also found when create_zone(), jemalloc_zone and friends were "orphaned". More on that in my next comment. Created attachment 569477 [details] [diff] [review] Patch v4 Paul, I've got a question for you concerning your "part 1" patch for bug 414946 (). As best we can tell, as of that patch the create_zone() function no longer does anything useful, and can safely be removed. But your patch didn't get rid of it. Did you have a reason for that? Thanks in advance! The history of the jemalloc_zone is that the original implementation by jasone added a zone to the linked list of zones. After that, jasone started overriding the default zone, which actually worked and stuck. Looking at the code now, it looks like the jemalloc_zone doesn't do anything. I really don't remember why this is here - apart from the fact that it was in the original (it's still upstream by the look of it). Certainly, if it works to remove it, remove it. Not going to get to this review until next week. Comment on attachment 569477 [details] [diff] [review] Patch v4 Review of attachment 569477 [details] [diff] [review]: ----------------------------------------------------------------- ::: memory/jemalloc/jemalloc.c @@ +178,5 @@ > + * MALLOC_PAGEFILE causes all mmap()ed memory to be backed by temporary > + * files, so that if a chunk is mapped, it is guaranteed to be swappable. 
> + * This avoids asynchronous OOM failures that are due to VM over-commit. > + */ > +/* #define MALLOC_PAGEFILE */ Why are you making this change? @@ +2452,5 @@ > return (false); > } > #endif > > +#if defined(MOZ_MEMORY_WINDOWS) || defined(JEMALLOC_USES_MAP_ALIGN) || defined(MALLOC_PAGEFILE) Especially since you then add a conditional for it here. >> +/* #define MALLOC_PAGEFILE */ > > Why are you making this change? AFAICT, Linux defines MALLOC_PAGEFILE but doesn't actually enable it. I want Linux to use the (presumably faster) chunk map routines. Kyle, are you going to have a chance to look at this again before we branch? It would be cool to get this in this week. Yeah, I should be able to review it tomorrow. Comment on attachment 569477 [details] [diff] [review] Patch v4 Review of attachment 569477 [details] [diff] [review]: ----------------------------------------------------------------- Ok, this wasn't as scary as I expected. The code looks fine. Whether or not it manages to not explode on some odd configuration, I can't really predict. Let's try it and see! ::: memory/jemalloc/jemalloc.c @@ +2452,5 @@ > return (false); > } > #endif > > +#if defined(MOZ_MEMORY_WINDOWS) || defined(JEMALLOC_USES_MAP_ALIGN) || defined(MALLOC_PAGEFILE) Ok, now that I understand what's going on the conditional for the pagefile is ok. @@ +5963,5 @@ > default_zone = malloc_default_zone(); > > /* > + * We only use jemalloc on Mac 10.6 and 10.7. On Mac 10.5, our tests show > + * a memory regression, but this may not be real. See Mozilla bug 694335. Yeah, that comment is wrong now ;-) @@ +6662,5 @@ > * Actually create an object of the appropriate size, then find out > * how large it could have been without moving up to the next size > * class. > + * > + * I sure hope this doesn't get called often. lol Fingers crossed. This is a big change we should track for FF10. We'll need to keep an eye that this doesn't cause hangs or crashes for users on 10.7. 
Created attachment 571676 [details] Gdb stack trace of crash in current code We're not out of the woods yet. I just got this crash starting up a custom build made from current mozilla-central code. The patch that got landed should probably be backed out again. Or at least jemalloc should be turned off for OS X 10.7 -- which is where I saw the crash. > > The wrong patch got landed! This isn't "Patch v4" (attachment 569477 [details] [diff] [review]). Well, that's bad! Backed out and re-pushed to inbound. Thanks for catching this, Steven! I'm not sure how I screwed this up... No problem. If I hadn't caught it, someone else would have very soon :-) By the way, you should probably also at least back the patch out from mozilla-central -- otherwise the shit will hit the fan in tomorrow's mozilla-central nightly. > This is still wrong, and won't finish building. A quick glance turned up that it still calls create_zone(). There may also be other problems. The comment 99 relanding backed out from inbound due to failure to build on OS X: { Undefined symbols: "_create_zone", referenced from: _jemalloc_darwin_init in jemalloc.o ld: symbol(s) not found collect2: ld returned 1 exit status make[7]: *** [libmozutils.dylib] Error 1 } Okay, gonna do this right. Compiling on my 10.7 machine now... Okay, I see what's going on. Problem lies between keyboard and chair. I missed a hunk in my .rej file. :-/ I have a patch tested on my 10.7 machine. This one should work. > By the way, you should probably also at least back the patch out from mozilla-central -- otherwise > the shit will hit the fan in tomorrow's mozilla-central nightly. I presume we're going to merge m-i again today before the nightly? > I presume we're going to merge m-i again today before the nightly? I don't think so. Lately it's been happening just once a day, 5-7AM PST. The nightlies normally get built starting around 3-4AM. <jlebar> Are we going to merge m-i to m-c again before the next nightly? 
<edmorley> jlebar: yeah I imagine so, since that's ~13-14 hours out and a fair bit has landed on inbound already :-) <edmorley> jlebar: particular reason? <jlebar> edmorley, If we don't merge, we need to back out the broken jemalloc 10.7 patch I landed, which was merged earlier. <edmorley> jlebar: ah <edmorley> jlebar: ok so at the very least I'll merge from to m-c if not more First backout: Relanding: Second backout: The 3rd (and hopefully final) landing is on just inbound at present, and will come across on the next merge (presuming it's green hehe). Sorry for the bugspam, the third URL in comment 109 should have been I just built with current trunk (rev f8d66a792ddc) and running mochitests locally on Mac OS X 10.7 doesn't work any more. Lots of malloc errors to the console and the browser doesn't start. Fingers crossed! :-) ? (In reply to Justin Lebar [:jlebar] from comment #113) > ? Updating to tip fixed the problem, mochitests work again. Is there a test case to verify this fix? (In reply to Anthony Hughes, Mozilla QA (irc: ashughes) from comment #115) > Is there a test case to verify this fix? Unfortunately not. You could verify that jemalloc is enabled on 10.7 by putting a breakpoint or malloc_printf() in jemalloc.c's malloc() definition. I don't think this is something which QA will be able to verify easily (in the face of prioritizing resources). Marking qa- in the hopes that someone else will be able to verify the fix.
https://bugzilla.mozilla.org/show_bug.cgi?id=694896
Structure of a LINQ query

    Dim customers = GetCustomers()
    Dim queryResults = From cust In customers

    For Each result In queryResults
        Debug.WriteLine(result.CompanyName & " " & result.Country)
    Next

    ' Output:
    ' Contoso, Ltd Canada
    ' Margie's Travel United States
    ' Fabrikam, Inc. Canada

    Dim queryResults = From cust In customers
                       Where cust.Country = "Canada"

    Dim queryResults = From cust In customers
                       Where cust.Country = "Canada"
                       Select cust.CompanyName, cust.Country

There are several additional LINQ query operators that you can use to create powerful query expressions. The next section of this topic discusses the various query clauses that you can include in a query expression. For details about Visual Basic query clauses, see Queries.

Visual Basic LINQ query operators

The classes in the System.Linq namespace and the other namespaces that support LINQ queries include methods that you can call to create and refine queries based on the needs of your application. Visual Basic includes keywords for the following common query clauses. For details about Visual Basic query clauses, see Queries.

From clause
Either a From clause or an Aggregate clause is required to begin a query. A From clause specifies a source collection and an iteration variable for a query. For example:

    ' Returns the company name for all customers for which
    ' the Country is equal to "Canada".
    Dim names = From cust In customers
                Where cust.Country = "Canada"
                Select cust.CompanyName

Select clause
Optional. A Select clause.

Where clause
Optional. A Where clause specifies a filtering condition for a query. For example:

    ' Returns all product names for which the Category of
    ' the product is "Beverages".
    Dim names = From product In products
                Where product.Category = "Beverages"
                Select product.Name

Order By clause
Optional. An Order By clause specifies the sort order for columns in a query. For example:

    ' Returns a list of books sorted by price in
    ' ascending order.
    Dim titlesAscendingPrice = From b In books
                               Order By b.price

Join clause
Optional. A Join clause.

Group By clause
Optional. A Group By clause groups the elements of a query result.

Group Join clause
Optional. A Group Join clause combines two collections into a single hierarchical collection. For example:

    ' Returns a combined collection of customers and
    ' customer orders.
    Dim customerList = From cust In customers
                       Group Join ord In orders
                           On cust.CustomerID Equals ord.CustomerID
                           Into CustomerOrders = Group,
                                TotalOfOrders = Sum(ord.Amount)
                       Select cust.CompanyName, cust.CustomerID,
                              CustomerOrders, TotalOfOrders

Aggregate clause

    Dim orderTotal = Aggregate order In orders
                     Into Sum(order.Amount)

You can also use the Aggregate clause to modify a query. For example, you can use the Aggregate clause to perform a calculation on a related query collection. For example:

    ' Returns the customer company name and largest
    ' order amount for each customer.
    Dim customerMax = From cust In customers
                      Aggregate order In cust.Orders
                      Into MaxOrder = Max(order.Amount)
                      Select cust.CompanyName, MaxOrder

Let clause
Optional. A Let clause.

Distinct clause
Optional. A Distinct clause restricts the values of the current iteration variable to eliminate duplicate values in query results. For example:

    ' Returns a list of cities with no duplicate entries.
    Dim cities = From item In customers
                 Select item.City
                 Distinct

Skip clause
Optional. A Skip clause bypasses a specified number of elements in a collection and then returns the remaining elements. For example:

    ' Returns a list of customers. The first 10 customers
    ' are ignored and the remaining customers are
    ' returned.
    Dim customerList = From cust In customers
                       Skip 10

Skip While clause
Optional. A Skip While clause.

Take clause
Optional. A Take clause returns a specified number of contiguous elements from the start of a collection. For example:

    ' Returns the first 10 customers.
    Dim customerList = From cust In customers
                       Take 10

Take While clause
Optional. A Take While clause.

Use additional LINQ query features

You can use additional LINQ query features by calling members of the enumerable and queryable types provided by LINQ. You can use these additional capabilities by calling a particular query operator on the result of a query expression. For example, the following example uses the Enumerable.Union method to combine the results of two queries into one query result. It uses the Enumerable.ToList method to return the query result as a generic list.

    Public Function GetAllCustomers() As List(Of Customer)
        Dim customers1 = From cust In domesticCustomers
        Dim customers2 = From cust In internationalCustomers

        Dim customerList = customers1.Union(customers2)

        Return customerList.ToList()
    End Function

For details about additional LINQ capabilities, see Standard Query Operators Overview.

Connect to a database by using LINQ to SQL and How to: Call a Stored Procedure.
Visual Basic features that support LINQ.
Deferred and immediate query execution.
XML in.

Related resources

How to and walkthrough topics
How to: Call a Stored Procedure
How to: Modify Data in a Database
How to: Combine Data with Joins
How to: Sort Query Results
How to: Filter Query Results
How to: Count, Sum, or Average Data
How to: Find the Minimum or Maximum Value in a Query Result
How to: Assign stored procedures to perform updates, inserts, and deletes (O/R Designer)

Featured book chapters
Chapter 17: LINQ in Programming Visual Basic 2008
https://docs.microsoft.com/en-us/dotnet/visual-basic/programming-guide/language-features/linq/introduction-to-linq
25 August 2004 23:09 [Source: ICIS news] HOUSTON (CNI)--On Tuesday, nitration-grade toluene traded at a high of $2.62/gal while MX business peaked at $2.42/gal, with MX numbers rising 6.5 cent/gal during the day. Market sources attributed the price spikes to elevated Asian demand for toluene and to a price correction for MX.

One analyst said that historically, MX seldom trades more than 10 cent/gal below toluene, but in recent weeks MX prices had been about 20 cent/gal under toluene, leaving room for an upward correction. In a similar scenario, toluene spot prices had been bolstered by benzene hikes. Historically, a benzene/toluene price spread of 30 cent/gal would have been considered large, but recently benzene had been valued at more than $1.30/gal over toluene. This resulted in a stronger demand draw for toluene, which is used as feedstock for benzene production in toluene disproportionation (TDP) units.
http://www.icis.com/Articles/2004/08/26/608476/toluene-and-mixed-xylene-prices-set-records-on-us-gulf.html
Problem

I am getting this error message - "Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with the SQL Server Configuration Manager. Invalid namespace [0x80041010]" - when trying to launch SQL Server Configuration Manager. I have been uninstalling some SQL Server instances on my machine. Could this have contributed to the issue? Check out this tip to learn more.

Solution

I have seen this error message ("Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with the SQL Server Configuration Manager. Invalid namespace [0x80041010]") when you install a 32-bit version of Microsoft SQL Server 2008 on a 64-bit machine and also install a 64-bit version of SQL Server 2008 on the same machine. If you then uninstall one of the instances, you will receive the error message when you open SQL Server Configuration Manager. Here is a screen shot of the error:

The WMI (Windows Management Instrumentation) provider connects to a management namespace on the computer. After a connection is established with the WMI provider on the specified computer, the services, network settings, and aliases can be queried using WQL or a scripting language.

To fix this problem, type the following in a Command Prompt window and then press ENTER:

mofcomp "%programfiles(x86)%\Microsoft\Microsoft SQL Server\100\Shared\sqlmgmproviderxpsp2up.mof"

Note: For this command to succeed, the sqlmgmproviderxpsp2up.mof file must be present in the %programfiles(x86)%\Microsoft\Microsoft SQL Server\100\Shared folder. If you do not see this file in the above location, you can search the server for it and then point the command above at the new location.

Next Steps

- If you run into this issue, use the command above in order to use the SQL Server Configuration Manager.
- If possible, when uninstalling a SQL Server instance on a machine with multiple instances, try to keep the shared files during uninstallation.

Last Update: 2011-06-08
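As a footnote to the tip above: the "search the server for the file" step can be sketched in a few lines. This is a hypothetical illustration written as a POSIX shell fragment so the logic is easy to follow; on a real server the search root would be %programfiles(x86)%\Microsoft and the resulting command would be run with mofcomp in a Windows Command Prompt, not executed here.

```shell
# Hypothetical sketch: locate sqlmgmproviderxpsp2up.mof under a given
# root, then show the mofcomp command that would re-register it.
root="${1:-/tmp/demo-sqlshared}"   # placeholder search root for this demo

# Find the first copy of the .mof file anywhere under the root.
mof_file=$(find "$root" -name 'sqlmgmproviderxpsp2up.mof' 2>/dev/null | head -n 1)

if [ -n "$mof_file" ]; then
    echo "Would run: mofcomp \"$mof_file\""
else
    echo "sqlmgmproviderxpsp2up.mof not found under $root" >&2
fi
```

The point of the sketch is only the search-then-point-the-command-at-the-new-location flow described in the Note above.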
https://www.mssqltips.com/sqlservertip/2382/sql-server-configuration-manager-cannot-connect-to-wmi-provider/
Update of bug #55940 (project octave):

Status: Ready For Test => In Progress

_______________________________________________________

Follow-up Comment #55:

I pushed a different change here (). The problem with reverting is that the code is likely to be forgotten. Instead, I used #if 0 / #endif to disable the blocks of code for the App Nap patch. I also added a note with the bug number above each such block. Hopefully the visual reminder will be sufficient that someone returns to this soon.

The problem is not with the original code, but with Apple, who has recently changed the function signature in their libraries for objc_msgSend(). To resolve this, a configure test needs to be written, probably with AC_COMPILE_IFELSE, which attempts to compile with the older function signature. That can set a configuration variable in config.h which can then be used to condition the function call within disable_app_nap.

_______________________________________________________

Reply to this item at:

<>

_______________________________________________
Message sent via Savannah
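A configure probe along the lines the comment suggests might look like the following sketch. This is a hypothetical illustration, not the change actually committed to Octave; the macro name HAVE_OLD_OBJC_MSGSEND is invented for the example, and the idea is simply that a call using the older variadic signature either compiles (old headers) or fails (new void-typed declaration).

```m4
dnl Hypothetical sketch: does objc_msgSend still accept the older
dnl (id, SEL, ...) calling form, or has Apple's header changed it?
AC_COMPILE_IFELSE(
  [AC_LANG_PROGRAM(
    [[#include <objc/message.h>]],
    [[id obj = 0; SEL sel = 0;
      objc_msgSend (obj, sel);]])],
  [AC_DEFINE([HAVE_OLD_OBJC_MSGSEND], [1],
     [Define to 1 if objc_msgSend has the older function signature.])],
  [])
```

disable_app_nap could then wrap the direct call in #if defined (HAVE_OLD_OBJC_MSGSEND) and use an explicit function-pointer cast otherwise, which is the conditioning the comment describes.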
https://lists.gnu.org/archive/html/octave-bug-tracker/2019-11/msg00018.html
Related topics on open source and source code (forum index):

- Open-source software - What is open-source software, and which open-source packages are most popular? If software is open source, its source code is also distributed.
- Open Source E-mail
- Open Source Content Management - The code for open-source CMSes is freely available, so it is possible to customize them; this article focuses on how open source and CMS combine.
- Source code - Request: source code for an HTML user login page.
- Open Source Exchange - DDN Open Source Code Exchange, a new open-source effort.
- Open Source Software / Open Source Definition - Open source doesn't just mean access to the source code: the program must include source code and must allow distribution in source code form.
- Open Source Accounting - TurboCASH is an open-source accounting package that is free for everyone.
- Source code - Request: an online shopping web application using Struts and JDBC.
- Open Source E-mail Server - MailWasher Server is an open-source, server-side junk mail filter; POPFile is an open-source e-mail filter.
- Open Source Directory - Open-source Java directory projects include source code for both directory client access and directory servers; see also Apple's Open Directory.
- Source code - Request: JSP code for daily, weekly, and monthly reminders in a web portal.
- Open Source Metaverses - The OpenSource Metaverse Project provides an open-source metaverse engine along the lines of massively multiplayer online games.
- Open Source Software (OSS) - Open source means the availability of the source code, which is distributed along with the software.
- Open Source Code Coverage Tools - For analyzing unit tests, written in Java.
- Best Open Source Software - Often (and sometimes incorrectly) called freeware, shareware, or "source code"; few people know the term open source precisely.
- Open Source Web Mail - The Open WebMail project, including migrating from Outlook to Open WebMail.
- Open Source Browser - A year ago, Netscape released its browser as open source; other browsers are based on KHTML and KJS from KDE's Konqueror open-source project.
- Open Java Code Beautifiers - Written in Java.
- Open Source Business Model - It is often confusing to learn that an open-source company may give its source code away; Open CASCADE publishes its source code on the Internet.
- Source code - Request: source code for the fuzzy c-means algorithm.
- Java source code - Question: how are Java source code files named?
- Why Open Source? - Open source refers to a production and development system; the idea is that it enables rapid evolution of software.
- Source code in JSP - Request: source code for online payment in JSP.
- Open Source e-commerce - osCommerce is an open-source online shop solution that draws on J2EE best practices.
- Security - Question: how to obfuscate code?
- Source code (Swing/AWT) - Request: a program to shut down, restart, or log off the PC after the user clicks a JButton on a JPanel or JFrame.
- Open Source PHP - PHP shopping carts with open-source code; X-Cart is software with open source code, which is a good way to get the right features quickly.
- Java source code - Request: a text chat application using JSP and servlets.
- Source code in JSP - Request: inserting multiple images into a JSP product catalog page and displaying them to customers at login.
- Open Source Images - TINA (TINA Is No Acronym) is an open-source image analysis environment developed to accelerate key open-source software within Europe.
- Open Source Intelligence - The practice of gathering intelligence from openly available sources, termed "Open Source Intelligence" (OSINT).
- Source code of Java - Request: an application with an applet front end and a SQL/JDBC back end for entering student and teaching/non-teaching staff details.
- Source code (JSP/Servlet) - Request: source code for an online shopping cart; the objective of the shopping cart is to provide an abstract view of the order.
- Source Code (Java Beginners) - Question: how to clear the screen on the command line, like cls in DOS; the sample code executes the DOS cls command from a Java file.
- Open Source Encryption - The open-source encryption tool OpenSSL lost, then in January became one of the first to regain, FIPS certification under the Federal Information Processing Standard.
- Source code (Java Beginners) - Request: a program that stores the numbers 9.92, 6.32, and 12.63 in an array named price and shows them in a message dialog box.
- Open Source Database - An open-source database that comes with a newer code base and open-source reporting; Version 8.0.4 is the most recent version of the open-source code.
http://www.roseindia.net/tutorialhelp/comment/21036
Package Details: apparmor 2.11.0-1

Latest Comments

lukeyeager commented on 2017-09-12 00:54
Tested working patch:

edh commented on 2017-09-03 11:23
Due to the change in the way perl packages are handled, apparmor-libapparmor is broken. The problem lies in line 117 of your PKGBUILD, where you try to install a perl module to a non-existing directory. To comply with the recent announcement [1] one should use the vendorarch directory (see `perl -V:vendorarch`).
[1]

egrupled commented on 2017-06-13 22:00
Some apparmor utils are currently broken with python 3.6, see
I recommend adding a patch which fixes the issue:

DoctorSelar commented on 2017-06-10 21:22
I'm getting an error whenever I try to run aa-genprof:
ERROR: Include file /etc/apparmor.d/local/usr.sbin.sshd not found
When I copy that file in from /usr/share/apparmor/extra-profiles and then run aa-genprof again, I get
ERROR: local/usr.sbin.sshd profile in local/usr.sbin.sshd contains syntax errors in line 19: missing "profile" keyword.
If I try to run apparmor_parser on the usr.sbin.ssh file, I get:
AppArmor parser error for usr.sbin.sshd in /etc/apparmor.d/tunables/home at line 16: syntax error, unexpected TOK_EQUALS, expecting TOK_MODE

dimosd commented on 2017-05-17 16:04
To have aa-genprof generate non-empty profiles, I had to increase the log size of auditd: max_log_file in /etc/audit/auditd.conf. Also sysctl -w kernel.printk_ratelimit = ?

ddreamer commented on 2015-08-12 15:27
Hi, I just installed the latest apparmor from this AUR and activated the kernel module by "apparmor=1 security=apparmor" (as instructed in). However, it failed.
#aa-status
apparmor module is not loaded.
================
Somebody help me?
teekay commented on 2015-08-11 16:35 @ygyfygy: please try a possible fix (no pkgrel bump, just download it again) Also make sure you do have podchecker in /usr/bin/core_perl/ ygyfygy commented on 2015-08-09 13:39 Doesn't build: -> Building: apparmor-libapparmor Running aclocal Running autoconf Running libtoolize Running automake Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/\${ <-- HERE ([^ \t=:+{}]+)}/ at /usr/bin/automake line 3936. configure.ac:8: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see: configure.ac:8: doc/Makefile.am:10: warning: subst .2,.pod,$(man_MANS: non-POSIX variable name doc/Makefile.am:10: (probably a GNU make extension) doc/Makefile.am:10: warning: subst .3,.pod,$(man_MANS: non-POSIX variable name doc/Makefile.am:10: (probably a GNU make extension) doc/Makefile.am:17: warning: '%'-style pattern rules are a GNU make extension doc/Makefile.am:26: warning: '%'-style pattern rules are a GNU make extension src/Makefile.am:60: warning: '%'-style pattern rules are a GNU make extension src/Makefile.am:1: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS') testsuite/Makefile.am:5: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS') style of include used by make... GNU checking for gcc... /usr/bin /usr/bin/gcc accepts -g... yes checking for /usr/bin/gcc option to accept ISO C89... none needed checking whether /usr/bin/gcc understands -c and -o together... yes checking dependency style of /usr/bin/gcc... gcc3 checking for flex... flex checking lex output file root... lex.yy checking lex library... -lfl checking whether yytext is a pointer... yes checking for bison... bison -y checking for a sed that does not truncate output... /usr/bin/sed checking for pkg-config... /usr/bin/pkg-config checking pkg-config is at least version 0.9.0... yes checking for swig... 
/usr/bin/swig checking whether the libapparmor debug output should be enabled... no checking whether the libapparmor man pages should be generated... yes checking for podchecker... no configure: error: The podchecker program was not found in the default path. podchecker is part of Perl, which can be retrieved from: :(. I recompiled my kernel to use AppArmor. teekay commented on 2015-07-31 04:21 md5sum was wrong, indeed - no idea how that happened. I did build and was using that package. Anyways, switched to sha256sums. Thanks, mattoufoutu. mattoufoutu commented on 2015-07-30 20:25 Checksum for apparmor archive (apparmor-2.10.tar.gz) seems to be wrong, it should be 9fd9b6b3525882fdb9441d0f0a8f9162. Also, as MD5 is now considered obsolete and shouldn't be trusted anymore, it would be nice to add SHA256 sums to the PKGBUILD. teekay commented on 2015-07-30 18:36 @adventurer: added. Take care! :-) adventurer commented on 2015-07-30 17:53 @teekay: In principle I would be willing to co-maintain. Unfortunately, I've never done it before, so I have to make myself familiar with the whole process of creating and uploading packages etc. We'll see ... teekay commented on 2015-07-30 17:09 Updated. Any volunteers for the co-maintainers? adventurer commented on 2015-07-19 11:07 Newest stable version 2.10 published on 2015-07-14: Release notes: teekay commented on 2015-05-03 04:59 @tequa: bison is part of base-devel group. After=local-fs.target makes sense, added. Thanks! @ievans3024: fixed. ievans3024 commented on 2015-05-01 18:42 Please fix the PKGBUILD so that arch= are consistently labeled for every subpackage. apparmor fails to install because apparmor-profiles, apparmor-utils and apparmor-vim are labeled arch=('any') while the base package arch is set to ('i686' 'x86_64') This causes pacman/pacaur to look for apparmor-profiles-2.9.1-1-x86_64.pkg.tar.xz, even though the file created by makepkg is apparmor-profiles-2.9.1-1-any.pkg.tar.xz, for example. 
adventurer commented on 2015-04-27 09:42 AppArmor 2.9.2 released: tequa commented on 2015-03-08 19:43 2 feature requests: - It seems that 'bison' is another build dependency needed for this package. - for my installation the apparmor.service needs an additional line "After=local-fs.target" to be able to access /var/log/apparmor.init.log on boot. Harvie commented on 2014-11-18 02:00 Well... it's really important to make sure that apparmor profiles are loaded as soon as possible (= once filesystems with apparmor and profiles are mounted) and before starting any services. I am not sure which target is best for this... Probably i should read something about systemd's targets to make this clear. teekay commented on 2014-10-25 08:11 @falconindy: thanks. I commited it, but I don't think it makes sense. The basic.target sounds like the right place to be for apparmor. It would be interesting to know which services caused problems for that user. falconindy commented on 2014-10-24 19:22 Your service doesn't do what you actually want it to -- start before basic.target is activated. As a result, profiles are loaded in parallel with services and may not be applied properly. I'd suggest the following unit instead: (I'm not an apparmor user, but someone in #systemd on freenode used this unit and pointed out the ordering problem). seletskiy commented on 2014-03-29 13:08 @Lekensteyn: No, there is no problem with modules, everything works just fine. Also, there is weak (or no at all) mount restriction in stock kernel, which was critical for me, so I've decided to rebuild kernel with according patch. Lekensteyn commented on 2014-03-29 13:04 I see, you just take the stock arch kernel config, in addition enable apparmor and make bzImage. Won't all modules get marked with an OOT taint then? Personally I just use a stripped config, throw the PKGBUILD and related files on a fast build machine and fetch it a few minutes later. 
For me it doesn't really matter that the profile list is invisible, as long as the rules can be loaded. seletskiy commented on 2014-03-29 12:54 @Lekensteyn: By the way, apparmor in stock kernel is pretty useless (e.g., you can not see profiles list). However, it is possible to recompile kernel without modules, it will be much faster to do; in that case apparmor kernel will use native kernel modules. I use this kind of kernel on production servers () Lekensteyn commented on 2014-03-29 11:27 Just to let you know, there is a discussion[1] on arch-general to drop apparmor support from the stock kernel. This will require you to build your own kernel to have apparmor support. [1]: AnAkkk commented on 2014-03-26 10:32 There's a profile for skype here as well: Dunno which one is the best. teekay commented on 2014-03-25 19:32 @Nowaker: :D I just added a fixed _majorver=2.8 as other hacks are just annoying.. Nowaker commented on 2014-03-25 19:27 @teekay You are right, Chrome aggressive cache just hit me. Without ANY possibility to turn it completely off. Anyway, it looks like AUR doesn't know what ${pkgver%.*} means. I'd prefer to have some manual _pkgver= just below pkgver= so AUR interface renders the link correctly. But it's up to you. @Lekensteyn, @Iqualfragile, thanks, I will try them. teekay commented on 2014-03-25 19:17 @Nowaker: not sure what you mean with where to put the fake _bigver. Are you maybe looking at an old version of the PKGBUILD (e.g. from browser cache)? I dropped those two "test $BASH_VERSION test bigver blah" conditionals, because it makes the PKGBUILD look ugly. Lekensteyn commented on 2014-03-25 18:47 Perhaps you can also get some inspiration from: Not sure if it still works due to pulseaudio changes. Iqualfragile commented on 2014-03-25 18:40 I already have a skype profile, which is based on a profile I found online. You could base your work on that Nowaker commented on 2014-03-25 18:33 @teekay, You could add a fake bigver="..." 
before conditional bigver so sources list in AUR looks good. In the next day I will write a profile for Skype and let you know how it works for me. Thanks for adopting! teekay commented on 2014-03-25 17:41 Forgot one of the most important changes: - fix all sample profiles wrt /usr merge That one may cause file conflicts. Again, backup your existing profiles before upgrading! teekay commented on 2014-03-25 17:31 Okay, adopted. So, here's the update - use sed -i in a prepare() fashion - fix build to use make -C ... (fixes common/rules not found warnings) - added backup() "hack" for profiles - dropped ruby bindings - use python3 - fixed vim file generation & installation - remove the old conflicts/replaces galore Please test and report any issues. I advice anyone to do a backup of your generated apparmor.d profiles if you're upgrading - just in case. teekay commented on 2014-03-25 16:04 @Nowaker & AnAkkk: I'm the mantainer of apparmor-stable-bzr. If you want I can adopt this one, too. Nowaker commented on 2014-03-24 22:32 @AnAkkk Thanks for letting me know. I offered him to adopt this one and remove the latter. AnAkkk commented on 2014-03-24 21:51 FYI this package seem to be the same and up to date Not sure if the two should be merged, but it might help upgrading it. Nowaker commented on 2014-03-24 21:49 I asked the maintainer to upgrade or disown. Since he disowned, I adopted it and will fix the package ASAP. AnAkkk commented on 2014-03-13 13:29 It works fine by just setting pkgver to 2.8.3 (and the matching MD5). It doesn't even need to use an older bison anymore to cmpile. AnAkkk commented on 2014-03-07 10:47 Well, I'm not sure how much work it is. Isn't this package just the same but with missing systemd scripts though? thestinger commented on 2014-03-07 00:52 @AnAkkk: Are you willing to put in the work to update it? AnAkkk commented on 2014-03-07 00:46 Any chance this can be updated? The latest version is 2.8.3. 
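The ${pkgver%.*} construct debated in the comments above is plain Bash parameter expansion: % strips the shortest suffix matching the pattern, so %.* removes the last version component. A quick illustration, using a version number from this thread:

```shell
pkgver=2.8.1

# "%.*" removes the shortest trailing match of ".<anything>",
# leaving the major.minor part often used in source URLs.
echo "${pkgver%.*}"    # prints 2.8

# "%%.*" removes the longest such match, leaving only the major version.
echo "${pkgver%%.*}"   # prints 2
```

This is why a hard-coded _pkgver=2.8 and the ${pkgver%.*} expansion produce the same string for makepkg, even though the AUR web interface can only render the literal variant.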
Some of the profiles of the current one are outdated, for example this cause ntpd to fails reading some stuff it needs to access to. seletskiy commented on 2014-01-29 12:07 For those ones who have problems with newest kernel ("Feature buffer full." error), here is patch to solve this issue: --- PKGBUILD 2014-01-29 19:02:57.743319904 +0700 +++ PKGBUILD.new 2014-01-29 18:02:12.945156309 +0700 @@ -39,6 +39,7 @@ msg2 "Building: apparmor-parser" cd "${srcdir}/${pkgbase}-${pkgver}/parser" msg2 'Patching: apparmor-parser' + sed -e 's/FLAGS_STRING_SIZE 1024/FLAGS_STRING_SIZE 8192/' -i "${srcdir}/${pkgbase}-${pkgver}/parser/parser_main.c" # Patch (maybe we can avoid patching by ./configuring things better) patch=Makefile; { rm "$patch" sed -e 's/pdflatex/true/g' > "$patch" # just workaround until we'll get pdflatex package Lekensteyn commented on 2013-12-03 00:15 @soko1, see previous comments, you need an older bison. soko1 commented on 2013-12-02 23:04 %name-prefix = "regex_" ^^^^^^^^^^^^^^ 'int regex_parse(Node**, const char*)': parse.cc:1233:30: error: too few arguments to function 'int regex_lex(YYSTYPE*, const char**)' yychar = yylex (&yylval); ^ parse.y:39:5: note: declared here int regex_lex(YYSTYPE *, const char **); ^ <builtin>: recipe for target 'parse.o' failed make[1]: *** [parse.o] Error 1 make[1]: Leaving directory '/tmp/yaourt-tmp-root/aur-apparmor/src/apparmor-2.8.1/parser/libapparmor_re' Makefile:240: recipe for target 'libapparmor_re/libapparmor_re.a' failed make: *** [libapparmor_re/libapparmor_re.a] Error 2 ==> ERROR: A failure occurred in build(). Aborting... ==> ERROR: Makepkg was unable to build apparmor. ==> Restart building apparmor ? [y/N] ==> -- Spheerys commented on 2013-09-20 12:47 I have an error during compiling : ==> Lancement de check()... PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t t/00_load.t ..................... ok t/10_data.t ..................... 
ok t/11_base64_fh.t ................ ok t/12_nil.t ...................... ok t/13_no_deep_recursion.t ........ ok t/14_datetime_iso8601.t ......... skipped: DateTime::Format::ISO8601 not available t/15_serialize.t ................ 1/20 # Failed test 'Fault-response content is correct' # at t/15_serialize.t line 99. # got: '<?xml version="1.0" encoding="us-ascii"?><methodResponse><fault><value><struct><member><name>faultCode</name><value><int>1</int></value></member><member><name>faultString</name><value><string>test</string></value></member></struct></value></fault></methodResponse>' # expected: '<?xml version="1.0" encoding="us-ascii"?><methodResponse><fault><value><struct><member><name>faultString</name><value><string>test</string></value></member><member><name>faultCode</name><value><int>1</int></value></member></struct></value></fault></methodResponse>' # Looks like you failed 1 test of 20. t/15_serialize.t ................ Dubious, test returned 1 (wstat 256, 0x100) Failed 1/20 subtests t/20_xml_parser.t ............... ok t/21_xml_libxml.t ............... ok t/25_parser_negative.t .......... ok t/29_parserfactory.t ............ ok t/30_procedure.t ................ ok t/35_namespaces.t ............... ok t/40_server.t ................... ok t/40_server_xmllibxml.t ......... ok t/41_server_hang.t .............. ok t/50_client.t ................... ok t/51_client_with_host_header.t .. ok t/60_net_server.t ............... skipped: Net::Server not available t/70_compression_detect.t ....... ok t/90_rt50013_parser_bugs.t ...... ok t/90_rt54183_sigpipe.t .......... ok t/90_rt54494_blessed_refs.t ..... ok t/90_rt58065_allow_nil.t ........ ok t/90_rt58323_push_parser.t ...... ok Test Summary Report ------------------- t/15_serialize.t (Wstat: 256 Tests: 20 Failed: 1) Failed test: 8 Non-zero exit status: 1 Files=25, Tests=958, 43 wallclock secs ( 0.16 usr 0.03 sys + 1.91 cusr 0.22 csys = 2.32 CPU) Result: FAIL Failed 1/25 test programs. 1/958 subtests failed. 
make: *** [test_dynamic] Erreur 255 ==> ERREUR : Une erreur s’est produite dans check(). Abandon... What's going wrong ? Lekensteyn commented on 2013-09-08 10:25 @afader You can install any version of bison afterwards, it is only needed during the build process. Personally I have put bison in IgnorePkg. afader commented on 2013-09-08 10:21 Worked for me with bison 2.7. Would it be OK to upgrade bison again after installing? Lekensteyn commented on 2013-09-05 07:49 crodjer, try removing your src/ directory (starting over the build) when you experience the bison issue. crodjer commented on 2013-09-05 06:49 The build fails with this error: cc -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -Wall -Wsign-compare -Wmissing-field-initializers -Wformat-security -Wunused-parameter -D_GNU_SOURCE -Wstrict-prototypes -Wnested-externs -DPACKAGE=\"apparmor-parser\" -DLOCALEDIR=\"/usr/share/locale\" -DSUBDOMAIN_CONFDIR=\"/etc/apparmor\" -c -o parser_misc.o parser_misc.c make[1]: Entering directory `/home/rohan/downloads/packages/apparmor/src/apparmor-2.8.1/parser/libapparmor_re' **); ^ make[1]: *** [parse.o] Error 1 make[1]: Leaving directory `/home/rohan/downloads/packages/apparmor/src/apparmor-2.8.1/parser/libapparmor_re' make: *** [libapparmor_re/libapparmor_re.a] Error 2 As, per Lekensteyn's suggestion I tried downgrading bison to 2.7.1-1, but that too results in the same error. Lekensteyn commented on 2013-08-10 16:41 I had to install bison 2.7.1-1 as the newer 3.0-1 does not build: **); ^ seletskiy commented on 2013-07-11 10:54 Sorry, I've accidentally hit "Flag out of date" button. Please, unflag it... teekay commented on 2013-06-04 08:07 The /bin /sbin /usr/sbin move requires changes to install locations, the load/unload scripts and renaming of all profiles. The rc file isn't required anymore. 
Here is an updated tarball with all changes, including the backup stuff from below: teekay commented on 2013-04-02 11:57 Out of interest, I wanted the do the backup() in my local copy of the PKGBUILD. Obviously pacman doesn't support wildcards in the backup() array. Looking at "man 5 PKBUILD", package() is run using "bash -e", so I used a BASH built-in to generate the array "dynamically" like this: package_apparmor-profiles() { pkgdesc='AppArmor sample pre-made profiles' arch=('any') cd "${srcdir}/${pkgbase}-${pkgver}/profiles/apparmor.d" declare -a _profiles=(`find -type f|sed 's@./@etc/apparmor.d/@'`) backup=(`echo ${_profiles[@]}`) cd "${srcdir}/${pkgbase}-${pkgver}/profiles" make install DESTDIR=${pkgdir} } Really nasty, but works.. teekay commented on 2013-03-29 21:46 --with-ruby is broken here since ruby 2.0.0, too. Also, please backup=() /etc/apparmor.d/* as aa-logprof can't handle apparmor.d/local/ includes it seems, so in my case an update would overwrite my trained dovecot profiles. SuSE and Ubuntu don't overwrite those, too. Anonymous comment on 2013-03-18 20:33 Can't compile this ... Making install in ruby make[2]: Entering directory `/tmp/yaourt-tmp-weltio/aur-apparmor/src/apparmor-2.8.1/libraries/libapparmor/swig/ruby' make[3]: Entering directory `/tmp/yaourt-tmp-weltio/aur-apparmor/src/apparmor-2.8.1/libraries/libapparmor/swig/ruby' make -fMakefile.ruby install make[4]: Entering directory `/tmp/yaourt-tmp-weltio/aur-apparmor/src/apparmor-2.8.1/libraries/libapparmor/swig/ruby' make[4]: *** No rule to make target `/tmp/yaourt-tmp-weltio/aur-apparmor/pkg/apparmor-libapparmor/usr/include/ruby-2.0.0/ruby.h', needed by `LibAppArmor_wrap.o'. Stop. 
make[4]: Leaving directory `/tmp/yaourt-tmp-weltio/aur-apparmor/src/apparmor-2.8.1/libraries/libapparmor/swig/ruby' make[3]: *** [install-exec-local] Error 2 make[3]: Leaving directory `/tmp/yaourt-tmp-weltio/aur-apparmor/src/apparmor-2.8.1/libraries/libapparmor/swig/ruby' make[2]: *** [install-am] Error 2 make[2]: Leaving directory `/tmp/yaourt-tmp-weltio/aur-apparmor/src/apparmor-2.8.1/libraries/libapparmor/swig/ruby' make[1]: *** [install-recursive] Error 1 make[1]: Leaving directory `/tmp/yaourt-tmp-weltio/aur-apparmor/src/apparmor-2.8.1/libraries/libapparmor/swig' make: *** [install-recursive] Error 1 ==> ERROR: A failure occurred in package_apparmor-libapparmor(). Aborting... ==> ERROR: Makepkg was unable to build apparmor. Does anybody know how to fix? 3ED_0 commented on 2013-02-11 12:05 patch to 2.8.1-1: grawity commented on 2013-02-06 15:47 Small comment on the PKGBUILD: It's entirely pointless to call `test -n "$BASH_VERSION"`, since makepkg always uses bash, and the pkgbuild already uses bash-specific features (arrays) before calling that test. if [[ $pkgver =~ ^[0-9]*\.([0-9]*) ]]; then bigver=${BASH_REMATCH[1]} fi Many pkgbuilds use a simpler method: pkgver=2.8.0 _pkgver=2.8 graysky commented on 2013-02-05 23:46 What's the status of this project under systemd? 3ED_0 commented on 2012-10-30 09:55 "API is going to change in the future" That is very good information. This code from novell is that ugly that more can not be (like always).. Lekensteyn commented on 2012-10-28 10:26 @Harvie what do you mean? My modifications to packaging can be found on: NOTE: those startup scripts from apparmor REQUIRE a patched kernel (which I do run). I am told by apparmor developers that the API is going to change in the future which is the reason why the kernel patches are not merged. Harvie commented on 2012-10-27 22:27 Lekensteyn: 1up! Lekensteyn commented on 2012-10-03 21:29 Has anyone already created a systemd unit file for this? 
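Two of the PKGBUILD idioms discussed in the comments above can be tried outside makepkg. The sketch below simulates teekay's generated backup() array with a throwaway profile tree, and shows grawity's BASH_REMATCH extraction next to the simpler fixed-variable convention; all paths and version numbers here are sample data, not taken from the real package.

```shell
#!/usr/bin/env bash
# Sketch only: simulate a profiles/apparmor.d tree so the backup() trick
# can run outside a real PKGBUILD (profile names contain no whitespace).
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/apparmor.d/local"
touch "$tmp/apparmor.d/usr.bin.firefox" "$tmp/apparmor.d/local/usr.bin.firefox"

# teekay's idiom: derive backup=() from the files actually shipped,
# rewriting "./name" to "etc/apparmor.d/name" (no leading slash).
cd "$tmp/apparmor.d"
backup=($(find . -type f | sed 's@^\./@etc/apparmor.d/@'))
printf '%s\n' "${backup[@]}" | sort

# grawity's two ways to derive a short version from pkgver:
pkgver=2.8.0
if [[ $pkgver =~ ^[0-9]*\.([0-9]*) ]]; then
    bigver=${BASH_REMATCH[1]}        # regex capture -> "8"
fi
_pkgver=${pkgver%.*}                 # simpler: strip last component -> "2.8"
echo "bigver=$bigver _pkgver=$_pkgver"
```

In a real PKGBUILD the array would be assigned inside package_apparmor-profiles() exactly as teekay shows; the simulated tree is only there so the snippet runs standalone.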
silvik commented on 2012-09-02 22:37 Is this still true:? I hope patching the kernel is not needed anymore... I only want to restrict the web browser, flash and java with RBAC. Can someone give me some ideas where to start, some up to date guides? Is this doable in arch? Thanks a lot! Anonymous comment on 2012-07-16 10:03 WARNING: UNINSTALL AND REINSTALL APPARMOR, OR IT WILL FAIL TO UPGRADE. - moved things in /lib under /usr/lib, as required by the big /lib move. big_bum commented on 2012-07-14 09:34 @webstrand, @thestinger told me that "base-devel is an implicit dependency for building package" webstrand commented on 2012-07-13 19:05 I appear to need the packages bison and flex to successfully build. Anonymous comment on 2012-07-09 05:32 @david.runge Since building the perl-rpc-xml has foiled even the greatest perl wizards, I suggest that we simply extract a Debian binary package (though Perl is interpreted, not sure what there is to compile). dvzrv commented on 2012-07-07 10:36 Hmm, perl-rpc-xml is orphaned and un-buildable. Does anyone have the power to fix the PKGBUILD? Otherwise noone will be able to install apparmor anyways big_bum commented on 2012-06-24 09:47 Done. It builded now. Thank you! ==> Leaving fakeroot environment. ==> Finished making: apparmor 2.8.0-1 (Sun Jun 24 12:47:20 EEST 2012) ==> Continue installing apparmor ? [Y/n] Anonymous comment on 2012-06-24 09:21 I believe that is the problem, yes. The Makefile of libapparmor use PREFIX=, and that like the error says is not compatible with INSTALL_BASE. That's an upstream problem really, not a problem with the package itself. Anyway, i pushed in a new PKGBUILD that unset $PERL_MM_OPT before compiling libapparmor and that should allow you (and everyone else that has a custom INSTALL_BASE) to compile. Could you test it? big_bum commented on 2012-06-23 14:17 echo $PERL_MM_OPT gives me INSTALL_BASE=/home/cristi/perl5 Is this the problem? I don't know anything about perl. 
Can you tell me were is the correct path to INSTALL_BASE? Anonymous comment on 2012-06-23 10:03 I've read of similar problems caused by $PERL_MM_OPT set to "INSTALL_BASE=/something/here" to install perl modules locally. Could you check that? big_bum commented on 2012-06-23 08:47? EDIT: I've manually made the Makefile.PL file but not it's giving me this error: install: target ‘/tmp/yaourt-tmp-cristi/aur-apparmor/pkg/apparmor-libapparmor/usr/lib/perl5/vendor_perl/’ is not a directory: No such file or directory ==> ERROR: A failure occurred in package_apparmor-libapparmor(). big_bum commented on 2012-06-23 08:43? Anonymous comment on 2012-06-23 08:15 Updated. - Added some hacks for new python2 based scripts in utils - Modified logprof.conf to use syslog-ng default log names, instead of link hack. big_bum commented on 2012-06-15 09:11 @thestinger Yes indeed. Somehow it wasn't installed on my system. Sorry. But still, I am unable to build: /usr/bin/perl Makefile.PL PREFIX=/usr MAKEFILE=Makefile.perl Only one of PREFIX or INSTALL_BASE can be given. Not both. make[2]: *** [Makefile.perl] Error 2 make[2]: Leaving directory `/tmp/yaourt-tmp-cristi/aur-apparmor/src/apparmor-2.7.2/libraries/libapparmor/swig/perl' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/tmp/yaourt-tmp-cristi/aur-apparmor/src/apparmor-2.7.2/libraries/libapparmor/swig' make: *** [all-recursive] Error 1 ==> ERROR: A failure occurred in build(). big_bum commented on 2012-06-15 09:07 @thestinger Yes indeed. Somehow it wasn't installed on my system. Sorry. thestinger commented on 2012-06-14 20:26 @big_bum: base-devel is an implicit dependency for building packages big_bum commented on 2012-06-14 17:28 Dependencies are broken. It requires bison and flex Harvie commented on 2012-06-02 09:51 It would be nice to also have SystemD service ready for this packages as it will replace SysV init in future of ArchLinux... 
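The PREFIX vs INSTALL_BASE clash in the thread above comes from ExtUtils::MakeMaker refusing both at once: a local::lib setup exports PERL_MM_OPT with an INSTALL_BASE, and the PKGBUILD then passes PREFIX=/usr. Below is a minimal sketch of the environment-side fix only; the perl invocation in the comment is illustrative, not executed here.

```shell
#!/usr/bin/env bash
# Simulate the local::lib environment that breaks the build.
export PERL_MM_OPT="INSTALL_BASE=$HOME/perl5"

# Fix used by the updated PKGBUILD: strip the variable before the build
# step so 'perl Makefile.PL PREFIX=/usr' sees only PREFIX.
seen_by_build=$(env -u PERL_MM_OPT bash -c 'echo "${PERL_MM_OPT:-unset}"')
echo "$seen_by_build"

# Equivalent inside a PKGBUILD function:
#   unset PERL_MM_OPT
#   perl Makefile.PL PREFIX=/usr MAKEFILE=Makefile.perl
```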
sverdj commented on 2012-06-02 03:37 There are tiny kernel patches shipped in the tarball that you probably want to apply if you care about this functionality. Anonymous comment on 2012-06-01 18:00 Was a solution ever found for the problem f45 reported? I'm having the same problem and getting the same messages on the queries below. Also, I ran cat /sys/module/apparmor/parameters/enabled and received a Y reply. Anonymous comment on 2012-03-23 03:25 Apparmor support should be built into the latest stock Arch kernels (what I'm using). [23:21:17 /]# mount -t securityfs securityfs /sys/kernel/security mount: securityfs already mounted or /sys/kernel/security busy mount: according to mtab, securityfs is already mounted on /sys/kernel/security [23:21:19 /]# modprobe apparmor [23:21:22 /]# aa-genprof AppArmor does not appear to be started. Please enable AppArmor and try again. [23:21:25 /]# aa-status apparmor module is loaded. You do not have enough privilege to read the profile set. I've set up apparmor by putting "apparmor=1 security=apparmor" in the kernel boot line and putting apparmor in the daemons array. Profiles themselves work fine. Just when trying to use genprof I get the above error. m4xm4n commented on 2012-03-21 05:36 @f45: Are you using a kernel with AppArmor enabled? If not, you'll need to compile your own. If you are, check and make sure you have a filesystem of type securityfs mounted at /sys/kernel/security. Try the following line: # mount -t securityfs securityfs /sys/kernel/security Also make sure the apparmor module is loading. Usually this is done automatically if you've specified AppArmor as the default security module to load when you compiled the kernel, or you specified it in the kernel cmdline boot options. Other wise give this a try: # modprobe apparmor Anonymous comment on 2012-03-14 00:45 Having trouble getting genprof working: [20:39:30 apparmor.d]$ aa-genprof AppArmor does not appear to be started. Please enable AppArmor and try again. 
[20:39:33 apparmor.d]$ apparmor_status apparmor module is loaded. You do not have enough privilege to read the profile set. Same if run as root or regular user. m4xm4n commented on 2012-02-17 01:22 I do apologize for the lengthy wait. I applied most of 3ED_0's changes and updated apparmor to 2.7.2. m4xm4n commented on 2012-02-10 02:27 I'll update the PKGBUILD tomorrow afternoon/evening. Sit tight. m4xm4n commented on 2012-01-23 23:02 Thanks, I'll look them over this weekend after finals are over. 3ED_0 commented on 2012-01-23 12:47 I make few changes: - Removed packages that no more possible to build (no source) - Added packages eg: apparmor-vim - Cleaned PKGBUILD (more clearly looks and Archlinux way) - - eg: build() make builds thing and package_*() make install files - Added info about bootloader kernel line (new apparmor.install) - New package "apparmor" - is a metapackage for AUR - to simplify checking version or so.. Please consider this changes in a future version.. -------------------------------------------------- PKGBUILD: apparmor: apparmor-utils.install: apparmor.install: kermana commented on 2012-01-19 19:19 Harvie thanks for the tip, worked like a charm. I'm way too lazy for selinux and too paranoid for default DAC :) Apparmor gives the perfect balance. That being said, I am on a fresh install and when I tried aa-status it gave: bash: /usr/sbin/aa-status: /usr/bin/python: bad interpreter: No such file or directory I explicitly installed python and it seems to work ok so python package might be needed as a dependency. Harvie commented on 2012-01-19 18:37 kermana: yeah... it's just trying to build something that is no longer in tarball. 
problem is that it's trying it only when you have libapparmor already installed, which is why m4xm4n didn't found the bug :-) you can comment out these lines to make it work: pacman -Qi apparmor-libapparmor &>/dev/null && true && pkgname=(${pkgname[*]} apparmor-profile-editor apparmor-dbus) && depends=(${depends[*]} apparmor-libapparmor) && msg "Building with libapparmor dependent packages..." kermana commented on 2012-01-19 15:18 Thnx for the hard work you guys have been putting into this. Does anyone else have this problem while installing? (on i686) ==> Starting package_apparmor-profile-editor()... /tmp/yaourt-tmp-kermana/aur-apparmor/./PKGBUILD: line 103: cd: /tmp/yaourt-tmp-kermana/aur-apparmor/src/apparmor-2.7.0/deprecated/management/profile-editor: No such file or directory ==> ERROR: A failure occurred in package_apparmor-profile-editor(). Aborting... ==> ERROR: Makepkg was unable to build apparmor. m4xm4n commented on 2012-01-19 00:52 Package updated with the new rc.d script by Harvie. Harvie commented on 2012-01-18 13:07 I've just made profile for makepkg, that will protect you if you are building lot's of untrusted packages from AUR without reading the PKGBUILD carefully: Harvie commented on 2012-01-18 12:06 Harvie commented on 2012-01-18 11:51 I have tuned few profiles (pidgin, firefox, epiphany, opera, chromium, netsurf and more...), so i will share them later in git repository on github... Harvie commented on 2012-01-18 06:25 m4xm4n: i've made the rc.d script and it works well! please include it to /etc/rc.d/apparmor Here is demo, i've made apparmor profile for /bin/ping that disables net_raw capability: [root@insomnia ~]# ping google.com -c 1 PING google.com (173.194.70.147) 56(84) bytes of data. 
64 bytes from fa-in-f147.1e100.net (173.194.70.147): icmp_req=1 ttl=49 time=38.4 ms --- google.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 38.481/38.481/38.481/0.000 ms [root@insomnia ~]# rc.d start apparmor :: Enabling AppArmor profiles [DONE] [root@insomnia ~]# ping google.com -c 1 ping: icmp open socket: Operation not permitted [root@insomnia ~]# rc.d stop apparmor :: Disabling AppArmor profiles [DONE] [root@insomnia ~]# ping google.com -c 1 PING google.com (173.194.70.104) 56(84) bytes of data. 64 bytes from fa-in-f104.1e100.net (173.194.70.104): icmp_req=1 ttl=49 time=18.8 ms --- google.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 18.849/18.849/18.849/0.000 ms m4xm4n commented on 2012-01-18 05:56 I'll look into it in about a week and half when all this finals nonsense is over. Harvie commented on 2012-01-18 05:29 Oh we just need to add rc.d script... Harvie commented on 2012-01-18 05:26 Awesome! I've got this working... Now i am going to try to boot in enforced mode :-) m4xm4n commented on 2012-01-18 03:13 I think I fixed it. It's compiling, packaging, and installing fine using yaourt on x86-64 Arch. Harvie commented on 2012-01-18 02:41 m4xm4n: Awesome, if i will get AppArmor working, i will surely send you some policies for my favourite packages... Anyway... in the past i had problems with matching userspace utils version to latest archlinux kernel which prevented me from doing so... I hope this will get better as AppArmor API will get more stable. BTW i can't compile this package (on latest i686 arch). says something about missing file/directory in $srcdir... :-( m4xm4n commented on 2012-01-17 19:20 Harvie: I plan on adding more rules for AppArmor until I'm satisfied that there are sufficient policies for the average desktop user. 
AppArmor is an integral part of my project to provide a comprehensive security package for Arch, so I will continue to improve on it until I am satisfied. Harvie commented on 2012-01-17 08:48 m4xm4n: THX for adopting and updating the package :-) I really wish that there will be lot of people willing to submit apparmor rules for their favourite packages and enough will to get this at least into [community] repo... BTW you can enable notifications, so you will receive comments done to packages you own by email... m4xm4n commented on 2012-01-17 00:04 AppArmor updated to 2.7.0. Enjoy folks. If anything is broken, shoot me an email (it's in the PKGBUILD). Anonymous comment on 2012-01-14 15:53 It is impossible to install; it shows an error. Part of the error is: /50_client.t (Wstat: 512 Tests: 0 Failed: 0) Non-zero exit status: 2 Parse errors: No plan found in TAP output t/51_client_with_host_header.t (Wstat: 512 Tests: 0 Failed: 0) Non-zero exit status: 2 Parse errors: No plan found in TAP output t/70_compression_detect.t (Wstat: 512 Tests: 4 Failed: 2) Failed tests: 1-2 Non-zero exit status: 2 t/90_rt54183_sigpipe.t (Wstat: 512 Tests: 0 Failed: 0) Non-zero exit status: 2 Parse errors: No plan found in TAP output Files=25, Tests=648, 1 wallclock secs ( 0.16 usr 0.03 sys + 1.04 cusr 0.12 csys = 1.35 CPU) Result: FAIL Failed 7/25 test programs. 4/648 subtests failed. make: *** [test_dynamic] Error 255 ==> ERROR: A failure occurred in check(). Aborting... ==> ERROR: Makepkg was unable to build perl-rpc-xml sverdj commented on 2011-12-18 01:44 AppArmor 2.7 seems to be available on launchpad. Still working nicely with 3.1.5. The compat kernel patches are included in the tarball if you want/need them. sverdj commented on 2011-11-17 11:01 oneeyed: Better just get rid of the static linkage, seems pretty stupid. passing AARE_LDFLAGS="$LDFLAGS -lstdc++" AAREOBJECTS="libapparmor_re/libapparmor_re.a" to make during the parser build ought to fix that.
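sverdj's suggested overrides above can be collected once and handed to make. The sketch below only assembles the argument array — no apparmor source tree is assumed present, and the LDFLAGS value is a stand-in for whatever makepkg exports.

```shell
#!/usr/bin/env bash
# Build the make overrides for the parser, per the comment above.
LDFLAGS="-Wl,-O1,--sort-common"   # stand-in for makepkg's exported LDFLAGS

make_overrides=(
    "AARE_LDFLAGS=$LDFLAGS -lstdc++"
    "AAREOBJECTS=libapparmor_re/libapparmor_re.a"
)

# A real build would run something like:
#   make -C parser "${make_overrides[@]}"
printf '%s\n' "${make_overrides[@]}"
```

oneeyed's alternative from the next comment — adding "options=(!makeflags)" to the PKGBUILD — simply disables parallel make instead, whereas passing the overrides keeps -jN builds working.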
oneeyed commented on 2011-11-08 08:59 apparmor does not support parallel build: MAKEFLAGS="-j5" fails on my 4-core machine because ranlib has not been run before the static library is used in link. Could you please either have it fixed upstream or, if you can't, add "options=(!makeflags)" to the PKGBUILD? Anonymous comment on 2011-09-16 02:41 waseem: perl-rpc-xml is in the AUR. If you are using makepkg, you should install perl-rpc-xml first. Anonymous comment on 2011-08-28 14:26 pacman can not resolve perl-rpc-xml. Says target not found. 0xfc commented on 2011-08-17 07:56 Added apparmor-pam and rc.d script (not fully functional; more tests needed). Adding the securityfs mount in /etc/fstab is no longer needed (it will be mounted when you start the apparmor rc.d script), and an existing mount on /sys/kernel/security will prevent apparmor from starting. Please unmount /sys/kernel/security before executing "/etc/rc.d/apparmor start". 0xfc commented on 2011-08-17 03:24 added missing dependency perl-rpc-xml. Anonymous comment on 2011-08-17 00:20 (in Spanish) apparmor-utils fails to build because the perl-rpc-xml dependency is not detected 0xfc commented on 2011-08-16 13:08 updated version to 2.6.1. 0xfc commented on 2011-08-16 13:05 Harvie commented on 2011-08-13 21:49 Disowned. I hope someone will be able to take care of this and get it to community (along with some tested and debugged presets for popular packages). sverdj commented on 2011-08-13 15:08 AppArmor works fine here even on kernel 3.0.1, the "old compat patchset" still applies just fine. Make sure you don't install outdated user space though, given that this package is out of date.. Anonymous comment on 2011-07-01 12:32 Was pretty easy to set up, some user space tools are not working properly but the most important indeed do. Applied the compatibility patchset to 2.6.39 from and bumped the PKGVER to 2.6.1. Successfully using it to confine the browser stack in enforce mode (ff + plugins, dbus, gconf) on a separate user.
Set up the profiles by hand using the existing ones as guidance - it's basically just mindless trial and error. Harvie commented on 2011-05-16 07:32 soo lazy morning ;) any patches? :) Harvie commented on 2011-03-26 20:59 pejuko: true, that's why it's in depends=() of apparmor-utils... Anonymous comment on 2011-03-26 11:33 building apparmor-utils (2.6.0) fails without perl-rpc-xml t-8ch commented on 2011-03-21 10:10 harvie: now it works without rpm, i'm quite sure it got some rpm archive error last time Harvie commented on 2011-03-20 02:06 t-8ch: i don't have rpm... t-8ch commented on 2011-03-19 22:32 build fails without rpm Harvie commented on 2011-03-12 02:47 WOW. I've got this actually working: I've blocked ping from using raw network access (i've commented a few lines in /etc/apparmor.d/bin.ping): [root@insomnia ~]# apparmor_parser -r /etc/apparmor.d/bin.ping Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/bin.ping (/etc/apparmor.d/bin.ping line 28): profile /bin/ping network rules not enforced [root@insomnia ~]# ping harvie.cz ping: icmp open socket: Operation not permitted messages: Mar 12 03:41:34 insomnia kernel: type=1400 audit(1299897694.671:11): apparmor="STATUS" operation="profile_replace" name="/bin/ping" pid=17131 comm="apparmor_parser" Mar 12 03:41:38 insomnia kernel: type=1400 audit(1299897698.841:12): apparmor="DENIED" operation="capable" parent=14726 profile="/bin/ping" pid=17142 comm="ping" capability=13 capname="net_raw" I will update package as soon as possible Harvie commented on 2011-02-03 22:27 johnthekipper: plz be so kind and prepend all commands with LANG=C or use export LANG=C before pasting error messages.
i can translate it using google translate, but if someone in the "worldwide linux community" has the same problem as you, he will probably be able to google your post only if it's in english. thx. But well... build failed for me too... i'll take a look at it... johnthekipper commented on 2011-02-02 20:13 g++ -O2 -pipe -Wall -Wstrict-prototypes -Wsign-compare -Wmissing-field-initializers -Wnested-externs -Wformat-security -Wunused-parameter -D_GNU_SOURCE -DPACKAGE=\"apparmor-parser\" -DLOCALEDIR=\"/usr/share/locale\" -DSUBDOMAIN_CONFDIR=\"/etc/apparmor\" -o apparmor_parser parser_lex.o parser_yacc.o parser_main.o parser_interface.o parser_include.o parser_merge.o parser_symtab.o parser_misc.o parser_regex.o parser_variable.o parser_policy.o parser_alias.o pcre/pcre.o \ libapparmor_re/libapparmor_re.a -static-libgcc -L. /usr/lib/perl5/core_perl/pod2man apparmor.d.pod --release=NOVELL/SUSE --center=AppArmor --section=5 > apparmor.d.5 /bin/sh: /usr/lib/perl5/core_perl/pod2man: No such file or directory make: *** [apparmor.d.5] Error 127 Harvie commented on 2010-11-05 01:48 BTW please add your experiences to It's more valuable to have docs in wiki than some discussions in forums... THX Harvie commented on 2010-11-05 01:48 jelly: well not really. we can start "implementing AppArmor on ArchLinux" :-) At least we need following things before we can say that we have AppArmor on ArchLinux: - init (rc.d) scripts! - chase missing dependencies - configuration, etc...) - test everything And we can also look at - apparmor gnome applet (can't build, deprecated...) jelly commented on 2010-11-04 23:46 nice work, now we just need rc.d script and we can use apparmor! Harvie commented on 2010-11-04 22:34 There was a question about how i have managed to get split-pkg to AUR. Hackity-hack: :-) graysky commented on 2010-11-04 21:07 How did you get the split package to upload to AUR? Harvie commented on 2010-11-03 18:57 Added first split-package release.
There is still BIG mess in dependencies and i don't even know how to solve makedependencies between sub-packages... Harvie commented on 2010-10-31 15:51 BTW please add your experiences to It's more valuable to have docs in wiki than some discussions in forums... THX Harvie commented on 2010-10-31 15:51 wonder: thx. i didn't known that. i just supposed that everyone who wants to build something already have those tools installed, but we should rather add them to makedepends. but well if it's in the wiki they will be removed in next PKGBUILD release. wonder commented on 2010-10-31 13:24 @Harvie don't add makedepends just because some users are unable to read a wiki. gcc, make, bision, flex or whatever are in base-devel and is the first step in the wiki. Anonymous comment on 2010-10-31 03:07 Please add bison to dependencies. Harvie commented on 2010-10-30 13:45 BTW please add your experiences to It's more valuable to have docs in wiki than some discussions in forums... THX Harvie commented on 2010-10-30 13:41 jelly: i've fixed it, but following files will be included in package only if you have apparmor already installed: will fix it later... 34a35,37 > pkg/usr/bin > pkg/usr/bin/apparmor-dbus > pkg/usr/bin/profileeditor 305a309,311 > pkg/usr/share/doc > pkg/usr/share/doc/profileeditor > pkg/usr/share/doc/profileeditor/AppArmorProfileEditor.htb Harvie commented on 2010-10-30 13:06 jelly: aalogparse/aalogparse.h seems to me that it's some kind of recursive dependency: you have to install apparmor package to build apparmor package :-) this happend as i had old build already installed while building the package :-) i hate that because cool way to fix this are split-packages which are not supported by AUR (i can't even upload them)... I just think that wiki is more valuable for future users. than forums. (BTW I've been working until 5am :-) jelly commented on 2010-10-30 12:45 I can't build the package here. 
aadbus.c:14:35: fatal error: aalogparse/aalogparse.h: No such file or directory jelly commented on 2010-10-30 10:47 Hey Harvie, i wasn't really complaining :P , maybe it was because i was busy getting AppArmor to work until 3 am :) Great effort so far when i can get it working, i will add docs and look into the rc.d script ;) Anonymous comment on 2010-10-30 08:16 If gathering minds together on the forum to try and get apparmor to work counts as complaining, then I am truly sorry :) Thanks for your work on the apparmor package and wiki :) Harvie commented on 2010-10-30 03:02 BTW please add your experiences to Instead of chating (and complaining) about AppArmor in forums, we need some documentation... THX Harvie commented on 2010-10-30 03:02 Someone could try to make rc.d script working flawlessly on ArchLinux, here's draft: Harvie commented on 2010-10-30 01:56 BTW please add your experiences to Instead of chating (and complaining) about AppArmor in forums, we need some documentation... THX Harvie commented on 2010-10-30 01:53 flamelab: it's strange, but try reinstalling ruby on 64b. maybe they forgoten to increase package release number. i wasn't able to build it on x86_64 with ruby until i reinstalled it with pacman -S ruby maybe there's some other package providing ruby (but incorectly) Harvie commented on 2010-10-30 01:47 flamelab: i'm working on it... in meantime you can try adding swig to makedepends... Harvie commented on 2010-10-30 01:33 BTW please add your experiences to Instead of chating (and complaining) about AppArmor in forums, we need some documentation... THX Harvie commented on 2010-10-30 01:33 - Split-package (AUR does not support this...) -:32 BTW please add your experiences to Instead of chating (and complaining) about AppArmor in forums, we need some documentation... THX Harvie commented on 2010-10-30 01:28 Aaargh! 
Can't delete previous comments :-( BTW see Harvie commented on 2010-10-30 01:21:19 flamelab commented on 2010-10-30 00:40 There are a lot of errors during the package building, it searches for rpm (wtf?) and more. Did anybody else have problems building it (x86_64) ? Harvie commented on 2010-10-30 00:20 jelly: internal dependencies have been fixed... Well we should put this in split-package, but AUR does not support it right now :( i'll take a look at the external dependencies... jelly commented on 2010-10-29 22:51 it looks like LibAppArmor.pm is build from this package, so we need to fix something here jelly commented on 2010-10-29 22:24 this package also needs: Can't locate LibAppArmor/site_perl/5.10.1 /usr/share/perl5/site_perl/5.10.1 /usr/lib/perl5/current /usr/lib/perl5/site_perl/current .) at /usr/lib/perl5/vendor_perl/Immunix/SubDomain.pm line 43. So a package with libapparmor bindings too perl ;) jelly commented on 2010-10-29 22:19 There are some missing dependencies: perl-locale-gettext perl-term-readkey perl-rpc-xml jelly commented on 2010-10-29 22:13 here is a topic about AppArmor ;) Harvie commented on 2010-10-29 21:13 dyscoria: Patches are welcome. There are sh*tloads of thing to do before AppArmor will be ready to deploy on ArchLinux :-) Anonymous comment on 2010-10-29 21:03 Harvie commented on 2010-10-28 15:23 Version 2.5.1-1 have been tested to build on x86_64 and on kernel without apparmor module. Harvie commented on 2010-10-28 14:56 See for more informations
https://aur.archlinux.org/packages/apparmor/?ID=42279&comments=all
java.io.PrintStream.print() Method

Description
The java.io.PrintStream.print() method prints an object. The string produced by the String.valueOf(Object) method is translated into bytes according to the platform's default character encoding, and these bytes are written in exactly the manner of the write(int) method.

Declaration
Following is the declaration for the java.io.PrintStream.print() method:

public void print(Object obj)

Parameters
obj -- The Object to be printed

Return Value
This method does not return a value.

Exception
NA

Example
The following example shows the usage of the java.io.PrintStream.print() method.

package com.tutorialspoint;

import java.io.*;

public class PrintStreamDemo {
   public static void main(String[] args) {
      Object x = 50;
      Object s = "Hello World";

      // create printstream object
      PrintStream ps = new PrintStream(System.out);

      // print objects
      ps.print(x);
      ps.print(s);

      // flush the stream
      ps.flush();
   }
}

Let us compile and run the above program; this will produce the following result:

50Hello World
http://www.tutorialspoint.com/java/io/printstream_print_object.htm
Related questions and short tutorials on URL handling (several entries referenced links that were lost in extraction):

- avoid using the whole URL - how can I avoid using the whole URL?
- php parse string for url - do I need a regex to parse a URL-type string in PHP?
- Append Arguments to the URL - how to append the query string to the current URL on a link click, in a Spring MVC page with multiple links
- Call Https Url - a web service hosted with an SSL certificate; how to call the HTTPS URL without extra work on the caller's side
- URL Block - how to block one website using Java
- Access URL Failed - how to convert a domain name into an IP address; a J2ME application unable to access a URL via Connector.open("")
- Multiple url handling - pick URLs from an Excel/CSV file, assign them to a String variable, and open them in a browser using Selenium
- How to do url rewriting - changing one link form into another
- url capture in JavaScript - a bookmark that captures the site URL and the user's input and saves both to a DB
- url masking (JSP-Servlet) - Tomcat on a local machine; can abc.localhost be used in place of localhost?
- HTTP URL Address Encoding in Java
- download image using url in jsp
- JavaScript Navigate to URL - code that navigates to a specified URL
- Access URL - access data from a specific URL over a stream connection (J2ME)
- URL Regular Expression for Struts Validator - a regular expression for URL validation in validation.xml
- jQuery url validation - the "url" method checks whether the element is empty or a wrong URL type was entered
- JSP Get URL - a method that returns the URL of the current page
- url parameters displayed in short sentence - turning a parameterized URL (e.g. id=1) into a bookmarkable URL that is still retrieved dynamically
- dynamically change URL in Struts 1.2 - change the URL on a particular button click
- creating a friendly url for a web application
- How to open URL in iPhone SDK - open a URL inside the application and come back to it afterwards
- PHP add parameter to url - an example of parse_url in PHP
- php download file from url - download multiple files in PHP using their URLs
- Bad URL for SVN Eclipse plugin - installing the Subclipse update sites (Subclipse 1.2.x for Eclipse 3.2+, Subclipse 1.0.x for Eclipse 3.0/3.1)
- Java file to url - converting a File to a URL with toURL(), whether the path is simple or contains special characters
- How to store url path in file - storing a URL path rather than a physical directory path
- Opening a URL from an Applet - open a URL in the same window from an applet
- URL validation in JavaScript - validating a URL entered into an HTML page
- post method does not support this url - an HTTP error: POST method not supported by the URL
- URL in terms of Java network programming - using the URL class in Java
- Java Servlet: URL rewriting - a session-tracking technique used when the browser does not support cookies
- How to retrieve URL information - retrieving network information for a given URL via a URL object
- Creating URL using <c:url> - the JSTL core action; the container automatically uses URL rewriting when cookies are disabled
- Converting a Filename to a URL - the toURL() method returns a URL object
- Java Servlet: get URL example - getRequestURL() returns the requested URL as a StringBuffer rather than a String
- SWATO - an open-source framework whose server-side Java library can be deployed in any Servlet 2.3 container
- How to print a webpage without url and date in JSP
- Get current page URL from backing bean - JSF
- URL file download and save in the local directory - download a file from a URL and save it to a specified directory
- Struts 2 URL validator - checks whether a given field contains a valid URL
- Create URL using <c:url> tag of the JSTL core tag library - building a URL from user-given and optional parameters
- How to overcome proxy in Java URL connection - reading a URL from behind a proxy
- Saving URL in local disk - saving a URL to the local disk through a Java statement
- how to get domain name from url - new URL(links) and linkURL.getHost()
- Java - Opening a URL from an Applet
- JSP decode URL - converting a specific URL back from its encoded form
- how to retrieve the url pattern of the requesting servlet - given Servlet1 and Servlet2, how to get the URL pattern of Servlet1
- Open Source Metaverses - the OpenSource Metaverse Project, created because a strong demand exists beyond individual proprietary worlds (snippet truncated)
The key deliverables of the OpenSource Metaverse DIRECT ACCESS USING URL SHOULD NOT PERMIT DIRECT ACCESS USING URL SHOULD NOT PERMIT if someone know the url address like : then he/she can open my page.i... me help USING LOGIN I DONE THIS BUT PROBLEM WITH DIRECT URL ADDRESS HE/SHE CAN how do i grab the url in php? how do i grab the url in php? I want to grab the 'entire' url, including any special characters like & and #, from a site. I then want... # = %23 I tried URL encoding but it only grabs the part until the #, which Convert URI to URL Convert URI to URL  ... Identifier (URI) reference to uniform resource locator (URL). Here we are give... in the example we make a URI object and a URL object. After that we pass a URI " Request URl using Retrive data from dtabase Request URl using Retrive data from dtabase Using With GWT the user... the contents from the db based on the event id. But this jsp url should be a public url. Means anyone can access it directly. ( something like Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/94339
CC-MAIN-2015-32
refinedweb
1,917
71.44
I am working with the following code in a python script:

def fun():
    # Code here

fun()

I want to execute this script and also find out how much time it took to execute, in minutes. How do I find out how much time it took for this script to execute? Can anyone help me with this?

You can try the following code and see if it works:

from datetime import datetime

startTime = datetime.now()

# do something

# Python 2:
print datetime.now() - startTime

# Python 3:
print(datetime.now() - startTime)
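Since the question asks for the time in minutes specifically, here is a small sketch using time.perf_counter (a monotonic clock intended for measuring elapsed intervals, available on Python 3.3+); fun here is just a placeholder for your real work:

```python
import time

def fun():
    # placeholder for the real work
    return sum(range(100_000))

start = time.perf_counter()            # monotonic: unaffected by system clock changes
fun()
elapsed = time.perf_counter() - start  # elapsed wall time in seconds
print(f"elapsed: {elapsed / 60:.6f} minutes")
```

datetime.now() also works, as in the answer above, but perf_counter is not affected by system clock adjustments made while the script runs, which makes it the safer choice for timing.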
- 1 Introduction - 2 Installation - 3 An Introduction to Using DIVINE - 4 Commandline Interface - 5 Model Checking C and C++ Code via LLVM Bitcode - 6 Interactive Debugging - 7 DiVM: A Virtual Machine for Verification - 8 DiOS, A Small, Verification-Oriented Operating System 1 Introduction The DIVINE project aims to develop a general-purpose, fast, reliable and easy-to-use model checker. The roots of the project go back to a special-purpose, explicit-state, asynchronous system model checking tool for LTL properties. However, rigorous development processes are in a steady decline, being displaced by more agile, flexible and dynamic methods. In the agile world, there is little place for large-scale, long-term planning and pondering on "paper only" designs, which would favour the use of a traditional model checker. The current version of DIVINE strives to keep up with this dynamic world, bringing "heavy-duty" model checking technology much closer to daily programming routine. Our major goal is to express model checking problems in a language which every developer is fluent with: the programming language of their own project. Even if you don't apply model checking to your resulting program directly, writing throwaway models makes much more sense in a language you understand well and use daily. Current versions of DIVINE provide out-of-the-box support for the C (C99) and C++ (C++14) programming languages, including their respective standard libraries. Additional libraries may be rebuilt for use with DIVINE by the user. 2 Installation This section is only relevant when you are installing from source. We will assume that you have obtained a source tarball from, e.g. divine-4.1.8.tar.gz. DIVINE can be built on Linux and possibly on other POSIX-compatible systems including macOS (not tested). There is currently no support for DIVINE on Windows. If you do not want to build DIVINE from sources, you can download a virtual machine image with pre-built DIVINE. 
2.1 Prerequisites If you use recent Ubuntu, Fedora or Arch Linux (or possibly another distribution which uses apt-get, yum or pacman as a package manager) the easiest way to get dependencies of DIVINE is to run make prerequisites in the directory with the sources (you will need to have make installed): $ tar xvzf divine-4.1.8.tar.gz $ cd divine-4.1.8 $ make prerequisites Otherwise, to build DIVINE, you will need the following: - A POSIX-compatible operating system, make(tested with BSD and GNU), - GNU C++ (4.9 or newer) or clang (3.2 or newer), - CMake [] 3.2 or newer, libedit[thrysoee.dk/editline], - about 12GB of disk space and 4GB of RAM (18GB for both release and debug builds), Additionally, DIVINE can make use of the following optional components: - ninja build system [ninja-build.org] for faster builds, - pandoc [pandoc.org] for formatting the manual (HTML and PDF with pdflatex). 2.2 Building & Installing First, unzip the distribution tarball and enter the newly created directory $ tar xvzf divine-4.1.8.tar.gz $ cd divine-4.1.8 The build is driven by a Makefile, and should be fully automatic. You only need to run: $ make This will first build a C++14 toolchain and a runtime required to build DIVINE itself, then proceed to compile DIVINE. After a while, you should obtain the main DIVINE binary. You can check that this is indeed the case by running: $ ./_build.release/tools/divine help You can now run DIVINE from its build directory, or you can optionally install it by issuing $ make install This will install DIVINE and its version of LLVM into /opt/divine/. You can also run the test-suite if you like: $ make check 3 An Introduction to Using DIVINE In this section, we will give a short example on how to invoke DIVINE and its various functions. We will use a small C program (consisting of a single compilation unit) as an example, along with a few simple properties. 
3.1 Basics of Program Analysis We will start with the following simple C program: The above code contains a bug: an out of bounds access to array at index i == 4; we will see how this is presented by DIVINE. The program can be compiled by your system's C compiler and executed. If you do so, it will probably run OK despite the out-of-bounds access (this is an example of a stack buffer overflow –- the program will incorrectly overwrite an adjacent value on the stack which, in most cases, does not interfere with its execution). We can proceed to check the program using DIVINE: $ divine check program.c DIVINE will compile your program and run the verifier on the compiled code. After a short while, it will produce the following output: compiling program.c loading bitcode … LART … RR … constants … done booting … done states per second: 419.192 state count: 83 error found: yes error trace: | [0] writing at index 0 [0] writing at index 1 [0] writing at index 2 [0] writing at index 3 [0] writing at index 4 FAULT: access of size 4 at [heap* 295cbfc3 10h ddp] is 4 bytes out of bounds [0] FATAL: memory error in userspace active stack: - symbol: void {Fault}::handler<{Context} >(_VM_Fault, _VM_Frame*, void (*)(), ...) location: /divine/include/dios/core/fault.hpp:189 - symbol: init location: program.c:9 - symbol: main location: program.c:16 - symbol: _start location: /divine/src/libc/functions/sys/start.cpp:77 a report was written to program.report.mtjmlg The output begins with compile- and load-time report messages, followed by some statistics. Starting with error found: yes, the detected error is introduced. 
The error-related information contains: error trace-- shows the output that the program printed during its execution, until the point of the error; a description of the error concludes this section, active stack-- this field contains the stack trace of the thread responsible for the error (with a fault handler at the top) In our case, the most important information is FAULT: access of size 4 at [heap* 295cbfc3 10h ddp] is 4 bytes out of bounds which indicates the error was caused by an invalid memory access. The other crucial information is the line of code which caused the error: - symbol: init location: program.c:9 Hence we can see that the problem is caused by an invalid memory access on line 9 in our program. Note: one might notice that the addresses in DIVINE are printed in the form [heap* 295cbfc3 10h ddp]; the meaning of which is: the pointer in question is a heap pointer (other types of pointers are constant pointers and global pointers; stack pointers are not distinguished from heap pointers); the object identifier (in hexadecimal, assigned by the VM) is 295cbfc3; the offset (again in hexadecimal) is 10 and the value is a defined pointer ( ddp, i.e. it is not an uninitialised value). 3.2 Debugging Counterexamples with the Interactive Simulator Now that we have found a bug in our program, it might be useful to inspect the error in more detail. For this, we can use DIVINE's simulator. $ divine sim program.c After compilation, we will land in an interactive debugger environment: Welcome to 'divine sim', an interactive debugger. Type 'help' to get started. # executing __boot at /divine/src/dios/core/dios.cpp:315 > There are a few commands we could use in this situation. For instance, the start command brings the program to the beginning of the main function (fast-forwarding through the internal program initialisation process). 
Another alternative is to invoke sim with --load-report (the name of the report file is printed at the very end of the check output):

$ divine sim --load-report program.report.mtjmlg

The simulator now prints identifiers of program states along the violating execution, together with the output of the program (prefixed with T:). The replay stops at the error that was found by divine check. One can use the up and down commands to move through the active stack, to examine the context of the error. To examine local variables, we can use the show command; its output includes the value of i:

.i$1:
    type: int
    value: [i32 4 d]

The entry suggests that i is an int variable and its value is represented as [i32 4 d]: meaning it is a 32-bit integer with value 4 and it is fully defined ( d). If we go one frame up and use show again, we can see the entry for x:

.x:
    type: int[]
    .[0]:
        type: int
        value: [i32 42 d]
    .[1]:
        type: int
        value: [i32 42 d]
    .[2]:
        type: int
        value: [i32 42 d]
    .[3]:
        type: int
        value: [i32 42 d]

We see that x is an int array and that it contains 4 values: the access at x[4] is clearly out of bounds. More details about the simulator can be found in the section on interactive debugging.

3.3 Controlling the Execution Environment

Programs in DIVINE run in an environment provided by DiOS, DIVINE's operating system, and by runtime libraries (including C and C++ standard libraries and pthreads). The behaviour of this runtime can be configured using the -o option. To get the list of options, run divine info program.c:

$ divine info program.c
compiling program.c
DIVINE 4.0.22
Available options for test/c/2.trivial.c are:
- [force-]{ignore|report|abort}: configure the fault handler
  arguments:
  - assert
  - arithmetic
  - memory
  [...]
- config: run DiOS in a given configuration
  arguments:
  - default: async threads, processes, vfs
  - passthrough: pass syscalls to the host OS
  - replay: re-use a trace recorded in passthrough mode
  - synchronous: for use with synchronous systems
  [...]
use -o {option}:{value} to pass these options to the program

It is often convenient to assume that malloc never fails: this can be achieved by passing the -o nofail:malloc option to DiOS. Other important options are those controlling the fatalness of errors (the default is abort -- if an error of type abort is encountered, the verification ends with an error report; on the other hand, the verifier will attempt to continue when it detects an error that was marked as ignore). Furthermore, it is possible to pass arguments to the main function of the program by appending them after the name of the source file (e.g. divine verify program.c main-arg-1 main-arg-2), and to add environment variables for the program using the -DVAR=VALUE option.

3.4 Compilation Options and Compilation of Multiple Files

Supposing you wish to verify something that is not just a simple C program, you may have to pass compilation options to DIVINE. In some cases, it is sufficient to pass the following options to divine verify: the -std=<version> option (e.g. -std=c++14) sets the C/C++ standard to be used and is forwarded directly to the compiler; other options can be passed using the -C option, e.g. -C,-O3 to enable optimisation, or -C,-I,include-path to add include-path to the directories in which the compiler looks for header files. However, if you need to pass more options, or if your program consists of more than one source file, it might be more practical to compile it to LLVM bitcode first and pass this bitcode to divine verify:

$ divine cc -Iinclude-path -O1 -std=c++14 -DFOO=bar program.c other.c
$ divine verify program.bc

divine cc is a wrapper for the clang compiler and it is possible to pass most of clang's options to it directly.
Basic information about the binary can be obtained by issuing divine --version: version: 4.0.12+8fbbb7005640 source sha: 37282b999cc10a027cabbb9669365f9e630fb685 runtime sha: b8939c99ca81161cbe69076e31073807f69dda72 build date: 2017-09-19, 14:02 UTC build type: Debug 4.1 Synopsis divine cc [compiler flags] <sources> divine verify [...] <input file> divine draw [...] <input file> divine run [...] <input file> divine sim [...] <input file> 4.2 Input Options All commands that work with inputs share a number of flags that influence the input program. divine {...} [-D {string}] [--autotrace {tracepoint}] [--sequential] [--disable-static-reduction] [--relaxed-memory {string}] [--lart {string}] 4.3 State Space Visualisation & Simulation divine draw [--distance {int}] [--render {string}] {input options} To visualise (a part of) the state space, you can use divine draw, which creates a graphviz-compatible representation. By default, it will run " dot -Tx11" to display the resulting graph. You can override the drawing command to run by using the --render switch. The command will get the dot-formatted graph description on its standard input. Out of common input options, the --autotrace option is often quite useful with draw. divine sim [--batch] [--load-report {file}] [--skip-init] {input options} The sim sub-command is used to interactively explore a program using a terminal-based interface. Use help in the interactive prompt to obtain help on available commands. See also Interactive Debugging. 4.4 Model Checking divine {check|verify} [model checking options] [exploration options] {input options} These two commands are the main workhorse of model checking. The verify command performs full model checking under conservative assumptions. The check command is more liberal, and will assume, for instance, that malloc will not fail. 
While check is likely to cover most scenarios, it omits cases that are either very expensive to check or that appear in programs very often and make the tool cumbersome to use. The algorithms DIVINE uses are often resource-intensive and some are parallel (multi-threaded). The verify and check commands will, in common cases, use a parallel algorithm to explore the state space of the system. By default, parallel algorithms will use available cores, 4 at most. A few switches control resource use:

divine {...} [--threads {int}] [--max-memory {mem}] [--max-time {int}]

--threads {int} | -T {int} - The number of threads to use for verification. The default is 4 or the number of cores if less than 4. For optimal performance, each thread should get one otherwise mostly idle CPU core. Your mileage may vary with hyper-threading (it is best to run a few benchmarks on your system to find the best configuration).

--max-memory {mem} - Limit the amount of memory divine is allowed to allocate. This is mainly useful to limit swapping. When the verification exceeds available RAM, it will usually take an extremely long time to finish and put a lot of strain on the IO subsystem. It is recommended that you do not allow divine to swap excessively, either using this option or by some other means.

--max-time {int} - Put a limit of {int} seconds on the maximal running time.

Verification results can be written in a few forms, and resource use can also be logged for benchmarking purposes:

divine {...} [--report {fmt}] [--no-report-file] [--report-filename {string}] [--num-callers {int}]

--report {fmt} - At the end of a verification run, produce a comprehensive, machine-readable report. Currently the available formats are: none -- disables the report entirely, yaml -- prints a concise, yaml-formatted summary of the results (without memory statistics or machine-readable counterexample data), and yaml-long, which prints everything as yaml.
--no-report-file - By default, the long-form verification results (equivalent to --report yaml-long) are stored in a file. This switch suppresses that behaviour. --report-filename {string} - Store the long-form verification results in a file with the given name. If this option is not used, a unique filename is derived from the name of the input file. 5 Model Checking C and C++ Code via LLVM Bitcode The traditional "explicit-state" model checking practice is not widely adopted in the programming community, and vice-versa, use of mainstream programming languages is not a common practice in the model checking community. Hence, model checking of systems expressed as C programs comes with some novelties for both kinds of users. First of all, the current main application of DIVINE is verification of safety (and some liveness) properties of asynchronous, shared-memory programs. The typical realisation of such asynchronous parallelism is programming with threads and shared memory. Often for performance and/or familiarity reasons, programming with threads, shared memory and locking is the only viable alternative, even though the approach is fraught with difficulties and presents many pitfalls that can catch even expert programmers unaware -- not to say novices. Sadly, resource locking is inherently non-compositional, hence there is virtually no way to provide a reliable yet powerful abstraction, all that while retaining speed and scalability. Despite all its shortcomings, lock-based programming (or alternatively, lock-free shared memory programming, which is yet more difficult) is becoming more prevalent. Model checking provides a powerful tool to ascertain correctness of programs written with locks around concurrent access to shared memory. Most programmers will agree that bugs that show up rarely and are nearly impossible to reproduce are the worst kind to deal with. 
Sadly, most concurrency bugs are of this nature, since they arise from subtle interactions of nondeterministically scheduled threads of execution. A test-case may work 99 times out of 100, yet the 100th time die due to an unfathomable invalid memory access. Even sophisticated modern debugging tools like valgrind are often powerless in this situation. This is where DIVINE can help, since it systematically and efficiently explores all relevant execution orderings, discovering even the subtlest race conditions, restoring a crucially important property of bugs: reproducibility. If you have ever changed a program and watched the test-case run in a loop for hundreds of iterations, wondering if the bug is really fixed, or it just stubbornly refuses to crash the program... well, DIVINE is the tool you have been waiting for. Of course, there is a catch. Model checking is computationally intensive and memory-hungry. While this is usually not a problem with comparably small unit tests, applying model checking to large programs may not be feasible -- depending on your use-case, and on the amount of memory and time that you have. With the universal LLVM backend, DIVINE can support a wide range of compiled programming languages. However, out of the box, language-specific support is only provided for C and C++. A fairly complete ISO C runtime library is provided as part of DIVINE, with appropriate hooks into DIVINE, as well as an implementation of the pthread specification, i.e. the POSIX threading interface. Additionally, an implementation of the standard C++ library is bundled with DIVINE. Besides the standard library, DIVINE also provides an adapted version of the runtime library required by C++ programs to implement runtime type identification and exception handling; both are fully supported by DIVINE. If your language interfaces with the C library, the libc part of language support can be re-used transparently. 
However, currently no other platform glue is provided for other languages. Your mileage may vary. Data structure and algorithmic code can be very likely processed with at most trivial additions to the support code, in any language that can be compiled into LLVM bitcode. 5.1 Compiling Programs The first step to take when you want to use DIVINE for C/C++ verification is to compile your program into LLVM bitcode and link it against the DIVINE-provided runtime libraries. When you issue divine cc program.c, divine will compile the runtime support and your program and link them together automatically, using a builtin clang-based compiler. The cc subcommand accepts a wide array of traditional C compiler flags like -I, -std, -W and so on. Alternatively, you can pass a single-file C or C++ program directly to other divine commands, like verify or sim, in which case the program will be compiled and linked transparently. 5.2 Limitations When DIVINE interprets your program, it does so in a very strictly controlled environment, so that every step is fully reproducible. Hence, no real IO is allowed; the program can only see its own memory. While the pthread interface is fully supported, the fork system call is not: DIVINE cannot verify multi-process systems at this time. Likewise, C++ exceptions are well-supported but the C functions setjmp and longjmp are not yet implemented. The interpreter simulates an "in order" CPU architecture, hence any possible detrimental effects of instruction and memory access reordering won't be detected. Again, an improvement in this area is planned for a future release. 5.3 State Space of a Program Internally, DIVINE constructs the state space of your program -- an oriented graph, whose vertices represent various states your program can reach, i.e. the values of all its mapped memory locations, the values in all machine registers, and so on. DIVINE normally doesn't store (or construct) all the possible states -- only a relevant subset. 
In a multi-threaded program, it often happens that more than one thread can run at once, i.e. if your program is in a given state, the next state it will get into is determined by chance -- it all hinges on which thread gets to run. Hence, some (in fact, most) states in a parallel program can proceed to multiple different configurations, depending on a scheduling choice. In such cases, DIVINE has to explore both such "variant" successors to determine the behaviour of the program in all possible scenarios. You can visualise the state space DIVINE explores by using divine draw -- this will show how the "future" of your program branches through various configurations, and how it converges back into a common point -- or, if its behaviour had changed depending on scheduling, diverges into multiple different outcomes. 5.4 Non-Deterministic Choice Scheduling is not the only source of non-determinism in typical programs. Interactions with the environment can have different outcomes, even if the internal state of the program is identical. A typical example would be memory allocation: the same call in the same context could either succeed or fail, depending on the conditions outside the control of the program. In addition to failures, the normal input of the program (files, network, user input on the terminal) are all instances of non-determinism due to external influences. The sum of all such external behaviours that affect the outcome of the program is called the environment (this includes the scheduling of threads done by the operating system and/or CPU). When testing, the environment is controlled to some degree: the inputs are usually fixed, but eg. thread scheduling is (basically) random -- it changes with every test execution. Resource exhaustion can be simulated in a test environment to some degree. In a model checker, the environment is controlled much more strictly: when a test case is run through a model checker, it will come out the same every time. 
5.5 ω-Regular Properties and LTL

5.6 Symbolic verification

In DIVINE, we support verification of programs with inputs, using a symbolic representation of data. You can denote a symbolic variable with an annotation when you declare it in the code:

_SYM int x;

To do so, you need to include the header file abstract/domains.h. The annotated value does not need to be initialized, since it is inherently considered to have an arbitrary value of the given type. Verification of a program with symbolic variables requires a special exploration algorithm, which you can turn on with the --symbolic option:

$ divine check --symbolic program.cpp

Besides annotations, DIVINE supports SV-COMP intrinsics as defined in the competition rules. The symbolic-domain implementation of the intrinsics is compiled and linked with the verified program automatically and can be used directly, without any includes. DIVINE provides the following intrinsics:

- __VERIFIER_nondet_X() to model nondeterministic values of type X ( bool, char, int, uint, short, ushort, long, ulong),
- __VERIFIER_assume(int expression) restricts the program behaviour according to the expression,
- __VERIFIER_assert(int condition),
- __VERIFIER_error(),
- __VERIFIER_atomic_begin(),
- __VERIFIER_atomic_end().

The implementation of the intrinsics can be found in the file runtime/abstract/svcomp.cpp.

6 Interactive Debugging

DIVINE comes with an interactive debugger, available as divine sim. The debugger loads programs the same way verify and other DIVINE commands do, so you can run it on standalone C or C++ source files directly, or you can compile larger programs into bitcode and load that.

6.1 Tutorial

Let's say you have a small C program which you wish to debug. We will refer to this program as program.c. To load the program into the debugger, simply execute

$ divine sim program.c

and DIVINE will take care of compiling and linking your program.
It will load the resulting bitcode but will not execute it immediately: instead, sim will present a prompt to you, looking like this: # executing __boot at /divine/src/dios/dios.cpp:79 > The __boot function is common to all DIVINE-compiled programs and belongs to DiOS, our minimalist operating system. When debugging user-mode programs, the good first command to run is > start which will start executing the program until it enters the main function: # a new program state was stored as #1 # active threads: [0:0] # a new program state was stored as #2 # active threads: [0:0] # executing main at program.c:14 > We can already see a few DIVINE-specific features of the debugger. First, program states are stored and retained for future reference. Second, thread switching is quite explicit -- every time a scheduling decision is made, sim informs you of this fact. We will look at how to influence these decisions later. Now is a good time to familiarise ourselves with how to inspect the program. There are two commands for listing the program itself: source and bitcode and each will print the currently executing function in the appropriate form (the original C source code and LLVM bitcode, respectively). Additionally, when printing bitcode, the current values of all LLVM registers are shown as inline comments: label %entry: >> %01 = alloca [i32 1 d] # [global* 0 0 uun] dbg.declare %03 = getelementptr %01 [i32 0 d] [i32 0 d] # [global* 0 0 uun] call @init %03 Besides printing the currently executing function, both source and bitcode can print code corresponding to other functions; in fact, by default they print whatever function the debugger variable $frame refers to. To print the source code of the current function's caller, you can issue > source caller 97 void _start( int l, int argc, char **argv, char **envp ) { >> 98 int res = __execute_main( l, argc, argv, envp ); 99 exit( res ); 100 } To inspect data, we can instead use show and inspect. 
We have mentioned $frame earlier: there is, in fact, a number of variables set by sim. The most interesting of those is $_ which normally refers to the most interesting object at hand, typically the executing frame. By default, show will print the content of $_ (like many other commands). However, when we pass an explicit parameter to these commands, the difference between show and inspect becomes apparent: the latter also sets $_ to its parameter. This makes inspect more suitable for exploring more complex data, while show is useful for quickly glancing at nearby values: > step --count 2 > show attributes: address: heap* 9cd25662 0+0 shared: 0 pc: code* 2 2 insn: dbg.declare location: program.c:16 symbol: main .x: type: int[] .[0]: type: int value: [i32 0 u] .[1]: type: int value: [i32 0 u] .[2]: type: int value: [i32 0 u] .[3]: type: int value: [i32 0 u] related: [ caller ] This is how a frame is presented when we look at it with show. 6.2 Collecting Information Apart from show and inspect which simply print structured program data to the screen, there are additional commands for data extraction. First, backtrace will print the entire stack trace in one go, by default starting with the currently executing frame. It is also possible to obtain all stack traces reachable from a given heap variable, e.g. > backtrace $state # backtrace 1: __dios::_InterruptMask<true>::Without::Without(__dios::_InterruptMask<true>&) at /divine/include/libc/include/sys/interrupt.h:42 _pthread_join(__dios::_InterruptMask<true>&, _DiOS_TLS*, void**) at /divine/include/libc/include/sys/interrupt.h:77 pthread_join at /divine/src/libc/functions/sys/pthread.cpp:539 main at test/pthread/2.mutex-good.c:22 _start at /divine/src/libc/functions/sys/start.cpp:76 # backtrace 2: __pthread_entry at /divine/src/libc/functions/sys/pthread.cpp:447 Another command to gather data is call, which allows you to call a function defined in the program. 
The function must not take any parameters and will be executed in debug mode (see Section 7.10 -- the important caveat is that any dbg.call instructions in your info function will be ignored). Execution of the function will have no effect on the state of the simulated program. If you have a program like this:

#include <sys/divm.h>

void print_hello()
{
    __vm_trace( _VM_T_Text, "hello world" );
}

int main() {}

The call command works like this:

> call print_hello
hello world

Finally, the info command serves as a universal information gathering alias -- you can set up your own commands that then become available as info sub-commands:

> info --setup "call print_hello" hello
> info hello
hello world

The info command also provides a built-in sub-command registers which prints the current values of machine control registers (see also Section 7.2):

> info registers
Constants:    220000000
Globals:      120000000
Frame:        9cd2566220000000
PC:           340000000
Scheduler:    eb40000000
State:        4d7b876d20000000
IntFrame:     1020000000
Flags:        0
FaultHandler: b940000000
ObjIdShuffle: faa6693f
User1:        0
User2:        201879b120000000
User3:        6ca5bc2260000000
User4:        0

7 DiVM: A Virtual Machine for Verification

Programs in DIVINE are executed by a virtual machine (called DiVM). The machine code executed by this virtual machine is an extension of LLVM bitcode. The LLVM part of this "machine language" is described in detail in the LLVM Documentation. The extensions of the language and the semantics specific to DiVM are the subject of this chapter.

7.1 Activation Frames

Unlike in a 'traditional' implementation of C, there is no contiguous stack in DiVM. Instead, each activation record (frame) is allocated from the heap and its size is fixed. Allocations that are normally done at runtime from the stack are instead done from the heap, using the alloca LLVM instruction. Additionally, since LLVM bitcode is in partial SSA form, what LLVM calls 'registers' are objects quite different from traditional machine registers.
The registers used by a given function are bound to the frame of that function (they cannot be used to pass values around and they don't need to be explicitly saved across calls). In the VM, this is realized by storing registers (statically allocated) in the activation record itself, along with a DiVM-specific "program counter" register (this is an actual register, but it is saved across calls automatically by the VM; see also Control Registers below) and a pointer to the caller's activation frame. The header of the activation record has a C-compatible representation, available as struct _VM_Frame in sys/divm.h.

7.2 Control Registers

The state of the VM consists of two parts, a set of control registers and the heap (structured memory). All available control registers are described by enum _VM_ControlRegister defined in sys/divm.h and can be manipulated through the __vm_control hypercall (see also Section 7.4 below). Please note that control registers and LLVM registers (SSA values) are two different things. The control registers are of two types, holding either an integer or a pointer. The only integer register, _VM_CR_Flags, is used as a bitfield.
Four control registers govern address translation and normal execution:

- _VM_CR_Constants contains the base address of the heap object (see Heap below) used by the VM to hold constants
- _VM_CR_Globals is the base address of the heap object where global variables are stored
- _VM_CR_Frame points to the currently executing activation frame
- _VM_CR_PC is the program counter

Four additional registers are concerned with scheduling and interrupt control (for details, see Section 7.5 below):

- _VM_CR_Scheduler is the entry address of the scheduler
- _VM_CR_State is the object holding the persistent state of the scheduler
- _VM_CR_IntFrame is the address of the interrupted frame (see also Section 7.5)
- _VM_CR_Flags is described in more detail below

Finally, there is _VM_CR_FaultHandler (the address of the fault handler, see Section 7.6) and four user registers (_VM_CR_User1 through _VM_CR_User4) of unspecified types: they can hold either a 64-bit integer or a pointer. The VM itself never looks at the content of those registers.

The flags register (_VM_CR_Flags) is further broken down into individual bits, described by enum _VM_ControlFlags, again defined in sys/divm.h. These are:

- _VM_CF_Mask, if set, blocks all interrupts
- _VM_CF_Interrupted, if set, causes an immediate interrupt (unless _VM_CF_Mask is also set, in which case the interrupt happens as soon as _VM_CF_Mask is lifted)
- _VM_CF_KernelMode, which indicates whether the VM is running in user or kernel mode; this bit is set by the VM when __boot or the scheduler is entered and whenever an interrupt happens; the bit can be cleared (but not set) via __vm_control

The remaining 3 flags indicate properties of the resulting edge in the state space (see also State Space of a Program). These may be set by the program itself or by a special monitor automaton, a feature of DiOS which enables modular specification of non-trivial (sequence-dependent) properties. These 3 flags are reset by the VM upon entering the scheduler (see Section 7.5).
The edge-specific flags are:

- _VM_CF_Error indicates that an error -- a safety violation -- ought to be reported (a good place to set this is the fault handler, see Section 7.6),
- _VM_CF_Accepting indicates that the edge is accepting, under a Büchi acceptance condition (see also ω-Regular Properties and LTL),
- _VM_CF_Cancel indicates that this edge should be abandoned (it will not become a part of the state space and neither will its target state, unless it is also reachable some other way)

7.3 Heap

The entire persistent state of the VM is stored in the heap. The heap is represented as a directed graph of objects, where pointers stored in those objects act as the edges of the graph. For each object, in addition to the memory corresponding to that object, a supplemental information area is allocated transparently by the VM for tracking metadata, like which bytes in the object are initialised (defined) and a list of addresses in the object where pointers are stored. Activation frames, global variables and even constants are all stored in the heap. The heap is also stored in a way that makes it quite efficient (both time- and memory-wise) for the VM to take snapshots and store them. This is how model checking and reversible debugging are realized in DIVINE.

7.4 The Hypercall Interface

The interface between the program and the VM is based on a small set of hypercalls (a list is provided in tbl. 1). This way, unlike pure LLVM, the DiVM language is capable of encoding an operating system, along with a syscall interface and all the usual functionality included in system libraries.

7.5 Scheduling

The DIVINE VM has no intrinsic concept of threads or processes. Instead, it relies on an "operating system" to implement such abstractions and the VM itself only provides the minimum support necessary. Unlike with "real" computers, a system required to operate DiVM can be extremely simple, consisting of just two C functions (one of them is __boot, see Boot Sequence below).
The latter of those is the scheduler, the responsibility of which is to organize interleaving of threads in the program to be verified. However, the program may not use threads but some other form of concurrency -- it is up to the scheduler, which may be provided by the user, to implement the correct abstractions. From the point of view of the state space (cf. State Space of a Program), the scheduler decides what the successors of a given state are. When DIVINE needs to construct successors to a particular state, it executes the scheduler in that state; the scheduler decides which thread to run (usually with the help of the non-deterministic choice operator) and transfers control to that thread (by changing the value of the _VM_CR_Frame control register, i.e. by instructing DIVINE to execute a particular activation frame). The VM then continues execution in the activation frame that the scheduler has chosen, until it encounters an interrupt. When DIVINE loads a program, it annotates the bitcode with interrupt points, that is, locations in the program where threads may need to be re-scheduled. When such a point is encountered, the VM sets the _VM_CF_Interrupted bit in _VM_CR_Flags and unless _VM_CF_Mask is in effect, an interrupt is raised immediately. Upon an interrupt, the values of _VM_CR_IntFrame and _VM_CR_Frame are swapped, which usually means that the control is transferred back to the scheduler, which can then read the address of the interrupted frame from _VM_CR_IntFrame (this may be a descendant or a parent of the frame that the scheduler originally transferred control to, or may be a null pointer if the activation stack became empty). Of course, the scheduler needs to store its state -- for this purpose, it must use the _VM_CR_State register, which is set by __boot to point to a particular heap object. This heap object can be resized by calling __vm_obj_resize if needed, but the register itself is read-only after __boot returns. 
The object can be used to, for example, store pointers to activation frames corresponding to individual threads (but of course, those may also be stored indirectly, behind a pointer to another heap object). In other words, the object pointed to by _VM_CR_State serves as the root of the heap.

7.6 Faults

An important role of DIVINE is to detect errors -- various types of safety violations -- in the program. For this reason, it needs to interpret the bitcode as strictly as possible and report any problems back to the user. Specifically, any dangerous operations that would normally lead to a crash (or worse, a security vulnerability) are caught and reported as faults by the VM. The fault types that can arise are the following (enumerated in enum _VM_Fault in divine.h):

- _VM_F_Arithmetic is raised when the program attempts to divide by zero
- _VM_F_Memory is raised on attempts at illegal memory access and related errors (out-of-bounds loads or writes, double free, attempts to dereference undefined pointers)
- _VM_F_Control is raised on control flow errors -- undefined conditional jumps, invalid call of a null or invalid function pointer, wrong number of arguments in a call instruction, select or switch on an undefined value, or an attempt to execute the unreachable LLVM instruction
- _VM_F_Hypercall is raised when an invalid hypercall is attempted (wrong number or type of parameters, undefined parameter values)

When a fault is raised, control is transferred to a user-defined fault handler (a function the address of which is held in the _VM_CR_FaultHandler control register). Out of the box, DiOS (see [DiOS, DIVINE's Operating System]) provides a configurable fault handler. If a fault handler is set, faults are not fatal (the only exception is a double fault, that is, a fault that occurs while the fault handler itself is active).
The fault handler, possibly with cooperation from the scheduler (see Section 7.5), can terminate the program, or raise the _VM_CF_Error flag, or take other appropriate actions. The handler can also choose to continue with execution despite the fault, by transferring control to the activation frame and program counter value that are provided by the VM for this purpose. (Note: this is necessary, because the fault might occur in the middle of evaluating a control flow instruction, in which case, the VM could not finish its evaluation. The continuation passed to the fault handler is the best estimate by the VM on where the execution should resume. The fault handler is free to choose a different location.) 7.7 Boot Sequence The virtual machine explicitly recognizes two modes of execution: privileged (kernel) mode and normal, unprivileged user mode. When the VM is started, it looks up a function named __boot in the bitcode file and starts executing this function, in kernel mode. The responsibility of this function is to set up the operating system and set up the VM state for execution of the user program. There are only two mandatory steps in the boot process: set the _VM_CR_Scheduler and the _VM_CR_State control registers (see above). An optional (but recommended) step is to inform the VM (or more specifically, any debugging or verification tools outside the VM) about the C/C++ type (or DWARF type, to be precise, as this is also possible for non-C languages) associated with the OS state. This is accomplished by an appropriate __vm_trace call (see also below). 7.8 Memory Management Hypercalls Since LLVM bitcode is not tied to a memory representation, its apparatus for memory management is quite limited. Just like in C, malloc, free, and related functions are provided by libraries, but ultimately based on some lower-level mechanism, like, for example, the mmap system call. 
This is often the case in POSIX systems targeting machines with a flat-addressed virtual memory system: mmap is tailored to allocate comparatively large, contiguous chunks of memory (the requested size must be an integer multiple of the hardware page size) and management of individual objects is done entirely in user-level code. The lack of any per-object protections is also a source of many common programming errors, which are often hard to detect and debug. It is therefore highly desirable that a single object obtained from malloc corresponds to a single VM-managed and properly isolated object. This way, object boundaries can easily be enforced by the model checker, and any violations reported back to the user. This means that, instead of subdividing memory obtained from mmap, the libc running in DiVM uses obj_make to create a separate object for each memory allocation. The obj_make hypercall obtains the object size as a parameter and writes the address of the newly created object into the corresponding LLVM register (LLVM registers are stored in memory, and therefore participate in the graph structure; this is described in more detail in Section 7.1). Therefore, the newly created object is immediately and atomically connected to the rest of the memory graph. The standard counterpart to malloc is free, which returns memory that is no longer needed by the program into the pool used by malloc. Again, in DiVM, there is a hypercall -- obj_free -- with a role similar to that of standard free. In particular, obj_free takes a pointer as an argument and marks the corresponding object as invalid. Any further access to this object is a fault (faults are described in more detail in Section 7.6). The remaining hypercalls in the obj_ family exist to simplify bookkeeping and are not particularly important to the semantics of the language.
7.9 Non-deterministic Choice and Counterexamples It is often the case that the behaviour of a program depends on outside influences, which cannot be reasonably described in a deterministic fashion and wired into the SUT. Such influences are collectively known as the environment, and the effects of the environment translate into non-deterministic behaviour. A major source of this non-determinism is thread interleaving -- or, equivalently, the choice of which thread should run next after an interrupt. In our design, all non-determinism in the program (and the operating system) is derived from uses of the choose hypercall (which non-deterministically returns an integer between 0 and a given number). Since everything else in the SUT is completely deterministic, the succession of values produced by calls to choose specifies an execution trace unambiguously. This trait makes it quite simple to store counterexamples and other traces in a tool-neutral, machine-readable fashion. Additionally, hints about which interrupts fired can be included in case the counterexample consumer does not wish to reproduce the exact interrupt semantics of the given VM implementation. Finally, the trace hypercall serves to attach additional information to transitions in the execution graph. In particular, this information then becomes part of the counterexample when it is presented to the user. For example, the libc provided by DIVINE uses the trace hypercall in the implementation of standard IO functions. This way, if a program prints something to its standard output during the violating run, this output becomes visible in the counterexample. 7.10 Debug Mode The virtual machine has a special debug mode which allows instrumentation of the program under test with additional tracing or other information gathering functionality. 
This is achieved by a special dbg.call instruction, which is emitted by the bitcode loader whenever it encounters an LLVM call instruction that targets a function annotated with divine.debugfn. For instance, the DiOS tracing functions (__dios_trace*) carry this annotation. The virtual machine has two operation modes differentiated by how they treat dbg.call instructions. In the debug allowed mode, the instruction is executed and for the duration of the call, the VM enters a debug mode. In the other mode (debug forbidden), the instruction is simply ignored. Typically, verification (state space generation) would be done in the debug forbidden operation mode, while the counter-example trace would be obtained or replayed in the debug allowed mode. To make this approach feasible, there are certain limitations on the behaviour of a function called using dbg.call:

- when in debug mode (i.e. when already executing a dbg.call instruction), all further dbg.call instructions are ignored
- faults in debug mode always cause the execution of dbg.call to be abandoned and the program continues executing as if the dbg.call returned normally
- interrupts and the vm_choose hypercall are forbidden in dbg.call and both will cause a fault (and hence abandonment of the call)

8 DiOS, A Small, Verification-Oriented Operating System

Programs traditionally rely on a wide array of services provided by their runtime environment (that is, a combination of libraries, the operating system kernel, the hardware architecture and its peripherals and so on). When DIVINE executes a program, it needs to provide these services to the program. Some of those, especially library functions, can be obtained the same way they are in a traditional (real) execution environment: the libraries can be compiled just like other programs and the resulting bitcode files can be linked just as easily. The remaining services, though, must be somehow supplied by DIVINE, since they are not readily available as libraries.
Some of them are part of the virtual machine, like memory management and interrupt control (cf. [The DIVINE Virtual Machine]). The rest is provided by an "operating system". In principle, you can write your own small operating system for use with DIVINE; however, to make common verification tasks easier, DIVINE ships with a small OS that implements a subset of the POSIX interfaces, which should cover the requirements of a typical user-level program.

8.1 DiOS Compared to Traditional Operating Systems

The main goal of DiOS is to provide a runtime environment for programs under inspection which is not distinguishable from the real environment. To achieve this goal, it presents an API for thread (and, in the future, process) handling, faults and virtual machine configuration. The DiOS API is then used to implement the POSIX interface and supports the implementation of the standard C and C++ libraries. However, there are a few differences from real POSIX-compatible OSs that the user should be aware of:

- First of all, DiOS is trapped inside the DIVINE VM and therefore cannot perform any I/O operations with the outside world. All I/O has to be emulated.
- Consequently, DiOS cannot provide access to a real file system, but supplies tools for capturing a file system snapshot, which can be used for emulation of file operations. See the file system section of this manual for further information.
- As the goal of the verification is to show that a program is safe no matter what scheduling choices are made, the thread scheduling differs from that of standard OSs. The user should not make any assumptions about it.
- DiOS currently does not cover the entire POSIX standard; however, the support is steadily growing.

8.2 Fault Handling And Error Traces

When the DIVINE 4 VM performs an illegal operation (e.g. division by zero or a null pointer dereference), it raises a so-called fault and a user-supplied function, the fault handler, is called.
This function can react to the error - collect additional information or decide how the error should be handled. DiOS provides its own fault handler, so that verified programs do not have to. The DiOS fault handler prints a simple human readable stack trace together with a short summary of the error. When a fault is triggered, it can either abort the verification or continue execution -- depending on its configuration for a given fault. Please see the next section for configuration details. Consider the following simple C++ program: int main() { int *a = 0; *a = 42; return 0; } This program does nothing interesting, it just triggers a fault. If we execute it using divine run test.cpp, we obtain the following output: FAULT: null pointer dereference: [global* 0 0 ddp] [0] FATAL: memory error in userspace To make debugging the problem easier, DiOS can print a backtrace when a fault is encountered (this is disabled by default, since the verify command prints a more detailed backtrace without the help of DiOS -- see below): $ divine run -o debug:faultbt test.cpp The output then is: FAULT: null pointer dereference: [global* 0 0 ddp] [0] FATAL: memory error in userspace [0] Backtrace: [0] 1: main [0] 2: _start By inspecting the trace, we can see that a fault was triggered. When the VM triggers a fault, it prints the reason -- here a null pointer dereference caused the problem. The error caused the DiOS fault handler to be called. The fault handler first communicates what the problem was and whether the fault occurred in the DiOS kernel or in userspace. This is followed by a simple backtrace. Note that _start is a DiOS function and is always at the bottom of a backtrace. It calls all global constructors, initialises the standard libraries and calls main with the right arguments. 
As mentioned above, divine verify will give us more information about the problem:

$ divine verify test.cpp

produces the following:

error found: yes
error trace: |
  FAULT: null pointer dereference: [global* 0 0 ddp]
  [0] FATAL: memory error in userspace

active stack:
- symbol: void {Fault}::handler<{Context} >(_VM_Fault, _VM_Frame*, void (*)(), ...)
  location: /divine/include/dios/core/fault.hpp:184
- symbol: main
  location: test.cpp:3
- symbol: _start
  location: /divine/src/libc/functions/sys/start.cpp:76

a report was written to test.report

The error trace is the same as in the previous case; the 'active stack' section contains backtraces for all the active threads. In this example, we only see one backtrace, since this is a single-threaded program. In addition to the output provided earlier by DiOS, the fault handler is also visible.

8.3 DiOS Configuration

DIVINE supports passing boot parameters to the OS. Some of these parameters are generated automatically by DIVINE (e.g. the name of the program, program parameters or a snapshot of a file system), others can be supplied by the user. These parameters can be specified using the command line option -o {argument}. DiOS expects {argument} to be in the form {command}:{argument}. DiOS help can also be invoked with the shortcut -o help. All arguments are processed during the boot phase: if an invalid command or argument is passed, DiOS fails to boot. DIVINE handles this state as unsuccessful verification and the output contains a description of the error. DiOS supports the following commands:

- debug: print debug information during the boot. By default, no information is printed during the boot. Supported arguments:
  - help: print help and abort the boot,
  - machineparams: print user-specified machine parameters, e.g.
the number of cpus,
  - mainargs: print argv and envp, which will be passed to the main function,
  - faultcfg: print the fault and simfail configuration which will be used for verification; note that if the configuration is not forced, the program under inspection can change the configuration.
- trace and notrace: report/do not report the state of the argument back to the VM; supported arguments:
  - threads: report all active threads back to the VM, so that it can e.g. allow the user to choose which thread to run. By default, threads are not traced.
- ignore, report and abort, with variants prefixed with force-: configure the handling of a given fault type. When abort is specified, DiOS passes the fault as an error back to the VM and the verification is terminated. Faults marked as report are reported back to the VM, but are not treated as errors -- the verification process may continue past the fault. When a fault is ignored, it is not reported back to the VM at all. If the execution continues after a fault, the instruction causing the fault is ignored or produces an undefined value. The following fault categories are present in DIVINE and can be passed to the command:
  - assert: an assert call fails,
  - arithmetic: arithmetic errors -- e.g. division by 0,
  - memory: access to uninitialised memory or an invalid pointer dereference,
  - control: control flow errors -- e.g. an undefined value used as the return value of a function or in a conditional jump,
  - hypercall: an invalid parameter was passed to a VM hypercall,
  - notimplemented: an attempt to perform an unimplemented operation,
  - diosassert: an internal DiOS assertion was violated.
- simfail and nofail: simulate a possible failure of the given feature. E.g. malloc can fail in a real system and therefore, when set to simfail, both success and failure of malloc are tested. Supported arguments:
  - malloc: simulate failure of memory allocation.
- ncpus:<num>: specify the number of physical machine cores -- this has no direct impact on verification and affects only library calls which can return the number of cores.
- stdout: specify how to treat the standard output of the program; the following options are supported:
  - notrace: the output is ignored,
  - unbuffered: each write is printed on a separate line,
  - buffered: each line of the output is printed (default).
- stderr: specify how to treat the standard error output; see also stdout.

8.4 Virtual File System

DiOS provides a POSIX-compatible filesystem implementation. Since no real I/O operations are allowed, the Virtual File System (VFS) operates on top of a filesystem snapshot and the effects of operations performed on the VFS are not propagated back to the host. The snapshots are created by DIVINE just before DiOS boots; to create a snapshot of a directory, use the option --capture {vfsdir}, where {vfsdir} consists of up to three :-separated components:

- a path to a directory (mandatory),
- follow or nofollow -- specifies whether symlink targets should or should not be captured (optional),
- the "mount point" in the VFS -- if not specified, it is the same as the capture path.

DIVINE can capture files, directories, symlinks and hardlinks. Additionally, DiOS can also create pipes and UNIX sockets, but these cannot be captured from the host system by DIVINE. The size of the snapshot is by default limited to 16 MiB. This prevents accidental capture of a large snapshot. Example:

divine verify --capture testdir:follow:/home/test/ --vfslimit 1kB test.cpp

Additionally, the stdin of the program can be supplied in a file, using the DIVINE switch --stdin {file}. Finally, the standard and error outputs can be handled in one of several ways:

- they can be completely ignored, or
- they can be traced and become part of the transition labels, either
  - in a line-buffered fashion (this is the default behaviour), or
  - in an unbuffered way, where each write is printed as a single line of the trace.

All of the above options can be specified via DiOS boot parameters. See Section 8.3 for more details. The VFS implements basic POSIX syscalls -- write, read, etc.
and standard C functions like printf, scanf or C++ streams are implemented on top of these. All functions which operate on the filesystem only modify the internal filesystem snapshot.
A000668: Mersenne primes (primes of the form 2^p - 1, where p is prime). Offset: 1,1. Equivalently, primes of the form 2^n - 1 for integers n. See A000043 for the values of p.

[Conjecture] For n > 2, the Mersenne number M(n) = 2^n - 1 is a prime if and only if 3^M(n-1) == -1 (mod M(n)). - Thomas Ordowski, Aug 12 2018 [This needs proof! - Joerg Arndt, Mar 31 2019]

N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, 1973 (includes this sequence).
N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence).
B. Tuckerman, The 24th Mersenne prime, Notices Amer. Math. Soc., 18 (Jun. 1971), Abstract 684-A15, p. 608.
Harry J. Smith, Table of n, a(n) for n = 1..18
P. Alfeld, The 39th Mersenne prime
Andrew R. Booker, The Nth Prime Page
John Rafael M. Antalan, A Recreational Application of Two Integer Sequences and the Generalized Repetitious Number Puzzle, arXiv:1908.06014 [math.HO], 2019.
J. Brillhart et al., Factorizations of b^n +- 1, Contemporary Mathematics, Vol. 22, Amer. Math. Soc., Providence, RI, 3rd edition, 2002.
Kevin A. Broughan and Qizhi Zhou, On the Ratio of the Sum of Divisors and Euler's Totient Function II, Journal of Integer Sequences, Vol. 17 (2014), Article 14.9.2.
D. Butler, Mersenne Primes
C. K. Caldwell, Mersenne primes
C. K. Caldwell, "Top Twenty" page, Mersenne Primes
Luis H. Gallardo, Olivier Rahavandrainy, On (unitary) perfect polynomials over F_2 with only Mersenne primes as odd divisors, arXiv:1908.00106 [math.NT], 2019.
R. K. Guy, The strong law of small numbers, Amer. Math. Monthly 95 (1988), no. 8, 697-712. [Annotated scanned copy]
Benny Lim, Prime Numbers Generated From Highly Composite Numbers, Parabola (2018) Vol. 54, Issue 3.
Math Reference Project, Mersenne and Fermat Primes
R. Mestrovic, Euclid's theorem on the infinitude of primes: a historical survey of its proofs (300 BC--2012) and another new proof, arXiv:1202.3670 [math.HO], 2012. - From N. J. A.
Sloane, Jun 13 2012 Romeo Meštrović, Goldbach-type conjectures arising from some arithmetic progressions, University of Montenegro, 2018. Romeo Meštrović, Goldbach's like conjectures arising from arithmetic progressions whose first two terms are primes, arXiv:1901.07882 [math.NT], 2019.. Yunlan Wei, Yanpeng Zheng, Zhaolin Jiang, Sugoog Shon, A Study of Determinants and Inverses for Periodic Tridiagonal Toeplitz Matrices with Perturbed Corners Involving Mersenne Numbers, Mathematics (2019) Vol. 7, No. 10, 893.. Chai Wah Wu, Can machine learning identify interesting mathematics? An exploration using empirically observed laws, arXiv:1805.07431 [cs.LG], 2018. 2^Array[MersennePrimeExponent, 18] - 1 (* Jean-François Alcover, Feb 17 2018, Mersenne primes with less than 1000 digits *) (PARI) forprime(p=2, 1e5, if(ispseudoprime(2^p-1), print1(2^p-1", "))) \\ Charles R Greathouse IV, Jul 15 2011 (PARI) LL(e) = my(n, h); n = 2^e-1; h = Mod(2, n); for (k=1, e-2, h=2*h*h-1); return(0==h) \\ after Joerg Arndt in A000043 forprime(p=1, , if(LL(p), print1(p, ", "))) \\ Felix Fröhlich, Feb 17 2018 (GAP) A000668:=Filtered(List(Filtered([1..600], IsPrime), i->2^i-1), IsPrime); # Muniru A Asiru, Oct 01 2017 (Python) from sympy import isprime, primerange print([2**n-1 for n in primerange(1, 1001) if isprime(2**n-1)]) # Karl V. Keller, Jr., Jul 16 2020336720
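The PARI LL program above is a Lucas-Lehmer test (written with a halved variable). For readers following the Python line, here is a sketch of the textbook form of the test, using the standard s = 4 recurrence; it is an illustration, not part of the original entry:

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: is the Mersenne number 2^p - 1 prime (for prime p)?"""
    if p == 2:
        return True              # 2^2 - 1 = 3 is prime; the loop below needs p > 2
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):       # iterate s -> s^2 - 2 (mod m), p - 2 times
        s = (s * s - 2) % m
    return s == 0                # 2^p - 1 is prime iff the residue is 0

# Mirroring the PARI one-liner for small exponents:
print([2**p - 1 for p in [2, 3, 5, 7, 13] if lucas_lehmer(p)])  # [3, 7, 31, 127, 8191]
```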
http://oeis.org/A000668
AirDrop has always been a truly useful feature of Apple's ecosystem, as it lets you share text, files, and images between Apple devices with ease. However, if you're like me and prefer using a Windows PC with an iOS device, there's no way to seamlessly share data between the two devices without emailing yourself or something like that. So I came up with this idea: recreate iOS's famous AirDrop, but make it available for Windows as well.

The architecture I opted for was the following:

- An iOS shortcut sends data to a database when something is shared
- A Python program on my PC picks up that data
- Then the data is presented in a notification
- If that notification is clicked, the data is copied to the clipboard

Easy, right? The database I chose was Firebase's realtime database, by Google, as it is incredibly easy to use and completely free. Once the realtime database was set up, I had to make the iOS shortcut. An interesting feature that fit my needs exactly was the "Show in Share Sheets" feature for shortcuts. As the name says, it shows the shortcut in the Share menu, which lets you create a shortcut that does various things with the shared data as input. Exactly what I needed! With that enabled, the shortcut appears right in share sheets.

Then, the shortcut would simply make an HTTP POST request to Firebase, with the shortcut's input passed in the body of the request. So now I had a shortcut that could send anything shareable on iOS to Firebase. Now my PC just had to pick up that data. First things first, so I started with creating a connection to Firebase. For that a single dependency was needed:

import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

Imported using pip like so:

pip install firebase-admin

Then, my Python script had to get authenticated in order to access Firebase, which was done using an authentication JSON file.
If you're following this article as a tutorial, the method to get that JSON file is explained here. So here's the code for authentication:

configFile = 'config.json'
cred = credentials.Certificate(configFile)
app = firebase_admin.initialize_app(cred, {
    'databaseURL': 'myDataBaseUrl' # this is not my actual database URL, for security purposes
})

Once that was done, my script had to listen for new data in the real-time database. That was easily done with the listen method; however, one issue remained. The listen method actually sends the entire contents of the database the first time it is executed, which wasn't the behaviour I was looking for. That was easily fixed, though, with a boolean toggle, like so:

hasGotFirst = False

def listener(data):
    global hasGotFirst
    if hasGotFirst:
        pass # Here we'll use the fetched data
    hasGotFirst = True

# Here we just listen at the root of the database
db.reference('/').listen(listener)

Then, I wanted the data to be displayed in a beautiful notification when it was received, and then copied to the clipboard when that notification was clicked. For that, I added another two dependencies:

from win10toast_click import ToastNotifier # Notifications
import pyperclip # Clipboard

Imported using pip like so:

pip install win10toast-click
pip install pyperclip

Then, the Python script would display a notification when data was retrieved and copy that data to the clipboard in the notification's click callback. Here's the full code:

toaster = ToastNotifier()

def listener(data):
    global hasGotFirst
    if hasGotFirst:
        msg = str(data.data['value']) # Get the 'value' property from the data
        toaster.show_toast("Data Sent From Phone", msg, callback_on_click=click, icon_path='')
    hasGotFirst = True

def click(data):
    pyperclip.copy(data)

So now the AirDrop was basically working! I could share a string of text from my phone and it would magically appear in a notification on my PC that I could copy by clicking on it!
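As an aside, the POST that the shortcut performs on the phone side can be sketched in plain Python using only the standard library. The database URL and payload below are placeholders, not the real ones; the sketch only builds the request rather than sending it:

```python
import json
import urllib.request

# Placeholder Realtime Database path ending in .json -- substitute your own.
DB_URL = "https://my-airdrop-clone-default-rtdb.firebaseio.com/shared.json"

# Firebase's REST API accepts a JSON body on POST, just like the shortcut sends.
payload = json.dumps({"value": "Hello from iOS!"}).encode("utf-8")
req = urllib.request.Request(
    DB_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; here we only inspect it.
print(req.get_method(), req.full_url)
```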
If I were to share a note containing the text "Hello from iOS!", a notification would appear with that exact text. However, to make the process even smoother, I added automatic URL detection, which added a couple more packages:

import re
import webbrowser

Both are available by default, so no installing is needed for those two. As for the code, it simply checks for a URL pattern when data is retrieved using the re module, and if it evaluates to true, that URL is then opened using the webbrowser module, like so:

def listener(data):
    global hasGotFirst
    if hasGotFirst:
        msg = str(data.data['value'])
        matches = re.match(r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)', msg)
        if matches:
            webbrowser.open(msg)
        else:
            toaster.show_toast("Data Sent From Phone", msg, callback_on_click=lambda val=msg: click(val), icon_path='')
    hasGotFirst = True

And now we're completely done! With the Python script running in the background, I can now share data between my iOS phone and my Windows computer with ease. If you want the full source code, it is available on GitHub.
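The URL check is easy to sanity-test on its own. This sketch reuses the same pattern against a couple of hypothetical inputs:

```python
import re

# Same pattern as in the listener above.
URL_PATTERN = r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)'

def is_url(text: str) -> bool:
    # re.match anchors at the start of the string, mirroring the listener's branch
    return re.match(URL_PATTERN, text) is not None

print(is_url("https://dev.to"))                      # True  -> would open in the browser
print(is_url("http://www.example.com/page?x=1"))     # True  -> would open in the browser
print(is_url("just some shared text"))               # False -> would show a notification
```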
https://practicaldev-herokuapp-com.global.ssl.fastly.net/oskarcodes/how-i-remade-airdrop-from-ios-on-windows-2pa7
🎼 A library for function composition.

Contents:
- pipe
- with and update
- concat
- curry, flip, and zurry
- get
- prop
- over and set
- mprop, mver, and mut
- zip

We work with functions all the time, but function composition is hiding in plain sight! For instance, we work with functions when we use higher-order methods, like map on arrays:

[1, 2, 3].map { $0 + 1 } // [2, 3, 4]

If we wanted to modify this simple closure to square our value after incrementing it, things begin to get messy.

[1, 2, 3].map { ($0 + 1) * ($0 + 1) } // [4, 9, 16]

Functions allow us to identify and extract reusable code. Let's define a couple functions that make up the behavior above.

func incr(_ x: Int) -> Int {
  return x + 1
}

func square(_ x: Int) -> Int {
  return x * x
}

With these functions defined, we can pass them directly to map!

[1, 2, 3]
  .map(incr)
  .map(square)
// [4, 9, 16]

This refactor reads much better, but it's less performant: we're mapping over the array twice and creating an intermediate copy along the way! While we could use lazy to fuse these calls together, let's take a more general approach: function composition!

[1, 2, 3].map(pipe(incr, square)) // [4, 9, 16]

The pipe function glues other functions together! It can take more than two arguments and even change the type along the way!

[1, 2, 3].map(pipe(incr, square, String.init)) // ["4", "9", "16"]

Function composition lets us build new functions from smaller pieces, giving us the ability to extract and reuse logic in other contexts.

let computeAndStringify = pipe(incr, square, String.init)

[1, 2, 3].map(computeAndStringify) // ["4", "9", "16"]
computeAndStringify(42) // "1849"

The function is the smallest building block of code. Function composition gives us the ability to fit these blocks together and build entire apps out of small, reusable, understandable units.
pipe

The most basic building block in Overture. It takes existing functions and smooshes them together. That is, given a function (A) -> B and a function (B) -> C, pipe will return a brand new (A) -> C function.

let computeAndStringify = pipe(incr, square, String.init)

computeAndStringify(42) // "1849"
[1, 2, 3].map(computeAndStringify) // ["4", "9", "16"]

with and update

The with and update functions are useful for applying functions to values. They play nicely with the inout and mutable object worlds, wrapping otherwise imperative configuration statements in an expression.

class MyViewController: UIViewController {
  let label = with(UILabel()) {
    $0.font = .systemFont(ofSize: 24)
    $0.textColor = .red
  }
}

And it restores the left-to-right readability we're used to from the method world.

with(42, pipe(incr, square, String.init)) // "1849"

Using an inout parameter:

update(&user, mut(\.name, "Blob"))

concat

The concat function composes with single types. This includes composition of the following function signatures:

- (A) -> A
- (inout A) -> Void
- <A: AnyObject>(A) -> Void

With concat, we can build powerful configuration functions from small pieces.

let roundedStyle: (UIView) -> Void = {
  $0.clipsToBounds = true
  $0.layer.cornerRadius = 6
}

let baseButtonStyle: (UIButton) -> Void = {
  $0.contentEdgeInsets = UIEdgeInsets(top: 12, left: 16, bottom: 12, right: 16)
  $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .medium)
}

let roundedButtonStyle = concat(
  baseButtonStyle,
  roundedStyle
)

let filledButtonStyle = concat(roundedButtonStyle) {
  $0.backgroundColor = .black
  $0.tintColor = .white
}

let button = with(UIButton(type: .system), filledButtonStyle)

curry, flip, and zurry

These functions make up the Swiss army knife of composition. They give us the power to take existing functions and methods that don't compose (e.g., those that take zero or multiple arguments) and restore composition.
For example, let's transform a string initializer that takes multiple arguments into something that can compose with pipe.

String.init(data:encoding:) // (Data, String.Encoding) -> String?

We use curry to transform multi-argument functions into functions that take a single input and return new functions to gather more inputs along the way.

curry(String.init(data:encoding:)) // (Data) -> (String.Encoding) -> String?

And we use flip to flip the order of arguments. Multi-argument functions and methods typically take data first and configuration second, but we can generally apply configuration before we have data, and flip allows us to do just that.

flip(curry(String.init(data:encoding:))) // (String.Encoding) -> (Data) -> String?

Now we have a highly-reusable, composable building block that we can use to build pipelines.

let stringWithEncoding = flip(curry(String.init(data:encoding:)))
// (String.Encoding) -> (Data) -> String?

let utf8String = stringWithEncoding(.utf8)
// (Data) -> String?

Swift also exposes methods as static, unbound functions. These functions are already in curried form. All we need to do is flip them to make them more useful!

String.capitalized // (String) -> (Locale?) -> String

let capitalized = flip(String.capitalized) // (Locale?) -> (String) -> String

["hello, world", "and good night"]
  .map(capitalized(Locale(identifier: "en")))
// ["Hello, World", "And Good Night"]

And zurry restores composition for functions and methods that take zero arguments.

String.uppercased // (String) -> () -> String

flip(String.uppercased) // () -> (String) -> String

let uppercased = zurry(flip(String.uppercased)) // (String) -> String

["hello, world", "and good night"]
  .map(uppercased)
// ["HELLO, WORLD", "AND GOOD NIGHT"]

get

The get function produces getter functions from key paths.

get(\String.count) // (String) -> Int

["hello, world", "and good night"]
  .map(get(\.count))
// [12, 14]

We can even compose other functions into get by using the pipe function.
Here we build a function that increments an integer, squares it, turns it into a string, and then gets the string's character count:

pipe(incr, square, String.init, get(\.count)) // (Int) -> Int

prop

The prop function produces setter functions from key paths.

let setUserName = prop(\User.name)
// ((String) -> String) -> (User) -> User

let capitalizeUserName = setUserName(capitalized(Locale(identifier: "en")))
// (User) -> User

let setUserAge = prop(\User.age)

let celebrateBirthday = setUserAge(incr)
// (User) -> User

with(User(name: "blob", age: 1), concat(
  capitalizeUserName,
  celebrateBirthday
))
// User(name: "Blob", age: 2)

over and set

The over and set functions produce (Root) -> Root transform functions that work on a Value in a structure, given a key path (or setter function). The over function takes a (Value) -> Value transform function to modify an existing value.

let celebrateBirthday = over(\User.age, incr)
// (User) -> User

The set function replaces an existing value with a brand new one.

with(user, set(\.name, "Blob"))

mprop, mver, and mut

The mprop, mver and mut functions are mutable variants of prop, over and set.

let guaranteeHeaders = mver(\URLRequest.allHTTPHeaderFields) { $0 = $0 ?? [:] }

let setHeader = { name, value in
  concat(
    guaranteeHeaders,
    { $0.allHTTPHeaderFields?[name] = value }
  )
}

let request = update(
  URLRequest(url: url),
  mut(\.httpMethod, "POST"),
  setHeader("Authorization", "Token " + token),
  setHeader("Content-Type", "application/json; charset=utf-8")
)

zip and zip(with:)

This is a function that Swift ships with! Unfortunately, it's limited to pairs of sequences. Overture defines zip to work with up to ten sequences at once, which makes combining several sets of related data a snap. It's common to immediately map on zipped sequences. Because of this, Overture provides a zip(with:) helper, which takes a transform function up front and is curried, so it can be composed with other functions using pipe.
zip(with: User.init)(ids, emails, names)

Overture also extends the notion of zip to work with optionals! It's an expressive way of combining multiple optionals together.

let optionalId: Int? = 1
let optionalEmail: String? = "blob@pointfree.co"
let optionalName: String? = "Blob"

zip(optionalId, optionalEmail, optionalName)
// Optional<(Int, String, String)>.some((1, "blob@pointfree.co", "Blob"))

And zip(with:) lets us transform these tuples into other values.

Should I be worried about polluting the global namespace with free functions?

Nope! Swift has several layers of scope to help you here.

- fileprivate and private scope.
- static members inside types.
- Module-qualified names (e.g., Overture.pipe(f, g)).

You can even autocomplete free functions from the module's name, so discoverability doesn't have to suffer!

Are free functions that common in Swift?

It may not seem like it, but free functions are everywhere in Swift, making Overture extremely useful! A few examples:

- String.init
- String.uppercased
- Optional.some
- map, filter, and other higher-order methods
- max, min, and zip

If you use Carthage, you can add the following dependency to your Cartfile:

github "pointfreeco/swift-overture" ~> 0.5

If your project uses CocoaPods, just add the following to your Podfile:

pod 'Overture', '~> 0.5'

If you want to use Overture in a project that uses SwiftPM, it's as simple as adding a dependencies clause to your Package.swift:

dependencies: [
  .package(url: "", from: "0.5.0")
]

Submodule, clone, or download Overture, and drag Overture.xcodeproj into your project.

This library was created as an alternative to swift-prelude, which provides these tools (and more) using infix operators; pipe, for example, has an operator counterpart there. These concepts (and more) are explored thoroughly in Point-Free, a video series exploring functional programming and Swift hosted by Brandon Williams and Stephen Celis. The ideas in this episode were first explored in Episode #11.

All modules are released under the MIT license. See LICENSE for details.
https://swiftpackageindex.com/pointfreeco/swift-overture
I have created a simple program to create a graphical user interface (GUI) window that contains the text "Hello World" in it, and in NetBeans it compiles and runs fine. Unfortunately, it does not run in the command prompt, although it does compile fine. Here is my code:

//*Zachariah A Lloyd
//* HELLO WORLD Interface*/
package guihelloworld;

import javax.swing.JOptionPane;

public class GUIHelloWorld {
    public static void main(String[] args) {
        JOptionPane.showMessageDialog(null, "HELLO WORLD");
    }
}

As I said, when I run this in NetBeans there are no errors or problems at all.

However, when I try to run it in the command prompt, it won't work right, complaining about having some problem locating the main class. I KNOW my code is correct, but it just WON'T RUN! Can someone please tell me what this is all about?

Thanks in advance!!
http://www.javaprogrammingforums.com/whats-wrong-my-code/20355-where-my-class.html
Speed

This is the most important feature, Wilson says. If your application is slow, most users will get frustrated and move on. Applications must be designed with speed in mind. This will help its utility become immediately clear. Tools like Pingdom can help you keep track of your web app's speed.

Instant Utility

Unless you're the next Facebook or LinkedIn, no one is going to want to spend an hour trying to configure a service, enter data, and import contacts. Web applications should be friendly to social media sites like Facebook and Twitter, and they should allow users to import data from their existing social data stores. "These days, social media is as important as search" in matters of visibility, Wilson said.

Voice

Just like TV, books, and movies, software is a form of media. Your app needs personality because a genuine, original personality can endear users to your app. Wilson says the cute "failwhale" on Twitter is a perfect example.

Less is More

'Start simple' is a good mantra for the web developer. Sure there are popular sites that have become a total mess (I'm looking at you, Facebook), but back in 2004 Facebook was simple too. Wilson's venture capital firm invested in Delicious because it was powerful, yet simple. It did one thing really really well. As long as your app does that, it can be successful.

Programmable

Open source is an important concept in the world of development. By making your application open to programming modifications, you give others the chance to build on your application and make it richer and more successful. Don't launch your app without a read/write API ready.

Personal

This principle is not about conveying your own personality (that was "Voice"). Instead, "Personal" means you should facilitate the end user's desire to add their own personality into the mix. Let users customize liberally. This gives users the feeling of co-ownership: a belief that they also helped make this application great.
Just don't let them have too much control, otherwise you'll have anarchy.

RESTful

Twitter lists is a great example here. Anything that helps keep your application URLs clean and concise is worth implementing. This makes your app's namespace more memorable and portable. Wilson says that LinkedIn doesn't do a very good job in this area.

Discoverable

"A new Web app is a needle in a haystack," said Wilson. You don't need to be a marketing professional; you just need to understand SEO (Search Engine Optimization) so that Google and the other major search engines can index your application and its content. It must also be discoverable by social media. For startups that are strapped for cash, "guerrilla" or viral marketing can be effective because it's powerful and authentic if done right.

Clean

An overly complicated or unintuitive page can smother your app in the cradle. The layout needs to be "clean" with plenty of empty space and large fonts. Wilson says Tumblr's login is a perfect example.

Playful

Finally, a web application must be engaging, and nothing is more engaging than good old fashioned fun. Wilson says your applications should have a "game dynamic" that can turn your app into a pleasant little diversion. "The ability to play in an application is really important," Wilson said. It will keep your users coming back for more.
https://dzone.com/articles/10-keys-successful-web-app
LINQ Overview, part zero
LINQ Overview, part one (Extension Methods)
LINQ Overview, part two (Lambda Expressions)

Note: I realize it has been a really long time since I've posted anything. It is both exciting and humbling that I continue to receive such positive feedback on these articles. In fact, that is why I am trying to put in the effort and finish off this series before moving on to more recent topics. This nomad has been on some interesting journeys these past months, and I am really excited to share what I've learned.

In the world of computer programming, data typing is one of the most hotly debated issues. There are two major schools of thought as far as type systems go. The first is static typing, in which the programmer is forced by the programming environment to write her code so that the type of data being manipulated is known at compile-time. The purpose, of course, is that if the environment knows the type of data being manipulated it can, in theory, better enforce the rules that govern how the data should be manipulated and produce an error if the rules are not followed. It would be a mistake to assume exactly when the environment knows the types involved, however. For example, the .NET CLR is a statically typed programming environment but it provides facilities for reflection, and therefore sometimes type checking is done at run-time as opposed to compile-time. Take a look at the following code sample:

static void TypeChecking()
{
    string s = "a string for all seasons";
    double d = s;

    object so = s;
    d = (double)so;
}

If you look at line two, you'll see we are trying to assign a string value to a double variable. This trips up the compiler, since string can not be implicitly cast to double. This is an example of compile-time type checking and demonstrates why it can be useful, as in this case it would have saved us a few minutes of debugging time. On line four we are assigning our string value to an object variable.
Since anything can be stored in an object variable, we are essentially masking the type of the value from the compiler. This means that on the subsequent line our explicit cast of so to double doesn't cause a compilation error. Basically, the compiler doesn't have enough information to determine if the cast will fail because it doesn't know for certain what data type you will be storing in so until runtime. Of course, since C# is a statically typed language, the cast will generate an exception when it is executed. Don't minimize the damage that this can pose. What if a line like this was lying around in some production code and that method was never properly tested? You'd wind up with an exception in your production application, and that's bad!

The second major school of thought in regards to typing is referred to as dynamic typing and, as its name implies, dynamic type systems are a little bit more flexible about what they will accept at compile-time. Dynamic type systems, especially if you have a solid background in traditionally static typed environments, may be a little harder to grok at first, but the most important thing to understand when working with a dynamic type system is that it doesn't consider a variable as having a type; only values have types.

As with anything in modern programming languages, everything is shades of gray. For example, even within the realm of static type systems there are those that are considered to be strongly typed and those that are considered to be weakly typed. The difference is that weakly typed languages allow for programs to be written where the exact type of a variable isn't known at compile-time.

The "why they matter" bit should be self-evident. You simply can not write effective code in a high-level language like C#, Python, or even C without interacting with the type system, and therefore you can't write effective code without understanding the type system.
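The examples in this article are C#, but since Python is named above, a quick sketch there makes the "only values have types" point concrete:

```python
x = "a string for all seasons"   # the value is a str; x itself carries no declared type
x = 2.5                          # the same name now holds a float -- no cast, no compile error

# Type rules are still enforced, just at runtime and against values:
try:
    "abc" + 1                    # mixing types fails here too, only later
except TypeError:
    print("runtime type error")
```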
As a language, C# is currently tied to the CLR and therefore its syntax is designed to help you write statically typed code, more often than not code intended to provide compile-time type checks. However, as the language and the developer community using it have evolved, there has been a greater call to at least simulate some aspects of dynamic languages. Introduced in C# 3.0, the var keyword can be used in place of the type in local variable definitions. It can't be used as a return type, the type of a parameter in a method or delegate definition, or in place of a property or field's type declaration. The var keyword relies on C# 3.0's type inference feature. We first encountered type inference in part one, where we saw that we didn't have to explicitly state our generic parameters as long as we made it obvious to the compiler what the types involved were. A simple example is as follows:

static void UseVar()
{
    var list = new List<Book>();

    var book = new Book();
    book.Title = "The C Programming Language";
    list.Add(book);

    foreach (var b in list)
        Console.WriteLine(b.Title);
}

In the above we are using var on three separate lines, but all for the same purpose. We've simply used the var keyword in place of the data type in our variable declarations and let the compiler handle figuring out what the correct type should be. The first two uses are pretty straightforward and will save you a lot of typing. The third use of var is also common, but be forewarned as it will only work well with collections that implement IEnumerable<T>. Collections that implement the non-generic IEnumerable interface do not provide the compiler with enough information to infer the type of b. You have to be careful with var as even though the compiler can tell what type you meant when you wrote it, you or the other developers you work with might not be able to in three or four months.
You also have to look out for situations like the following:

static void UnexpectedInference()
{
    var d = 2.5;
    float f = d;
}

As you can see, you may have meant for d to be a float, but the compiler decided that you were better off with a double. Naturally, if you wrote the following the compiler would have more information as to your intent and act appropriately:

static void UnexpectedInference()
{
    var d = 2.5f;
    float f = d;
}

The var keyword is nice and all, but was it added to the language just to stave off carpal tunnel? No, they needed to provide the var keyword so that they could give you something more powerful: anonymous types. To look at why var was introduced into the language, let's look at the following program.

class Program
{
    static void Main(string[] args)
    {
        var books = new List<Book>()
        {
            new Book()
            {
                Title = "The Green Mile",
                Genre = "Drama",
                Author = new Author() { Name = "Stephen King", Age = 62, Titles = 1000 }
            },
            new Book()
            {
                Title = "Pandora Star",
                Genre = "Science Fiction",
                Author = new Author() { Name = "Peter F. Hamilton", Age = 49, Titles = 200 }
            }
        };

        var kings = from b in books
                    where b.Author.Name == "Stephen King"
                    select new { Author = b.Author.Name, BookTitle = b.Title };

        foreach (var k in kings)
        {
            Console.WriteLine("{0} wrote {1}", k.Author, k.BookTitle);
        }

        Console.ReadLine();
    }
}

public class Author
{
    public string Name { get; set; }
    public int Age { get; set; }
    public int Titles { get; set; }
}

public class Book
{
    public Author Author { get; set; }
    public string Title { get; set; }
    public string Genre { get; set; }
}

The above code is pretty straightforward, so let's focus on the LINQ query:

var kings = from b in books
            where b.Author.Name == "Stephen King"
            select new { Author = b.Author.Name, BookTitle = b.Title };

Do you notice anything interesting? WOAH! We just used the new operator to instantiate an object without saying what type it was; in fact, you couldn't tell the compiler what type it was even if you wanted to.
If you can't write out the name of the type, then how are you supposed to declare a variable of that type? Oh, right, we have the var keyword, which uses type inference to figure out the correct data type to use! So, how does C# 3.0 provide such a cool feature? It is supposed to be a statically typed language, right? Well, the answer to that isn't actually complicated in the least. Think about it. What do compilers do? They read in a program definition and then generate code in, typically, a lower level language like machine code or CLR byte code. We have already established that type inference is being used to determine the correct data type for our var variable above, but the obvious problem is that we didn't define the type it winds up using. After reading in our query, the compiler can tell we are defining a class that has two properties, Author and BookTitle. Further, it knows we are assigning System.String values to both properties. Therefore, it can deduce the exact class definition required for our code to work in a statically type checked way. If you were to fire up Reflector you'd be able to find the following class definition:

[DebuggerDisplay(@"\{ Author = {Author}, BookTitle = {BookTitle} }", Type="<Anonymous Type>"), CompilerGenerated]
internal sealed class <>f__AnonymousType0<<Author>j__TPar, <BookTitle>j__TPar>
{
    // Fields
    [DebuggerBrowsable(DebuggerBrowsableState.Never)]
    private readonly <Author>j__TPar <Author>i__Field;
    [DebuggerBrowsable(DebuggerBrowsableState.Never)]
    private readonly <BookTitle>j__TPar <BookTitle>i__Field;

    // Methods
    [DebuggerHidden]
    public <>f__AnonymousType0(<Author>j__TPar Author, <BookTitle>j__TPar BookTitle);
    [DebuggerHidden]
    public override bool Equals(object value);
    [DebuggerHidden]
    public override int GetHashCode();
    [DebuggerHidden]
    public override string ToString();

    // Properties
    public <Author>j__TPar Author { get; }
    public <BookTitle>j__TPar BookTitle { get; }
}

First of all, the above looks pretty funky.
In fact, if you copy and paste it into Visual Studio it isn't going to compile. The important thing to realize is that the class was built for you at compile-time, not at runtime. Another interesting point is that the compiler is smart enough to reuse this class if your code calls for a second anonymous type with the same properties. All in all, anonymous types work like any other class with the same constraints (i.e. internal sealed). See, I told you that this stuff isn't magic! Anonymous types are cool and all, but due to the restrictions on their use I am sure you'll find, as I have, that they are most useful in LINQ queries, used just as in the kings query above. As you can see, we are using an anonymous type in the select part of the query to define what our result set looks like. In database terminology this is called a projection, and our example above is really no different than selecting a subset of columns from a table in a relational database. At this point you may be saying, "So what? In the example you could have just selected a book object and gotten access to the author using normal dot notation." You'd be correct in that instance; however, consider the LINQ query below:

var kings = from b in books
            join a in authors on b.AuthorID equals a.ID
            where a.Name == "Stephen King"
            select new { Author = a.Name, BookTitle = b.Title };

In the above, we are essentially joining two collections of in-memory objects on what amounts to a foreign key, i.e. they both know the ID of the author. Hopefully you can now see the utility of anonymous types. If we had simply selected the books, we would then need to do a subsequent query in order to find the corresponding author. The only other place I see a lot of value in using anonymous types is to flatten out a set of related objects for the purpose of databinding or other UI rendering.
We have now examined the anonymous type feature added to C# 3.0, as well as how to use the new var keyword. My guess is that you'll find yourself using var quite frequently, but only if you do a lot of LINQ will you be using it for anonymous types. The next segment will focus on the actual architecture of LINQ; once you understand that, there really isn't anything mysterious left.
http://geekswithblogs.net/dotnetnomad/archive/2009/10/29/135842.aspx
I would be travelling to America. Which flights offer open tickets?

Welcome to the forum. If you could provide more information about your travel plans it would help us greatly to help you. Such as: Where will you be flying from? When do you plan to fly, and return (dates please)? Where in America do you want to fly to? Any additional information you care to provide would also help us help you.

I doubt any airline sells "open" tickets anymore... it is all about fees, fees and more fees now.

Although some student agents allow changes for free.

Of course they do, in fact most do. You can buy fully flexible tickets and move the date with no charge accordingly; I do often for work. Try doing some research instead of asking others to do it for you. TG does if you buy a one year fully flexible.

<< Try doing some research instead of asking others to do it for you >> A tad harsh on a new poster vegas-guide, but you have a point. Quite often posters don't help themselves when they ask such basic questions as this. 'Travelling to America' is about as obscure as it gets. I think a lot of new posters are under the impression that TA is in fact a full service Travel Agent, when of course it's nothing of the sort.

You won't get much help if you don't post more info like where you want to go to in America.

Posts like this are so strange... I understand that English is not a first language in many cases, but terse one-line questions without a please, thank you or details are just not the way to get help. We need some sort of sticky with links to sites like Matrix and Skyscanner etc. "Welcome to the Air Travel Forum. Please read this first. Thank you very much!"

I suspect it's a language issue, coupled with possibly a misunderstanding of what the site is, but I do find it quite strange that when learning a new language, manners, like please and thank you, are lost.
My language skills are not great, but for me it's a basic cultural issue. I live in Brussels, and occasionally I need to speak rather appalling French, but saying s'il vous plaît and merci is pretty much a reflex. I doubt very much the OP will come back. Many first-time posters don't, for any number of reasons, but I agree with froggy about simple basic manners, which cost nothing.
https://www.tripadvisor.com/ShowTopic-g1-i10702-k6588851-Travel_forum-Air_Travel.html
Creating, running, and managing Docker containers

Docker is a technology that seemed to come from nowhere and took the IT world by storm just a few years ago. The concept of containerization is not new, but Docker took this concept and made it very popular. The idea behind a container is that you can segregate an application you'd like to run from the rest of your system, keeping it sandboxed from the host operating system, while still being able to use the host's CPU and memory resources. Unlike a virtual machine, a container doesn't have a virtual CPU and memory of its own, as it shares resources with the host. This means that you will likely be able to run more containers on a server than virtual machines, since the resource utilization would be lower. In addition, you can store a container on a server and allow others within your organization to download a copy of it and run it locally. This is very useful for developers developing a new solution who would like others to test or run it. Since the Docker container contains everything the application needs to run, it's very unlikely that a systematic difference between one machine and another will cause the application to behave differently. The Docker server, also known as Hub, can be used remotely or locally. Normally, you'd pull down a container from the central Docker Hub instance, which makes various containers available, usually based on a Linux distribution or operating system. When you download it locally, you'll be able to install packages within the container or make changes to its files, just as if it were a virtual machine. When you finish setting up your application within the container, you can upload it back to Docker Hub for others to benefit from, or to your own local Hub instance for your local staff members to use.
In some cases, some developers even opt to make their software available to others in the form of containers rather than creating distribution-specific packages. Perhaps they find it easier to develop a container that can be used on every distribution than build separate packages for individual distributions. Let's go ahead and get started. To set up your server to run or manage Docker containers, simply install the docker.io package: # apt-get install docker.io Yes, that's all there is to it. Installing Docker has definitely been the easiest thing we've done during this entire article. Ubuntu includes Docker in its default repositories, so it's only a matter of installing this one package. You'll now have a new service running on your machine, simply titled docker. You can inspect it with the systemctl command, as you would any other: # systemctl status docker Now that Docker is installed and running, let's take it for a test drive. Having Docker installed gives us the docker command, which has various subcommands to perform different functions. Let's try out docker search: # docker search ubuntu What we're doing with this command is searching Docker Hub for available containers based on Ubuntu. You could search for containers based on other distributions, such as Fedora or CentOS, if you wanted. The command will return a list of Docker images available that meet your search criteria. The search command was run as root. This is required, unless you make your own user account a member of the docker group. I recommend you do that and then log out and log in again. That way, you won't need to use root anymore. From this point on, I won't suggest using root for the remaining Docker examples. It's up to you whether you want to set up your user account with the docker group or continue to run docker commands as root. 
To pull down a docker image for our use, we can use the docker pull command, along with one of the image names we saw in the output of our search command: docker pull ubuntu With this command, we're pulling down the latest Ubuntu container image available on Docker Hub. The image will now be stored locally, and we'll be able to create new containers from it. To create a new container from our downloaded image, this command will do the trick: docker run -it ubuntu:latest /bin/bash Once you run this command, you'll notice that your shell prompt immediately changes. You're now within a shell prompt from your container. From here, you can run commands you would normally run within a real Ubuntu machine, such as installing new packages, changing configuration files, and so on. Go ahead and play around with the container, and then we'll continue on with a bit more theory on how it actually works. There are some potentially confusing aspects of Docker we should get out of the way first before we continue with additional examples. The most likely thing to confuse newcomers to Docker is how containers are created and destroyed. When you execute the docker run command against an image you've downloaded, you're actually creating a container. Each time you use the docker run command, you're not resuming the last container, but creating a new one. To see this in action, run a container with the docker run command provided earlier, and then type exit. Run it again, and then type exit again. You'll notice that the prompt is different each time you run the command. After the root@ portion of the bash prompt within the container is a portion of a container ID. It'll be different each time you execute the docker run command, since you're creating a new container with a new ID each time. To see the number of containers on your server, execute the docker info command. 
The first line of the output will tell you how many containers you have on your system, which should be the number of times you've run the docker run command. To see a list of all of these containers, execute the docker ps -a command: docker ps -a The output will give you the container ID of each container, the image it was created from, the command being run, when the container was created, its status, and any ports you may have forwarded. The output will also display a randomly generated name for each container, and these names are usually quite wacky. As I was going through the process of creating containers while writing this section, the codenames for my containers were tender_cori, serene_mcnulty, and high_goldwasser. This is just one of the many quirks of Docker, and some of these can be quite hilarious. The important output of the docker ps -a command is the container ID, the command, and the status. The ID allows you to reference a specific container. The command lets you know what command was run. In our example, we executed /bin/bash when we started our containers. Using the ID, we can resume a container. Simply execute the docker start command with the container ID right after. Your command will end up looking similar to the following: docker start 353c6fe0be4d The output will simply return the ID of the container and then drop you back to your shell prompt. Not the shell prompt of your container, but that of your server. You might be wondering at this point, then, how you get back to the shell prompt for the container. We can use docker attach for that: docker attach 353c6fe0be4d You should now be within a shell prompt inside your container. If you remember from earlier, when you type exit to disconnect from your container, the container stops. If you'd like to exit the container without stopping it, press CTRL + P and then CTRL + Q on your keyboard. You'll return to your main shell prompt, but the container will still be running. 
You can see this for yourself by checking the status of your containers with the docker ps -a command. However, while these keyboard shortcuts work to get you out of the container, it's important to understand what a container is and what it isn't. A container is not a service running in the background, at least not inherently. A container is a collection of namespaces, such as a namespace for its filesystem or users. When you disconnect without a process running within the container, there's no reason for it to run, since its namespace is empty. Thus, it stops. If you'd like to run a container in a way that is similar to a service (it keeps running in the background), you would want to run the container in detached mode. Basically, this is a way of telling your container, "run this process, and don't stop running it until I tell you to." Here's an example of creating a container and running it in detached mode:

docker run -dit ubuntu /bin/bash

Normally, we use the -it options to create a container. This is what we used a few pages back. The -i option triggers interactive mode, while the -t option gives us a pseudo-TTY. At the end of the command, we tell the container to run the Bash shell. The -d option runs the container in the background. It may seem relatively useless to have another Bash shell running in the background that isn't actually performing a task. But these are just simple examples to help you get the hang of Docker. A more common use case may be to run a specific application. In fact, you can even run a website from a Docker container by installing and configuring Apache within the container, including a virtual host. The question then becomes this: how do you access the container's instance of Apache within a web browser? The answer is port redirection, which Docker also supports. Let's give this a try. First, let's create a new container in detached mode.
Let's also redirect port 80 within the container to port 8080 on the host:

docker run -dit -p 8080:80 ubuntu /bin/bash

The command will output a container ID. This ID will be much longer than you're accustomed to seeing, because when we run docker ps -a, it only shows shortened container IDs. You don't need to use the entire container ID when you attach; you can simply use part of it, so long as it's long enough to be different from other IDs, like this:

docker attach dfb3e

Here, I've attached to a container with an ID that begins with dfb3e. I'm now attached to a Bash shell within the container. Let's install Apache. We've done this before, but to keep it simple, just install the apache2 package within your container. We don't need to worry about configuring the default sample web page or making it look nice; we just want to verify that it works. Apache should now be installed within the container. In my tests, the apache2 daemon wasn't automatically started as it would've been on a real server instance. Since the latest container available on Docker Hub for Ubuntu hasn't yet been upgraded to 16.04 at the time of writing this (it's currently 14.04), the systemctl command won't work, so we'll need to use the legacy start command for Apache:

# /etc/init.d/apache2 start

We can similarly check the status, to make sure it's running:

# /etc/init.d/apache2 status

Apache should be running within the container. Now, press CTRL + P and then CTRL + Q to exit the container, but allow it to keep running in the background. You should be able to visit the sample Apache web page for the container by navigating to localhost:8080 in your web browser. You should see the default "It works!" page that comes with Apache. Congratulations, you're officially running an application within a container! Before we continue, think for a moment of all the use cases you can use Docker for. It may seem like a very simple concept (and it is), but it allows you to do some very powerful things.
I'll give you a personal example. At a previous job, I worked with some embedded Linux software engineers, who each had their preferred Linux distribution to run on their workstation computers. Some preferred Ubuntu, others preferred Debian, and a few even ran Gentoo. For developers, this poses a problem—the build tools are different in each distribution, because they all ship different versions of all development packages. The application they developed was only known to compile in Debian, and newer versions of the GNU Compiler Collection (GCC) compiler posed a problem for the application. My solution was to provide each developer a Docker container based on Debian, with all the build tools baked in that they needed to perform their job. At this point, it no longer mattered which distribution they ran on their workstations. The container was the same no matter what they were running. I'm sure there are some clever use cases you can come up with. Anyway, back to our Apache container: it's now running happily in the background, responding to HTTP requests over port 8080 on the host. But, what should we do with it at this point? One thing we can do is create our own image from it. Before we do, we should configure Apache to automatically start when the container is started. We'll do this a bit differently inside the container than we would on an actual Ubuntu server. Attach to the container, and open the /etc/bash.bashrc file in a text editor within the container. Add the following to the very end of the file: /etc/init.d/apache2 start Save the file, and exit your editor. Exit the container with the CTRL + P and CTRL + Q key combinations. We can now create a new image of the container with the docker commit command: docker commit <Container ID> ubuntu:apache-server This command will return to us the ID of our new image. To view all the Docker images available on our machine, we can run the docker images command to have Docker return a list. 
You should see the original Ubuntu image we downloaded, along with the one we just created. We'll first see a column for the repository the image came from. In our case, it's Ubuntu. Next, we can see the tag. Our original Ubuntu image (the one we used the docker pull command to download) has a tag of latest. We didn't specify that when we first downloaded it; it just defaulted to latest. In addition, we see an image ID for both, as well as the size. To create a new container from our new image, we just need to use docker run but specify the tag and name of our new image. Note that we may already have a container listening on port 8080, so this command may fail if that container hasn't been stopped:

docker run -dit -p 8080:80 ubuntu:apache-server /bin/bash

Speaking of stopping a container, I should probably show you how to do that as well. As you could probably guess, the command is docker stop followed by a container ID. This will send the SIGTERM signal to the container, followed by SIGKILL if it doesn't stop on its own after a delay:

docker stop <Container ID>

To remove a container, issue the docker rm command followed by a container ID. Normally, this will not remove a running container, but it will if you add the -f option. You can remove more than one docker container at a time by adding additional container IDs to the command, with a space separating each. Keep in mind that you'll lose any unsaved changes within your container if you haven't committed the container to an image yet:

docker rm <Container ID>

The docker rm command will not remove images. If you want to remove a docker image, use the docker rmi command followed by an image ID. You can run the docker images command to view images stored on your server, so you can easily fetch the ID of the image you want to remove. You can also use the repository and tag name, such as ubuntu:apache-server, instead of the image ID.
If the image is in use, you can force its removal with the -f option:

docker rmi <Image ID>

Before we conclude our look into Docker, there's another related concept you'll definitely want to check out: Dockerfiles. A Dockerfile is a neat way of automating the building of docker images, by creating a text file with a set of instructions for their creation. The easiest way to set up a Dockerfile is to create a directory, preferably with a descriptive name for the image you'd like to create (you can name it whatever you wish, though), and inside it create a file named Dockerfile. Following is a sample; copy this text into your Dockerfile and we'll look at how it works:

FROM ubuntu
MAINTAINER Jay <jay@somewhere.net>

# Update the container's packages
RUN apt-get update; apt-get dist-upgrade -y

# Install apache2 and vim
RUN apt-get install -y apache2 vim

# Make Apache automatically start up
RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc

Let's go through this Dockerfile line by line to get a better understanding of what it's doing:

FROM ubuntu

We need an image to base our new image on, so we're using Ubuntu as a base. This will cause Docker to download the ubuntu:latest image from Docker Hub if we don't already have it downloaded:

MAINTAINER Jay <jay@somewhere.net>

Here, we're setting the maintainer of the image. Basically, we're declaring its author:

# Update the container's packages

Lines beginning with a hash symbol (#) are ignored, so we are able to create comments within the Dockerfile. This is recommended to give others a good idea of what your Dockerfile does:

RUN apt-get update; apt-get dist-upgrade -y

With the RUN command, we're telling Docker to run a specific command while the image is being created. In this case, we're updating the image's repository index and performing a full package update to ensure the resulting image is as fresh as can be.
The -y option is provided to suppress any requests for confirmation while the command runs:

RUN apt-get install -y apache2 vim

Next, we're installing both apache2 and vim. The vim package isn't required, but I personally like to make sure all of my servers and containers have it installed. I mainly included it here to show you that you can install multiple packages in one line:

RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc

Earlier, we copied the startup command for the apache2 daemon into the /etc/bash.bashrc file. We're including that here so that we won't have to do this ourselves when containers are created from the image. To build the image, we can use the docker build command, which can be executed from within the directory that contains the Dockerfile. What follows is an example of using the docker build command to create an image tagged packt:apache-server (note the trailing dot, which tells Docker to use the current directory as the build context):

docker build -t packt:apache-server .

Once you run this command, you'll see Docker create the image for you, running each of the commands you asked it to. The image will be set up just the way you like. Basically, we just automated the entire creation of the Apache container we used as an example in this section. Once this is complete, we can create a container from our new image:

docker run -dit -p 8080:80 packt:apache-server /bin/bash

Almost immediately after running the container, the sample Apache site will be available on the host. With a Dockerfile, you'll be able to automate the creation of your Docker images. There's much more you can do with Dockerfiles though; feel free to peruse Docker's official documentation to learn more.

Summary

In this article, we took a look at virtualization as well as containerization. We began by walking through the installation of KVM as well as all the configuration required to get our virtualization server up and running. We also took a look at Docker, which is a great way of virtualizing individual applications rather than entire servers.
We installed Docker on our server, and we walked through managing containers by pulling down an image from Docker Hub, customizing our own images, and creating Dockerfiles to automate the deployment of Docker images. We also went over many of the popular Docker commands to manage our containers.
https://www.packtpub.com/books/content/virtualizing-hosts-and-applications
KTRACE(2)                 NetBSD System Calls Manual                 KTRACE(2)

NAME
     ktrace, fktrace -- process tracing

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <sys/param.h>
     #include <sys/time.h>
     #include <sys/uio.h>
     #include <sys/ktrace.h>

     int ktrace(const char *tracefile, int ops, int trpoints, pid_t pid);
     int fktrace(int fd, int ops, int trpoints, pid_t pid);

DESCRIPTION
     KTRFAC_EMUL        Trace emulation changes.

     KTRFAC_INHERIT     Inherit tracing to future children.

     Each tracing event outputs a record composed of a generic header followed
     by a trace point specific structure. The generic header is:

     struct ktr_header {
             int ktr_len;                 /* length of buf */
             short ktr_type;              /* trace record type */
             short ktr_version;           /* trace record version */
             pid_t ktr_pid;               /* process id */
             char ktr_comm[MAXCOMLEN+1];  /* command name */
             struct timespec ktr_time;    /* timestamp */
             lwpid_t ktr_lid;
     };

     The ktr_len field specifies the length of the data that follows this
     header. The ktr_type and ktr_version fields (whose ordering in the
     structure depends on byte order) specify the format of this data. The
     ktr_pid, ktr_lid, and ktr_comm fields specify the process and command
     generating the record. The ktr_time field gives the time (with nanosecond
     resolution) that the record was generated. The generic header is followed
     by ktr_len bytes of a ktr_type record of version ktr_version. The type
     specific records are defined in the <sys/ktrace.h> include file.

RETURN VALUES
     On successful completion a value of 0 is returned. Otherwise, a value of
     -1 is returned and errno is set to show the error.

ERRORS
     ktrace() will fail if:

     [EACCES]      Search permission is denied for a component of the path
                   prefix.

     [EINVAL]      The pathname contains a character with the high-order bit
                   set.

     [ENOENT]      tracefile does not exist.

     [ENOTDIR]     A component of the path prefix is not a directory.

SEE ALSO
     kdump(1), ktrace(1)

HISTORY
     A ktrace function call first appeared in 4.4BSD.

NetBSD 9.99                     March 19, 2016                     NetBSD 9.99
http://man.netbsd.org/ktrace.2
I solved it

I figured it out but don't know how to delete a post.

public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);
    int num = 0;
    int num_of_rows = 1;
    while (num < 1 || num > 10)...

Not working, I may have done it wrong. I enter 2, it prints out "$$" and "The number you entered is invalid". Even if I enter 20 it says the same thing. But it doesn't ask for the user to enter...

Yes, that is correct. So, a while loop would be the thing to use? Right now my code is generating half of what it is supposed to. Program asks the user to enter a number from 1 to 10. Enter 7, output will be $$$$$$$. I'm having issues with my if...

Can't use arrays. Even if we were, I wouldn't know how. I am in an intro class. I only know to a certain point. Can I ask why you didn't think my code produced what it should? Um, if you...

This is my output:

Enter some numbers: 2
You entered numbers 2. Your total are 2
Enter some numbers: 7
You entered numbers 7.

Is there a simpler way to print everything out on two lines? Example: You entered numbers 2, 7, 3, and 0. Your totals are 2, 9, and 12.

{ Scanner keyboard = new Scanner...

I need a little help. Not sure where to go from here.

public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);

That is all you had to say... I saw it after you mentioned this. Thanks

public static void main(String[] args) {
    int income = 0;
    char answer;
    String student, input;
    Scanner keyboard = new Scanner(System.in);

package lab4_ex2;

import java.util.Scanner;

public class Lab4_Ex2 {
    public static void main(String[] args) {
        double firstTest, secondTest, thirdTest;

I know how to do the rest of it... didn't need it done for me. I just wanted to know if the first part was correct. I want to make sure I am doing this programming assignment the right way. Question: Your history instructor gives three tests worth 50 points each. You can drop one of the first two grades....

I figured it out but don't know how to delete a thread. This isn't a "what is wrong with my code" thread. It is a thread about why my program does what it does.

package example1;

import java.util.Scanner;

public class example1 {

Thanks, glad to help. I figured it out.

Final code:

public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);

public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);
    float x;
    int y;
    char ch1, ch2;
    String name;
    ...
http://www.javaprogrammingforums.com/search.php?s=1391c48d724a9df0c7300151a44f0294&searchid=837314
How to Implement an OAuth2 Workflow in Node.js

January 14th, 2022

What You Will Learn in This Tutorial

How to implement an OAuth2 workflow in JavaScript and Node.js by setting up an OAuth connection to the Github API. Inside the app, we'll add one dependency: node-fetch.

Terminal

cd app && npm i node-fetch

With that installed, go ahead and start up your app:

Terminal

joystick start

After this, your app should be running and we're ready to get started.

Fair warning

While OAuth2 itself is a standard for implementing authentication patterns, the implementation of that standard is not always consistent. We've chosen Github as our example API, as their OAuth implementation is well-done and well-documented. This is not always the case for your API of choice. The point being: look at the steps we cover here as an approximation of what an OAuth2 implementation should look like for an API. Sometimes you get lucky, sometimes you end up with a noise complaint from the police. A few common inconsistencies to watch out for:

- Undocumented or poorly documented parameters that need to be passed in the HTTP headers, query params, or body.
- Undocumented or poorly documented response types that need to be passed in the HTTP headers. For example, some APIs may require the Accept header being set to application/json in order to get back a response in a JSON format.
- Bad example code in the documentation.
- Bad error codes when incorrect parameters (see the previous items above) are passed.

While this isn't everything you'll encounter, these are usually the ones that will waste your time and energy. If you're certain you're following your API's documentation perfectly and still having issues: review the list above and play around with what you're passing (even if it's not documented by the API in question, as frustrating as that may be).

Note: there is always help available. Book an on-demand pair programming session with CheatCode here to get hands-on help.
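The second item in that list (response types controlled by the Accept header) is worth a concrete example. Github in particular returns a form-encoded string from its token endpoint unless you send Accept: application/json. The helper below is our own illustration (the parseTokenResponse name is hypothetical, not part of any library); it shows one way to tolerate both response shapes:

```javascript
// Illustration of the Accept-header pitfall: some OAuth providers return
// JSON, others return a form-encoded string like "access_token=abc&scope=repo".
// This hypothetical helper parses either shape into a plain object.
const parseTokenResponse = (bodyText, contentType = "") => {
  if (contentType.includes("application/json")) {
    return JSON.parse(bodyText);
  }
  // Fall back to form-encoded parsing using the standard URLSearchParams API.
  return Object.fromEntries(new URLSearchParams(bodyText));
};

// Both calls produce the same object despite different wire formats.
console.log(parseTokenResponse('{"access_token":"abc"}', "application/json"));
console.log(parseTokenResponse("access_token=abc&token_type=bearer"));
```

Checking the response's Content-Type before parsing, rather than assuming JSON, is a cheap way to survive the inconsistencies described above.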
Getting credentials from the Github API

To start, we'll need to register our application with Github and obtain security credentials. This is a common pattern with all OAuth2 implementations. In particular, you will need two things: a client_id and a client_secret. The client_id tells the API who or what app is trying to get permission to authenticate on behalf of a user, while the client_secret authorizes the connection by proving ownership of the app specified by the client_id (the client_id is public, so technically anybody can pass it to an API, while the client_secret is, like the name implies, secret). If you don't already have a Github account, head to github.com and create an account. Once you're logged in, in the top-right hand corner of the site, click the circle icon with your avatar and a down arrow next to it. From the menu that pops up, select "Settings." Next, near the bottom of the left-hand menu on that page, locate and click the "Developer Settings" option. On the next page, in the left-hand menu, locate and click the "OAuth Apps" option. If this is your first time registering an OAuth app with Github, you should see a green button that prompts you to "Register a new application." Click that to start the process of obtaining your client_id and client_secret. On this page, you will need to provide three things:

- A name for your OAuth application. This is what Github will display to users when they confirm your access to their account.
- A homepage URL for your app (this can just be a dummy URL for testing).
- An "Authorization callback URL," which is where Github will send a special code in response to a user's approval to grant our app permission to access their account.

For #3, in this tutorial, we want to enter http://localhost:2600/oauth/github (this is different from what you'll see in the screenshot above but is equivalent in terms of intent). http://localhost:2600 is where the app we created using CheatCode's Joystick framework will run by default.
The /oauth/github part is the path/route that we'll wire up next, where we'll expect Github to send us an authorization code that we can exchange for an access_token for the user's account. After this is filled out, click "Register application" to create your OAuth app. On the next screen, you will want to locate the "Client ID" and click the "Generate a new client secret" button near the middle of the page.

Note: when you generate your client_secret, Github will intentionally only show it to you on screen one time. It's recommended that you back this and your client_id up in a password manager or other secrets manager. If you lose it, you will need to generate a new secret and delete the old one to avoid a potential security issue. Keep this page up or copy the client_id and client_secret for use in the next step.

Adding our credentials to our settings file

Before we dig into the code, next, we need to copy our client_id and client_secret into our application's settings file. In a Joystick app, this is automatically created for us when we run joystick create. Open up the settings-development.json file at the root of your app:

/settings-development.json

{
  "config": {
    "databases": [ ... ],
    "i18n": {
      "defaultLanguage": "en-US"
    },
    "middleware": {},
    "email": { ... }
  },
  "global": {},
  "public": {
    "github": {
      "client_id": "dc47b6a0a67b904c58c7"
    }
  },
  "private": {
    "github": {
      "client_id": "dc47b6a0a67b904c58c7",
      "client_secret": "<Client Secret Here>",
      "redirect_uri": "http://localhost:2600/oauth/github"
    }
  }
}

We want to focus on two places: the public and private objects already present in the file. Inside of both, we want to nest a github object that will contain our credentials. Pay attention here: we only want to store the client_id under the public.github object, while we want to store both the client_id and client_secret under the private.github object. We also want to add the redirect_uri we typed in on Github. Once you've got these set, we're ready to dig into the code.
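To make the public/private split concrete: everything under public is shipped to the browser, while private is only readable on the server. As a quick illustration in plain JavaScript (the collectKeys and checkPublicSettings helpers below are our own, not a Joystick API), you can sanity-check a settings object to be sure a secret key never ends up in the public block:

```javascript
// Illustration only: recursively collect key names in a settings block.
const collectKeys = (value) => {
  if (value === null || typeof value !== "object") return [];
  return Object.keys(value).flatMap((key) => [key, ...collectKeys(value[key])]);
};

// Hypothetical helper: return any forbidden key names found under `public`.
const checkPublicSettings = (settings, forbidden = ["client_secret"]) => {
  return collectKeys(settings.public || {}).filter((key) =>
    forbidden.includes(key)
  );
};

// Sample object mirroring the shape of settings-development.json above.
const settings = {
  public: { github: { client_id: "dc47b6a0a67b904c58c7" } },
  private: {
    github: {
      client_id: "dc47b6a0a67b904c58c7",
      client_secret: "<Client Secret Here>",
    },
  },
};

console.log(checkPublicSettings(settings)); // []  (no secrets leaked)
```

An empty array means nothing secret is exposed to the browser; if client_secret ever migrated into public, this check would flag it.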
Wiring up the client request for authorization

To begin, we're going to add a simple page in our UI where we can access a "Connect to Github" button our users can click to initialize an OAuth request. To build it, we're going to reuse the / route that's automatically defined for us when we generate an app with joystick create. Real quick, if we open up /index.server.js at the root of the project, we can see how this is being rendered by Joystick:

```javascript
import node from "@joystick.js/node";
import api from "./api";

node.app({
  api,
  routes: {
    "/": (req, res) => {
      res.render("ui/pages/index/index.js", {
        layout: "ui/layouts/app/index.js",
      });
    },
    "*": (req, res) => {
      res.render("ui/pages/error/index.js", {
        layout: "ui/layouts/app/index.js",
        props: {
          statusCode: 404,
        },
      });
    },
  },
});
```

In a Joystick app, routes are defined via an Express.js instance that's automatically set up via the node.app() function imported from the @joystick.js/node package. To that function, an object is passed with a routes option set to an object where all of the routes for our app are defined. Here, the / index route (or "root" route) uses the res.render() function defined by Joystick on the HTTP response object we get from Express.js. That function is designed to render a Joystick component created using Joystick's UI library @joystick.js/ui. Here, we can see the ui/pages/index/index.js path being passed. Let's open up that file now and modify it to display our "Connect to Github" button.

/ui/pages/index/index.js

```javascript
import ui from "@joystick.js/ui";

const Index = ui.component({
  events: {
    'click .login-with-github': (event) => {
      location.href = `https://github.com/login/oauth/authorize?client_id=${joystick.settings.public.github.client_id}&scope=repo user`;
    },
  },
  css: `
    div {
      padding: 40px;
    }

    .login-with-github {
      background: #333;
      padding: 15px 20px;
      border-radius: 3px;
      border: none;
      font-size: 15px;
      color: #fff;
      cursor: pointer;
    }

    .login-with-github:active {
      position: relative;
      top: 1px;
    }
  `,
  render: () => {
    return `
      <div>
        <button class="login-with-github">Connect to Github</button>
      </div>
    `;
  },
});

export default Index;
```

Here, we've overwritten the existing contents of our /ui/pages/index/index.js file with the component that will render our button.
In Joystick, components are defined by calling the ui.component() function imported from the @joystick.js/ui package, passed an object of options to describe the behavior and appearance of the component. Here, down in the render function, we return a string of HTML that we want Joystick to render in the browser for us. In that string, we have a simple <button></button> element with a class name .login-with-github. If we look at the option above render, css, we can see some styles being applied to our component, adding a bit of padding to the page and styling our button up.

The important part here is up in the events object. Here, we define an event listener for a click event on an element with the class .login-with-github. When that event is detected in the browser, the function we've assigned to 'click .login-with-github' here will be called. Inside, our goal is to redirect the user to Github's URL for kicking off an OAuth authorization request. To do it, we set the global location.href value in the browser to a string containing the URL along with some query parameters:

- client_id here is assigned to the value of joystick.settings.public.github.client_id that we set in our settings-development.json file earlier.
- scope is set equal to two "scopes" that grant specific permissions to the access_token we get from Github for this user. Here, we're using the repo and user scopes (space-separated, as per the Github documentation) to give us access to a user's repositories on Github and their full user profile. A full list of scopes to request is available here.

If we save these changes with our app running, Joystick will auto-refresh in the browser. Assuming our credentials are correct, we should be redirected to Github and see something like this:

Next, before we click the "Authorize" button, we need to wire up the endpoint that Github will redirect the user to (the "Authorization callback URL" that we set to earlier).
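As a quick standalone illustration of how that redirect URL is assembled, here's a sketch you can run outside the component (the client_id is a made-up value; the authorize endpoint and the space-separated scope format follow Github's OAuth documentation):

```javascript
// Build the Github OAuth authorize URL by hand (made-up client_id).
const client_id = 'abc123';
const scopes = ['repo', 'user'].join(' '); // space-separated, per Github's docs

const authorizeUrl = `https://github.com/login/oauth/authorize?client_id=${client_id}&scope=${encodeURIComponent(scopes)}`;

console.log(authorizeUrl);
// → https://github.com/login/oauth/authorize?client_id=abc123&scope=repo%20user
```

In the component above, the browser will typically encode the space itself when we assign location.href; encodeURIComponent just makes the same result explicit.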
Handling the token exchange

The final step to get everything working is to perform a token exchange with Github. In order to approve our request and finalize our connection, Github needs to verify the request to connect with our server. To do it, when the user clicks "Authorize" in the UI we just saw on Github, Github will send a request to the "Authorization callback URL" we specified when setting up our app, passing a temporary code value in the query params of the request URL that we can "exchange" for a permanent access_token for our user.

To start, the first thing we need to do is wire up that URL/route back in our index.server.js file:

/index.server.js

```javascript
import node from "@joystick.js/node";
import api from "./api";
import github from "./api/oauth/github";

node.app({
  api,
  routes: {
    "/": (req, res) => {
      res.render("ui/pages/index/index.js", {
        layout: "ui/layouts/app/index.js",
      });
    },
    "/oauth/github": async (req, res) => {
      await github({ req });
      res.status(200).redirect('/');
    },
    "*": (req, res) => {
      res.render("ui/pages/error/index.js", {
        layout: "ui/layouts/app/index.js",
        props: {
          statusCode: 404,
        },
      });
    },
  },
});
```

These are some minor changes to what we saw earlier. Here, we're adding our route /oauth/github in the exact same way we learned about / earlier. Inside, we add the async keyword to the function that will be called when our route is loaded, anticipating a call to a function github() which will return a JavaScript Promise that we can await before responding to the request to the route. Once that function completes, we want to respond to the request from Github with a status of 200 and call .redirect() to redirect the user back to the page in our app where they originated the request (our / index route).
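Before we write the real thing, it helps to see the minimal shape the route handler expects: any function that returns a JavaScript Promise can be awaited this way. The names below (fakeTokenExchange, handler) are hypothetical stand-ins, not part of the tutorial's actual code:

```javascript
// A hypothetical stand-in for github(): a function returning a Promise
// that the route handler can await before responding.
const fakeTokenExchange = (options) => new Promise((resolve, reject) => {
  if (!options || !options.req) {
    return reject('[fakeTokenExchange] options.req is required.');
  }
  // ...the real work (token exchange, API calls) would happen here...
  resolve({ ok: true });
});

// The same shape as the "/oauth/github" route handler above.
const handler = async (req, res) => {
  await fakeTokenExchange({ req }); // pauses here until the Promise settles
  res.status(200).redirect('/');
};
```

Because the handler is async, Express.js (via Joystick) only sends the redirect after the exchange finishes or the Promise rejects.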
Next, let's wire up that function we anticipated being available at /api/oauth/github.js in our project:

/api/oauth/github.js

```javascript
/* eslint-disable consistent-return */

import fetch from 'node-fetch';
import { URL, URLSearchParams } from 'url';

const getReposFromGithub = (username = '', access_token = '') => {
  return fetch(`https://api.github.com/users/${username}/repos`, {
    headers: {
      Accept: 'application/json',
      Authorization: `token ${access_token}`,
    },
  }).then(async (response) => {
    const data = await response.json();
    return data;
  }).catch((error) => {
    console.warn(error);
    throw new Error(error);
  });
};

const getUserFromGithub = (access_token = '') => {
  return fetch('https://api.github.com/user', {
    headers: {
      Accept: 'application/json',
      Authorization: `token ${access_token}`,
    },
  }).then(async (response) => {
    const data = await response.json();
    return data;
  }).catch((error) => {
    console.warn(error);
    throw new Error(error);
  });
};

const getAccessTokenFromGithub = (code = '') => {
  const url = new URL('https://github.com/login/oauth/access_token');

  const searchParams = new URLSearchParams({
    client_id: joystick?.settings?.private?.github?.client_id,
    client_secret: joystick?.settings?.private?.github?.client_secret,
    code,
    redirect_uri: joystick?.settings?.private?.github?.redirect_uri,
  });

  url.search = searchParams.toString();

  return fetch(url, {
    method: 'POST',
    headers: {
      Accept: 'application/json',
    },
  }).then(async (response) => {
    const data = await response.json();
    return data;
  }).catch((error) => {
    console.warn(error);
    throw new Error(`[github.getAccessTokenFromGithub] ${error.message}`);
  });
};

const validateOptions = (options) => {
  try {
    if (!options) throw new Error('options object is required.');
    if (!options.req) throw new Error('options.req is required.');
  } catch (exception) {
    throw new Error(`[github.validateOptions] ${exception.message}`);
  }
};

const github = async (options, { resolve, reject }) => {
  try {
    validateOptions(options);

    const { access_token } = await getAccessTokenFromGithub(options?.req?.query?.code);
    const user = await getUserFromGithub(access_token);
    const repos = await getReposFromGithub(user?.login, access_token);

    // NOTE: Set this information on a user in your database or store elsewhere for reuse.
    console.log({
      access_token,
      user,
      repos,
    });

    resolve();
  } catch (exception) {
    reject(`[github] ${exception.message}`);
  }
};

export default (options) => new Promise((resolve, reject) => {
  github(options, { resolve, reject });
});
```

To make everything easier to understand, here, we're doing a full code dump and then stepping through it.
In this file, we're using a pattern known as the action pattern (something I came up with a few years back for organizing algorithmic or multi-step code in an app). The basic construction of an action pattern is that we have a single main function (here, defined as github) that calls other functions in sequence. Each function in that sequence performs a single task and, if necessary, returns a value to hand off to the other functions in the sequence.

Each function is defined as an arrow function with a JavaScript try/catch block immediately inside of its body. In the try block, we run the code for the function, and in the catch we call throw, passing a standardized string with our error. The idea at play here is to give our code some structure and keep things organized while making errors easier to track down (if an error occurs within a function, the [github.<functionName>] part tells us where exactly the error occurred). Here, because this is a "Promise" action, we wrap the main github() function with a JavaScript Promise at the bottom of our file and export that function. Back in our /index.server.js file, this is why we can use the async/await pattern.

For our "action," we have three steps:

- Exchange the code that we get from Github for a permanent access_token.
- Get the user associated with that access_token from the Github API.
- Get the repos for the user associated with that access_token from the Github API.

The idea here is to showcase the process of getting a token and then performing API requests with that token. So it's clear, this is kept generic so that you can apply this pattern/logic to any OAuth API.

/api/oauth/github.js

```javascript
const getAccessTokenFromGithub = (code = '') => {
  const url = new URL('https://github.com/login/oauth/access_token');

  const searchParams = new URLSearchParams({
    client_id: joystick?.settings?.private?.github?.client_id,
    client_secret: joystick?.settings?.private?.github?.client_secret,
    code,
    redirect_uri: joystick?.settings?.private?.github?.redirect_uri,
  });

  url.search = searchParams.toString();

  return fetch(url, {
    method: 'POST',
    headers: {
      Accept: 'application/json',
    },
  }).then(async (response) => {
    const data = await response.json();
    return data;
  }).catch((error) => {
    console.warn(error);
    throw new Error(`[github.getAccessTokenFromGithub] ${error.message}`);
  });
};
```

Focusing on the first step in the sequence, getAccessTokenFromGithub(), here, we need to perform a request back to the endpoint in the Github API to get a permanent access_token.
To do it, we want to perform an HTTP POST request (as per the Github docs and the standard for OAuth implementations), passing the required parameters for the request (again, per Github, but similar for all OAuth2 requests). To do that, we import the URL and URLSearchParams classes from the Node.js url package (we don't have to install this package; it's automatically available in a Node.js app).

First, we need to create a new URL object for the /login/oauth endpoint on Github with new URL(), passing in that URL. Next, we need to generate the search params for our request ?like=this, and so we use the new URLSearchParams() class, passing in an object with all of the query parameters we want to add to our URL. Here, we need four: client_id, client_secret, code, and redirect_uri. Using these four parameters, Github will be able to authenticate our request for an access_token and return one we can use.

For our client_id, client_secret, and redirect_uri, we pull these in from the joystick.settings.private.github object we defined earlier in the tutorial. The code is the code that we retrieved from the req?.query?.code value passed to us by Github (in an Express.js app, any query params passed to our server are set to the object query on the inbound request object). With that, before we perform our request, we add our search params to our URL by setting the url.search value equal to the result of calling .toString() on our searchParams variable. This will generate a string that looks like ?client_id=xxx&client_secret=xxx&code=xxx&redirect_uri=.

Finally, with this, up top we import fetch from the node-fetch package we installed earlier. We call it, passing the url object we just generated, followed by an options object with a method value set to POST (signifying we want the request performed as an HTTP POST request) and a headers object.
In that headers object, we pass the standard Accept header to tell the Github API the MIME type we will accept for their response to our request (in this case application/json). If we omit this, Github will return the response using its default form-encoded (application/x-www-form-urlencoded) MIME type.

Once this is called, we expect fetch() to return us a JavaScript Promise with the response. To get the response as a JSON object, we take in the response passed to the callback of our .then() method and then call response.json() to tell fetch to format the response body it received as JSON data (we use async/await here to tell JavaScript to wait on the response from the response.json() function). With that data on hand, we return it from our function. If all went according to plan, we should get back an object that looks something like this from Github:

```javascript
{
  access_token: 'gho_abc123456',
  token_type: 'bearer',
  scope: 'repo,user'
}
```

Next, if we review our main github function for our action, we can see that the next step is to take the resulting object we get from the getAccessTokenFromGithub() function and destructure it, plucking off the access_token property we see in the example response above. With this, we now have permanent access to this user's repos and user account on Github (completing the OAuth part of the workflow) until they revoke access.

While we're technically done with our OAuth implementation, it's helpful to see the why behind what we're doing. Now, with our access_token, we're able to perform requests to the Github API on behalf of our users. Meaning, as far as Github is concerned (and within the limitations of the scopes we requested), we are that user until the user says we aren't and revokes our access.
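Both directions of this format can be sketched in isolation: URLSearchParams generates the query string we send, and it can just as easily parse a form-encoded body like the one Github returns when the Accept header is omitted. All values below are made up, and URL/URLSearchParams are also available as globals in modern Node.js, so this runs as-is:

```javascript
// Generating a query string, as in getAccessTokenFromGithub().
const url = new URL('https://example.com/login/oauth/access_token');
const searchParams = new URLSearchParams({
  client_id: 'abc123', // made-up credentials for illustration
  client_secret: 'shhh',
  code: 'tempcode',
  redirect_uri: 'https://example.com/oauth/callback',
});
url.search = searchParams.toString();

console.log(url.toString());
// → https://example.com/login/oauth/access_token?client_id=abc123&client_secret=shhh&code=tempcode&redirect_uri=https%3A%2F%2Fexample.com%2Foauth%2Fcallback

// Parsing a form-encoded body, like the default (non-JSON) token response.
const parsed = new URLSearchParams('access_token=gho_abc123456&token_type=bearer&scope=repo%2Cuser');
const { access_token } = Object.fromEntries(parsed);

console.log(access_token); // → gho_abc123456
```

Notice that URLSearchParams percent-encodes values like the redirect_uri on the way out and decodes them (e.g. %2C back to a comma) on the way in.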
/api/oauth/github.js

```javascript
const getUserFromGithub = (access_token = '') => {
  return fetch('https://api.github.com/user', {
    headers: {
      Accept: 'application/json',
      Authorization: `token ${access_token}`,
    },
  }).then(async (response) => {
    const data = await response.json();
    return data;
  }).catch((error) => {
    console.warn(error);
    throw new Error(error);
  });
};
```

Focusing on our call to getUserFromGithub(), the process to make our API request is nearly identical to our access_token request, with the minor addition of a new header: Authorization. This is another standard HTTP header which allows us to pass an authorization string to the server we're making our request to (in this case, Github's API server). In that string, following the conventions of the Github API (this part will be different for each API: some require the bearer <token> pattern, others require the <user>:<pass> pattern, and still others require a base64-encoded version of one of those two or another pattern), we pass the keyword token followed by a space and then the access_token value we received from the getAccessTokenFromGithub() function we wrote earlier.

To handle the response, we perform the exact same steps we saw above, using response.json() to format the response as JSON data. With that, we should expect to get back a big object describing our user!

We're going to wrap up here. Though we do have another function call to getReposFromGithub(), we've already learned what we need to understand to perform this request. Back down in our main github() function, we take the result of all three calls and combine them together on an object we log to our console. That's it! We now have OAuth2 access to our Github user's account.

Wrapping up

In this tutorial, we learned how to implement an OAuth2 authorization workflow using the Github API. We learned about the difference between different OAuth implementations and looked at an example of initializing a request on the client and then handling a token exchange on the server.
Finally, we learned how to take an access_token we get back from an OAuth token exchange and use that to perform API requests on behalf of the user.
https://cheatcode.co/tutorials/how-to-implement-an-oauth2-workflow-in-node-js
I'm working on a game called Super Tic Tac Toe. I haven't programmed in a while and really need some help/ideas. Here are the requirements for the game below:

1. The program will allow up to 5 players, capturing first and last names one player at a time, then using first names afterwards.
2. The first player's piece is "a", the second is "b", and so on throughout the entire game.
3. The players can choose the board size (up to 10 by 15) for each game. Each game can have a different size.
4. Play starts with player "a". Each subsequent game starts with the winner of the previous game and follows with players in the round-robin sequence "a"..."e". If a game ends in a draw, the next game starts with the most recent winner.
5. A move is specified as a row character (A...J) followed by a column number (1...15), i.e. D5, J11.
6. The board must be redrawn after each move.

Below is the code I have so far (I've fixed it up so it at least compiles and the two-player game loop runs). It's due in a little over a week. Once again, any suggestions/help/ideas will be helpful. Thanks.

Code:
```cpp
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;

void displayBoard();
void clearBoard();
void GetNames(string&, string&);
bool checkWin();
void move(bool);
bool isLegal(int);

char board[9] = {'1','2','3','4','5','6','7','8','9'};
string FirstPlName, SecondPlName;

int main()
{
    bool player = false;
    char H = 'n';

    GetNames(FirstPlName, SecondPlName);

    do
    {
        clearBoard();
        displayBoard();
        player = false;
        while (!checkWin())
        {
            player = !player;
            move(player);
        }
        if (player == true)
            cout << FirstPlName << " Wins" << endl;
        else
            cout << SecondPlName << " Wins" << endl;

        cout << "Would you like to play again? Y or N" << endl;
        cin >> H;
    } while (H == 'y' || H == 'Y');

    cout << endl << "Thank You for playing";
    return 0;
}

void GetNames(string &first, string &second)
{
    cout << "Enter the first player's name: ";
    cin >> first;
    cout << "Enter the second player's name: ";
    cin >> second;
}

void clearBoard()
{
    for (int i = 0; i < 9; i++)
        board[i] = '1' + i;  // reset squares to '1'..'9'
}

void displayBoard()
{
    system("cls");
    cout << "\n ---------------" << endl
         << "|" << board[0] << "|" << board[1] << "|" << board[2] << "|" << endl
         << " ---------------" << endl
         << "|" << board[3] << "|" << board[4] << "|" << board[5] << "|" << endl
         << " ---------------" << endl
         << "|" << board[6] << "|" << board[7] << "|" << board[8] << "|" << endl
         << " ---------------" << endl;
}

bool checkWin()
{
    if (board[0] == board[1] && board[2] == board[0]) return true;
    else if (board[3] == board[4] && board[5] == board[3]) return true;
    else if (board[6] == board[7] && board[8] == board[6]) return true;
    else if (board[0] == board[3] && board[6] == board[0]) return true;
    else if (board[1] == board[4] && board[7] == board[1]) return true;
    else if (board[2] == board[5] && board[8] == board[2]) return true;
    else if (board[0] == board[4] && board[8] == board[0]) return true;
    else if (board[2] == board[4] && board[6] == board[2]) return true;
    else return false;
}

void move(bool who)
{
    int spot;
    if (who == true)
        cout << "\nEnter your move, " << FirstPlName << ": ";
    else
        cout << "\nEnter your move, " << SecondPlName << ": ";
    cin >> spot;
    if (isLegal(spot))
    {
        if (who == true)
            board[spot - 1] = 'x';
        else
            board[spot - 1] = 'o';
    }
    else
        move(who);
    displayBoard();
}

bool isLegal(int spot)
{
    if (board[spot - 1] == 'x' || board[spot - 1] == 'o')
        return false;
    else
        return true;
}
```
https://cboard.cprogramming.com/cplusplus-programming/119565-super-tic-tac-toe.html
One reason is that you want to have $piddle->something() but don't want to mess up the PDL namespace (a worthy goal, indeed!). The other is that you wish to provide special handling of some functions or more information about the data the piddle contains. In the first case, you can do with

```perl
package BAR;
@ISA = qw/PDL/;
sub foo { my($this) = @_; fiddle; }

package main;
$a = PDL::pdl(BAR, 5);
$a->foo();
```

However, because a PDL object is an opaque reference to a C struct, it is not possible to extend the PDL class by e.g. extra data via subclassing. To circumvent this problem, PerlDL has built-in support to extend the PDL class via the has-a relation for blessed hashes. You can make the HAS-A relation behave like IS-A simply in that you assign the PDL object to the attribute named PDL and redefine the method initialize().

```perl
package FOO;
@FOO::ISA = qw(PDL);

sub initialize {
    my $class = shift;
    my $self = {
        creation_time => time(), # necessary extension :-)
        PDL => null,             # used to store PDL object
    };
    bless $self, $class;
}
```

All PDL constructors will call initialize() to make sure that your extensions are added to newly created objects. When a function with a signature like func( a(), [o]b() ) needs to create the output b, PDL will call $a->copy to create the output object. In the spirit of the Perl philosophy of making Easy Things Easy, this behavior enables PDL-subclassed objects to be written without having to overload the many simple PDL functions in this category. The file t/subclass4.t in the PDL distribution tests for this behavior. See that file for an example.
https://metacpan.org/pod/PDL::Objects
Re: Virtual function and multiple inheritance

- From: George <George@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Sat, 2 Feb 2008 04:40:00 -0800

Hi Bo Persson,

Great reply! I have made the sample simple, to show that non-overridden methods — wait, methods that are virtual but never called — do not appear in the vtable. I am very confused. Here is the code and the result; I am using Visual Studio 2008.

In class Goo, there are three virtual methods:

- virtual int func1() {return 0;}
- virtual int func2() {return 0;}
- virtual int myFunc() {return 1;}

But in the vtable of a Goo class object, you can only find func1 and func2. In class Zoo, there is only one virtual method, called zoo_func, and it is in the vtable of a Zoo class object. Why is myFunc missing from the vtable of Goo?

Here is my complete code. Easy program to understand and debug. :-)

```cpp
#include <iostream>
using namespace std;

class Foo {
    virtual int func1() = 0;
    virtual int func2() = 0;
    virtual int func3() { return 0; }
};

class Goo : Foo {
public:
    virtual int func1() { return 0; }
    virtual int func2() { return 0; }
    virtual int myFunc() { return 1; }
};

class Zoo {
public:
    virtual int zoo_func() { return 0; }
};

int main()
{
    Goo g;
    Zoo z;
    return 0;
}
```

regards,
George

"Bo Persson" wrote:

George wrote:

Thanks Alf,

Your reply is great! I think you mean,

1. containing two __vfptr is for the simple reason to maintain the same memory structure as the sub-object;

It could be used in place of either a Foo or a Goo object, for example when passed by reference to a function. To do this, it probably needs two different vtables, at least in this implementation. Note that the language as such doesn't say anything about how virtual functions are to be implemented, just how they should behave.

2. __vfptr for Foo contains Derived's overridden virtual methods for Foo, not Foo's own virtual method implementations, and __vfptr for Goo contains Derived's overridden virtual methods for Goo, not Goo's own virtual method implementations. Right?

I'll make a guess that these two vtables are specific to Derived.
If you were to define two separate objects of the Foo and Goo classes, I bet there will be one or two separate vtables for those. As Alf suggests, the implementation just might share (or overlay) the vtables for Foo and Derived, especially if Derived is the only class deriving from Foo. As the Foo and Goo subobjects cannot possibly have the same offset from the start of Derived, one vtable might be shared, but not both. That's likely why you found two vtable pointers in Derived.

Bo Persson

regards,
George
http://www.tech-archive.net/Archive/VC/microsoft.public.vc.language/2008-02/msg00046.html
03-15-2020 09:40 PM

Setup: I'm developing a bare-metal embedded system using a Zynq Ultrascale+ MPSoC that communicates with a host PC over TCP/IP using the LWIP stack provided in Vitis 2019.2. The host PC sends commands/queries to the Zynq, which replies.

Problem: The replies occasionally (after several seconds of 100Hz messaging) get stuck in the LWIP transmit buffers, even though I call tcp_write followed by tcp_output. I've confirmed that the reply data never gets onto the wire using Wireshark and a managed switch. After getting stuck, a subsequent TCP write will "unstick" the LWIP stack, and both the stuck and subsequent data show up at the host, but without that nudge it will stay stuck indefinitely, regardless of calls to tcp_output. tcp_sndbuf() shows that the LWIP stack has the data in its buffers, since it will report n bytes less buffer space than normal (e.g. 8188 vs 8192 if 4 bytes are stuck).

Clues: I've reduced this phenomenon to a minimal case, using a slightly modified version of the LWIP echo example Vitis generates (listed below). If I reply to the host in the recv_callback, this problem does not manifest. If I reply to the host from the main loop, this problem occurs. You can switch the code below between the two cases by switching the IN_RCV define on or off. I don't want to generate all replies in the receive callback because some commands require a long operation before replying, and I want to keep calling the LWIP tcp_fasttmr/tcp_slowtmr routines during those operations.

To get a stall to happen, I run the code below on the Zynq without IN_RCV defined, and the PC runs a simple Python script that sends ">xxSTATUS\n" messages (where xx counts up), and the Zynq replies with "<xxSTATUS\n" (transmitted from the main loop). If IN_RCV is defined, replies are transmitted from recv_callback(), and no stall ever happens.
```c
//#define IN_RCV

#include <stdio.h>
#include <string.h>
#include "lwip/err.h"
#include "lwip/tcp.h"
#include "xil_printf.h"

int rxlen;
char rxbuf[100];
struct tcp_pcb *rxpcb;

int transfer_data()
{
#ifndef IN_RCV
	int i;
	if (rxlen) {
		rxbuf[0] = '<';
		if (tcp_sndbuf(rxpcb) > rxlen) {
			for (i = 0; i < rxlen; i++)
				if (rxbuf[i] == '>')
					rxbuf[i] = '<';
			tcp_write(rxpcb, rxbuf, rxlen, 1);
			if (tcp_output(rxpcb) != ERR_OK)
				printf("tcp_output error\r\n");
		} else
			xil_printf("no space in tcp_sndbuf\n\r");
		rxlen = 0;
	}
#endif
	return 0;
}

void print_app_header()
{
#if (LWIP_IPV6==0)
	xil_printf("\n\r\n\r-----lwIP TCP echo server ------\n\r");
#else
	xil_printf("\n\r\n\r-----lwIPv6 TCP echo server ------\n\r");
#endif
	xil_printf("TCP packets sent to port 4242 will be echoed back\n\r");
}

err_t poll_callback(void *arg, struct tcp_pcb *tpcb)
{
	printf("sendbuf: %d\r\n", tcp_sndbuf(tpcb));
	return ERR_OK;
}

err_t recv_callback(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
	int i;
	char *s;

	if (!p) {
		tcp_close(tpcb);
		tcp_recv(tpcb, NULL);
		return ERR_OK;
	}

#ifdef IN_RCV
	if (tcp_sndbuf(tpcb) > p->len) {
		s = (char *)p->payload;
		for (i = 0; i < p->len; i++)
			if (s[i] == '>')
				s[i] = '<';
		err = tcp_write(tpcb, p->payload, p->len, 1);
	} else
		xil_printf("no space in tcp_sndbuf\n\r");
#else
	memcpy(rxbuf, p->payload, p->len);
	rxlen = p->len;
	rxpcb = tpcb;
#endif

	tcp_recved(tpcb, p->len);
	pbuf_free(p);
	return ERR_OK;
}

err_t accept_callback(void *arg, struct tcp_pcb *newpcb, err_t err)
{
	static int connection = 1;

	tcp_recv(newpcb, recv_callback);
	tcp_poll(newpcb, poll_callback, 1);
	tcp_arg(newpcb, (void*)(UINTPTR)connection);
	connection++;
	return ERR_OK;
}

int start_application()
{
	struct tcp_pcb *pcb;
	err_t err;
	unsigned port = 4242;

	rxlen = 0;
	pcb = tcp_new_ip_type(IPADDR_TYPE_ANY);
	err = tcp_bind(pcb, IP_ANY_TYPE, port);
	tcp_arg(pcb, NULL);
	pcb = tcp_listen(pcb);
	tcp_accept(pcb, accept_callback);
	xil_printf("TCP echo server started @ port %d\n\r", port);
	return 0;
}
```

Guesses: My first thought was that calling tcp_write from the main loop was somehow in the wrong context. Putting a breakpoint in recv_callback shows that it is indeed called from the main loop's context, and not from some interrupt, so I believe calling tcp_write from the main loop is legal. The echo example even has a transfer_data() routine called from the main loop.

Request: Has anyone else seen similar behavior, and found a solution? Thanks!

-Greg

03-17-2020 06:01 PM

Update: I built the same echo server on the ZCU102, and saw the same bug when using Vitis 2019.2. However, the bug goes away for both the ZCU102 and my hardware when compiled with Vivado 2017.3. I'm not sure if this is a bug that got introduced in the 2019.2 LWIP code or the emacps driver. I'll stick with 2017.3 for now, but this will be worth tracking down and correcting in the future.

-Greg

03-19-2020 03:50 AM

Hi Greg,

If you have 2018.1, I would suggest checking with this version to see if you still see this issue. The reason is that in 2017.3 we have lwip141, and starting from 2018.1 we have implemented the echo server targeting lwip 2.0.2.

Best Regards
Shabbir

06-05-2020 11:51 AM

Another update: I've spent a while debugging this issue in 2017.3. The root problem is that the emacps driver does not check to see if a DMA is already in progress before trying to start a new DMA, and fails silently if they overlap. So, if my code calls tcp_output() from the main loop and LWIP had just started a packet (even just an ACK), there's a chance that the second send gets swallowed.

An ugly hack to fix the problem is to create a global flag (I called it "tx_in_progress") that gets set when a DMA is started (at the bottom of emacps_sgsend() in xemacpsif_dma.c), and cleared when the DMA completes (in XEmacPs_IntrHandler() in xemacps_intr.c). I then check tx_in_progress, and don't call tcp_output() if it's set, and the stalling issue goes away.
A clean fix would require correcting the Xilinx driver code, but I don't know if having it return an error or waiting and retrying would be best.

--Greg

06-10-2020 08:06 AM

Hi @gbredthauer,

Thanks for the follow-up; this is great info. Is there a test case and steps you can share so I could try to reproduce the issue on a ZCU102 or ZCU106? If so, I can go ahead and file a change request to clean up the Xilinx driver.

07-01-2020 11:19 AM

I've done additional testing on the ZCU102. I used Vivado 2019.1, so I could use the example design from XAPP1305 to use an SFP network connection via AXI Ethernet instead of the PS GEM.

SFP test: The script ran overnight, and exchanged 27M messages without an error.

GEM test: The script will typically run for 10-60 seconds, and then give an error (as a response from the ZCU102 was not received).

I think this is a pretty conclusive apples-to-apples test that shows there's a bug in the GEM driver or hardware. Short term, I'll respin my board to route the RGMII interface to the PL instead of the PS so I can bypass the GEM.

--Greg
https://forums.xilinx.com/t5/Ethernet/LWIP-transmit-stalls-on-Zynq-Ultrascale/m-p/1085557
Laerte explains how.

Why should a DBA learn PowerShell? It is all about solutions. In this article, I want to explain how one can integrate PowerShell, TSQL, SQL Jobs, and SQL WMI alerts into a complete solution. I will go further into this topic in a new project written along with three great friends. Stay tuned, as we will soon have a complete guide about day-to-day solutions for the DBA, using PowerShell and SQL Server.

When you read about using PowerShell and SQL Server, you are usually learning about the way that you use PowerShell to access SQL Server. Sometimes, instead, you'll want to use PowerShell directly from SQL Server to create solutions. You may want to do it from TSQL, getting data back in a form that can then be inserted into a table, or execute it on the server from SSMS. You might want to run PowerShell scripts from the SQL Server Agent, or to set up sophisticated alerts using WMI that then execute jobs that are written in PowerShell. I'll be showing you how to do all this; but let's take things in easy stages.

Did you know that you can run your PowerShell cmdlets and functions, along with their parameters, very simply from the Management Studio (SSMS) Query Editor, executing them on the server? Yeah, by using xp_cmdshell.

Before I start showing you how to use xp_cmdshell to run PowerShell cmdlets from within TSQL, I must make you aware that, by enabling xp_cmdshell on a server, you're creating potential security issues. There are good reasons why xp_cmdshell is disabled by default. When using xp_cmdshell to run PowerShell in SSMS, you'll just need to remember three things:

Once you've enabled xp_cmdshell, and you have the necessary permissions to use it, PowerShell can give you valuable information easily. I'll show you a couple of examples: getting disk space, and seeing what services are running. We'll start simply by listing all the services on the server.
xp_cmdshell 'PowerShell.exe -noprofile Get-Service'

You have the full list of services, whatever their status. What if you wanted only those that had stopped? You'll need to combine two cmdlets in a pipeline, so it is now time to run the command through a command-line parameter, the -Command parameter:

xp_cmdshell 'PowerShell.exe -noprofile -command "Get-Service | where {$_.status -eq ''Stopped''}"'

And with only a small change we can see all the SQL services that have stopped.

xp_cmdshell 'PowerShell.exe -noprofile -command "Get-Service -name *sql* | where {$_.status -eq ''Stopped''}"'

But you can also query a server remotely once it has been configured:

xp_cmdshell 'PowerShell.exe -noprofile -command "Get-Service -computername ObiWan -name *sql* | where {$_.status -eq ''Stopped''}"'

Here, we are using a Get-Diskspace function (@sqlvariant) for the host of the SQL Server instance. This requires a function that you can download at the top of the article, and which will need to be placed on the server.

xp_cmdshell 'PowerShell.exe -command "get-diskspace ."'

To get the disk space for a different, remote, server, for example, use:

xp_cmdshell 'PowerShell.exe -command "get-diskspace -servername ObiWan"'

To get the disk space for all the servers listed in a file called Servers.txt:

xp_cmdshell 'PowerShell.exe -command "get-diskspace -servername (get-content c:\temp\servers.txt)"'

Alternatively, to get just the percentage of free disk space, you can also use Get-Counter and \LogicalDisk(*)\% Free Space to get all counter instances. You can do this locally, for the host of your instance...

xp_cmdshell 'PowerShell.exe -noprofile -command "Get-counter -counter ''\LogicalDisk(*)\% Free Space'' | select -expand countersamples"'

...or for a remote server:

xp_cmdshell 'PowerShell.exe -noprofile -command "Get-counter -computername ObiWan -counter ''\LogicalDisk(*)\% Free Space'' | select -expand countersamples"'

So can you do more than this and run scripts the same way?
Well, no, because there is a limitation. You can't use the " (double-quote) character, which is essential for PowerShell, because it is already used in the command-line parameter to delimit the script fragment being executed. To run a full script, you'll need to save the script as a file and execute that.

If you run a PowerShell cmdlet in xp_cmdshell, how do you get the data back into SQL as tabular data? We've shown you the output, but it is not immediately obvious how to read it. xp_cmdshell actually returns a table consisting of a single column called 'output'. You can insert it into a table using INSERT..EXEC, but INSERT..EXEC has certain restrictions: you cannot nest them, and it cannot contain an OUTPUT clause. However, we can use this method to return an XML representation of the PowerShell objects being returned. All we then have to do is to shred it into relational form and create a table. Taking a more refined version of the previous PowerShell command, this will give the result:

Disk                 % Free Space
----                 ------------
c:                   91.38
d:                   79.61
_total               83.53

(3 row(s) affected)

Which any DBA will recognise as data! What have we done here? We have chosen to create an XML representation of the report, which was then returned to SQL Server line by line. We had to re-assemble it into an XML file and shred it in the way that we needed. This is laborious for an ad-hoc request, but it makes a lot of sense for a scheduled monitoring job.

PowerShell can be run from the scheduler to do regular jobs such as ETL. Although this generally takes little more effort than testing it in the PowerShell ISE, just sometimes PowerShell gives you a culture shock. Sometimes things happen that you don't expect, even though they make sense when you think about it later. For example, I recently developed a script that created a lot of PowerShell jobs. For some reason, when I ran it in the PowerShell command-line console, it all worked fine.
When I then ran it on the SQL Server scheduler, using a CMDExec job type calling PowerShell, nothing happened: and there was no error message in the job history. The script invoked a process that retrieved all the Windows updates applied to a list of servers in the past 24 hours, and saved the results into a SQL Server table on a repository server. It was using runspaces, though what I'll describe will be useful for anyone who is using background jobs. I was using PowerShell to create jobs that ran in parallel, one for each server I was getting information from. I was getting a list of servers from a file called 'c:\temp\Servers.txt' and, for each server name, I was starting a background job on the local computer. This job then obtained the Windows update information for the server, which was then filtered by a Where cmdlet for only those within the past day. The results were reported back to a tbl_WindowsPatches table in a SQL Server repository. The code is

$_ }

You can download the Get-WindowsUpdates, Out-Datatable and Write-DataTable functions at the top of this article in the Functions.psm1 file. In this case I have a module called Functions that joins all these functions. As I am using PowerShell jobs, and they run in another runspace, these functions are not visible there. So I need to explicitly load them in, via the Functions module, in the line

$_ -InitializationScript {Ipmo Functions -Force -DisableNameChecking}

Why should it have worked in the PowerShell console, but not when run from the SQL Server Agent? My first test was to create a .bat file and run:

PowerShell.exe "C:\Temp\Automation\GetWindowsUpdates.ps1"

OK. So what is happening? Nothing was stored. My script was creating the jobs, each of which was running in its own independent runspace, and then it was closing the PowerShell session. The script seemed to run in an open command console, but not when the console was closed immediately after the script was run.
Were these separate jobs being closed prematurely when the parent session was closed? To check that this was the problem, I added the parameter -noexit to the line in the .bat file that executed PowerShell, so as to prevent the closure of the session...

PowerShell.exe -noexit "C:\Temp\Automation\GetWindowsUpdates.ps1"

...and it worked. Why? My script was creating the jobs, each of which was running in its own independent runspace, and then closing the PowerShell session. The PowerShell jobs hadn't completed when the main session was closed, so nothing was returned to the table, but no error was raised. Why was the session closed before the jobs had completed? It was because the PowerShell jobs run in another runspace, but within the same session that called them. The session must not be closed until all the PowerShell jobs finish. What did I need to do? All I had to do was wait until all the PowerShell jobs were finished; it is as simple as that. I'd forgotten to add a 'wait-job *'!

Get-Job | Wait-Job | Out-Null
Remove-Job -State Completed

The final code ends with:

$_ }
Get-Job | Wait-Job | Out-Null
Remove-Job -State Completed

I saved this code into C:\Temp\Automation\GetWindowsUpdates.ps1 on the server. The command to run as a CMDExec SQL Agent step is:

PowerShell.exe "C:\Temp\Automation\GetWindowsUpdates.ps1"

Although I've shown you how to get data from a PowerShell job that is running in a batch, there are times when all you need is a record of what a script did, so you can check afterwards. If, for example, you have a SQL Agent PowerShell job that deletes old files in a log shipping process, and you want to output a list of the files that were removed into the job history, you can just use Write-Output. This script shows you what happens:

$FilesRemoved = 'Files Deleted : '
gci "c:\test\*.*" | foreach {
    $FilesRemoved += "Name: {0}, " -f $_.name
    Remove-Item $_.fullname
}
write-output $FilesRemoved

If you then look at your job history, you'll see the list.
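The Wait-Job pitfall above is not specific to PowerShell: any parent that spawns background work and exits immediately can strand that work before it reports its results. As a language-neutral illustration, here is a minimal Python analogue of the same bug and fix, with threads standing in for PowerShell background jobs:

```python
import threading
import time

results = []

def background_job(name):
    # Stands in for one per-server PowerShell background job.
    time.sleep(0.1)
    results.append(name)

jobs = [threading.Thread(target=background_job, args=(n,))
        for n in ("server1", "server2")]
for job in jobs:
    job.start()

# The equivalent of "Get-Job | Wait-Job": without these joins, the
# parent could finish while `results` is still empty.
for job in jobs:
    job.join()
```

The principle is the same in both languages: the session (or process) that owns the background work must block until that work completes.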
The trick to getting this to work with a SQL Agent PowerShell job is that instead of using

$FilesRemoved += "Name: $($_.name)"

you need to use the format specifier:

"Name: {0}, " -f $_.name

This solution uses PowerShell scripts, of course, plus SQL Server jobs and a SQL WMI alert. Imagine that you have a download folder, on a SQL Server host, that has several files downloaded automatically by FTP. It is called FTPDownload. A file with a specific name is downloaded once a day. The contents of this file must be loaded into a staging table in another SQL Server.

Firstly, let's create the WMI Query Language (WQL) query to monitor the specific file in a specific folder: for us the folder is c:\FTPDownload and the file is FileImport.CSV.

@wmi_query=N'Select * from __InstanceCreationEvent WITHIN 300 WHERE TargetInstance ISA ''CIM_DataFile'' AND TargetInstance.Drive=''C:'' and TargetInstance.path=''\\ftpdownload\\'' and TargetInstance.Name = ''c:\\ftpdownload\\FileImport.csv''',

For a complete explanation of WMI and WQL, I suggest you read the excellent ebook from my good friend and PowerShell Jedi Ravikanth Chaganti, WMI Query Language via PowerShell.

Then let's create the job, called IMPORTCSV, with PowerShell code called importCSV.ps1 in c:\FTPScripts (you'll need to get these cmdlets from SQLPSX):

try {
    $DataImport = Import-Csv -Path "c:\FTPDownLoad\FileImport.csv" -ErrorAction Stop
    $DataTable = Out-DataTable -InputObject $DataImport
    Write-DataTable -ServerInstance YOURSERVER -Database YOURDATABASE -TableName YOURTABLE -Data $DataTable
    $Msg = "FileImport.csv successfully imported"
    Rename-Item -Path "c:\FTPDownLoad\FileImport.csv" -NewName "c:\FTPDownLoad\FileImport_$(Get-date -format 'yyyyMMdd').csv"
    Write-Output $Msg
} catch {
    $ex = $_.Exception
    Write-Error "$ex.Message"
    throw "Failure"
}

Now it is time to create the SQL WMI alert to monitor the arrival of this file in the FTP folder, based on our WQL. Now let's set the response to the alert to execute the IMPORTCSV job. Here is
the code that creates the alert, if you'd rather do it via TSQL and you already have the ID of the job that you wish to execute when the alert is fired.

USE [msdb]
GO
/****** Object: Alert [CheckCSVFile] Script Date: 8/9/2012 8:47:15 PM ******/
EXEC msdb.dbo.sp_delete_alert @name=N'CheckCSVFile'
GO
/****** Object: Alert [CheckCSVFile] Script Date: 8/9/2012 8:47:15 PM ******/
EXEC msdb.dbo.sp_add_alert @name=N'CheckCSVFile',
    @message_id=0,
    @severity=0,
    @enabled=1,
    @delay_between_responses=0,
    @include_event_description_in=0,
    @category_name=N'[Uncategorized]',
    @wmi_namespace=N'\\.\root\CIMV2',
    @wmi_query=N'Select * from __InstanceCreationEvent WITHIN 300 WHERE TargetInstance ISA ''CIM_DataFile'' AND TargetInstance.Drive=''C:'' and TargetInstance.path=''\\ftpdownload\\'' and TargetInstance.Name = ''c:\\ftpdownload\\FileImport.csv''',
    @job_id=N'990ef94a-a96d-41f2-809d-323c5e60d375'
GO

And all is done. Every time a file called FileImport.csv is created in the folder c:\FTPDownload then, up to 5 minutes later (the reason for the 'WITHIN 300' clause in the WQL), the alert is fired and the job is run. Why 300? Just to allow time for the file to arrive and be written to disk.

If you have some problem with the file and the routine generates an error, the job will finish with no errors even if you are using Try-Catch. This is because the exit code is 0. The error will be recorded in the job history if you look for it, but the job really should finish with an error being flagged to SQL Server Agent. How do you solve this? Just add the line 'throw "Failure"' into the catch block, which changes the exit code to 1:

} catch {
    $ex = $_.Exception
    Write-Error "$ex.Message"
    throw "Failure"
}

and the job will finish with an error. Now that you've got that running, you can send an email, either from PowerShell or from the SQL job, reporting whether the job was successful. It is up to you! For now, that is it, folks! I hope you liked the Posh DBA series. Some cool stuff is coming.
As usual, I cannot forget the awesome Jedi who are always helping this young Padawan and, of course, everyone who needs help: my good friends Ravikanth Chaganti, Shay Levy, my editor Andrew Clarke and the mysterious Sir Phil Factor (thanks for the XML part, Phil), Sir Bob Beauchemin, my brother Mark Broadbent, and all the people who kindly give their time and knowledge to share.
https://www.simple-talk.com/content/print.aspx?article=1561
[[trivial_abi]] 101

Finally, a blog post on [[trivial_abi]]! This is a brand-new feature in Clang trunk, new as of about February 2018. It is a vendor extension to the C++ language — it is not standard C++, it isn't supported by GCC trunk, and there is no active WG21 proposal to add it to the standard C++ language, as far as I know.

Full disclosure: I am totally not involved in the implementation of this feature. I'm just watching its patches go by on the cfe-commits mailing list and applauding quietly to myself. But this is such a cool feature that I think everyone should know about it.

Okay, first of all, since this is a non-standard attribute, Clang trunk doesn't actually support it under the standard attribute spelling [[trivial_abi]]. Instead, you must spell it old-style as one of the following:

__attribute__((trivial_abi))
__attribute__((__trivial_abi__))
[[clang::trivial_abi]]

Also, being an attribute, the compiler will be super picky about where you put it — and passive-aggressively quiet if you accidentally put it in the wrong place (because unrecognized attributes are supposed to be quietly ignored). This is one of those "it's a feature, not a bug!" situations. So the proper syntax, all in one place, is:

#define TRIVIAL_ABI __attribute__((trivial_abi))

class TRIVIAL_ABI Widget {
    // ...
};

What is the problem being solved?

Remember my blog post from 2018-04-17 where I showed two versions of a class (there called Integer):

struct Foo {
    int value;
    ~Foo() = default;  // trivial
};

struct Bar {
    int value;
    ~Bar() {}  // deliberately non-trivial
};

In that post's particular code snippet, the compiler produced worse codegen for Foo than it did for Bar. This was worth blogging about because it was surprising. Programmers intuitively expect that the "trivial" code will do better than the "non-trivial" code. In most situations, this is true.
Specifically, this is true when we go to do a function call or return:

template<class T>
T incr(T obj) {
    obj.value += 1;
    return obj;
}

incr<Foo> compiles into the following code:

leal 1(%rdi), %eax
retq

(leal is x86-speak for "add".) We can see that our 4-byte obj will be passed in to incr<Foo> in the %edi register; then we'll add 1 to its value and return it in %eax. Four bytes in, four bytes out, easy peasy.

Now look at incr<Bar> (the case with the non-trivial destructor):

movl (%rsi), %eax
addl $1, %eax
movl %eax, (%rsi)
movl %eax, (%rdi)
movq %rdi, %rax
retq

Here, obj is not being passed in a register, even though it's the same 4 bytes with all the same semantics. Here, obj is being passed and returned by address. So our caller has set up some space for the return value and given us a pointer to that space in %rdi; and our caller has given us a pointer to the value of obj in the next argument register %rsi. We fetch the value from (%rsi), add 1 to it, store it back into (%rsi) (so as to update the value of obj itself), and then (trivially) copy the 4 bytes of obj into the return slot pointed to by %rdi. Finally, we copy the caller's original pointer %rdi into %rax, because the x86-64 ABI document (page 22) says we have to.

The reason Bar behaves so differently from Foo is that Bar has a non-trivial destructor, and the x86-64 ABI document (page 19) says specifically:

If a C++ object has either a non-trivial copy constructor or a non-trivial destructor, it is passed by invisible reference (the object is replaced in the parameter list by a pointer [...]).

The later Itanium C++ ABI document defines a term of art:

If the parameter type is non-trivial for the purposes of calls, the caller must allocate space for a temporary and pass that temporary by reference. [...] A type is considered non-trivial for the purposes of calls if:

- it has a non-trivial copy constructor, move constructor, or destructor, or
- all of its copy and move constructors are deleted.
So that explains it: Bar gets worse codegen because it is passed by invisible reference. It is passed by invisible reference because of the unfortunate conjunction of two independent premises:

- the ABI document says that things with non-trivial destructors are passed by invisible reference, and
- Bar has a non-trivial destructor.

By the way, this is a classical syllogism: the first bullet point above is the major premise, and the second is the minor premise. The conclusion is "Bar is passed by invisible reference." Suppose someone presents us with the syllogism

- All men are mortal.
- Socrates is a man.
- Therefore Socrates is mortal.

If we wish to quibble with the conclusion "Socrates is mortal", we must rebut one of the premises: either rebut the major premise (maybe some men aren't mortal) or rebut the minor premise (maybe Socrates isn't a man). To get Bar to be passed in registers (just like Foo), we must rebut one or the other of our two premises. The standard-C++ way to do it is simply to give Bar a trivial destructor, negating the minor premise. But there is another way!

How [[trivial_abi]] solves the problem

Clang's new trivial_abi attribute negates the major premise above. Clang extends the ABI document to say essentially the following:

If the parameter type is non-trivial for the purposes of calls, the caller must allocate space for a temporary and pass that temporary by reference. [...] A type is considered non-trivial for the purposes of calls if it has not been marked [[trivial_abi]] AND:

- it has a non-trivial copy constructor, move constructor, or destructor, or
- all of its copy and move constructors are deleted.

That is, even a class type with a non-trivial move constructor or destructor will be considered trivial for the purposes of calls, if it has been marked by the programmer as [[trivial_abi]].
So now (using Clang trunk) we can go back and write this:

#define TRIVIAL_ABI __attribute__((trivial_abi))

struct TRIVIAL_ABI Baz {
    int value;
    ~Baz() {}  // deliberately non-trivial
};

and compile incr<Baz>, and we get the same code as incr<Foo>!

Caveat #1: [[trivial_abi]] is sometimes a no-op

I would hope that we could make "trivial-for-purposes-of-calls" wrappers around standard library types like this:

template<class T, class D>
struct TRIVIAL_ABI trivial_unique_ptr : std::unique_ptr<T, D> {
    using std::unique_ptr<T, D>::unique_ptr;
};

Unfortunately, this doesn't work. If your class has any base classes or non-static data members which are themselves "non-trivial for purposes of calls", then Clang's extension as currently written will make your class sort of "irreversibly non-trivial" — the attribute will have no effect. (It will not be diagnosed. This means you can use [[trivial_abi]] on a class template such as optional and have it be "conditionally trivial", which is sometimes a useful feature. The downside, of course, is that you might mark a class trivial and then find out later that the compiler was giving you the silent treatment.)

The attribute will also be silently ignored if your class has virtual bases or virtual member functions. In these cases it probably won't even fit in a register anyway, and I don't know what you're doing passing it around by value, but, just so you know.

So, as far as I know, the only ways to use TRIVIAL_ABI on "standard utility types" such as optional<T>, unique_ptr<T>, and shared_ptr<T> are

- implement them from scratch yourself and apply the attribute, or
- break into your local libc++ and apply the attribute by hand there.

(In the open-source world, these are essentially the same thing anyway.)

Caveat #2: Destructor responsibility

In our Foo/Bar example, the class had a no-op destructor. Suppose we gave our class a really non-trivial destructor?
struct Up1 {
    int value;
    Up1(Up1&& u) : value(u.value) { u.value = 0; }
    ~Up1() { puts("destroyed"); }
};

This should look familiar; it's unique_ptr<int> stripped to its bare essentials, and with printf standing in for delete. Without TRIVIAL_ABI, incr<Up1> looks just like incr<Bar>:

movl (%rsi), %eax
addl $1, %eax
movl %eax, (%rdi)
movl $0, (%rsi)
movq %rdi, %rax
retq

With TRIVIAL_ABI added, incr<Up2> looks much bigger and scarier!

pushq %rbx
leal 1(%rdi), %ebx
movl $.L.str, %edi
callq puts
movl %ebx, %eax
popq %rbx
retq

Under the traditional calling convention, types with non-trivial destructors are always passed by invisible reference, which means that the callee (incr in our case) always receives a pointer to a parameter object that it does not own. The caller owns the parameter object. This is what makes copy elision work! When a type with [[trivial_abi]] is passed in registers, we are essentially making a copy of the parameter object. There is only one return register on x86-64 (handwave), so the callee has no way to give that object back to us when it's finished. The callee must take ownership of the parameter object we gave it! Which means that the callee must call the destructor of the parameter object when it's finished with it.

In our previous Foo/Bar/Baz examples, this destructor call was happening, but it was a no-op, so we didn't notice. Now in incr<Up2> we see the additional code that is produced by a callee-side destructor. It is conceivable that this extra code could add up, in certain use-cases. However, counterpoint: this destructor call is not appearing out of nowhere! It is being called in incr because it is not being called in incr's caller. So in general the costs and benefits might be expected to balance out.

Caveat #3: Destructor ordering

The destructor for the trivial-abi parameter will be called by the callee, not the caller (Caveat #2).
Richard Smith points out that this means it will be called out of order with respect to the other parameters' destructors.

struct TRIVIAL_ABI alpha {
    alpha() { puts("alpha constructed"); }
    ~alpha() { puts("alpha destroyed"); }
};

struct beta {
    beta() { puts("beta constructed"); }
    ~beta() { puts("beta destroyed"); }
};

void foo(alpha, beta) {}

int main() { foo(alpha{}, beta{}); }

This code prints

alpha constructed
beta constructed
alpha destroyed
beta destroyed

when TRIVIAL_ABI is defined as [[clang::trivial_abi]], and prints

alpha constructed
beta constructed
beta destroyed
alpha destroyed

when TRIVIAL_ABI is defined away. Only the latter — with destruction in reverse order of construction — is C++-standard-conforming.

Relation to "trivially relocatable" / "move-relocates"

None... well, some? As you can see, there is no requirement that a [[trivial_abi]] class type should have any particular semantics for its move constructor, its destructor, or its default constructor. Any given class type will likely be trivially relocatable, simply because most class types are trivially relocatable by accident. We can easily design an offset_ptr which is super duper non-trivially relocatable:

template<class T>
class TRIVIAL_ABI offset_ptr {
    intptr_t value_;
public:
    offset_ptr(T *p) : value_((const char*)p - (const char*)this) {}
    offset_ptr(const offset_ptr& rhs) : value_((const char*)rhs.get() - (const char*)this) {}
    T *get() const { return (T *)((const char *)this + value_); }
    offset_ptr& operator=(const offset_ptr& rhs) {
        value_ = ((const char*)rhs.get() - (const char*)this);
        return *this;
    }
    offset_ptr& operator+=(int diff) {
        value_ += (diff * sizeof (T));
        return *this;
    }
};

int main() {
    offset_ptr<int> top = &a[4];
    top = incr(top);
    assert(top.get() == &a[5]);
}

With TRIVIAL_ABI defined, Clang trunk passes this test at -O0 or -O1, but at -O2 (i.e., as soon as it tries to inline the calls to trivial_offset_ptr::operator+= and the copy constructor) it fails the assertion.
So there's a caveat here too. If your type is doing something crazy with the this pointer, you probably don't want to be passing it in registers. Filed 37319, essentially a documentation request.

In this case, it turns out there's just no way to make the code do what the programmer intends. We're saying that the value of value_ should depend on the value of the this pointer; but at the caller–callee boundary, the object is in a register and there is no this pointer! So when the callee spills it back to memory and gives it a this pointer again, how should the callee compute the correct value to put into value_? Maybe the better question is, how does it even work at -O0? It shouldn't work at all.

So anyway, if you're going to use [[trivial_abi]], you must avoid having member functions (not just special member functions, but any member functions) that significantly depend on the object's own address (for some hand-wavy value of "significantly").

The intuition here is that when a thing is marked [[trivial_abi]], then any time you expect a copy you might actually get a copy plus memcpy: the "put it in a register and then take it back out" operation is essentially tantamount to memcpy. And similarly, when you expect a move you might actually get a move plus memcpy.

Whereas, when a type is "trivially relocatable" (according to my definition from this C++Now talk), then any time you expect a copy and destroy you might actually get a memcpy. And similarly, when you expect a move and destroy you might actually get a memcpy.

You actually lose calls to special member functions when you're talking about "trivial relocation"; whereas with the Clang [[trivial_abi]] attribute you never lose calls. You just get (as if) memcpy in addition to the calls you expected. This (as if) memcpy is the price you pay for a faster, register-based calling convention.
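The this-dependence failure is easy to reproduce without any attribute or special compiler flag: byte-copying a self-referential object (which is morally what the register round-trip does) silently breaks its invariant, while running the copy constructor repairs it. Here is a self-contained sketch using a simplified offset pointer; it is illustrative only, not the blog's exact class:

```cpp
#include <cstdint>
#include <cstring>

// Simplified self-referential pointer: it stores the distance from
// `this` to the pointee, so its raw bytes only make sense at one address.
struct OffsetPtr {
    std::intptr_t value_;
    explicit OffsetPtr(int *p)
        : value_(reinterpret_cast<char *>(p) - reinterpret_cast<char *>(this)) {}
    OffsetPtr(const OffsetPtr &rhs)
        : value_(reinterpret_cast<char *>(rhs.get()) - reinterpret_cast<char *>(this)) {}
    int *get() const {
        return reinterpret_cast<int *>(
            reinterpret_cast<char *>(const_cast<OffsetPtr *>(this)) + value_);
    }
};

// Returns true if a raw byte copy (the moral equivalent of the register
// round-trip) broke the invariant that the copy constructor preserves.
bool byte_copy_breaks_invariant() {
    int x = 42;
    OffsetPtr a(&x);

    OffsetPtr b = a;                    // copy constructor re-anchors value_
    bool ctor_copy_ok = (b.get() == &x);

    alignas(OffsetPtr) unsigned char raw[sizeof(OffsetPtr)];
    std::memcpy(raw, &a, sizeof a);     // bytes moved, constructor not run
    OffsetPtr *c = reinterpret_cast<OffsetPtr *>(raw);
    bool byte_copy_ok = (c->get() == &x);  // false: offset decoded at a new address

    return ctor_copy_ok && !byte_copy_ok;
}
```

Marking such a type [[trivial_abi]] invites exactly this byte-level round trip at every call boundary, which is why address-dependent types and the attribute don't mix.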
https://quuxplusone.github.io/blog/2018/05/02/trivial-abi-101/
TELOS MAINNET HAS LAUNCHED!

On December 12th, 2018 at 17:46 UTC, the six Appointed Block Producers (ABPs) of the Telos Launch Group executed the previously published launch script for the Telos Mainnet, bringing it into existence. Like a rocket launch, there are distinct stages to the launch process: injection, validation, pre-activation, and activation.

The first transaction on the Telos Mainnet occurred 19 seconds later at block 39. This was the beginning of the injection stage, when all 164,645 of the accounts and their keys and balances were written to the blockchain from the genesis snapshot file. Once all the accounts were created, the validation stage began as the ABPs froze the chain at block 2,516 so that no transactions could change any of the injected values, and each performed their own validation script to ensure that all accounts, keys, and balances had been correctly recorded on the Telos Mainnet. When all confirmed that the accounts had been correctly written, they published the public connection information so that independent validators can publicly test and assure the community that the chain is starting off with correct values for everyone. This stage is scheduled to end on December 13th at 20:00 UTC.

The pre-activation stage begins at this point, when the chain will once again start producing blocks. At this point TLOS holders can perform almost any action on the blockchain. That means they can create new accounts, bid on namespaces, transfer the small amounts of TLOS that are already liquid, and perhaps most importantly, vote for block producers or assign their votes to proxies to vote for them. This will decide who the initial BPs running the network will be once it activates. The only thing that most TLOS holders can't do is unstake their TLOS to make them liquid. Every Telos account starts out with almost all of its tokens staked to NET and CPU, and these cannot be moved until they are liquid.
To make TLOS liquid, the owner needs to unstake them. This starts a 3 day process of unstaking, after which the tokens are liquid and can be moved. Unstaking is not permitted until the chain activates at block 1,000,000. In the middle of the pre-activation stage will be the worldwide online conference Telos World 2018 on December 14th beginning at 15:00 UTC. Learn more about it at: On December 19th at 14:33 UTC, the Telos Mainnet is expected to reach block 1,000,000 when it will activate. At that point users may unstake their TLOS and perform any other action on the chain and the BPs who have the most votes at that time will begin running the chain and earning block rewards. It will take three days for unstaked TLOS tokens to become liquid and be transferred. This includes transferring tokens to exchanges. So the earliest that exchanges can be expected to have TLOS tokens to trade is December 22nd at 14:33 UTC. Refer to the article “What happens when Telos votes to launch?” for an extended launch timeline: Please check in on the Telos Telegram and Twitter for more information. Join the Telos conversation and get more info! Telegram: Twitter: YouTube: Reddit: Discord: Instagram:
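As a quick recap of the timeline arithmetic above (activation expected at block 1,000,000 on December 19th at 14:33 UTC, plus the three-day unstaking period), here is a trivial sanity check, purely for illustration:

```python
from datetime import datetime, timedelta

# Expected activation: block 1,000,000 on December 19th, 2018 at 14:33 UTC.
activation = datetime(2018, 12, 19, 14, 33)

# Unstaking takes three days, so this is the earliest moment that
# unstaked TLOS can be liquid and transferred to an exchange.
earliest_liquid = activation + timedelta(days=3)
```

This lands on December 22nd at 14:33 UTC, matching the date given above for the earliest exchange availability.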
https://medium.com/telos-foundation/telos-mainnet-has-launched-cba1efabdc3b
Back to article In the first installment of this article we looked at getting Google's Scripting Layer for Android (SL4A) downloaded and installed on your Android phone. We examined the basics of writing scripts using Python and even included a short script to set a few of the profile settings. This time we'll take a look at some of the sample scripts found on the SL4A website and talk about how you might write a script of your own. The first thing you should do after installing the SL4A application on your phone is to add a shortcut to the scripts folder on one of the home screens. On the HTC EVO you can do this by pressing on an empty space and hold until the Add to Home dialog appears. From here you can add a Widget, App, Shortcut or Folder. Selecting Folder presents a list of available folders and an option to create a New Folder. Selecting the Scripts folder will add a folder icon to the current home screen. Once that's done, you'll have single-click access to any of the scripts stored on your SD card in the sl4a/scripts directory. It seems as though there is a universal rule when teaching a new programming language that you must create a program to output the phrase "Hello World". SL4A holds to this premise and includes a link to a version of this program on the Wiki Tutorials page. If you take a look at some of the examples on that page, you'll see a number of items with potential uses like Twitter clients, talking weather forecast, a silent mode trigger to set the phone to vibrate during sleeping hours and more. We'll take a look at a few of those samples to help explore the possibilities. One of the really useful examples in the tutorial list is under the title "How to get your own IP address". This example creates a simple HTTP server on your phone that you can connect to from your host machine. If you have WiFi enabled on the phone and it's on the same network as your host machine, you'll be able to use a web browser to view the files on your SD card. 
This can come in really handy when you're out and about and you'd like to quickly move a picture from your phone to, say, your netbook. The source code for this looks like:

import SimpleHTTPServer
from os import chdir
chdir('/sdcard/dcim/Camera')
SimpleHTTPServer.test()

Then you just hit the IP address of your phone from a browser and download images using a right mouse click and 'Save As'.
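The snippet above uses Python 2's SimpleHTTPServer module, which is what SL4A shipped at the time. On modern Python 3 the same functionality lives in http.server; a rough equivalent that serves a directory of your choosing might look like this (the directory path and default port are just examples):

```python
import http.server
import os
import socketserver
import threading

def serve_directory(path, port=8000):
    """Serve the files under `path` over HTTP, like SimpleHTTPServer.test()."""
    os.chdir(path)
    handler = http.server.SimpleHTTPRequestHandler
    httpd = socketserver.TCPServer(("", port), handler)
    # Serve in a background thread so the caller keeps control;
    # pass port=0 to let the OS pick a free port.
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd
```

Browsing to http://<phone-ip>:8000/ then lists the directory contents for download, exactly as the Python 2 version does.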
http://www.linuxplanet.com/linuxplanet/print/7166
Sanjay Radia commented on HDFS-3077:
------------------------------------

* JournalNode states - I am a little confused about how the state of the JN is captured in the code.
** The "inWritingState" seems to be captured by (curSegment != null) - this is used fairly often; let's hide this behind a method isJournalSegmentOpen(...).
** The journal state should be more concrete: Init, Writing, Recovering (perhaps more than one recovering state).

* JournalNodes joining a pack - Can you please explain the following two cases:
** a JournalNode (previously down) that is joining a set of other JNs, especially when the others are in writing mode;
** a new JournalNode joining the pack.

* Exceptions
** Shouldn't the journal operation (i.e. write) throw an EpochException/FencedException? This exception is critical so that the client side does not retry the operation. (Perhaps this can be an EpochException which is turned into a FencedException on the client side.)
** Should there be some other, more concrete exceptions that are subclasses of IOException?

* AsyncLogger - the JavaDoc states "This is essentially a wrapper around {@link QJournalProtocol} with the key differences being ..." Should this be "This is essentially a wrapper around {@link JournalManager} with the key differences being ..."?

* Javadoc
** QJM constructor - document at least the URI.
** QJournalProtocol - some methods do not have their parameters documented. Referring to the doc is fine for method semantics in some cases.
** RequestInfo - document the parameters.
** Javadoc for class Journal: A JournalNode can manage journals for several independent NN namespaces. The Journal class implements a single journal, i.e. the part that stores the journal transactions persistently. Each such journal (identified by a journal id) is entirely independent despite being hosted by a single JournalNode daemon (i.e. the same JVM).
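The first suggestion above — hiding the (curSegment != null) check behind a named predicate — might look like the following sketch. The method name comes from the comment itself; `Object` stands in for the real edit-log segment stream type, and the real Journal class in HDFS is of course far more involved:

```java
// Illustrative sketch only, not the actual HDFS source: the point is
// just to replace scattered (curSegment != null) checks with a named query.
class Journal {
    private Object curSegment;               // non-null only while writing

    void startLogSegment()    { curSegment = new Object(); }
    void finalizeLogSegment() { curSegment = null; }

    /** True while a log segment is open for writing on this journal. */
    boolean isJournalSegmentOpen() {
        return curSegment != null;
    }
}
```

A named predicate like this also gives a single place to later swap in the more concrete Init/Writing/Recovering state enum suggested in the same bullet.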
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201210.mbox/%3C1594962375.11988.1349744648514.JavaMail.jiratomcat@arcas%3E
Package to add bwi methods into your project

Project description

BWI-lib: python client for BWI interactions

Usage

Instantiate a client

import bwi
client = bwi.Client(api_key='xxxxxxxxxxxxxxxx', workflow='shop')

Manipulate your logs

# provide step duration
with client.Step('fetch client information') as bwi:
    bwi.logger.info('found client with user id %d', 18)

Metric management

# manipulate metrics
client.Step('validate order')
# Your business-oriented code goes here
# ...
total_paid = 220
# increment the income metric for this step
bwi.metrics.inc('income', total_paid)

Error handling

# report any unknown exception to the bwi handler
try:
    pass  # Your business-oriented code goes here
except Exception as err:
    bwi.handler.catch(err)
# the error is now available for this specific step

Mark a step as (un)successful

step = bwi.Step('process order')
# Your business-oriented code goes here
step.success()

# other scenario, where things go bad
step.failed()

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
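The package internals aren't shown on this page; purely as an illustration, a step-timing context manager of this general shape could be sketched as follows (hypothetical implementation, not the real bwi code):

```python
# Hypothetical sketch of a step context manager like client.Step above.
import logging
import time

class Step:
    def __init__(self, name):
        self.name = name
        self.logger = logging.getLogger(name)
        self.status = None
        self.duration = None

    def __enter__(self):
        self._start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.duration = time.monotonic() - self._start
        # An unhandled exception marks the step as failed; otherwise keep
        # any explicit status, defaulting to success.
        self.status = "failed" if exc_type else (self.status or "success")
        return False  # never swallow the exception

    def success(self):
        self.status = "success"

    def failed(self):
        self.status = "failed"

with Step("fetch client information") as step:
    step.logger.info("found client with user id %d", 18)

print(step.status)  # success
```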
https://pypi.org/project/bwi/
Quasiquotation is the mechanism that makes it possible to program flexibly with tidy evaluation grammars like dplyr. It is enabled in all tidyeval quoting functions, the most fundamental of which are quo() and expr(). Quasiquotation is the combination of quoting an expression while allowing immediate evaluation (unquoting) of part of that expression. We provide both syntactic operators and functional forms for unquoting. The !! operator unquotes its argument. It gets evaluated immediately in the surrounding context. The !!! operator unquotes and splices its argument. The argument should represent a list or a vector. Each element will be embedded in the surrounding call, i.e. each element is inserted as an argument. If the vector is named, the names are used as argument names. Use qq_show() to experiment with quasiquotation or debug the effect of unquoting operators. qq_show() quotes its input, processes unquoted parts, and prints the result with expr_print(). This expression printer has a clearer output than the base R printer (see the documentation topic).

UQ(x)
UQE(x)
UQS(x)
"!!"(x)
":="(x, y)
qq_show(expr)

x: An expression to unquote.
y: An R expression that will be given the argument name supplied to x.
expr: An expression to be quasiquoted.

When a function takes multiple named arguments (e.g. dplyr::mutate()), it is difficult to supply a variable as name. Since the LHS of = is quoted, giving the name of a variable results in the argument having the name of the variable rather than the name stored in that variable. This problem is right up the alley of the unquoting operator !!. If you were able to unquote the variable when supplying the name, the argument would be named after the content of that variable:

name <- "Jane"
dots_list(!!name := 1 + 2)
exprs(!!name := 1 + 2)
quos(!!name := 1 + 2)

Like =, the := operator expects strings or symbols on its LHS. Formally, quo() and expr() are quasiquote functions, !! is the unquote operator, and !!!
is the unquote-splice operator. These terms have a rich history in Lisp languages, and live on in modern languages like Julia and Racket. Calling UQ() and UQS() with the rlang namespace qualifier is soft-deprecated as of rlang 0.2.0. Just use the unqualified forms instead. Supporting namespace qualifiers complicates the implementation of unquotation and is misleading as to the nature of unquoting operators (these are syntactic operators that operate at quotation-time rather than function calls at evaluation-time). UQ() and UQS() were soft-deprecated in rlang 0.2.0 in order to make the syntax of quasiquotation more consistent. The prefix forms are now `!!`() and `!!!`(), which is consistent with other R operators (e.g. `+`(a, b) is the prefix form of a + b). Note that the prefix forms are not as relevant as before because !! now has the right operator precedence, i.e. the same as unary - or +. It is thus safe to mingle it with other operators, e.g. !!a + !!b does the right thing. In addition, the parser now strips one level of parentheses around unquoted expressions. This way (!!"foo")(...) expands to foo(...). These changes make the prefix forms less useful. Finally, the named functional forms UQ() and UQS() were misleading because they suggested that existing knowledge about functions is applicable to quasiquotation. This was reinforced by the visible definitions of these functions exported by rlang and by the tidy eval parser interpreting rlang::UQ() as !!. In reality unquoting is not a function call, it is a syntactic operation. The operator form makes it clearer that unquoting is special. UQE() was deprecated in rlang 0.2.0 in order to simplify the quasiquotation syntax. You can replace its use by a combination of !! and get_expr(). E.g. !! get_expr(x) is equivalent to UQE(x). The use of := as alias of ~ is defunct as of rlang 0.2.0. It caused surprising results when invoked in wrong places.
For instance, in the expression dots_list(name := 1) this operator was interpreted as a synonym of = that supports quasiquotation, but not in dots_list(list(name := 1)). Since := was an alias for ~, the inner list would contain a formula-like object. This kind of mistake now triggers an error.

# NOT RUN {
# Quasiquotation functions quote expressions like base::quote()
quote(how_many(this))
expr(how_many(this))
quo(how_many(this))

# In addition, they support unquoting. Let's store symbols
# (i.e. object names) in variables:
this <- sym("apples")
that <- sym("oranges")

# With unquotation you can insert the contents of these variables
# inside the quoted expression:
expr(how_many(!!this))
expr(how_many(!!that))

# You can also insert values:
expr(how_many(!!(1 + 2)))
quo(how_many(!!(1 + 2)))

# Note that when you unquote complex objects into an expression,
# the base R printer may be a bit misleading. For instance compare
# the output of `expr()` and `quo()` (which uses a custom printer)
# when we unquote an integer vector:
expr(how_many(!!(1:10)))
quo(how_many(!!(1:10)))

# This is why it's often useful to use qq_show() to examine the
# result of unquotation operators. It uses the same printer as
# quosures but does not return anything:
qq_show(how_many(!!(1:10)))

# Use `!!!` to add multiple arguments to a function. Its argument
# should evaluate to a list or vector:
args <- list(1:3, na.rm = TRUE)
quo(mean(!!!args))

# You can combine the two
var <- quote(xyz)
extra_args <- list(trim = 0.9, na.rm = TRUE)
quo(mean(!!var, !!!extra_args))

# The plural versions have support for the `:=` operator.
# Like `=`, `:=` creates named arguments:
quos(mouse1 := bernard, mouse2 = bianca)

# The `:=` is mainly useful to unquote names. Unlike `=` it
# supports `!!` on its LHS:
var <- "unquote me!"
quos(!!var := bernard, mouse2 = bianca)

# All these features apply to dots captured by enquos():
fn <- function(...) enquos(...)
fn(!!!args, !!var := penny)

# Unquoting is especially useful for building an expression by
# expanding around a variable part (the unquoted part):
quo1 <- quo(toupper(foo))
quo1
quo2 <- quo(paste(!!quo1, bar))
quo2
quo3 <- quo(list(!!quo2, !!!syms(letters[1:5])))
quo3
# }
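For readers coming from Python, the quote-then-unquote idea can be loosely imitated with the standard ast module. This is only a conceptual sketch, not rlang's mechanism, and the helper names are invented:

```python
# Conceptual sketch of quote (expr) and unquote (!!) using Python's ast
# module. Requires Python 3.9+ for ast.unparse. Helper names are invented.
import ast

def expr(src):
    """Quote: parse source into an AST expression without evaluating it."""
    return ast.parse(src, mode="eval").body

def unquote(template_src, **parts):
    """Splice already-quoted sub-expressions into a quoted template,
    replacing bare names -- loosely analogous to !! in rlang."""
    tree = ast.parse(template_src, mode="eval")

    class Splice(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id in parts:
                return ast.copy_location(parts[node.id], node)
            return node

    tree = ast.fix_missing_locations(Splice().visit(tree))
    return ast.unparse(tree)

this = expr("apples")
print(unquote("how_many(PART)", PART=this))  # how_many(apples)
```

As in rlang, the template is never evaluated; only its quoted structure is manipulated before being rendered back to source.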
https://www.rdocumentation.org/packages/rlang/versions/0.2.1/topics/quasiquotation
Good morning! I am new to groovy / ScriptRunner and I am having difficulties writing a script. The request is to use a post-function in the Create transition of the workflow to re-route tickets to a specific user when issue type = 'Incident' and Component is not set. My idea is to use the inline post-function option to achieve this. Can anyone help with the script to do this? Thanks.

Hi Daniel,

To do this using a post-function you will need to add this code:

import com.atlassian.jira.user.util.UserManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.component.ComponentAccessor

UserManager userManager = ComponentAccessor.getUserManager();
MutableIssue issue = issue

if (issue.issueType.name == "Incident" && issue.getComponents().size() == 0) {
    issue.setAssignee(userManager.getUserByName("Test"))
}

Replace the 'Test' user with whomever you would like to assign the issue to. The post-function will need to appear first in the list for this to work. You could also do this as a behaviour on the 'Assignee' field and change it depending on the form field value of component and issue type. Let me know if this is what you were looking for, or if I have misunderstood and you need some other solution.

Regards,
Johnson Howard

Thanks for your prompt response Johnson. I am still to test this as our associates have a new type of request now. As soon as I have this tested, I will update this thread. Thanks.

Good morning Johnson, sorry for the late reply. I have just been able to test it now in our non-prod instance. I am attaching three screenshots with the error messages. Hope you can see them. Any suggestions? Thanks.

If you update your JIRA and ScriptRunner, the previous code will work.

I have a similar case: I want to copy a custom field value from one issue type to all other issues of the project.
We have an issue type "Project info" and we created one issue of the "Project info" type per project to maintain all project-related information. So when anything changes in the project, we update the respective field on the project info issue of that project. Now I want to copy one of the custom field values of the project info issue type to all other issues of that project. Can anybody help me with this? Thanks in advance.
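Stripped of the JIRA APIs, the routing rule from the start of this thread is a small conditional. A language-neutral Python sketch, with a dict standing in for the issue (field names are illustrative, not ScriptRunner's):

```python
# Illustrative sketch of the routing rule: assign a default user when an
# Incident has no component set. The dict fields are hypothetical.
DEFAULT_ASSIGNEE = "Test"

def route(issue):
    if issue["issuetype"] == "Incident" and not issue["components"]:
        issue["assignee"] = DEFAULT_ASSIGNEE
    return issue

issue = {"issuetype": "Incident", "components": [], "assignee": None}
print(route(issue)["assignee"])  # Test
```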
https://community.atlassian.com/t5/Marketplace-Apps-questions/ScriptRunner-Groovy-Script-based-on-issue-type/qaq-p/461725
The QSerialIODevice class is the base for all serial devices. More... #include <QSerialIODevice> Inherits QIODevice. Inherited by QNullSerialIODevice, QSerialPort, and QSerialSocket. The QSerialIODevice class is the base for all serial devices. The abstract QSerialIODevice class extends QIODevice with functionality that is specific to serial devices. Subclasses implement specific kinds of serial devices. In particular, QSerialPort implements a physical hardware serial port. See also QIODevice and QSerialPort. Construct a new QSerialIODevice object attached to parent. Destruct a QSerialIODevice object. Abort an ATD dial command. The default implementation transmits a single CR character. If a modem needs to change the way abortDial() works, a multiplexer plug-in should be written which overrides the abortDial() function on the channels that need different abort logic. See also QSerialIODeviceMultiplexerPlugin. Returns the modem AT command chat object for this serial device. This is an alternative to accessing the raw binary data via read() and write(). See also QAtChat. Returns true if the current state of the DCD (carrier) modem status line is active; otherwise returns false. See also setCarrier() and carrierChanged(). Signal that is emitted when the state of the DCD (carrier) modem status line changes to value. See also carrier(). Returns true if current state of the CTS modem status line is active; otherwise returns false. See also ctsChanged(). Signal that is emitted when the state of the CTS modem status line changes to value. See also cts(). Discard pending buffered data without transmitting it. This function will do nothing if the underlying serial device implementation cannot discard buffers. Returns true if the current state of the DSR modem status line is active; otherwise returns false. See also dsrChanged(). Signal that is emitted when the state of the DSR modem status line changes to value. See also dsr(). 
Returns true if the current state of the DTR modem status line is active; otherwise returns false. See also setDtr(). Emit the readyRead() signal. If a process is currently running on this device, then the readyRead() signal will be suppressed. Subclasses should call this slot rather than emit readyRead() directly. Returns true if this device is sequential in nature; false otherwise. Serial devices are always sequential. Reimplemented from QIODevice. Returns true if this serial device can validly transfer data; false otherwise. If this function returns false, there is no point writing data to this device, or attempting to read data from it, because it will never give a useful result. The default implementation returns the same value as isOpen(). The QNullSerialIODevice class overrides this to always return false, even if the device is technically open. Returns the serial device's baud rate. The default implementation returns 115200. Subclasses are expected to override this value. Signal that is emitted when the device is ready to accept data after being opened. This signal will only be emitted if the waitForReady() function returns true. See also waitForReady(). Returns true if the current state of the RTS modem status line is active; otherwise returns false. See also setRts(). Run a program with the supplied arguments, redirecting the device's data to stdin and stdout on the new process via a pseudo-tty. Redirection will stop when the process exits. This function returns a QProcess object that can be used to control the new process. The caller is responsible for destroying this object when it is no longer required. Returns null if the process could not be started for some reason. If addPPPdOptions is true, then the caller is attempting to start pppd. The base class implementation will insert the pseudo-tty's device name at the beginning of the command arguments. Subclasses may override this method to run pppd directly on an underlying device.
The readyRead() signal will be suppressed while the process is running. Sets the DCD (carrier) modem status line to value. This is used by programs that accept incoming serial connections to indicate to the peer machine that the carrier has dropped. By default, this calls setDtr(), which supports the case where a null modem cable is connecting DTR to DSR and DCD on the peer machine. The caller should wait for a small delay and then setDtr(true) to restore DTR to the correct value. This function will return false if it uses setDtr() to transmit the carrier change, or true if it can use a real carrier signal (QGsm0710MultiplexerServer supports real carrier signals). See also carrier(). Sets the state of the DTR modem status line to value. See also dtr(). Sets the state of the RTS modem status line to value. See also rts(). Returns true if the caller should wait for the ready() signal to be emitted before sending data on this device. Returns false if the caller can send data immediately after calling open(). The default implementation returns false. See also ready().
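To see the overall shape of the class, here is a loose Python transliteration of the interface described above. The behavior is inferred from this documentation page, not taken from Qt's code, so treat every default here as an assumption:

```python
# Sketch of the QSerialIODevice interface described above, in Python.
# Method names mirror the Qt API; defaults are assumptions from the docs.
from abc import ABC, abstractmethod

class SerialIODevice(ABC):
    """Base for serial devices: open state plus modem status lines."""

    def __init__(self):
        self._open = False
        self._dtr = False
        self._rts = False

    @abstractmethod
    def is_sequential(self):
        """Serial devices are always sequential."""

    def is_valid(self):
        # Default: a device can usefully transfer data iff it is open.
        return self._open

    def rate(self):
        # Default baud rate; subclasses are expected to override this.
        return 115200

    def set_dtr(self, value):
        self._dtr = bool(value)

    def dtr(self):
        return self._dtr

    def set_rts(self, value):
        self._rts = bool(value)

    def rts(self):
        return self._rts

    def set_carrier(self, value):
        # Default: transmit carrier changes via DTR (null-modem wiring) and
        # return False, i.e. no real carrier line was available.
        self.set_dtr(value)
        return False

class NullSerialIODevice(SerialIODevice):
    """Never transfers data, so it is never 'valid', even when open."""

    def is_sequential(self):
        return True

    def is_valid(self):
        return False

dev = NullSerialIODevice()
dev.set_carrier(False)
print(dev.rate(), dev.dtr(), dev.is_valid())
```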
https://doc.qt.io/archives/qtextended4.4/qserialiodevice.html
Hi, I have a .NET forms application that I have created in C#. This forms application is built as an exe and has a set of public interfaces to public classes contained within the exe. Call this exe App1.

Question 1: I want all these classes to be non-creatable by other .NET applications that call them (similar to the noncreatable attribute in COM). How can I specify this for a .NET C# class?

I now have another .NET forms application that I created in C# as well. Call this exe App2. From within App2 I would like to create an object of one of the public classes in App1. This I am able to do with no problem. What I would really like is for the App1 executable to launch when I create one of its classes from App2, as opposed to just creating an instance of the class.

Question 2: How can I do this?

Background: I am trying to duplicate functionality previously done in a C++ MFC exe with a COM interface. This exe exposed several COM CoClasses, all of which were noncreatable. When an outside application tried to instantiate a new object of one of these COM CoClasses, the exe would actually launch with its associated GUI and message loop and return an interface to the object that was instantiated inside the exe. I would like to replicate this behavior with a C# exe. Unfortunately I am not quite sure how this actually worked in COM, so I really don't know where to start when trying to do this in .NET C#. Any help would be appreciated. Thanks in advance.

Since these two appear to be two different packages, you can declare the constructor as "internal".
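The "internal constructor" advice is one instance of a general factory-guard pattern: block direct construction and force callers through a factory that can do extra work (such as launching the host process) first. A sketch of the idea in Python, purely for illustration:

```python
# Illustrative factory-guard pattern: the class is "non-creatable" from the
# outside; only the launch() factory can construct it. Names are invented.
class App1Service:
    _token = object()  # private capability held by the factory

    def __init__(self, token):
        if token is not App1Service._token:
            raise TypeError("App1Service is non-creatable; use launch()")
        self.started = True

    @classmethod
    def launch(cls):
        # In the COM scenario described above, this is where the host exe
        # would be started before handing back an object from inside it.
        return cls(cls._token)

svc = App1Service.launch()
print(svc.started)  # True
```

In C# the equivalent guard is the internal constructor from the answer, paired with a public static factory method that starts the exe.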
http://forums.codeguru.com/showthread.php?486088-store-arraylist-in-database&goto=nextnewest
Simple program that can parse Google Protobuf encoded blobs (version 2 or 3) without knowing their accompanying definition. It will print a nice, colored representation of their contents. Example:

As you can see, the field names are obviously lost, together with some high-level details such as:

But protobuf-inspector is able to correctly guess the message structure most of the time. When it finds embedded binary data in a field, it'll first try to parse it as a message. If that fails, it'll display the data as a string or hexdump. It can make mistakes, especially with small chunks. It shows the fields just in the order they are encoded in the wire, so it can be useful for those wanting to get familiar with the wire format or parser developers, in addition to reverse-engineering.

You can install with pip:

pip install protobuf-inspector

This installs the protobuf_inspector command. Run it, feeding the protobuf blob on stdin:

protobuf_inspector < my-protobuf-blob

After reading the first (blind) analysis of the blob, you typically start defining some of the fields so protobuf-inspector can better parse your blobs, until you get to a point where you have a full protobuf definition and the parser no longer has to guess anything. Read about defining fields here.

If a parsing error is found, parsing will stop within that field, but will go on unaffected at the outside of the hierarchy. The stack trace will be printed where the field contents would go, along with a hexdump indicating where parsing was stopped in that chunk, if applicable. So, if you specified a uint32 and a larger varint is found, you'd get something like:

If you specified that some field contained an embedded message, but invalid data was found there, you'd get:

Please note that main.py will exit with non-zero status if one or more parsing errors occurred.
There are some tricks you can use to save time when approaching a blob:

If you are positive that a varint does not use zig-zag encoding, but are still not sure of the signedness, leave it as varint. If it does use zig-zag encoding, use sint64 unless you are sure it's 32-bit and not 64-bit.

If a chunk is wrongly being recognized as a packed chunk or an embedded message, or if you see something weird with the parsed message and want to see the raw bytes, specify a type of bytes. Conversely, if for some reason it's not being detected as an embedded message and it should be, force it to message to see the reason.

If you want to extract a chunk's raw data to a file to analyze it better, specify a type of dump and protobuf-inspector will create dump.0, dump.1, etc. every time it finds a matching blob.

protobuf-inspector parses the blob as a message of type root, but that's just a default. If you have lots of message types defined, you can pass a type name as an optional argument, and protobuf-inspector will use that instead of root:

protobuf_inspector request < my-protobuf-blob

Simple example:

from protobuf_inspector.types import StandardParser

parser = StandardParser()
with open('my-blob', 'rb') as fh:
    output = parser.parse_message(fh, "message")
print(output)

This project was not initially designed for use as a library, though, and its API might change. For a more complex example, see protobuf_inspector/__main__.py.
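To see what the tool is guessing about, here is a minimal reader for the simplest part of the wire format: varint-typed fields. It is written from the published protobuf encoding rules and is not protobuf-inspector's own code:

```python
# Minimal protobuf wire-format reader for varint fields only (wire type 0).
# Each field is a varint key (field_number << 3 | wire_type) then a payload.
def read_varint(buf, pos):
    result = shift = 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:  # high bit clear = last byte of the varint
            return result, pos
        shift += 7

def read_fields(blob):
    pos, fields = 0, {}
    while pos < len(blob):
        key, pos = read_varint(blob, pos)
        field_number, wire_type = key >> 3, key & 7
        if wire_type != 0:
            raise NotImplementedError("only varint fields in this sketch")
        value, pos = read_varint(blob, pos)
        fields[field_number] = value
    return fields

# Field 1 = 150 encodes as 08 96 01 (the classic protobuf docs example).
print(read_fields(bytes([0x08, 0x96, 0x01])))  # {1: 150}
```

A full tool additionally has to handle length-delimited fields (wire type 2), 32/64-bit fixed fields, and the guessing heuristics described above.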
https://awesomeopensource.com/project/mildsunrise/protobuf-inspector
git rebase (not) --interactive

tl;dr: How to build a Node.js script to re-write history.

Pre-requisites: familiarity with git rebase --interactive.

For the end user it was all cakes and ale, but maintenance was hell. To put things simply, imagine you had the following git log:

Step 100: description of step 100th
.
.
.
Step 3: description of third step
Step 2: description of second step
Step 1: description of first step

Now let's say that you would like to remove step 2. The only solution for that would be using git rebase --interactive starting at step 2 and saving the following file:

reword xxxxxxx Step 100: description of step 100th
.
.
.
reword xxxxxxx Step 3: description of third step

This means that the editor process would have to be opened and closed 98 times (100 - 3 included), and each time it does so we would manually have to change step n to step (n + 1). Do you understand now why it was a maintenance hell? I'll save the explanation for myself.

The obvious question is — what if a script could do that for me? Followed by — how can I implement such a script? Following that, I have wandered across git's documentation and Stack Overflow and have found an answer. Here's the method which starts the editing process, written in git-rebase--interactive.sh, a file in git's implementation:

git_sequence_editor () {
	if test -z "$GIT_SEQUENCE_EDITOR"
	then
		GIT_SEQUENCE_EDITOR="$(git config sequence.editor)"
		if [ -z "$GIT_SEQUENCE_EDITOR" ]
		then
			GIT_SEQUENCE_EDITOR="$(git var GIT_EDITOR)" || return $?
		fi
	fi
	eval "$GIT_SEQUENCE_EDITOR" '"$@"'
}

As you can see (or not), git looks for the editor's file path in a global var named GIT_SEQUENCE_EDITOR and executes it with all the given arguments. Without getting into more of the implementation, knowing nano and vim, which are the most commonly used git editors, the first argument that their process accepts is the edited file's path, which makes total sense. BUT!
Why does the GIT_SEQUENCE_EDITOR environment variable have to reference an actual text editor? What if we set it to reference Node.js's executable? Aha! JACKPOT! Now, hypothetically, instead of opening nano or vim and editing the file manually, we can run whatever manipulation we want on the file using a script, and then once the process exits with no errors (code 0) git will just proceed with the rebase as usual. Using this principle, here's a cool script that will remove a range of commits from the middle of the commit stack:

#!/usr/bin/env node
const execa = require('execa');
const fs = require('fs');
const tmp = require('tmp');

// Main
{
  const [anchor, amount = 1] = process.argv.slice(-2).map(Number);

  gitRebaseInteractive(
    anchor,
    function (operations, amount) {
      operations = operations
        // Replace comments
        .replace(/#.*/g, '')
        // Each line would be a cell
        .split('\n')
        // Get rid of empty lines
        .filter(Boolean);

      // Commits we would like to drop
      const dropOperations = operations
        .slice(0, amount)
        .map((operation) => operation.replace('pick', 'drop'));

      // Commits we would like to pick
      const pickOperations = operations.slice(amount);

      // Composing final rebase file
      return [].concat(dropOperations).concat(pickOperations).join('\n');
    },
    [amount]
  );

  console.log(`Removed ${amount} commits starting ${anchor}`);
}

// Runs a git-rebase-interactive in a non interactive manner by providing a script
// which will handle things automatically
function gitRebaseInteractive(head, fn, args) {
  execa.sync('git', ['rebase', '-i', head], {
    env: {
      GIT_SEQUENCE_EDITOR: gitEdit(fn, args),
    },
  });
}

// Evaluates a script in a new process which should edit a git file.
// The input of the provided function should be the contents of the file and the output
// should be the new contents of the file
function gitEdit(fn, args) {
  args = args.map((arg) => `'${arg}'`).join(', ');
  const body = fn.toString().replace(/\\/g, '\\\\').replace(/`/g, '\\`');
  const scriptFile = tmp.fileSync({ unsafeCleanup: true });

  fs.writeFileSync(
    scriptFile.name,
    `
    const fs = require('fs')
    const file = process.argv[process.argv.length - 1]
    let content = fs.readFileSync(file).toString()
    content = new Function(\`return (${body}).apply(this, arguments)\`)(content, ${args})
    fs.writeFileSync(file, content)
    fs.unlinkSync('${scriptFile.name}')
    `
  );

  return `node ${scriptFile.name}`;
}

Using the code snippet above we can take an initial step towards solving the problem presented at the beginning of this article by simply running $ git-remove.js [where anchor represents a git object and amount represents the amount of commits that we would like to remove]. Sure, we still need to figure out which step we would like to remove by its index, and we need to take care of automatic rewording, but at least now you have the idea behind such a method, where you can solve problems like these as well as far more complex ones with a little bit of creativity.
https://the-guild.dev/blog/git-rebase-not-interactive
cycling rhoconnect after create
Alexey Mironov Sep 9, 2013 12:03 AM

Hello, I'm trying to sync with MS SQL Server (I use TinyTds) via RhoConnect (syncing my app log file):

class TsdLog < SourceAdapter
  def initialize(source)
    @client = TinyTds::Client.new(:username => 'sa', :password => '', :host => '10.39.3.200', :database => 'cp_suite')
    super(source)
  end

  def login
  end

  def query(params=nil)
  end

  def sync
    super
  end

  def create(create_hash)
    puts "Call Create Log"
    rez = @client.execute("INSERT INTO [TSD_Log] ([UserId], [DeviceID], [Date], [Time], [Operation], [Status], [Comment]) VALUES ('" + create_hash["UserId"].to_s + "', '" + create_hash["DeviceID"].to_s + "', CONVERT(DATETIME, '" + create_hash["Date"].to_s + "', 126), " + create_hash["Time"].to_s + ", " + create_hash["Operation"].to_s + ", " + "0" + ", '" + create_hash["Comment"].to_s + "')")
  end
end

The INSERT command works well and my log records are added to the TSD_Log table, but after that RhoConnect keeps cycling. What can I do to stop it? When I leave only puts "some" in the create method, the cycling doesn't happen... Is it a TinyTds bug?

Re: cycling rhoconnect after create
Kutir Mobility Sep 9, 2013 1:59 AM (in response to Alexey Mironov)

The create method in a RhoConnect source must return the primary key of the row you inserted.

Thanks,
Javier
Kutir Mobility

Re: cycling rhoconnect after create
Alexey Mironov Sep 9, 2013 2:38 AM (in response to Kutir Mobility)

Thanks Javier. How can I return it (I can do a select and build a nested hash)? Can you show an example (return hash name), please?

Re: cycling rhoconnect after create
Kutir Mobility Sep 9, 2013 3:30 AM (in response to Alexey Mironov)

There is an example of building a RhoConnect application here: The fast way to multiplatform data-aware mobile applications. Part 2. Look for the "create" method and you will see it returns a simple value (not a hash). The way to obtain that value depends on your database server and your table schema.

If the primary key in your SQL Server table is of type IDENTITY, I believe TinyTds lets you call "insert" to get the last inserted primary key, which is what you want, so you could just return:

rez.insert

Javier
Kutir Mobility

Re: cycling rhoconnect after create
Alexey Mironov Sep 9, 2013 4:14 AM (in response to Kutir Mobility)

Thank you very much! I tried rez.insert earlier but it did not help because my TSD_Log table did not have any primary key. I added a composite primary key and then it worked well.

Re: cycling rhoconnect after create
Alexey Mironov Sep 9, 2013 4:48 AM (in response to Kutir Mobility)

Javier, I have one little question about the RhoConnect logic. For example, I have a table T in my backend and a model T in my Rho app, and I sync them via RhoConnect:

1. Insert one record (a) into table T.
2. Do a sync; record a is available in model T.
3. Delete record a from model T in my RhoMobile app.
4. Do a sync. RhoConnect generates a delete command in the T adapter, but I do not call the DELETE command (it is not needed for the app logic), so record a still lives in table T.
5. Do a sync and wait for record a from table T to sync back into model T, but nothing happens. RhoConnect does not call query in the adapter. Why?

Re: cycling rhoconnect after create
Kutir Mobility Sep 9, 2013 5:22 AM (in response to Alexey Mironov)
1 of 1 people found this helpful

RhoConnect keeps a local cache in Redis so that it does not have to query your database every time. You can use the poll_interval setting to tweak how long this cache lasts, or even use pass_through if you don't want to keep any data for that model in Redis.

Regarding row deletion, if you delete a record in your application but do not run a DELETE against your database, the record may reappear in your application if you reset the database and sync again. You should consider whether this is desirable for your use case, but in general things should be consistent between your application and the DB.

If you do not want to use DELETE, you can:
- add a column to your table called "deleted" and set it to true when a record is deleted
- in your "query" method, only select records where deleted is false

That gives you a consistent view between the database and the application, while still keeping the records in the database if you need them. The mobile app does not need to know about this "deleted" column; the RhoConnect server together with the database will handle it.

Hope that helps,
Javier
Kutir Mobility

Re: cycling rhoconnect after create
Alexey Mironov Sep 9, 2013 6:03 AM (in response to Kutir Mobility)

Thank you Javier.
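The "deleted" flag suggestion boils down to soft-deleting rows and filtering them out in the adapter's query. Sketched with an in-memory table standing in for SQL Server (illustrative, not RhoConnect code):

```python
# Soft-delete sketch: delete() only flips a flag, query() only returns
# live rows -- the view the adapter would hand to the mobile client.
table = [
    {"id": 1, "name": "a", "deleted": False},
    {"id": 2, "name": "b", "deleted": False},
]

def delete(row_id):
    for row in table:
        if row["id"] == row_id:
            row["deleted"] = True

def query():
    # What the adapter's query method would select.
    return [row for row in table if not row["deleted"]]

delete(1)
print([row["id"] for row in query()])  # [2]
```

In SQL terms the equivalent query filter is simply WHERE deleted = 0.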
https://developer.zebra.com/thread/3842
The following samples show what the standards and procedures documentation might look like as a paper-based manual.

Software Development Standards: C

The "C" language standards presented in the following subsections are designed to enhance debugging and maintenance of application programs by ensuring that coding is of a uniform style and is well structured and properly commented. These standards are to be followed when programming in "C" on Tandem. Section - Software Development Standards contains the standards which are common to all development (COBOL, SCOBOL, TAL and "C") and must be read in conjunction with this section to understand all the development standards.

A block of comments:

/*
 * Comment Comment Comment
 * Comment Comment Comment
 */

A single line of comment:

· When comments contain just a few words and pertain to a single line of code, they can be written on the same line as the code, to the right side. In such cases, for readability, comments shall be lined up vertically on a page.
· All local variables shall be in lower case.
· All constants defined by macros shall be in upper case.
· Macro functions shall be in upper case.
· All programs shall use a standard set of header files.
· All definitions of compile time values and structure formats shall be put into the header file assigned for these data items if they are used by more than one program.
· As a general practice, header files shall not include other header files.
· Header files shall not contain executable statements, only declarations.
· Each program shall include a program header as the first header file in the main unit.
· Initializers shall be written using the equal sign (i.e., assignment). For example:

int lower = MIN;
int upper = MAX;

· Counters shall be given a value that is valid at the point they are declared.
· Initializers for structures, unions and arrays shall be formatted with one row per line.
int Matrix[2][5] = {
    {1, 2, 3, 4, 5},
    {6, 7, 8, 9, 10}
};

· Each variable being declared shall appear on a separate line with a comment appended on the right to explain the variable.
· The declaration descriptor shall be written for each variable.
· The first letter of each variable, in a group of declared variables, shall be vertically lined up.

/* COUNTERS */
int  rec_read;          /* records read in */
int  rec_mod;           /* records modified */
int  rec_del;           /* records deleted */
long total_update;      /* total records updated */

/* POINTERS */
int  ptr_items;         /* points to item array */
char *ptr_rec_buf;      /* points to record buffer */

/* FLAGS */
int  action_code;       /* value:1=no action on record */
                        /* value:2=add record */
                        /* value:3=modify record */
                        /* value:4=delete record */

· The bitwise operators "&", "|", "~", "^", ">>" and "<<" shall be explicitly parenthesized when combined with other operators.

Example of parentheses needed:

if ((value & MASK) != SET)

Example of parentheses for readability:

field = ((w >> OFFSET) | FLAG);

· The rightshift operator >> shall not be applied to signed operands since sign extension is either compiler dependent or machine dependent.
· Programs shall not depend upon the order in which "side effects" take place since the "side effects" may perform differently when ported to a new compiler or machine.

Example: a[i] = i++ ... is not acceptable

· The braces { and } which enclose compound statements shall begin in the same column.

Example of Compound Statement Using Braces:

{
    statement;
}

· Each case label shall be on a separate line indented under the switch statement.
· Statements associated with a label shall be indented under the case statement.
· The default label shall be included in the switch statement to show that this condition has been taken into consideration regardless of whether or not any action is to be performed.
Example of switch/case Statement: switch (k) { case 2: statement; break; case 4: case 5: default: } · Wherever possible, the for statement and its three control expressions shall be placed on the same line. · Statements belonging to the for loop shall be indented and shall appear on separate lines. Example of Simple for: for (initialize; test; update) statement; Example of for With Compound Statement: for (initialize; test; update) { statement; statement; } · The while statement and its expression shall be placed on the same line. · The statement portion shall be placed on a separate line and indented. Example of Simple while statement: while (expression) statement; Example of while Statement with Compound Statement: while (expression) { statement; statement; } · The do and the while statements shall be aligned vertically. · The do statement shall be placed on a line by itself. · Each statement in the loop shall be indented from the do and placed on a separate line. Example of do/while Statement: do { statement; } while (expression); A null statement appearing as the body of a loop shall be placed on a line by itself. When a null statement is used, there must be a comment stating that it was the intention of the programmer. while (expression) /* while (condition) */ ; /* nothing is to happen */ Whenever there are no overriding circumstances, loops which are tested at the beginning (e.g., while, for) are preferable to loops which are tested at the end (e.g., do-while). The for loop and the while loop can be used to handle identical conditions. for (initialize; test; update) body; is the same as: initialize; while (test) { body; update; } It is recommended that the for loop be used when an initialization and an update are required and that the while loop be used when initialization and update are not required. A switch construction is easier to read and less error prone than a series of if/else constructions. Its use is therefore preferred.
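The for/while equivalence described above can be sketched as two functions that must return the same result (the function names are invented for illustration):

```c
/* Sum 1..n with a for loop: initialize, test, and update all on
 * one line, per the standard. */
int sum_for(int n)
{
    int total = 0;
    int i;

    for (i = 1; i <= n; i++)
        total += i;
    return (total);
}

/* The same loop rewritten in while form: explicit initialize,
 * then while (test) with body and update inside braces that
 * begin in the same column. */
int sum_while(int n)
{
    int total = 0;
    int i = 1;              /* initialize */

    while (i <= n)
    {
        total += i;         /* body */
        i++;                /* update */
    }
    return (total);
}
```

Because the for version needs both an initialization and an update, it is the form the standard recommends here.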
Example of nested if statements: if (value == ONE) statement; else if (value == TWO) statement; else if (value == THREE) statement; else statement; Example of switch statement: switch (value) { case 1: statement; break; case 3: statement; break; } · The use of break in a switch construction is acceptable and good. · The break and continue control statements can be used effectively in nested if/else statements, but should not be used if they can be eliminated (e.g., by reversing an if condition). · Where control is permitted to flow past a case label (i.e., no break or continue statement is present), there must be a comment stating that it was the intention of the programmer. Example of continue with if/else: if (char == '\n') continue; putchar (char); The above shall be coded as: if (char != '\n') putchar (char); · Goto statements shall not be used. · Functions shall begin on a new page. · Source code for functions should generally not exceed 50 lines of code. · An explanatory comment is required before each function. The comment shall consist of the function name and a list of objectives followed by a comment to describe the control flow of the function itself. The format is as follows: *************************************************** * function_name - objective(s) * description of control flow in paragraph form · The arguments are listed vertically, following the function name, together with their declarations and description comments. TYPE function_name (TYPE1 a1, /* description of parameter */ TYPE1 a2, /* description of parameter */ TYPE2 a3) /* description of parameter */ <local declarations> <statements> · The #include directive shall be used to source in header files. · Refer - Function Prototypes - for prototype provision, for access functions, to be included in a header file.
· Header files unique to one specific source directory are allowed and are included as follows: #include "file.h" · All other header files shall use the angle bracket notation: #include <file.h> · Macros shall not redefine the parentheses which surround a group of statements. That is, "begin" and "end" shall not redefine { and }. · Where a macro can be used to good effect, it should be used. Macros may be used in place of functions when the saving of processing time (e.g., for a realtime application) is necessary. A call to a function takes longer to process than embedded code, and, since macros are inline coded, the trade off is a saving of time for space. (e.g. using a macro 20 times places 20 lines of code in a program.) The use and operation of such a macro must be explained in a comment. · Where practical, environment specific code shall be placed in separate functions. Such functions shall be kept as small and as limited in number as possible. · When recognized as such, environment specific code shall be documented as such. · Name formats specific to one operating system (such as "[1,1] file.h" or <subdir/file.h>) should be avoided when practical; dependence on such formats prevents portability to other systems and shall not be relied upon. · Programs shall not depend on system specific I/O formats or command invocations. · Always check return codes from functions where they are meaningful. When the return value is meaningless, cast it to type void. · Never redefine standard identifiers. (e.g., NULL). · In general, it is wise to declare functions before referring to them. In Tandem "C", it is necessary to fully prototype a function before referring to it. · In Unix, for example, before calling a function "x" returning a pointer, it is sufficient to declare the type of "x" before using it. char *x (); y = x (a, b, c); · In Tandem C, it is necessary to supply a full prototype before using it.
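As a sketch of the macro-versus-function trade-off described above (SQUARE and square_fn are invented names, not part of the standard):

```c
/* Macro version: inline coded at every use, so there is no call
 * overhead; the argument is fully parenthesized so that
 * SQUARE(a + 1) expands correctly. */
#define SQUARE(x) ((x) * (x))

/* Function version: one copy of the code, but each use pays the
 * cost of a function call. */
int square_fn(int x)
{
    return (x * x);
}
```

Both compute the same result; per the standard, the macro form would be chosen only when the processing-time saving matters, and its use would be explained in a comment at the point of definition.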
char *x (int *, char, struct z **); · Such prototypes are neither required nor understood by standard (Kernighan and Ritchie) Unix "C" compilers, but are necessary on the Tandem. · Each "C" unit or group of related units shall provide full prototypes for all of its access functions in a header (".h") file, or in the source file for function "x" as a separate "?SECTION", for inclusion by all other units which use those functions. · For example, the unit "abc_module" which supplies the functions "abc_set_values ()", "abc_init_table ()" etc. must also supply a header file "abc_module.h" containing full prototypes of "abc_set_values ()", "abc_init_table ()" etc. together with any other definitions carried in these headers. · If another unit "def_module" uses "abc_set_values ()", it must include the "abc_module.h" header file and thereby obtain the required prototype for this function. · On the Sun, a UNIX tool exists which will accept standard "C" programs without prototypes and will generate full prototypes for use in the header file. Compiler pragmas enable control of various elements of compiler listings, code generation and building of the object file. Pragmas can be specified either in the RUN command that executes the compiler or in the source code. Following is a list of all the Tandem "C" pragmas and the standards that shall be followed as to their use. Refer to the "C" Reference Manual for a definition of each of the pragmas. "Default" is the pragma value that the compiler will assume if the pragma is not specified.
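A minimal sketch of the abc_module convention above; the signatures are invented for illustration, since the manual names the functions but not their parameters:

```c
/* abc_module.h would carry the full prototypes, for inclusion by
 * any unit (such as def_module) that calls these functions: */
int abc_init_table(int *table, int size);
int abc_set_values(int *table, int size, int value);

/* abc_module.c supplies definitions that match the prototypes. */
int abc_init_table(int *table, int size)
{
    int i;

    for (i = 0; i < size; i++)
        table[i] = 0;
    return (size);          /* entries initialized */
}

int abc_set_values(int *table, int size, int value)
{
    int i;

    for (i = 0; i < size; i++)
        table[i] = value;
    return (size);          /* entries set */
}
```

Because the caller includes abc_module.h, the Tandem compiler can check every call against the full prototype at compile time.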
CHECK (default: CHECK) · Development: default · Promotion to Int: NOCHECK *COLUMNS (default: the full length of each source line is processed) · Promotion to Int: default *ERRORS (default: a compilation is completed, regardless of the number of errors) · Development: optional HEAP (default: a heap of one page is provided) · Promotion to Int: optional · Use: only in a bind file, not source code *ICODE (default: NOICODE) *INNERLIST (default: NOINNERLIST) INSPECT (default: INSPECT) · Promotion to Int: default. Turn SYMBOLS and INSPECT off in the bind file (i.e., NOINSPECT). If needed for later debugging, turn on in the bind file and rebind. LIST (default: LIST) · Development: on and off, as required, throughout the source code · Promotion to Int: N/A LMAP (default: LMAP ALPHA) · Development: NOLMAP · Promotion to Int: NOLMAP *MAP (default: NOMAP) NEST (default: NONEST) · Development: Set to NEST only at the point that nesting is desired in the source code; reset to NONEST immediately following. · Promotion to Int: as for Development *OLDCALLS (default: NOOLDCALLS) OPTIMIZE (default: OPTIMIZE 1) · Promotion to Int: OPTIMIZE 2 · Use: only in a compile file, not source code PAGE (default: no explicit page breaks) · Development: PAGE before each procedure RUNNABLE (default: bindable object file for a single-module program) · Development: where applicable · Use: only in a bind or compile file, not source code SAVEABEND (default: NOSAVEABEND) · Development: optional. Good idea to set, but keep disk clean. *SEARCH (default: no search made outside of current subvolume) · Development: not recommended · Promotion to Int: not allowed. The location of all files will be established by Configuration Management. SECTION (default: no sections of a source file are named) · Development: use, for an #include directive SQL (default: no processing of SQL statements) · Development: necessary, where applicable STRICT (default: no warnings generated) · Development: STRICT. 
Good for portability. · Promotion to Int: STRICT *SUPRESS (default: NOSUPRESS) SYMBOLS (default: NOSYMBOLS) · Development: see INSPECT *SYNTAX (default: no checking for syntactic or semantic errors. Object code will be produced.) *WARN (default: WARN) XMEM (default: NOXMEM) · Development: XMEM · Promotion to Int: XMEM. To be specified in the compile file, not the source code. Note: For code being promoted to Integration, default values shall be used for the pragmas denoted with an asterisk (*) and, therefore, shall NOT be specified. The word "Int" or "Integration" is referring to Integration and/or System Testing.
Are you tired of scanning images and trying to shrink them under 25 MB just so you can send them via email? Look no further, I am here to save you from this trouble. (that kinda rhymed.) This basic shell script uses ghostscript to compress your scanned pdfs significantly. Just yesterday I scanned 100 pages of documents and it was over 90 MB. I searched for a way to compress them under 25 MB and voila. Here I was with only 6 MB of pdfs. Much wow. Such compression. Why we need it,
- Fast.
- Ez.
- No need to go online. Especially no need to upload company top-secret documents to any website you see on the Internet.
- Possible huge compression without notable loss.
Command usage: ./shrinkpdf.sh in.pdf out.pdf Don't forget to install ghostscript. If you are in a situation exactly like me where there are 100 pdf files under one folder that you want to compress altogether, a simple for loop will suffice.
import os
import subprocess
import sys

infolder = sys.argv[1]
outfolder = "compressed_" + infolder
os.mkdir(outfolder)
for f in os.listdir(infolder):
    subprocess.run(["./shrinkpdf.sh", os.path.join(infolder, f), os.path.join(outfolder, f)])
Run this python script, python shrink.py pdfs/ and all of your pdfs will be put under compressed_pdfs/. I humbly wanted to let you know such a useful tool exists.
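If you would rather see the exact commands before running them, and skip stray non-PDF files sitting in the folder, here is a variant of the loop from the post. The script name and the compressed_ folder convention are the same as above; build_commands is a helper name I made up for this sketch:

```python
import os

def build_commands(infolder, filenames, script="./shrinkpdf.sh"):
    """Build one shrinkpdf.sh invocation per PDF, without running anything."""
    outfolder = "compressed_" + infolder.rstrip("/")
    commands = []
    for name in filenames:
        if name.lower().endswith(".pdf"):   # skip stray non-PDF files
            commands.append([script,
                             os.path.join(infolder, name),
                             os.path.join(outfolder, name)])
    return commands

# The caller would then run each command, e.g.:
#     for cmd in build_commands("pdfs", os.listdir("pdfs")):
#         subprocess.run(cmd)
```

Separating "decide what to run" from "run it" also makes it trivial to print the command list first as a dry run.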
One popular gripe about Silverlight has been the lack of integrated testing tools. There are several types of tests you may perform against a software project. Unit tests can be performed with the aid of the Silverlight Unit Testing Framework and automated with a third-party tool such as StatLight. Automation testing involves hosting the actual Silverlight application in a browser and performing a walkthrough based on a script. Typically, this script will follow a "happy path" through the application, but more detailed automation tests check for known error conditions as well. The automation tests simulate entering text, clicking buttons, and scanning results. Silverlight automation testing is possible, and has been for some time, but it is far easier to do with some helper projects supplied by the Prism team. If you are interested in running actual tests within the Visual Studio IDE, this post will provide step-by-step instructions to get you where you need to be. Please take the time to go through this step-by-step if you are serious about automation testing. There are lots of small pieces that may seem complex at first, but once you've walked through the steps, it should be fairly straightforward and easy for you to set up new projects and automate the testing to a greater extent than shown here. Get to Know Your Peers The automation testing is performed with the help of Automation Peers, a feature that has been around for Silverlight since at least 2.0. Automation peers provide a consistent interface for interacting with controls. They have a very practical use for accessibility (that article shows how long it's been around — so why do so many people ignore it?) While the base controls supplied by Silverlight have their own peers, you'll need to learn how to create your own custom automation peers if you wish to automate the testing of custom controls. Grab the Latest Prism Drop Now we need to get Prism. 
The latest drop as of this writing is Drop 4 but they are releasing fast and furious and should have the full release out by winter of 2010. (Keep in mind this blog post may quickly become obsolete, as Microsoft is working on a native solution for this). OK, you've grabbed it and installed it? Great, let's get going. I'm not going to provide a completed project for this because everything you need - all source code and steps - are included in this post. Create a Simple Project Let's create a very simple project to get started. What we'll do is create two text boxes and a button. When you enter text in the first box and click the button, it should get updated to the other box. Easy enough, but then we'll automate the tests to ensure the update is happening. Create a new Silverlight application. Give it a name UIAutomation and include a solution (check the option to create a directory) UIAutomationSln (or whatever your naming preference is). Host it in a web application, and make sure the version is Silverlight 4. Now we'll add a simple set of controls. Set the grid to three columns, then add two text boxes and one button. The XAML looks like this (be sure to key in the Click attribute so it auto-generates the code-behind handler).

<Grid x:Name="LayoutRoot">
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
    <TextBox x:Name="txtSource" Grid.Column="0"/>
    <TextBox x:Name="txtTarget" Grid.Column="1"/>
    <Button x:Name="btnSubmit" Grid.Column="2" Content="Submit" Click="btnSubmit_Click"/>
</Grid>

In the code-behind, handle the click event and move the source text to the target:

using System.Windows;

namespace UIAutomation
{
    public partial class MainPage
    {
        public MainPage()
        {
            InitializeComponent();
        }

        private void btnSubmit_Click(object sender, RoutedEventArgs e)
        {
            txtTarget.Text = txtSource.Text;
        }
    }
}

At this point, you can hit F5 and run it to see it does what we want. Preparing for Automation Right now, we need to do one small thing to prepare the controls for automation.
Instead of building a custom automation peer, we're going to take advantage of some built-in Silverlight functionality that exposes automation for us. First, we'll simply define some automation-specific identifiers for the controls. Open up the XAML for the main page and add the automation properties you see below: As you can see, these do not have to be the same as the names of the controls. These identifiers consistently expose the controls to the automation system. Now we're ready to test it! Add the Prism Acceptance Test Library We're going to add a library from Prism that will help with the test automation. To do so, you'll need to right-click on the solution, and choose "add ... existing project." Navigate to the Prism acceptance test library, and select the project file. Add the Test Project Now, we'll add a test project. This project is a regular Visual Studio Test Project, not a Silverlight test project. Call the project UIAutomation.Test. Next, we'll need to add a few references. First, add the reference to the acceptance test library to the newly added test project. Right click on references, choose "add reference" and select the acceptance test library from the "Projects" tab. Finally, go into the same dialog, but this time add the UI automation references from the .NET tab: Add your Control Resources The Prism acceptance test library is designed to be used with projects that share code between Silverlight and WPF. For this reason, a special resource file is used to map between the project types and the automation identifiers for the controls. We'll go ahead and build our own resource dictionary to map the controls. Anything in Silverlight should end with a _Silverlight prefix. Create a folder called "TestData" in the test project, and add a resource file called "Controls": In the resource dictionary, fill out the mapping for the control names to the automation identifiers that we added earlier. 
Notice by convention, we're using the control id, followed by the Silverlight designation, for the key, then putting the automation id as the value. Because the test class must use this resource file, we want to make sure it gets copied to the output directory. There are two things we'll need to do. I'll cover the second step later. The first step is to make sure the resource is set to "copy always". Simply select the resource file, go into properties, and set the copy attribute: Add the Application Configuration The acceptance test library drives from a configuration file that you place in the test project. Right click on the test project and choose "add new item." Select "Application Configuration File" and keep the default, then click "Add." This will add an App.config file to the root of your test project. Open this file up, and paste the following: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="BrowserSupport"> <section name="Browsers" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=null" /> </sectionGroup> </configSections> <appSettings> <!-- Browser Path and process parameters --> <add key="IEPartialPath" value="\\Internet Explorer\\iexplore.exe"/> <add key="FirefoxPartialPath" value="\\Mozilla Firefox\firefox.exe"/> <add key="SafariPartialPath" value="\\Safari\Safari.exe"/> <add key="IEAppProcessName" value="iexplore"/> <add key="FirefoxAppProcessName" value="firefox"/> <add key="SafariAppProcessName" value="Safari"/> <!-- Time to wait for the application to be launched --> <add key="ApplicationLoadWaitTime" value="60000"/> <!-- Test Data config files --> <!--<add key="TestDataInputFile" value=".\TestData\TestDataInput.resx"/>--> <add key="ControlIdentifiersFile" value=".\TestData\Controls.resx"/> </appSettings> <!-- Config section for Cross-Browser support --> <BrowserSupport> <Browsers> <add key ="InternetExplorer" value 
="AcceptanceTestLibrary.Common.CrossBrowserSupport.InternetExplorerLauncher" /> <!--<add key="FireFox" value="AcceptanceTestLibrary.Common.CrossBrowserSupport.FirefoxLauncher" /> <add key="Safari" value="AcceptanceTestLibrary.Common.CrossBrowserSupport.SafariLauncher" />--> </Browsers> </BrowserSupport> </configuration> Note that we've done the bare minimum to set up an Internet Explorer launch and pointed to the controls resource we created. You can obviously tinker with these settings and include other browsers as part of the test. The Prism example has more in the application settings, such as the path to the application and some other settings, but these are the key ones for the application to work. Set up a Base Automation Helper We're not quite ready for the test class. To make it easy to access our automation peers, we can build a base class that exposes the properties we need. This will help us abstract access to the controls for our tests. In the test project, add a new C# class called using System.Windows.Automation; using AcceptanceTestLibrary.Common; using AcceptanceTestLibrary.TestEntityBase; namespace UIAutomation.Test { public static class MainPageBase<TApp> where TApp : AppLauncherBase, new() { public static AutomationElement Window { get { return PageBase<TApp>.Window; } set { PageBase<TApp>.Window = value; } } public static AutomationElement TextBoxSource { get { return PageBase<TApp>.FindControlByAutomationId("txtSource"); } } public static AutomationElement TextBoxTarget { get { return PageBase<TApp>.FindControlByAutomationId("txtTarget"); } } public static AutomationElement Button { get { return PageBase<TApp>.FindControlByAutomationId("btnSubmit"); } } } } Notice we are using some helper classes to find the control. However, what happens is that the context for this application is recognized as Silverlight, so the label we pass (for example, txtTarget) is appended with the Silverlight designation. 
Ultimately, our control dictionary is accessed with the key txtTarget_Silverlight, which then maps to our automation id of "TargetText" and this is how the automation peer is found. If we had a WPF application that was sharing code, we could simply add a WPF-specific entry and name the automation peer something completely different. Add the Test Class OK, now that all of the infrastructure is in place, we can add our test! Go into the automatically generated UnitTest1.cs class. Remember how I told you there were two steps we needed to take in order for the control resource file to be available for testing? This is where we'll make the second step. We're going to add deployment items for the test project as well as the web project. This ensures that both the resources and the test web page are copied to the test sandbox so they are available. Add this code to the top of the class: namespace UIAutomation.Test { #if DEBUG [DeploymentItem(@".\UIAutomation.Test\bin\Debug")] [DeploymentItem(@".\UIAutomation.Web", "SL")] #else [DeploymentItem(@".\UIAutomation.Test\bin\Release")] [DeploymentItem(@".\UIAutomation.Web\","SL")] #endif [TestClass] public class UnitTest1 We're instructing the test engine to copy the contents of the test output to the test directory. We also want to create a subdirectory called "SL" and put the output of the web project there. This gives us a path to our test page so we can run the unit tests. We also need to configure the test to use the deployment hints. Under the main solution, there should be a folder called Solution Items. Double-click on the file Local.testsettings. Click on the deployment setting, and make sure that Enable Deployment is checked. Once checked, click the Apply button in the lower right corner of the dialog. 
Change the class to inherit from the FixtureBase provided by Prism: [TestClass] public class UnitTest1 : FixtureBase<SilverlightAppLauncher> Set up the using statements to include the namespaces we'll need: using System.Reflection; using System.Threading; using AcceptanceTestLibrary.Common; using AcceptanceTestLibrary.Common.Silverlight; using AcceptanceTestLibrary.TestEntityBase; using AcceptanceTestLibrary.UIAWrapper; using Microsoft.VisualStudio.TestTools.UnitTesting; Now, let's add some code to launch the browser and load the Silverlight test page, as well as tear it down when the test is finished. You'll want to tweak the delays to a value that works well for you. Put this at the top of the class: private const string APP_PATH = @"\SL\UIAutomationTestPage.html"; private const string APP_TITLE = "UIAutomation"; #region Additional test attributes // Use TestInitialize to run code before running each test [TestInitialize] public void MyTestInitialize() { var currentOutputPath = (new System.IO.DirectoryInfo(Assembly.GetExecutingAssembly().Location)).Parent.FullName; MainPageBase<SilverlightAppLauncher>.Window = LaunchApplication(currentOutputPath + APP_PATH, APP_TITLE)[0]; Thread.Sleep(5000); } // Use TestCleanup to run code after each test has run [TestCleanup] public void MyTestCleanup() { PageBase<SilverlightAppLauncher>.DisposeWindow(); SilverlightAppLauncher.UnloadBrowser(APP_TITLE); } #endregion We are using the Silverlight application launcher, a helper provided with the Prism project, to launch our test page. Notice that we get the output directory for the test that is running, then append the path to the test page. The test page is underneath the SL subdirectory because that's how we defined it with the deployment item. Now that we have it launched and ready to tear down, we can write the actual automation test. 
Here's what we'll do: - Simulate typing text into the source text box - Confirm that the target text box is blank - Simulate clicking the button - Confirm that the target text box now has the text we entered into the source text box Here's how we do it: [TestMethod] public void TextBoxSubmission() { const string TESTVALUE = "TestValue"; // set up the value var txtBox = MainPageBase<SilverlightAppLauncher>.TextBoxSource; Assert.IsNotNull(txtBox, "Text box is not loaded"); txtBox.SetValue(TESTVALUE); Thread.Sleep(1000); Assert.AreEqual(txtBox.GetValue(), TESTVALUE); // ensure the text block is empty to start with var txtBox2 = MainPageBase<SilverlightAppLauncher>.TextBoxTarget; Assert.IsNotNull(txtBox2, "Target text box is not loaded."); Assert.IsTrue(string.IsNullOrEmpty(txtBox2.GetValue()), "Target text box is not empty."); var btnSubmit = MainPageBase<SilverlightAppLauncher>.Button; Assert.IsTrue(btnSubmit.Current.IsEnabled, "Submit Button is not enabled"); btnSubmit.Click(); Thread.Sleep(1000); var actual = txtBox2.GetValue(); Assert.AreEqual(TESTVALUE, actual, "Text block was not updated."); Thread.Sleep(1000); } As you can see, writing the automation tests is relatively straightforward. We don't have "record and playback" but it's easy to grab a control, tell it to do something, and then query the result. Build the project and make sure there are no errors. Run the Test Remember what I mentioned earlier about lots of steps? We're there. We've set it up, and should be good to go. If you have issues, go back and check the steps to make sure you didn't miss anything. Again, once you get the hang of it, you'll find it's not that difficult to get up and running and to write some great automation tests. There are good examples in the Prism samples, specifically in the MVVM quick start. Let's open the test view: Select the test and click "Debug Selection": You should eventually see a browser window pop up. The browser may complain about security. 
If this happens, simply click on the yellow bar and allow the blocked content. Be sure to do this before the launch times out: You can literally watch the Silverlight application appear in the browser, then see the text entered into the source box, and eventually the button will click and the text should post to the target box. If all goes well, you'll get the familiar green check box: There you go ... Silverlight UI automation! If you've made it this far, then you have what you need to set this up for your own projects. I've heard of some companies who don't use Silverlight because they are under the impression it doesn't support automated testing. If you're at one of those companies, be sure to go grab your manager, drag them to your cube and show them that it should now be an approved platform for you and your fellow developers!
# we obtain a list of word/tag pairs from the Brown corpus
# using news data only
import nltk
brown_news = nltk.corpus.brown.tagged_words(categories="news")

# This dictionary maps each tag to a list of words that have been observed with it
tag_wordlist = { }

# Now we fill the dictionary
for word, tag in brown_news:
    if tag not in tag_wordlist:
        tag_wordlist[ tag ] = [ ]
    tag_wordlist[ tag ].append(word)

# what words have we observed as nouns?
print(tag_wordlist[ "NN" ])

# For the prepositions "in", "on", "up", we collect the words
# that precede them.
# Ideally, we would collect the verbs that form particle verbs
# with these prepositions, like "check in", "take on", "look up",
# but we assume we don't have part-of-speech tags available,
# so collecting preceding words is the next best thing.
import sys
import string

prepositions = [ "in", "on", "up" ]

# we ask the user for a filename from which we can read text
print("Please enter a filename")
filename = input()

# we try to open the file,
# but are prepared for the case
# that the user may have mistyped
try:
    f = open(filename)
except IOError:
    print("sorry, could not open", filename)
    sys.exit(0) # this leaves the program

# we have successfully opened the file, now we read it
contents = f.read()
f.close()

words = [ w.strip(string.punctuation).lower() for w in contents.split() ]
bigrams = nltk.bigrams(words)

prepositions_preceding = { }
for w1, w2 in bigrams:
    if w2 in prepositions:
        # store w1 as a word that preceded a preposition
        if w2 in prepositions_preceding:
            prepositions_preceding[ w2 ].append(w1)
        else:
            prepositions_preceding[ w2 ] = [ w1 ]

for preposition, preceding in prepositions_preceding.items():
    print(preposition, preceding)
Content Publishing Manager, Windows Mobile SDK The Great Pyramid of Khufu, at Giza, Egypt consists of approximately 2.3 million blocks of stone, is 775 feet long on each side and more than 451 feet high, and took nearly thirty years for a force of 100,000 slaves to construct. Sometimes getting an instance of BizTalk Server up and running can feel just like that. However, with the right understanding it doesn’t have to. Installing BizTalk Server 2004 itself is not terribly difficult. In fact it's pretty fast and straightforward. The setup process takes roughly 25 minutes. What can be difficult is building the platform necessary to execute the BizTalk Server runtime. This is the part that makes you feel like you are building the Great Pyramid of Khufu. This article discusses the intricacies of integrating the software required to run BizTalk Server 2004 and how to troubleshoot the most common problems that crop up. Key Assumption: This article assumes a single-server installation of BizTalk Server 2004 running on Windows Server 2003. Although some of the content applies specifically to a single-server installation on Windows Server 2003, most of this content can be applied to all of the supported deployments. Understanding the Prerequisite Software. To troubleshoot anything, you need a basic understanding of the environment that you are going to work on. The following is a quick overview of the key software pieces that you need to understand: Internet Information Services (IIS) What is it? Microsoft Internet Information Services (IIS) is a powerful Web server that provides a highly reliable, manageable, and scalable Web application infrastructure. IIS is optional and must be installed separately after you have installed the base operating system on your computer. How does it affect my BizTalk Server installation? Depending on which components of BizTalk Server you use, you will have to configure IIS to work in that environment. 
Here are the pieces that require IIS: Windows SharePoint Services Before I go any further, let me first state what it is not. Windows SharePoint Services is not SharePoint Portal Server. They are two different yet similar things. Windows SharePoint Services is a collection of services for Microsoft Windows Server 2003 that you can use to share information, collaborate with other users on documents, and create lists and Web Part pages. SharePoint Portal Server, on the other hand, is a secure, scalable, enterprise portal server built upon Windows SharePoint Services that you can use to aggregate SharePoint sites, information, and applications in your organization into a single, easy-to-use portal. Basically, SharePoint Portal Server is one big cool application built using the Windows SharePoint Services framework. You do not have to install Windows SharePoint Services unless you plan on using Business Activity Services (BAS). BAS is a Web application hosted within Windows SharePoint Services that allows you to interact with trading partners in a collaborative environment. It's also important to note that you could install SharePoint Portal Server 2003 instead of Windows SharePoint Services. SQL Server 2000 Microsoft SQL Server 2000 is an enterprise-class relational database server capable of efficiently processing high volumes of critical data. The BizTalk Server 2004 engine provides the capability to specify a business process and a mechanism for communicating between applications the business process uses. The BizTalk Server 2004 core engine uses SQL Server 2000 as the main repository for this communication mechanism. SQL Server 2000 is a required piece of the overall architecture. When you install and configure BizTalk Server 2004, the following SQL Server databases are created: On a single-server installation, the following Windows accounts are created locally when you install and configure BizTalk Server 2004. 
Items denoted with an asterisk (*) are added as SQL Server accounts:

The following SQL Server jobs are also created:

Visual Studio .NET 2003

Microsoft Visual Studio .NET 2003 provides a powerful, enterprise team development environment for rapidly building mission-critical applications that target any device and integrate with any platform. Visual Studio .NET is important for the development process in BizTalk Server. To install the BizTalk Server 2004 development tools, you must install Visual Studio .NET 2003 on your computer. The BizTalk Server development tools are based on Visual Studio .NET 2003. Therefore, at a minimum, you must have the Visual C# .NET portion of Visual Studio .NET 2003 installed on your computer before installing the BizTalk Server development tools. You must also install the Visual Studio product documentation for BizTalk Server User Interface (F1) Help to work in the Visual Studio environment.

Additional Software

The following are some of the additional software components that may or may not be required depending on which components of BizTalk Server you are going to use.

SQL Server 2000 Analysis Services

SQL Server 2000 Analysis Services is the next generation of the OLAP Services component that shipped in SQL Server 7.0. Analysis Services is an easy-to-use, integrated, and scalable set of components that enables you to build multidimensional cubes and provide application programs with access to the cubes. SQL Server Analysis Services is optional for a BizTalk Server 2004 installation. It is required only if you want to use Health and Activity Tracking (HAT) or Business Activity Monitoring (BAM).

SQLXML 3.0

SQLXML enables XML support for your SQL Server database. It enables developers to bridge the gap between XML and relational data. SQLXML 3.0 is optional for a BizTalk Server 2004 installation. It is required only if you want to use the SQL adapter.
XML Core Services

XML Core Services (formerly known as MSXML, for Microsoft Extensible Markup Language or XML) is an application for processing Extensible Stylesheet Language Transformations (XSLT) in an XML file. XML Core Services is a required piece of software for your BizTalk Server installation.

Microsoft Office XP Tool: Web Components (OWC10)

Microsoft Office Web Components are a collection of Component Object Model (COM) controls for publishing spreadsheets, charts, and databases to the Web, and for viewing the published components on the Web. OWC10 is optional for a BizTalk Server 2004 installation. It is required only if you want to use Health and Activity Tracking (HAT).

Microsoft Office InfoPath 2003

InfoPath 2003 can help teams and organizations efficiently gather the information they need through rich, dynamic forms. The information collected can easily be reused throughout organizations and across business processes because InfoPath 2003 supports industry-standard Extensible Markup Language (XML) using any customer-defined schema. InfoPath is optional for a BizTalk Server 2004 installation. It is required if you plan on configuring Business Activity Services (BAS).

Troubleshooting Common Issues

Now that you have a general understanding of the major prerequisite software that forms the foundation required for a BizTalk Server 2004 installation, let's look at some of the common problems you may encounter, along with their resolutions. The following topics will be discussed:

Troubleshooting Internet Information Services (IIS) Issues

The following is a list of the most common IIS problems you may run into when building your BizTalk Server 2004 platform.
IIS Problem
When you try to open a page on your server, you get the following error:

Page Cannot Be Found
HTTP 404 - File not found

Cause & Resolution
To resolve this problem, verify that the file requested in the browser's URL exists on the IIS computer and that it is in the correct location. See the following for more information:

248033 Common reasons IIS Server returns "HTTP 404 - File not found" error

IIS Problem
When you try to open a page on your server, you get the following error:

The Page Cannot Be Displayed
Cannot find server or DNS Error

Cause & Resolution
There are a number of reasons that you could be encountering this. The most common problem occurs when you do not have your Internet Explorer connection settings configured properly. To resolve this problem, see the following article:

326155 "The Page Cannot Be Displayed" Error Message When You Try to Start Internet Explorer

IIS Problem
When you try to open a page on your server, you get the following error:

401 - Access denied

Cause & Resolution
IIS defines a number of different 401 errors that indicate a more specific cause of the error, and these specific error codes are displayed in the browser. The 401 error messages can occur for a variety of reasons. See the following article for more information:

318380 IIS Status Codes

IIS Problem
When you try to open a page on your server, you get the following error in the browser:

500 - Internal server error

Cause & Resolution
This error message can occur for a wide variety of server-side errors. Your Event Viewer logs will contain more information about why this error occurs. Additionally, you can disable friendly HTTP error messages to receive a detailed description of the error. For additional information about how to disable friendly HTTP error messages, see the General Troubleshooting Information section in this article.

IIS Problem
When you try to open a page on your server, you get the following error in the browser:

Service Unavailable

Cause & Resolution
This occurs because the application pool hosting the Web site you are trying to view has stopped or been disabled. Check the Event Viewer for the reason the pool stopped, then restart the application pool in IIS Manager.
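The browser errors above each point to a different first step. As a rough aid (the status-to-note mapping and the `probe` helper below are my own sketch, not part of the article), a small script can fetch a URL, report the status code, and echo the matching note:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# First-step notes keyed by status code, summarizing the troubleshooting
# entries above. The wording here is my own condensation.
ADVICE = {
    404: "File not found: verify the requested file exists at the expected location (KB 248033).",
    401: "Access denied: review the specific 401 sub-code shown in the browser (KB 318380).",
    500: "Internal server error: check Event Viewer; disable friendly HTTP errors for detail.",
    503: "Service Unavailable: check the state of the site's application pool.",
}

def advice_for(status):
    """Return the first-step note for a status code, with a generic fallback."""
    return ADVICE.get(status, "See the IIS status code documentation (KB 318380).")

def probe(url):
    """Return the HTTP status for url, or None when no HTTP response came back."""
    try:
        with urlopen(url, timeout=10) as resp:
            return resp.status
    except HTTPError as err:
        return err.code          # 4xx/5xx responses still carry a status code
    except URLError:
        return None              # DNS failure or connection refused
```

A status of `None` from `probe` corresponds to the "Cannot find server or DNS Error" case, which is a connectivity problem rather than an IIS response.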
Recommended Support Articles for IIS

842493 You receive a "Service Unavailable" error message when you browse an IIS 6.0 Web page on a Windows Server 2003-based domain controller
Troubleshooting various issues in IIS 6.0

Troubleshooting Windows SharePoint Services Issues

The following is a list of the most common Windows SharePoint Services problems you may run into when building your BizTalk Server 2004 platform.

Windows SharePoint Services Problem
When creating a content database using the SharePoint Central Administration tool, you get the following error in the browser:

Login failed for user 'NT AUTHORITY\NETWORK SERVICE'

Cause & Resolution
This issue occurs when the database owner of the database that you are connecting to is different from the application pool identity that Windows SharePoint Services is running under. To resolve this issue, you can do one of two things:

Windows SharePoint Services Problem
When you open the SharePoint Central Administration tool, you get the following error in the browser:

Cause & Resolution
This occurs because the application pool hosting the Web site for the SharePoint Central Administration tool has stopped or been disabled. Restart the application pool in IIS Manager.

Windows SharePoint Services Problem
When trying to extend and create a content database, you get the following error in the browser:

Cannot connect to the configuration database

Cause & Resolution
There are a few reasons that this can occur. The most common reason this occurs when building your BizTalk Server 2004 platform is that the account that is used by the application pool that is running the SharePoint Central Administration site does not have the required permissions to the SQL Server database.
To resolve this issue, you can do one of two things:

Windows SharePoint Services Problem
When you open the SharePoint Central Administration tool, you get the following error in the browser:

Cause & Resolution
This is not a Windows SharePoint Services issue; it is most likely a proxy error, IIS configuration error, or network connectivity issue. For information about how to resolve this type of problem, see the Troubleshooting Internet Information Services (IIS) Issues section of this article.

Windows SharePoint Services Problem
After restarting the server, you see the following error in the event logs:

The SharePoint Timer Service service failed to start

You may also see the following error when you open the SharePoint Central Administration tool:

Unable to connect to the WSS configuration database STS_Config

Cause & Resolution
This problem generally occurs when the SharePoint Timer service fails to contact the Windows SharePoint Services database when rebooting. The SharePoint Timer service is used to send notifications and perform scheduled tasks for Windows SharePoint Services. The most common reason the SharePoint Timer service cannot reach the database is that SQL Server has not finished starting when the Timer service attempts to connect; restarting the SharePoint Timer service once SQL Server is running typically resolves the problem.

Note: If you are using Small Business Server, there is another possible cause that is covered in Microsoft Knowledge Base Article 840685.

Recommended Support Articles for Windows SharePoint Services

832769 How to configure a Windows SharePoint Services virtual server
833797 How to back up and restore installations of Windows SharePoint Services
823287 You receive a "Cannot connect to the configuration database" error
You Receive a "Service Unavailable" Error Message When You Browse a Windows SharePoint Services Web Site
You receive a "Database <Database_Name> already exists" error message when you manage your Windows SharePoint Services content database

Troubleshooting SQL Server Issues

The following is a list of the most common SQL Server problems you may run into when building your BizTalk Server 2004 platform.
SQL Server Problem
After restarting the server, you see one or both of the following errors in the event logs:

MSSQLSERVER service failed to start
SQLSERVERAGENT service failed to start

Cause & Resolution
The most common reason the MSSQLSERVER and SQLSERVERAGENT services fail to start is a problem with the service startup account: for example, the account password has changed or the account no longer has the required logon rights. Verify the account and password configured for each service in the Services console, then start the services manually.

Recommended Support Articles for SQL Server

SQL Server 2000 Frequently Asked Questions - Setup
SQL Server 2000 Frequently Asked Questions - Tools

Troubleshooting Visual Studio .NET Issues

The following is a list of the most common Visual Studio .NET problems you may run into when building your BizTalk Server 2004 platform.

Visual Studio .NET Problem
When trying to create an ASP.NET Web application in Visual Studio .NET 2003, you get the following error:

The Web server reported the following error when attempting to create or open the Web project located at the following URL: ''. 'HTTP/1.1 503 Service Unavailable'.

Cause & Resolution
This occurs because the application pool hosting the Web site on which you are attempting to create the application has stopped or been disabled. Restart the application pool in IIS Manager.

Visual Studio .NET Problem
When trying to create an ASP.NET Web application in Visual Studio .NET 2003, you get one of the following errors:

The default Web access mode for this project is set to file share, but the project folder at cannot be opened with the path 'c:\inetpub\wwwroot\WebApplication1'. The error returned was: Unable to create Web project 'WebApplication1'. The file path 'c:\inetpub\wwwroot\WebApplication1' does not correspond to the URL ''. These two need to map to the same server location.

HTTP Error 404:

Cause & Resolution
This problem most likely occurs because you are trying to create a Web application on an IIS virtual server that has been extended with Windows SharePoint Services. Once an IIS virtual server has been extended by Windows SharePoint Services, a collision occurs between the Windows SharePoint Services Internet Server API (ISAPI) filter and the approach Visual Studio .NET uses to create the Web project on the Web server.
To resolve this problem, you need to use the Windows SharePoint Services Define Managed Paths tool to exclude the URL namespace that is currently being managed by Windows SharePoint Services.

Recommended Support Articles for Visual Studio .NET

319714 How to troubleshoot Visual Studio .NET installation
Visual Studio .NET 2003 Setup May Fail When Antivirus or Firewall Program Is Running

General Troubleshooting Information

Successful troubleshooting depends on your ability to identify the source of the problem. Like anything else, it comes with practice. If you find the source, you are 90 percent of the way there. A quick way to isolate most problems associated with servers is to follow these rules:

For further information about troubleshooting specific issues related to various components of BizTalk Server 2004, see the Troubleshooting node in BizTalk Server 2004 Help.

Let's discuss a few more important items for troubleshooting. These are:

Analyzing Log Files

One of the most important aspects of effectively troubleshooting your system is the ability to read and understand various log files.

IIS Log Files

Internet Information Services (IIS) 6.0 offers a number of ways to record the activity of your Web sites, File Transfer Protocol (FTP) sites, Network News Transfer Protocol (NNTP) service, and Simple Mail Transfer Protocol (SMTP) service. It also allows you to choose the log file format that works best for your environment. IIS logging is designed to be more detailed than the event logging or performance monitoring features of the Windows Server 2003 operating systems.

Windows SharePoint Services Log Files

Troubleshooting Windows SharePoint Services goes hand-in-hand with troubleshooting IIS issues. If you are having a problem in Windows SharePoint Services, first analyze your IIS logs, and then look into your event logs.
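IIS 6.0 logs in W3C extended format by default: a `#Fields:` directive names the columns, and each later line records one request. As a rough sketch of the kind of IIS log analysis described above (the sample log text is invented for illustration, and assumes your sites use the W3C extended format), the following tallies status codes so spikes of 404s or 503s stand out:

```python
from collections import Counter

# Invented sample in W3C extended format; real logs live under
# %windir%\System32\LogFiles by default.
SAMPLE_LOG = """\
#Software: Microsoft Internet Information Services 6.0
#Version: 1.0
#Fields: date time cs-method cs-uri-stem sc-status
2004-05-01 12:00:01 GET /default.htm 200
2004-05-01 12:00:02 GET /missing.htm 404
2004-05-01 12:00:03 GET /secure/page.htm 401
2004-05-01 12:00:04 GET /default.htm 503
"""

def status_counts(log_text):
    """Count sc-status values in a W3C extended format log."""
    fields = []
    counts = Counter()
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]      # column names for the data lines
        elif line.startswith("#") or not line.strip():
            continue                       # other directives and blank lines
        elif fields:
            row = dict(zip(fields, line.split()))
            counts[row.get("sc-status", "?")] += 1
    return counts
```

Running `status_counts(SAMPLE_LOG)` on the sample yields one hit each for 200, 404, 401, and 503; on a real log, a large 503 count points straight at the application pool problems discussed earlier.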
Windows SharePoint Services also includes a usage analysis logging feature to collect and evaluate information about how a Web site is being used, such as visitor user names, number of visits to each page, and the types of Web browsers used. By default, log files for usage analysis processing are stored in the following folder, where Drive is the drive where Windows Server 2003 is installed:

Drive:\WINNT\System32\LogFiles\STS

A subfolder exists for each virtual server, and in each of these subfolders are folders that are created for each day. If you specify a different location to store log files, make sure that the STS_WPG group has Read, Write, and Update permissions to the folder.

Event Logs

The Event Viewer often contains valuable information that is critical to the troubleshooting process. When a site or a site system is experiencing a problem, always check both the Application and System event logs to determine if the problem is caused by temporary network problems, device drivers, or third-party software.

Must-Have Tools

The following tools are handy to have around when troubleshooting various issues on your server.

Regmon
Regmon is a registry monitoring utility that shows you in real time which applications are accessing your registry, which keys they are accessing, and the registry data that they are reading and writing.

Error Code Lookup Tool
The Error Code Lookup Tool helps you determine error values from decimal and hexadecimal error codes in Windows operating systems. The tool can look up one or more values at a time.

Error Lookup Tool for BizTalk
The Error Lookup Tool for BizTalk is a tool that you can use to quickly look up error codes related to BizTalk Server 2004 from within Visual Studio .NET. Note that this is an unsupported and undocumented tool.
Troubleshooting Tips

Here are some great tips to keep in your bag while troubleshooting your server:

Additional Resources

Installation & Product Documentation
Microsoft BizTalk Server 2004 Installation Guide
Microsoft BizTalk Server Documentation Updates
QuickStart Guide to Installing Microsoft BizTalk Server 2004

Knowledge Base Articles
How to cluster the Enterprise Single Sign-On (SSO) service on the master secret server in BizTalk Server 2004
How to enable network DTC access in Windows Server 2003
How To Enable Network COM+ Access in Windows Server 2003
The Enterprise Single Sign-On Service and associated BizTalk Server 2004 services fail after you install Windows XP Service Pack 2 (SP2)

Comments

I am getting the following error while installing BizTalk on Windows Server 2003:

Error 5003: Regsvcs failed for assembly C:\Program Files\Microsoft BizTalk Server 2006\Microsoft.BizTalk.Deployment.dll. Return code 1.

Please help.

I am getting the following error while installing BizTalk on Windows 2000 Server.

Try manually registering the DLL. Just drag and drop it into the %systemroot%\assembly folder. If it fails to register, it would be helpful to run depends.exe on it to see if anything is missing. Please post the results, along with anything relevant from eventvwr.exe, to the BizTalk newsgroups.

Yes, your solution of dropping Microsoft.BizTalk.Deployment.dll into the %systemroot%\assembly folder worked for me. Thanks a lot.

I am trying to install BizTalk Server 2006 Developer Edition on Windows XP SP2. I tried the above dropping of Microsoft.BizTalk.Deployment.dll into the assembly folder. That worked, but the installation still fails (see below). I noticed that Windows Defender found a problem at about the time of the failure. Disabling real-time protection did not help. I'm trying other combinations. Any suggestions?
Brad

[08:59:24 Info] Action 8:59:24: RegsvcsDeployment.D7298A41_5E0F_4d72_9A44_B1F4C74FF805. Registering BizTalk Deployment COM+ application
[08:59:26 Info] Error 5003.Regsvcs failed for assembly C:\Program Files\Microsoft BizTalk Server 2006\Microsoft.BizTalk.Deployment.dll. Return code 1.

I finally got BTS 2006 to install. I had to install in two parts: the default items in the first install, and then I added all of the optional software. The security software has no impact on the installation; I tried turning off all AV/spyware checkers and got the same problem. I even tried reinstalling the .NET 2.0 Framework. The good news is it is done. The bad news is I don't know the root cause.

Hi Brad, thanks for coming back and closing that out. So it worked by doing a simple install and then coming back and adding all of the software? That's a first ... and yes, you are right that determining root cause is difficult at best now. However, the install/config logs can be quite telling. Were there any interesting items in there? Sorry for the late response. -luke

Server 2006\Microsoft.BizTalk.Deployment.dll. Return code -532459699." I have registered this assembly into %systemroot%\assembly, but I am getting the same error. Before installing BizTalk Server 2006, I registered the above DLL and then installed the software. I am still getting the same error.

Great information, and yes, it really helps if you start from the Event Viewer. That's where you have all the error logs recorded. Tina

Hi Luke, I am getting the following error:

Failed to create Management database "BizTalkMgmtDb" on server "....." This version of BizTalk requires Microsoft SQL Server 2000 with Service Pack 3 or later.

Thanks in advance! Kilian

Great article Luke, it's effective at highlighting those other supporting/prerequisite software packages that BizTalk interacts with. Thanks v.much
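The Error Code Lookup Tool mentioned earlier works from decimal and hexadecimal codes, but installers usually report signed 32-bit decimal return codes, like the -532459699 seen in the comments above. A small conversion (a sketch of the standard two's-complement arithmetic, not part of any Microsoft tool) bridges the two forms:

```python
def to_unsigned_hex(code, bits=32):
    """Render a signed return code in the unsigned hex form lookup tools expect."""
    return format(code & (2 ** bits - 1), "08X")

def from_unsigned_hex(text, bits=32):
    """Convert a hex code such as 'E0434F4D' back to its signed decimal form."""
    value = int(text, 16)
    return value - 2 ** bits if value >= 2 ** (bits - 1) else value

print(to_unsigned_hex(-532459699))   # E0434F4D
print(from_unsigned_hex("E0434F4D"))  # -532459699
```

So the -532459699 from the comment thread is 0xE0434F4D, a form you can feed to the Error Code Lookup Tool; interpreting what that particular code means is still up to the tool and the event logs.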
http://blogs.msdn.com/luke/articles/241024.aspx
How Democratizing College Coaching Tackles Key Problems Plaguing Today's Teens

1. The problem understood more broadly

21st-century teens are depressed and anxious, increasingly so, paving the way for unstable adulthoods. Parents hold onto their own personal narrative, even as their children's adulthood portends to be much less safe and structured than theirs was. No wonder 13-year-olds are writing to Matt Haig.

2. The problem understood in terms of college planning

If ever there were a universal trigger for teen stress and parent-teen strife in today's society, it's the prospect of college admissions. The mass-media model of hyperperfection, the lack of clear direction around defining and executing passions, the indeterminacy coupled with the high expectations of the college admissions process: all of this has become overwhelmingly emotionally taxing for most teens. As a consequence, teenhood feels more like a gauntlet than a time of discovery, growth, love, and fun.

To top it all off, teens often face adult exasperation about their poor time-management skills, lack of direction, goals, etc. Teens know they lack these skills. Parents and teachers are aware as well. The problem? There seems to be no realization that teens need to be taught these things, that they can be taught these things, and that someone needs to do it.

My diagnosis: teens are not taught in a formal way the soft skills they need to become developmentally ready for college. The skills I've located in my practice are:

- Decision-making: Including time-management, ethical dilemmas, and choosing among options (non-stress and stress situations).
- Prioritizing: The key to decision-making is prioritizing, but teens do not know this; it's why they're often frozen when it comes to tough choices. Teens don't necessarily know how to engage in the deliberation process that results in determining priorities, or don't know that this exists and need to be taught.
- Defining interests: With the societal mandate to get a job that pays well, teens can censor themselves and don't realize that a career comes much later, after they've defined who they are and what they want. Students do not have a guide for an independent audit of possible areas of exploration.
- Activating interests: Teens are not given any guidance on how to reach beyond their personal comfort zone and find best-fit and/or unique extracurricular activities.
- Researching opportunities: Teens don't know how to research internships, volunteer opportunities, independent projects, summer programs, or colleges.
- Project management: This skill is one of the most difficult for teens, as it involves more advanced executive functions, yet it is so important for the contemporary college applicant (working collaboratively is a connected skill).
- Communicating pre-professionally (written and verbal) with adults: Teens have no clue, which is a real roadblock in securing employment, internships, and/or volunteering opportunities.
- Writing mini-memoir style personal statements: This is so fear-inducing for teens. Why are they asked to do something so high-stakes that they have not been taught to do and that is developmentally mismatched for them? Yet they can be taught to do this.
- Remaining calm when facing stress: A modern-day soft skill on par with the others at this point; it must be taught, and it can be taught!

Because no societal institution — neither secondary education, nor the family, nor religious institutions — offers systematic training for high school students to become acquainted with these skills, practice them, stumble along the way, and hone them in the process, the prospect of college, instead of serving as an inspiring motivator, has become a demoralizing experience.
The only vendors I can detect possibly offering this kind of educational personal-growth training are community-based organizations (CBOs) serving select underprivileged students, many of them first-generation applicants, and private college counselors who mostly cater to affluent families to sustain their livelihoods. As a result, most American teens lack virtually all the resources they need to develop themselves uniquely and independently. Yet that is exactly what they need to become successful adults.

College readiness haunts teens who feel rudderless in tumultuous waters with no clear anchors. They feel unequipped and unprepared for the coming challenge; they are right, and they know it. This extreme level of indeterminacy combined with a perceived sense of neglect by those around them, from my observational standpoint, contributes to high (even crippling) levels of anxiety in teens.

3. According to student-led market research, teens agree with this diagnosis

In the summer of 2018, my company hired a former student who had been very successful in admissions and is a talented organizer to conduct market research on what graduating high school seniors felt they lacked and needed in the college admissions process. Our high-school marketing intern surveyed students from two high schools in her hometown, Gilroy, CA, 30 miles south of San Jose. These students had not worked with a college counselor. Student admits from this pool range from UC Berkeley to CSU Monterey Bay to University of Utah.

Here's a sampling of responses to the question: "What do you wish you'd known about the college admissions process before you started?"

- I wish they told us about how to find meaningful volunteer opportunities, how to be a leader in your community, how to be passionate about something and pursue it. Not just for college, but for life!
- The amount of self-reflection that had to go into those was tough.
- I wish I knew how tough the whole process was going to be. It was really, really hard.
I thought you just fill out some information and write essays, but it was a lot more stressful than that.
- I wish people knew you could write a good essay without dropping in all your fabulous accomplishments.
- I wish I knew that colleges actually cared about everything I did in high school. That junior year really does matter.
- I was surprised by how much leadership and community service mattered. It's a lot harder to prove passion for a hard science subject than it is to list extracurriculars.
- I wish I knew that it was more leadership-based. I might've tried to do more leadership stuff.
- I wish I had more support from the school. It was hard to get in touch with my counselor, and my teachers didn't know/weren't trained for this.
- Early in high school, I wish I knew the importance of doing things that you enjoy. If people don't do what they enjoy, they tend to do it formulaically.
- I wish I knew how demanding it really is.
- I wish I narrowed down my schools more so I could have wasted less money. I applied to too many schools that were outside of my range; I wish I had just applied to five or so.
- I wish I knew more about how scholarships worked.
- I wish I had a mentor that would help me find more activities to do outside of school and guide me through the financial aid process.

Teens are clearly hungry for help. They also know exactly what kind of help they need. Why aren't they getting it?

4. The good news: college coaching calms students down and optimizes their success

There is good news!
According to Nicholas Allen, a child development specialist at the University of Oregon, “Young people, empirical studies of all stripes show, struggle with decisions made in the heat of the moment… but when it comes to decisions that allow them time for reflection, the evidence suggests an adolescent’s skills can be on-par with a fully-grown adult.”⁴ This means that if given the proper life coaching well in advance — starting in ninth grade, not eleventh — teens can build healthy habits and master the soft skills they need for their high school years and beyond. They just need a formal system and a teacher who will not only introduce these skills but provide guidance in the practice of honing them. There is even more good news! According to our student-led study, teens want this kind of guidance and would pursue the opportunity of small-group college coaching if presented with it in the form of a low or no-cost offering. As part of the survey, students were shown a description of this workshop. Here’s a sampling of responses to our question: “Would you have benefitted from this kind of program?” - I think I would have. The internship and summer programs thing would have been useful. Having someone help me make a plan for every single year would have been so cool, it’s kind of hard to find opportunities for yourself while you’re busy in school and studying. - If this program was affordable and accessible here, I would have probably done it. And it would have improved my application as a whole. - Definitely. It would be helpful to connect with internship opportunities, because that sets you up for being a good college applicant. Just being able to go through the motions with a guide/mentor is gonna be helpful for any student who is trying to apply. - I think the average student would benefit; it really depends on if they can afford it. - I would have benefited a lot because I wouldn’t have felt so lost figuring out the college process myself. 
- I would’ve liked the guidance from more experienced professionals who could help me. - I definitely would! I don’t think it would be hard to motivate the kids. If you just go up to them and tell them about the opportunities, they’d want to do it. - If I knew about this program, I would have gone, because I needed all the help I could get! I was really scared at the beginning of this process. So having any help would have been good. - Yes, definitely! I didn’t realize what I wanted to major in until late junior year. With this type of program, I would’ve figured out what I wanted to do earlier and maybe have done more internships, making me a better candidate. - Easily. It would have been nice to be more informed and in the loop as an underclassman, as opposed to being shocked with everything in junior year. Teens are clearly eager for this help. Why aren’t they getting it? 5. The best news: it is possible to provide this kind of coaching at a low cost for many American teens. It need not be limited to affluent students or the pool of fortunate underserved students who get noticed by a CBO. Blue Stars has a wealth of resources that could change teens’ lives. How do we get them to teens across the country? In the effort to expand my practice beyond a boutique one-on-one model serving affluent families, my company has undertaken a number of initiatives: - We helped a number of underserved students, several of them undocumented, gain admission and scholarships to four-year schools based on a one-on-one model, which was funded partially through a crowdfunding campaign called “Dream Schools are for Everyone.” - We created two low-cost 40-hour workshops, one for college planning fundamentals and the other for admissions work later in high school, which could be installed in any educational institution. There are many possibilities for using this material both on site and online. 
The purpose of this white paper is to frame the necessity and value of this material and to invite deeper discussion of it.

- We recently created a new mini-version of the college planning fundamentals workshop, which could serve as a 1–2 hour introduction to what it would look like for a teen to take control of their unique college path. This program could function in person or online. It is also something that could be integrated as an add-on into teen programs serving other needs (leadership, sports, volunteering).

There are so many ways worksheets, guided interaction, and expert instruction can be integrated into teen life so that they learn their personal-growth and pre-professional lessons before paralyzing panic sets in. The work my team and I have done with students has contributed to improved self-regulation and to significant strides in maturity, calm, and confidence. As I follow our college planning students into college, they continue to impress me with their great decisions and steady intention. Hopefully, this white paper can serve as a call to action in new arenas.

As I've been observing teens, their parents, and the challenges they face over the years, I keep returning to the work of the great Dale Carnegie, author of How to Win Friends and Influence People, whose courses on "learning to speak effectively" and "preparing for leadership" played to packed ballrooms in New York City and Philadelphia 80 years ago. American workers knew they needed something more to get ahead. They knew that reading, writing, and arithmetic got them only so far. Soft skills were imperative, and they flooded Carnegie's public classes to get them. I suspect that if given the opportunity, the teen response to such a growth opportunity might be similar.
I invite you to join me in exploring what might happen if teens were taught soft skills in a systematic way such that their high school years could serve as an enlightening time of growth, connection, and preparation. Let's tap into teens, rather than leave them bewildered and feeling judged. We can do it!

(1) Dermendzhiyska, Elitsa. (2018). I Left My Cushy Job to Study Depression. Here's What I Learned. Medium.
(2) Lewis, Katherine Reynolds. (2018). The Good News About Bad Behavior [iBook version]. Retrieved from iTunes.Apple.com.
(3) Harris, Nadine Burke. (2014, September). How childhood trauma affects health across a lifetime. [Video file].
(4) Allen, Nicholas. (2018). We shouldn't disregard the ideas that come from teens' developing brains. Popular Science.
https://medium.com/@DrMBlueStars/how-democratizing-college-coaching-tackles-key-problems-plaguing-todays-teens-a6e784fecc15
Log message: ruby-nokogiri: update to 1.8.2. Upstream changelog (from CHANGELOG.md): # 1.8.2 / 2018-01-29 ## Security Notes [MRI] The update of vendored libxml2 from 2.9.5 to 2.9.7 addresses at least one \ published vulnerability, CVE-2017-15412. [#1714 has complete details] ## Dependencies * [MRI] libxml2 is updated from 2.9.5 to 2.9.7 * [MRI] libxslt is updated from 1.1.30 to 1.1.32 ## Features * [MRI] OpenBSD installation should be a bit easier now. [#1685] (Thanks, \ @jeremyevans!) * [MRI] Cross-built Windows gems now support Ruby 2.5 ## Bug fixes * Node#serialize once again returns UTF-8-encoded strings. [#1659] * [JRuby] made SAX parsing of characters consistent with C implementation \ [#1676] (Thanks, @andrew-aladev!) * [MRI] Predefined entities, when inspected, no longer cause a segfault. [#1238] Log message: Actually take maintainership (missed in the previous commit). Log message: nokogiri: update to 1.8.1. This version is necessary for ruby-mini_portile2 2.3.0 in pkgsrc-2017Q3. pkgsrc changes: - strict dependency against ruby-mini_portile2 as defined in the Gemfile - take maintainership Upstream changes (from CHANGELOG.md): # 1.8.1 / 2017-09-19 ## Dependencies * [MRI] libxml2 is updated from 2.9.4 to 2.9.5. * [MRI] libxslt is updated from 1.1.29 to 1.1.30. * [MRI] optional dependency on the pkg-config gem has had its constraint \ loosened to `~> 1.1` (from `~> 1.1.7`). [#1660] * [MRI] Upgrade mini_portile2 dependency from `~> 2.2.0` to `~> 2.3.0`, \ which will validate checksums on the vendored libxml2 and libxslt tarballs \ before using them. ## Bugs * NodeSet#first with an integer argument longer than the length of the NodeSet \ now correctly clamps the length of the returned NodeSet to the original length. \ [#1650] (Thanks, @Derenge!) * [MRI] Ensure CData.new raises TypeError if the `content` argument is not \ implicitly convertible into a string. [#1669] Log message: Update ruby-nokogiri to 1.8.0. 
# 1.8.0 / 2017-06-04 ## Backwards incompatibilities This release ends support for Ruby 2.1 on Windows in the `x86-mingw32` and \ `x64-mingw32` platform gems (containing pre-compiled DLLs). Official support \ ended for Ruby 2.1 on 2017-04-01. Please note that this deprecation note only applies to the precompiled Windows \ gems. Ruby 2.1 continues to be supported (for now) in the default gem when \ compiled on installation. ## Dependencies * [Windows] Upgrade iconv from 1.14 to 1.15 (unless --use-system-libraries) * [Windows] Upgrade zlib from 1.2.8 to 1.2.11 (unless --use-system-libraries) * [MRI] Upgrade rake-compiler dependency from 0.9.2 to 1.0.3 * [MRI] Upgrade mini-portile2 dependency from `~> 2.1.0` to `~> 2.2.0` ## Compatibility notes * [JRuby] Removed support for `jruby --1.8` code paths. [#1607] (Thanks, @kares!) * [MRI Windows] Retrieve zlib source from to avoid \ deprecation issues going forward. See #1632 for details around this problem. ## Features * NodeSet#clone is not an alias for NodeSet#dup [#1503] (Thanks, @stephankaag!) * Allow Processing Instructions and Comments as children of a document root. \ [#1033] (Thanks, @windwiny!) * [MRI] PushParser#replace_entities and #replace_entities= will control whether \ entities are replaced or not. [#1017] (Thanks, @spraints!) * [MRI] SyntaxError#to_s now includes line number, column number, and log level \ if made available by the parser. [#1304, #1637] (Thanks, @spk and @ccarruitero!) * [MRI] Cross-built Windows gems now support Ruby 2.4 * [MRI] Support for frozen string literals. [#1413] * [MRI] Support for installing Nokogiri on a machine in FIPS-enabled mode [#1544] * [MRI] Vendored libraries are verified with SHA-256 hashes (formerly some MD5 \ hashes were used) [#1544] * [JRuby] (performance) remove unnecessary synchronization of class-cache \ [#1563] (Thanks, @kares!) * [JRuby] (performance) remove unnecessary cloning of objects in XPath searches \ [#1563] (Thanks, @kares!) 
* [JRuby] (performance) more performance improvements, particularly in XPath, \ Reader, XmlNode, and XmlNodeSet [#1597] (Thanks, @kares!) ## Bugs * HTML::SAX::Parser#parse_io now correctly parses HTML and not XML [#1577] \ (Thanks for the test case, @gregors!) * Support installation on systems with a `lib64` site config. [#1562] * [MRI] on OpenBSD, do not require gcc if using system libraries [#1515] \ (Thanks, @jeremyevans!) * [MRI] XML::Attr.new checks type of Document arg to prevent segfaults. [#1477] * [MRI] Prefer xmlCharStrdup (and friends) to strdup (and friends), which can \ cause problems on some platforms. [#1517] (Thanks, @jeremy!) * [JRuby] correctly append a text node before another text node [#1318] (Thanks, \ @jkraemer!) * [JRuby] custom xpath functions returning an integer now work correctly [#1595] \ (Thanks, @kares!) * [JRuby] serializing (`#to_html`, `#to_s`, et al) a document with explicit \ encoding now works correctly. [#1281, #1440] (Thanks, @kares!) * [JRuby] XML::Reader now returns parse errors [#1586] (Thanks, @kares!) * [JRuby] Empty NodeSets are now decorated properly. [#1319] (Thanks, @kares!) * [JRuby] Merged nodes no longer results in Java exceptions during XPath \ queries. [#1320] (Thanks, @kares!) # 1.7.2 / 2017-05-09 ## Security Notes [MRI] Upstream libxslt patches are applied to the vendored libxslt 1.1.29 which \ address CVE-2017-5029 and CVE-2016-4738. For more information: * * … -5029.html * … -4738.html Log message: Update ruby-nokogiri to 1.7.1. # 1.7.1 / unreleased ## Security Notes [MRI] Upstream libxml2 patches are applied to the vendored libxml 2.9.4 which \ address CVE-2016-4658 and CVE-2016-5131. For more information: * * … -4658.html * … -5131.html ## Dependencies * [Windows] Upgrade zlib from 1.2.8 to 1.2.11 (unless --use-system-libraries) Log message: Now gemspec dose not require ruby-pkg-config any more. Bump PKGREVISION. Log message: Updated ruby-nokogiri to 1.7.0.1. 
# 1.7.0.1 / 2017-01-04 ## Bugs * Fix OpenBSD support. (#1569) (related to #1543) # 1.7.0 / 2016-12-26 ## \ … 27d593a483) Log message: Update ruby-nokogiri to 1.6.8.1 === 1.6.8.1 / 2016-10-03 ==== Dependency License Notes Removes required dependency on the `pkg-config` gem. This dependency was introduced in v1.6.8 and, because it's distributed under LGPL, was objectionable to many Nokogiri users (#1488, #1496). This version makes `pkg-config` an optional dependency. If it's installed, it's used; but otherwise Nokogiri will attempt to work around its absence. === 1.6.8 / unreleased ==== Security Notes [MRI] Bundled libxml2 is upgraded to 2.9.4, which fixes many security issues. \ Many of these had previously been patched in the vendored libxml 2.9.2 in the \ 1.6.7.x branch, but some are newer. See these libxml2 email posts for more: * … 00012.html * … 00023.html For a more detailed analysis, you may care to read Canonical's take on these \ security issues: * [MRI] Bundled libxslt is upgraded to 1.1.29, which fixes a security issue as \ well as many long-known outstanding bugs, some features, some portability \ improvements, and general cleanup. See this libxslt email post for more: * … 00004.html ==== Features Several changes were made to improve performance: * [MRI] Simplify NodeSet#to_a with a minor speed-up. (#1397) * XML::Node#ancestors optimization. (#1297) (Thanks, Bruno Sutic!) * Use Symbol#to_proc where we weren't previously. (#1296) (Thanks, Bruno Sutic!) * XML::DTD#each uses implicit block calls. (Thanks, @glaucocustodio!) * Fall back to the `pkg-config` gem if we're having trouble finding the system \ libxml2. This should help many FreeBSD users. (#1417) * Set document encoding appropriately even on blank document. (#1043) (Thanks, \ @batter!) ==== Bug Fixes * [JRuby] fix slow add_child (#692) * [JRuby] fix load errors when deploying to JRuby/Torquebox (#1114) (Thanks, \ @atambo and @jvshahid!) 
* [JRuby] fix NPE when inspecting nodes returned by NodeSet#drop (#1042) \ (Thanks, @mkristian!) * [JRuby] fix nil attriubte node's namespace in reader (#1327) (Thanks, \ @codekitchen!) * [JRuby] fix Nokogiri munging unicode characters that require more than 2 bytes \ (#1113) (Thanks, @mkristian!) * [JRuby] allow unlinking an unparented node (#1112, #1152) (Thanks, @esse!) * [JRuby] allow Fragment parsing on a frozen string (#444, #1077) * [JRuby] HTML `style` tags are no longer encoded (#1316) (Thanks, @tbeauvais!) * [MRI] fix assertion failure while accessing attribute node's namespace in \ reader (#843) (Thanks, @2potatocakes!) * [MRI] fix issue with GCing namespace nodes returned in an xpath query. (#1155) * [MRI] Ensure C strings are null-terminated. (#1381) * [MRI] Ensure Rubygems is loaded before using mini_portile2 at installation. \ (#1393, #1411) (Thanks, @JonRowe!) * [MRI] Handling another edge case where the `libxml-ruby` gem's global \ callbacks were smashing the heap. (#1426). (Thanks to @bbergstrom for providing \ an isolated test case!) * [MRI] Ensure encodings are passed to Sax::Parser xmldecl callback. (#844) * [MRI] Ensure default ns prefix is applied correctly when reparenting nodes to \ another document. (#391) (Thanks, @ylecuyer!) * [MRI] Ensure Reader handles non-existent attributes as expected. (#1254) \ (Thanks, @ccutrer!) * [MRI] Cleanup around namespace handling when reparenting nodes. (#1332, #1333, \ #1444) (Thanks, @cuttrer and @bradleybeddoes!) * unescape special characters in CSS queries (#1303) (Thanks, @twalpole!) * consistently handle empty documents (#1349) * Update to mini_portile2 2.1.0 to address whitespace-handling during patching. \ (#1402) * Fix encoding of xml node namespaces. * Work around issue installing Nokogiri on overlayfs (commonly used in Docker \ containers). (#1370, #1405) ==== Other Notes * Removed legacy code remaining from Ruby 1.8.x support. * Removed legacy code remaining from REE support. 
* Removing hacky workarounds for bugs in some older versions of libxml2. * Handling C strings in a forward-compatible manner, see \
http://pkgsrc.se/textproc/ruby-nokogiri
Docker's Voting App on Swarm, Kubernetes and Nomad

TL;DR When you work in tech you definitely have to be curious, as this is essential to keep on learning and stay up to date. Things are moving too damn fast in this area. Container orchestration is such a hot topic that even if you have your favorite tool (my heart goes to Docker Swarm), it's always interesting to see how the other ones work and learn from them as well. In this article, we will use Docker's Voting App and deploy it on Swarm, Kubernetes and Hashicorp's Nomad. I hope you'll have as much fun reading this as I had experimenting with those things.

The Voting App

I've used (and abused) the Voting App in previous articles. The Voting App has several Compose files, as we can see in the GitHub repository. docker-stack.yml is the production-ready representation of the application. The content of this file begins as follows (truncated here after the first service):

version: "3"
services:
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: ...

Basically, there are 6 services defined in this file, but only 5 services are defined in the Voting App architecture. The additional one is the visualizer, a great tool which provides a clean interface showing where the services' tasks are deployed.

Docker Swarm

Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system.

Swarm's concepts

A Swarm cluster is composed of several nodes, some of them acting as managers, the others as workers:
- manager nodes are the ones in charge of the cluster's internal state
- worker nodes are the ones executing the tasks (= running the containers)

The managers share an internal distributed store in order to maintain a consistent state of the cluster. This is ensured through the logs of the Raft distributed consensus algorithm.
Note: if you want to know more about Raft logs usage in a Swarm, you might find the following article interesting (well… I hope so).

On a Swarm, services define how a part of the application needs to be run and deployed in containers.

Installation of the Docker platform

In case you do not have Docker installed on your machine, you can download the Community Edition for your OS and install it from the following location.

Creation of a Swarm

Once Docker is installed, you are only a single command away from a working Swarm.

$ docker swarm init

Yes! This is all it takes to get a Swarm cluster: a one-node cluster, but still a Swarm cluster with all its associated processes.

Deployment of the application

Among the Compose files available in the Voting App's GitHub repository, docker-stack.yml is the one which needs to be used to deploy the application on a Swarm.

$ docker stack deploy -c docker-stack.yml app
Creating network app_backend
Creating network app_default
Creating network app_frontend
Creating service app_visualizer
Creating service app_redis
Creating service app_db
Creating service app_vote
Creating service app_result
Creating service app_worker

As I run the stack on Docker for Mac, I have access to the application directly from localhost. It's possible to select CATS or DOGS from the vote interface (port 5000) and to see the result on port 5001. I will not go into the details here; I just wanted to show how easily this application can be deployed on a Swarm. In case you want a more in-depth guide on how to deploy this same application on a multi-node Swarm, you can check the following article.

Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes' concepts

A Kubernetes cluster is composed of one or several Masters and Nodes.
- The Master handles the cluster's control plane (managing the cluster's state, scheduling tasks, reacting to cluster events, …)
- The Nodes (previously called Minions, yes, like in Despicable Me) provide the runtime to execute the application containers (through Pods)

In order to run commands against a Kubernetes cluster, the kubectl command line tool is used. We will see several examples of its usage below.

There are several high-level Kubernetes objects we need to know to understand how to deploy an application:
- A Pod is the smallest unit that can be deployed on a Node. It's a group of containers which must run together. Quite often, a Pod only contains one container, though.
- A ReplicaSet ensures that a specified number of Pod replicas are running at any given time
- A Deployment manages ReplicaSets and allows handling rolling updates, blue/green deployments, canary testing, …
- A Service defines a logical set of Pods and a policy by which to access them

In this chapter, we will use a Deployment and a Service object for each service of the Voting App.

Installing kubectl

kubectl is the command line tool used to deploy and manage applications on Kubernetes. It can be easily installed following the official documentation. For instance, to install it on MacOS, the following commands need to be run.

$ curl -LO(curl -s)/bin/darwin/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

Installing Minikube

Minikube is an all-in-one setup of Kubernetes. It creates a local VM, for instance on VirtualBox, and runs a one-node cluster running all the Kubernetes processes. It's obviously not a tool that should be used to set up a production cluster, but it's really convenient for development and testing purposes.

Creation of a one node cluster

Once Minikube is installed, we just need to issue the start command to set up our one-node Kubernetes cluster.

$ minikube start
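Before moving on to descriptor files, the object hierarchy described above (a Deployment managing a ReplicaSet, which keeps N copies of a Pod template) can be sketched as plain data. This is a toy illustration of the reconciliation idea only, not a client for the real Kubernetes API; all class and method names here are made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Pod:
    containers: list

@dataclass
class ReplicaSet:
    replicas: int
    template: Pod
    pods: list = field(default_factory=list)

    def reconcile(self):
        # add or remove pods until the desired count is met
        while len(self.pods) < self.replicas:
            self.pods.append(Pod(list(self.template.containers)))
        del self.pods[self.replicas:]
        return self.pods

@dataclass
class Deployment:
    name: str
    replica_set: ReplicaSet

    def scale(self, n):
        # a Deployment only adjusts the desired state; the ReplicaSet converges to it
        self.replica_set.replicas = n
        return self.replica_set.reconcile()

dep = Deployment("vote", ReplicaSet(2, Pod(["vote"])))
print(len(dep.scale(3)))  # 3 pods after scaling up
```

The point of the sketch is the split of responsibilities: the Deployment only declares how many replicas are wanted, while the ReplicaSet does the work of converging the actual Pod count to that desired state.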
Kubernetes descriptors

On Kubernetes, containers are not run directly but through a ReplicaSet managed by a Deployment. Below is an example of a .yml file describing a Deployment. A ReplicaSet will ensure two replicas of a Pod running Nginx are running.

// nginx-deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

As we will see below, in order to create a Deployment we need to use the kubectl command line tool.

To define a whole micro-services application in Kubernetes, we need to create a Deployment file for each service. We can do this manually, or we can use Kompose to help us in this task, as we will see now.

Using Kompose to create deployments and services

Kompose is a great tool which converts Docker Compose files into the descriptor files (for Deployments and Services) used by Kubernetes. It is very convenient and really accelerates the migration process.

Notes:
- Kompose does not have to be used, as descriptor files can be written manually, but it surely speeds up the deployment when it is
- Kompose does not take into account all the options used in a Docker Compose file

The following commands install Kompose version 1.0.0 on Linux or MacOS.

# Linux
$ curl -L -o kompose
# macOS
$ curl -L -o kompose
$ chmod +x kompose
$ sudo mv ./kompose /usr/local/bin/kompose

Before applying Kompose to the original docker-stack.yml file, we will modify it and remove the deploy key of each service. This key is not taken into account and can raise errors when generating the descriptor files. We can also remove the information regarding the networks.
We will then use the following file, renamed to docker-stack-k8s.yml, to feed Kompose.

version: "3"
services:
  redis:
    image: redis:alpine
    ports:
      - "6379"
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    depends_on:
      - redis
  result:
    image: dockersamples/examplevotingapp_result:before
    ports:
      - 5001:80
    depends_on:
      - db
  worker:
    image: dockersamples/examplevotingapp_worker
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
volumes:
  db-data:

From the docker-stack-k8s.yml file, we can generate the descriptors of the Voting App using the following command.

$ kompose convert --file docker-stack-k8s.yml
WARN Volume mount on the host "/var/run/docker.sock" isn't supported - ignoring path on the host
INFO Kubernetes file "db-service.yaml" created
INFO Kubernetes file "redis-service.yaml" created
INFO Kubernetes file "result-service.yaml" created
INFO Kubernetes file "visualizer-service.yaml" created
INFO Kubernetes file "vote-service.yaml" created
INFO Kubernetes file "worker-service.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "db-data-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
INFO Kubernetes file "result-deployment.yaml" created
INFO Kubernetes file "visualizer-deployment.yaml" created
INFO Kubernetes file "visualizer-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "vote-deployment.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created

We can see that for each service, a deployment file and a service file are created. We only got one warning, linked to the visualizer service, as the Docker socket cannot be mounted. We will not try to run this service and will focus on the other ones.
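The descriptors Kompose generates are plain data files. Kubernetes accepts JSON as well as YAML, so even the standard-library json module is enough to sketch the kind of edit we will do by hand below: switching a Service from the default ClusterIP type to NodePort. The manifest here is a trimmed-down stand-in I wrote for illustration, not Kompose's exact output.

```python
import json

# Trimmed-down Service manifest for the vote service (illustrative stand-in).
vote_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "vote", "labels": {"io.kompose.service": "vote"}},
    "spec": {
        "ports": [{"name": "5000", "port": 5000, "targetPort": 80}],
        "selector": {"io.kompose.service": "vote"},
    },
}

def expose_via_nodeport(manifest):
    """Return a copy of a Service manifest with its type set to NodePort."""
    patched = json.loads(json.dumps(manifest))  # cheap deep copy
    patched["spec"]["type"] = "NodePort"
    return patched

patched = expose_via_nodeport(vote_service)
print(json.dumps(patched, indent=2))
```

A file produced this way could be fed to `kubectl create -f` just like the YAML version, since kubectl parses both formats.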
Deployment of the application

Using kubectl, we will create all the components defined in the descriptor files. We indicate that the files are located in the current folder.

$ kubectl create -f .
persistentvolumeclaim "db-data" created
deployment "db" created
service "db" created
deployment "redis" created
service "redis" created
deployment "result" created
service "result" created
persistentvolumeclaim "visualizer-claim0" created
deployment "visualizer" created
service "visualizer" created
deployment "vote" created
service "vote" created
deployment "worker" created
service "worker" created
unable to decode "docker-stack-k8s.yml":...

Note: as we left the modified compose file in the current folder, we get an error as it cannot be parsed. This error can be ignored without any risk.

The commands below show the services and deployments created.

$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
db           None         <none>        55555/TCP   3m
kubernetes   10.0.0.1     <none>        443/TCP     4m
redis        10.0.0.64    <none>        6379/TCP    3m
result       10.0.0.121   <none>        5001/TCP    3m
visualizer   10.0.0.110   <none>        8080/TCP    3m
vote         10.0.0.142   <none>        5000/TCP    3m
worker       None         <none>        55555/TCP   3m

$ kubectl get deployment
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
db           1         1         1            1           3m
redis        1         1         1            1           3m
result       1         1         1            1           3m
visualizer   1         1         1            1           3m
vote         1         1         1            1           3m
worker       1         1         1            1           3m

Expose the application to the outside

In order to access the vote and result interfaces, we need to modify the services created for those a little bit. The file below is the descriptor generated for vote.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: vote
  name: vote
spec:
  ports:
  - name: "5000"
    port: 5000
    targetPort: 80
  selector:
    io.kompose.service: vote
status:
  loadBalancer: {}

We will modify the type of the service and change the default type, ClusterIP, to NodePort instead.
While ClusterIP allows a service to be accessed only internally, NodePort publishes a port on each node of the cluster and makes the service available to the outside world. We will do the same for result, as we want both vote and result to be accessible from the outside.

apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: vote
  name: vote
spec:
  type: NodePort
  ports:
  - name: "5000"
    port: 5000
    targetPort: 80
  selector:
    io.kompose.service: vote

Once the modification is done for both services (vote and result), we can recreate them.

$ kubectl delete svc vote
$ kubectl delete svc result
$ kubectl create -f vote-service.yaml
service "vote" created
$ kubectl create -f result-service.yaml
service "result" created

Access the application

Let's now get the details of the vote and result services and retrieve the port each one exposes.

$ kubectl get svc vote result
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
vote      10.0.0.215   <nodes>       5000:30069/TCP   15m
result    10.0.0.49    <nodes>       5001:31873/TCP   8m

vote is available on port 30069 and result on port 31873. We can now vote and see the result. After gaining a basic understanding of Kubernetes' components, we managed to deploy the Voting App very easily, and Kompose really helped us in the process.

Hashicorp's Nomad

Nomad's concepts

A Nomad cluster is composed of agents, which can run in Server or Client mode.
- Servers take on the responsibility of being part of the consensus protocol, which allows them to perform leader election and state replication
- Client nodes are very lightweight, as they interface with the server nodes and maintain very little state of their own. Client nodes are where tasks are run.

Several types of tasks can run on a Nomad cluster. Docker workloads run using the docker driver; this is the driver we will use to run the Voting App.
There are several concepts (stanzas, in Nomad vocabulary) we first need to understand in order to deploy an application on Nomad:
- A job is a declarative specification of tasks that Nomad should run. It is defined in a job file (a text file in HCL, the Hashicorp Configuration Language). A job can have one or many groups of tasks. Jobs are submitted by users and represent a desired state.
- A group contains a set of tasks that are co-located on a machine
- A task is a running process, a Docker container in our example
- The mapping of tasks in a job to clients is done using allocations. An allocation is used to declare that a set of tasks in a job should be run on a particular node

There are many more stanzas described in Nomad's documentation.

The setup

In this example, we will run the application on a Docker host created with Docker Machine. Its local IP is 192.168.1.100. We will start by running Consul, used for service registration and discovery. We'll then start Nomad and deploy the services of the Voting App as Nomad jobs.

Getting Consul for service registration and discovery

In order to ensure service registration and discovery, it is recommended to use a tool such as Consul, which does not run as a Nomad job. Consul can be downloaded from the following location. The following command launches a Consul server locally.

$ consul agent -dev -client=0.0.0.0 -dns-port=53 -recursor=8.8.8.8

Let's get some more details on the options used:
- -dev is a convenient flag which sets up a Consul cluster with a server and a client. This option must not be used except for dev and testing purposes
- -client=0.0.0.0 allows reaching Consul services (API and DNS servers) from any interface of the host. This is needed as Nomad will connect to Consul on the localhost interface, while containers will connect through the Docker bridge (often something like 172.17.x.x).
- -dns-port=53 specifies the port used by Consul's DNS server (it defaults to 8600).
We set it to the standard port 53 so Consul DNS can be used from within the containers
- -recursor=8.8.8.8 specifies another DNS server which will serve requests that cannot be handled by Consul

Getting Nomad

Nomad is a single binary, written in Go, which can be downloaded from the following location.

Creation of a one node cluster

Once Nomad is downloaded, we can run an agent with the following configuration.

// nomad.hcl
bind_addr = "0.0.0.0"
data_dir = "/var/lib/nomad"
server {
  enabled = true
  bootstrap_expect = 1
}
client {
  enabled = true
  network_speed = 100
}

The agent will run both as a server and a client. We specify the bind_addr to listen on any interface so tasks can be accessed from the outside. Let's start a Nomad agent with this configuration:

$ nomad agent -config=nomad.hcl
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
Loaded configuration from nomad-v2.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:
Client: true
Log Level: INFO
Region: global (DC: dc1)
Server: true
Version: 0.6.0
==> Nomad agent started! Log data will stream in below:

Note: by default Nomad connects to the local Consul instance.

We have just set up a one-node cluster. The information on the unique member is listed below.

$ nomad server-members
Name                  Address        Port  Status  Leader  Protocol  Build  Datacenter  Region
neptune.local.global  192.168.1.100  4648  alive   true    2         0.6.0  dc1         global

Deployment of the application

From the previous examples, we saw that in order to deploy the Voting App on a Swarm, the Compose file can be used directly. When deploying the application on Kubernetes, descriptor files can be created from this same Compose file. Let's now see how our Voting App can be deployed on Nomad.

First, there is no tool like Kompose in the Hashicorp world that can smooth the migration of a Docker Compose application to Nomad (it might be an idea for an open source project, then…).
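To make that gap concrete, here is a toy sketch, nowhere near a real Kompose equivalent, of how a compose service entry (name, image, port) could be mechanically mapped to a minimal Nomad job skeleton. The field layout loosely mirrors the redis job file discussed in this article; everything else is illustrative.

```python
def nomad_job(name: str, image: str, port_label: str, port: int) -> str:
    """Emit a minimal Nomad job skeleton in HCL for one Docker-based service.

    Doubled braces in the f-strings render as literal HCL braces.
    """
    return (
        f'job "{name}-nomad" {{\n'
        f'  datacenters = ["dc1"]\n'
        f'  type = "service"\n'
        f'  group "{name}-group" {{\n'
        f'    task "{name}" {{\n'
        f'      driver = "docker"\n'
        f'      config {{\n'
        f'        image = "{image}"\n'
        f'        port_map {{ {port_label} = {port} }}\n'
        f'      }}\n'
        f'    }}\n'
        f'  }}\n'
        f'}}\n'
    )

print(nomad_job("redis", "redis:3.2", "db", 6379))
```

A real converter would also have to translate volumes, depends_on, resource limits, and service checks, which is exactly why writing the job files by hand is still the normal route.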
Files describing jobs, groups, tasks (and the other Nomad stanzas) then need to be written manually. We will go into the details when defining the jobs for the redis and vote services of our application. The process is quite similar for the other services.

Definition of the redis job

The following file defines the redis part of the application.

// redis.nomad
job "redis-nomad" {
  datacenters = ["dc1"]
  type = "service"
  group "redis-group" {
    task "redis" {
      driver = "docker"
      config {
        image = "redis:3.2"
        port_map {
          db = 6379
        }
      }
      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB
        network {
          mbits = 10
          port "db" {}
        }
      }
      service {
        name = "redis"
        address_mode = "driver"
        port = "db"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

Let's explain a little bit what is defined here:
- the name of the job is redis-nomad
- the job is of type service (which means a long-running task)
- a group is defined with an arbitrary name; it contains a single task
- a task named redis uses the docker driver, meaning it will run in a container
- the redis task is configured to use the redis:3.2 Docker image and exposes port 6379, labeled db, within the cluster
- within the resources block, some CPU and memory constraints are defined
- in the network block, we specify that port db should be dynamically allocated
- the service block defines how registration will be done in Consul: the service name, the IP address which should be used (the IP of the container), and the definition of the health check

To check if this job can run correctly, we first use the plan command.

$ nomad plan redis.nomad
+ Job: "nomad-redis"
+ Task Group: "cache" (1 create)
  + Task: "redis" (forces create)
Scheduler dry-run:
- All tasks successfully allocated.
Job Modify Index: 0
To submit the job with version verification run:
nomad run -check-index 0 redis.nomad
When running the job with the check-index flag, the job will only be run if the server-side version matches the job modify index returned. If the index has changed, another user has modified the job and the plan's results are potentially invalid.

Everything seems fine; let's now run the job and deploy the task.

$ nomad run redis.nomad
==> Monitoring evaluation "1e729627"
Evaluation triggered by job "nomad-redis"
Allocation "bf3fc4b2" created: node "b0d927cd", group "cache"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "1e729627" finished with status "complete"

From this output, we can see an allocation is created. Let's see its status.

$ nomad alloc-status bf3fc4b2
ID = bf3fc4b2
Eval ID = 1e729627
Name = nomad-redis.cache[0]
Node ID = b0d927cd
Job ID = nomad-redis
Job Version = 0
Client Status = running
Client Description = <none>
Desired Status = run
Desired Description = <none>
Created At = 08/23/17 21:52:03 CEST

Task "redis" is "running"

Task Resources
CPU        Memory           Disk     IOPS  Addresses
1/500 MHz  6.3 MiB/256 MiB  300 MiB  0     db: 192.168.1.100:21886

Task Events:
Started At = 08/23/17 19:52:03 UTC
Finished At = N/A
Total Restarts = 0
Last Restart = N/A

Recent Events:
Time                    Type        Description
08/23/17 21:52:03 CEST  Started     Task started by client
08/23/17 21:52:03 CEST  Task Setup  Building Task Directory
08/23/17 21:52:03 CEST  Received    Task received by client

The redis task (= the container) seems to run correctly. Let's check the Consul DNS server and make sure the service is correctly registered.
$ dig @localhost SRV redis.service.consul
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost SRV redis.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35884
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;redis.service.consul. IN SRV
;; ANSWER SECTION:
redis.service.consul. 0 IN SRV 1 1 6379 ac110002.addr.dc1.consul.
;; ADDITIONAL SECTION:
ac110002.addr.dc1.consul. 0 IN A 172.17.0.2
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Aug 23 23:08:36 CEST 2017
;; MSG SIZE rcvd: 103

We can see that the task was allocated the IP 172.17.0.2 (on Docker's bridge) and its port is 6379, as we defined.

Definition of the vote job

Let's now define the job for the vote service. We will use the following job file.

// job.nomad
job "vote-nomad" {
  datacenters = ["dc1"]
  type = "service"
  group "vote-group" {
    task "vote" {
      driver = "docker"
      config {
        image = "dockersamples/examplevotingapp_vote:before"
        dns_search_domains = ["service.dc1.consul"]
        dns_servers = ["172.17.0.1", "8.8.8.8"]
        port_map {
          http = 80
        }
      }
      service {
        name = "vote"
        port = "http"
        check {
          name     = "vote interface running on 80"
          interval = "10s"
          timeout  = "5s"
          type     = "http"
          protocol = "http"
          path     = "/"
        }
      }
      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB
        network {
          port "http" {
            static = 5000
          }
        }
      }
    }
  }
}

There are a couple of differences from the job file we used for redis:
- the vote task connects to redis using only the name of the task. The example below is an excerpt of the app.py file used in the vote service.

// app.py
def get_redis():
    if not hasattr(g, 'redis'):
        g.redis = Redis(host="redis", db=0, socket_timeout=5)
    return g.redis

In this case, the vote container needs to use Consul DNS to get the IP of the redis container. DNS requests from a container are done through the Docker bridge (172.17.0.1).
- The dns_search_domains is also specified, as a service X is registered as X.service.dc1.consul within Consul.
- We defined a static port so that the vote service can be accessed on port 5000 from outside the cluster.

We can do pretty much the same configuration for the other services: worker, postgres and result.

Access the application

Once all the jobs have been launched, we can check the status and should see all of them running.

    $ nomad status
    ID              Type     Priority  Status   Submit Date
    nomad-postgres  service  50        running  08/23/17 22:12:04 CEST
    nomad-redis     service  50        running  08/23/17 22:11:46 CEST
    result-nomad    service  50        running  08/23/17 22:12:10 CEST
    vote-nomad      service  50        running  08/23/17 22:11:54 CEST
    worker-nomad    service  50        running  08/23/17 22:13:19 CEST

We can also see all the services registered and healthy in Consul's interface. From the node IP (192.168.1.100 in this example) we can access the vote and result interfaces.

Summary

Docker's Voting App is a great application for demo purposes. I was curious to see whether it could be deployed, without changes to the code, on some of the main orchestration tools. The answer is yes, and without too many tweaks. I hope this article helped in the understanding of the very basics of Swarm, Kubernetes and Nomad. I'd love to hear about how you run Docker workloads and which orchestration tool you are using.
https://medium.com/lucjuggery/dockers-voting-app-on-swarm-kubernetes-and-nomad-8835a82050cf?source=collection_home---6------0----------------
Classes to use when handling the where clause.

    #include "procedure.h"
    #include <myisam.h>
    #include "sql_array.h"
    #include "records.h"
    #include "opt_range.h"
    #include "filesort.h"
    #include "mem_root_array.h"
    #include "sql_executor.h"
    #include "opt_explain_format.h"
    #include <functional>

LOWER_BITS(type, A): returns a constant of type 'type' with the 'A' lowest-weight bits set. Example: LOWER_BITS(uint, 3) == 7. Requirement: A < sizeof(type) * 8.

POSITION: a position of a table within a join order. This structure is primarily used as a part of the join->positions and join->best_positions arrays. One POSITION element contains information about a single table in that order. This class has to stay a POD, because it is memcpy'd in many places.

Also documented in this file: the bits describing the quick select type, a routine that substitutes constants for some COUNT(), MIN() and MAX() functions, and a test of whether a predicate compares a field with constants.
http://mingxinglai.com/mysql56-annotation/sql__select_8h.html
Red Hat Bugzilla – Bug 252110
Review Request: wstx - Woodstox Stax Implementation
Last modified: 2008-06-26 02:51:34 EDT

Spec URL: SRPM URL:

Woodstox is a high-performance validating namespace-aware StAX-compliant (JSR-173) Open Source XML-processor written in Java. XML processor means that it handles both input (== parsing) and output (== writing, serialization), as well as supporting tasks such as validation. Needed for supporting application servers.

*** This bug has been marked as a duplicate of 227121 ***

*** Bug 227121 has been marked as a duplicate of this bug. ***

Hmm, when I suggested in Bug 227121 to mark it as a duplicate of this and to reopen this, and that I then would review this, I was kind of hoping that I would only be reviewing it, as I'm not mighty familiar with Java. The only reason I'm interested in wstx is that it beats the StAX implementation shipped by default with IcedTea by a factor of 10 when it comes to speed, making the loading of FreeCol (a game written in Java that I maintain) about 10 times faster. So perhaps a co-maintainership is the best solution; I can take either role (submitter or reviewer) now, and then later we maintain this together?

Closing as detailed in bug 252049 after a complete lack of response.

Hans, given that Vivek hasn't responded at all to any of the reviews he opened, might I suggest you consider just submitting this yourself?

(In reply to comment #4)
> Hans, given that Vivek hasn't responded at all to any of the reviews he opened,
> might I suggest you consider just submitting this yourself?

I would love to, but currently I'm investing all my time into: So maybe sometime in the future.
https://bugzilla.redhat.com/show_bug.cgi?id=252110
Ruby:Tutorial

Getting Ruby

For Windows or Mac, you will want to head over to the official Download page, grab the appropriate installer, and go for it. On OSX, install it via DarwinPorts.

Linux

(Yeah, it gets its own section, it sucks, blah blah blah.)

Easy version: Install it from the packages supplied by the distributor.

Full version:

- Many distributions should come with packages that you can easily install with the proper package management tool.
- Debian, Ubuntu, or other Debian-based distributions: sudo apt-get install ruby irb rdoc
- If you want to install gems, you may need to install separate packages like e.g. rubygems, Ruby development packages, or other header packages for native extensions (database interfaces, etc.)
- Otherwise see the Ruby download section for instructions

How to Run Ruby

There are several methods to run Ruby code. The most straightforward is, of course, to write a script file, and then just run:

    ruby myscript.rb some optional arguments

On *nix, you can also make the script file executable, and use the hash-bang notation, so a script might look like this:

    #!/usr/bin/ruby
    puts 'This could be your code!'

For quick experimentation, there is also an interactive shell-type interface, called irb (you may need to install it separately). An example session would look like this:

    $ irb
    irb(main):001:0> puts 'Hello, world!'
    Hello, world!
    => nil
    irb(main):002:0>

Basic Concepts

Hello World

    puts "Hello World"

There is not much more to be said about it, so let us try a more involved example, showing some actual features of the language.

    #!/usr/bin/ruby
    # You guess what it does.

    def bottles(n)
      if n > 0 then
        "#{n} bottle#{n > 1 ? 's' : ''} of beer"
      else
        "no more bottles of beer"
      end
    end

    number_of_bottles = 99
    number_of_bottles = ARGV[0].to_i if ARGV.size > 0

    number_of_bottles.downto(1) do |n|
      puts "#{bottles(n)} on the wall, #{bottles(n)};"
      puts "  take one down, pass it around, #{bottles(n-1)} on the wall."
    end

This small snippet demonstrates several features:

- Function declarations and return values. The last statement evaluated in a function defines that function's return value.
- String interpolation
- Command-line arguments
- Iterators

Classes

The following example demonstrates a Person class:

    class Person
      attr_accessor :name, :age, :height, :weight

      def initialize(name, age, height, weight)
        @name = name
        @age = age
        @height = height
        @weight = weight
      end

      def information
        print "Name: #{name}\nAge: #{age}\nHeight: #{height}\nWeight: #{weight}\n"
      end
    end

One could use the class like so:

    smith = Person.new("Mr Smith", 20, 5.11, 13.5)
    smith.information

Would output:

    Name: Mr Smith
    Age: 20
    Height: 5.11
    Weight: 13.5

Setters/Getters (or: Love the Assignment)

Consider again the above example of a class. Now, what if we wanted a way to store the weight in kg, but provide setters/getters in US pounds? Just add the following methods to the class:

    def weight_in_lbs
      @weight / 0.4536
    end

    def weight_in_lbs=(w)
      @weight = w * 0.4536
    end

Now you have created transparent access to Person#weight, doing unit conversion on the fly, by accessing and assigning Person#weight_in_lbs as you would any "normal" attribute. (Alternatively, you could of course extend the numeric classes to provide #kg_to_lbs and other necessary methods doing the conversion for you...)
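The numeric-class alternative mentioned in that last parenthetical can be sketched like this. The method names #kg_to_lbs and #lbs_to_kg follow the tutorial's suggestion; the bodies are my own illustration using the same 0.4536 conversion factor:

```ruby
# Reopen Numeric so every number gains the conversion helpers.
class Numeric
  def kg_to_lbs
    self / 0.4536
  end

  def lbs_to_kg
    self * 0.4536
  end
end

puts 10.kg_to_lbs.round(2)   # 22.05
puts 100.lbs_to_kg.round(2)  # 45.36
```

With this in place, a Person could simply store kilograms and callers would write smith.weight = 150.lbs_to_kg, which is the design trade-off the tutorial hints at: conversion lives on the numbers rather than on the model class.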
http://content.gpwiki.org/index.php/Ruby:Tutorial
These functions can be used when you're adding numbers together and don't want the total to overflow. Normally, when an addition overflows, it just drops back down to 0 and you're left with however much is left over after the overflow. If you were doing something like calculating the distance between two points, and it overflowed, they would appear to be very close to each other. Using these functions, you still won't know how far apart they are, but you'll be able to see that the points are very far apart (at least 0xFFFFFFFF, in the case of 32 bits).

    #include <stdint.h>

    uint16_t saturateadd16(uint16_t a, uint16_t b)
    {
        return (a > 0xFFFF - b) ? 0xFFFF : a + b;
    }

    uint32_t saturateadd32(uint32_t a, uint32_t b)
    {
        return (a > 0xFFFFFFFF - b) ? 0xFFFFFFFF : a + b;
    }

    uint32_t x = 14, y = 15, sum;
    sum = saturateadd32(x, y);

The functions can easily be adapted to 8- or 64-bit operations as well. There is significant overhead in function calls, so consider inlining the functions or changing them to macros instead. A similar approach can be used for subtraction, multiplication, and division as well. Multiplication and division will be more difficult than subtraction due to more edge cases.

This one is a little out of my league, but on an x86, if performance is key, you can use inline assembly:

    uint32_t add(uint32_t a, uint32_t b)
    {
    #ifdef IA32
        __asm {
            mov   eax, a
            xor   edx, edx
            add   eax, b
            setnc dl
            dec   edx
            or    eax, edx
        }
    #else
        // Portable version
        return (a > 0xFFFFFFFF - b) ? 0xFFFFFFFF : a + b;
    #endif
    }
http://ctips.pbworks.com/w/page/7277630/Saturated%20Addition
We can traverse a tree in inorder fashion iteratively using a stack, but that consumes linear space. So, in this problem, we are going to traverse a tree without using that extra space. This concept is called Morris Inorder Traversal, or threading in binary trees.

Example

        2
       / \
      1   3
     / \
    4   5

Output: 4 1 5 2 3

      3
     / \
    1   4
       / \
      2   5

Output: 1 3 2 4 5

Approach

The idea is: we can traverse the tree without the space of a stack (or the auxiliary space of recursion) if we do not lose the address of any node we visited earlier, without storing those addresses in memory. This approach is called Morris Inorder Traversal. But if no extra space is allowed, how can one store the addresses of nodes? The idea is to change the structure of the tree in such a way that, after visiting some particular nodes of one subtree from the root node, we can get back to the root node to process its other subtree. Say we visited the left subtree completely, and added a pointer from some node of the left subtree back to the root. Now, we can process the right subtree by coming back to the original root.

In Morris Inorder Traversal, we link the inorder predecessor of a root (the rightmost node in its left subtree) to the root itself. This process of adding pointers (threads) from the inorder predecessor to the root is called threading. Again, we don't want to permanently disturb the structure of the tree, so we will design an algorithm that automatically deletes the links and unthreads the tree to retain its original structure.

Algorithm

- Initialize the current node as the root of the tree.
- Keep iterating until the current node becomes NULL.
- If the left subtree of the current node is NULL:
  - We can now print the current node and move to the right subtree, so currentNode = currentNode->right.
- If the left subtree of the current root is NOT NULL : - In this case, we unthread/thread the rightmost node in the left subtree(inorder predecessor) of the current node to itself - temp = currentNode->left; - While temp->right is NOT NULL or temp is NOT currentNode - temp = temp->right - If temp->right is NULL: - remove threading as temp->right = NULL - process the right subtree, currentNode = currentNode->right - If temp->right is NOT NULL: - thread this node as temp->right = currentNode - Process the left subtree now, currentNode = currentNode->left Implementation of Morris Inorder Traversal C++ Program #include <bits/stdc++.h> using namespace std; struct treeNode { int value; treeNode *left , *right; treeNode(int x) { value = x; left = NULL; right = NULL; } }; void morrisInorder(treeNode* &root) { treeNode* currentNode = root; while(currentNode != NULL) { if(currentNode->left == NULL) { cout << currentNode->value << " "; currentNode = currentNode->right; } else { treeNode* temp = currentNode->left; while((temp->right != NULL) && (temp->right != currentNode)) temp = temp->right; if(temp->right == NULL) { temp->right = currentNode; //threading currentNode = currentNode->left; } else { cout << currentNode->value << " "; temp->right = NULL; //unthreading currentNode = currentNode->right; } } } } int main() { treeNode* root = new treeNode(2); root->left = new treeNode(1); root->right = new treeNode(3); root->left->left = new treeNode(4); root->left->right = new treeNode(5); morrisInorder(root); return 0; } 4 1 5 2 3 Java Program class treeNode { int value; treeNode left, right; public treeNode(int x) { value = x; left = right = null; } } class BinaryTree { treeNode root; void MorrisInorder() { treeNode currentNode = root; while(currentNode != null) { if(currentNode.left == null) { System.out.print(currentNode.value + " "); currentNode = currentNode.right; } else { treeNode temp = currentNode; while(temp.right != null && temp.right != currentNode) temp = temp.right; 
                    if (temp.right == null) {
                        temp.right = currentNode;
                        currentNode = currentNode.left;
                    } else {
                        temp.right = null;
                        System.out.print(currentNode.value + " ");
                        currentNode = currentNode.right;
                    }
                }
            }
        }

        public static void main(String args[]) {
            BinaryTree tree = new BinaryTree();
            tree.root = new treeNode(2);
            tree.root.left = new treeNode(1);
            tree.root.left.left = new treeNode(4);
            tree.root.left.right = new treeNode(5);
            tree.root.right = new treeNode(3);
            tree.MorrisInorder();
        }
    }

Output: 4 1 5 2 3

Complexity Analysis of Morris Inorder Traversal

Time Complexity: O(N), where N is the number of nodes in the tree. It's certain that we visit every node exactly 2 times, as each goes through the process of threading and unthreading, so the time complexity remains linear.

Space Complexity: O(1), as we use constant space for declaring some variables. No other auxiliary space is used for any purpose. That is exactly what Morris inorder traversal promises.
https://www.tutorialcup.com/interview/tree/morris-inorder-traversal.htm
5.11. Asynchronous Programming

Perhaps one of the hardest aspects of programming in JavaScript is learning to cope with the asynchronous nature of JavaScript itself. JavaScript can only do one thing at a time; it is a "single threaded" language. But what happens when you want to do something that may take some time? Usually this involves making a call to another server to get data from a database. Now, you don't just want your program to sit around and wait while the other server does some work; you'd like it to respond to button clicks or whatever else is next on the event queue. Then, when the data comes back, you would like your program to deal with the data and update your web page accordingly.

To handle this we return to callback functions. Just as we first described a callback in Events, we return to callback functions here. When the other server is done with its work, we need a function for it to call to continue.

In 2015 the ECMAScript 2015 (ES6) standard was released, and it introduced a new object for JavaScript called a Promise. Promises and callbacks solve a similar problem. The difference is that with callbacks, you tell the function what to do when the task completes, whereas with promises the function returns a promise to you and you tell the promise what to do when the task completes.

5.11.1. Promises

JavaScript Promises are very much like promises in real life. If I tell you that I promise I will stop at the grocery store and buy milk on my way home, then you can stop worrying about getting milk and leave it to me. When I get home with the milk, my promise is fulfilled (resolved in JavaScript terms). If I mess up and forget to buy the milk, or the store is out of milk for some reason, then my promise is broken (rejected in JavaScript terms). With JavaScript you can also specify what you want to have happen when a promise is resolved.
For example, when I arrive home with the milk, the next thing to do is get out some cookies and eat them with a nice glass of milk. We will illustrate this idea of promises using the JavaScript fetch method. The fetch method is used to get information from a server. When you call fetch it returns a Promise. This promise is resolved as soon as the headers are received from the server. You can then tell the promise what you want done when the request is resolved. Maybe you want to get the data that is returned and make a graph, or display it in a table, or show it on a map. If the promise is broken then you may try again, or display a message to the user, or try a whole different backup server.

The Runestone server has an imaginary service that can predict the price of any stock. You give it a stock symbol and its current price, and it will predict the closing price for the next day it is traded. The accuracy of this predictor is very suspect, but it gives us something to play with. Oh, it also only knows how to make predictions for AAPL, GOOG, FB, AMZN and MSFT. Now, making predictions takes a lot of specialized AI software, so it may take a while to calculate. This is why we need an asynchronous interface to deal with it. If you try the following URL: it will return something that looks like this:

    {"stock": "AAPL", "price": 235.87, "date": "2019-10-14"}

If you look it up you will see that it nailed this prediction perfectly. But again, past performance is not necessarily indicative of future performance, your mileage may vary, and please don't take this as investment advice.

Using the fetch method we can have Runestone give us back a stock prediction that we can use in our program. We will ask the server to return a prediction for the stock price of Apple and then simply display the prediction to the user. If the request fails for any reason, then we will display an error message to the user.
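The verbose example this section discusses is an interactive widget that is not reproduced in this extract. The sketch below reconstructs its shape from the surrounding discussion; fakeFetch is my own offline stand-in for the real fetch call to the Runestone prediction URL, and the function names match those used in the breakdown that follows:

```javascript
// Stand-in for fetch: resolves to an object with a .json() method,
// just like a real Response object would provide.
function fakeFetch(url) {
  return Promise.resolve({
    json: () => Promise.resolve({ stock: "AAPL", price: 242.42, date: "2019-10-14" })
  });
}

function receiveResponse(resp) {
  console.log("received response");
  const jp = resp.json();                 // itself a promise
  jp.then(receiveJson);
  jp.catch(err => console.log("something went wrong", err));
}

function receiveJson(data) {
  console.log(data.stock, data.price);
}

console.log("sending the request");
const myPromise = fakeFetch("stockprediction?stock=AAPL&price=235.87"); // placeholder URL
myPromise.then(receiveResponse);
console.log("the request has been sent");
```

Because the promise handlers only run after the synchronous code finishes, the four lines print in this order: sending the request, the request has been sent, received response, AAPL 242.42.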
Now that seems like a lot of code to accomplish a simple task, but I've made it as verbose as possible to make it easier to break down. Shortly, we will see how to shorten it up considerably.

- The call to fetch sends the request to Runestone and immediately returns a promise (myPromise). That promise will resolve when the response headers are returned. This is partly for security reasons, which we will mention later.
  a. We have instructed myPromise to call receiveResponse using myPromise.then(receiveResponse). The then method is what you use to tell a promise what to do when it is resolved successfully.
  b. Below you will see that we can also use catch to tell our promise how it should respond in the case of an error.
- The receiveResponse function returns the value of resp.json(), which is itself a promise (jp)! This is because myPromise resolves before all the data has arrived, so lots of things could still go wrong. When the jp promise resolves, you know that all of the JSON data has arrived successfully.
- The jp.catch method provides us a safety net for the case when any kind of error has occurred in any step of this process.
- receiveJson makes no more promises; it simply works with the data. In this case it just prints out a couple of values, but of course it could do anything you want.

Q-1: Without running the example above, and assuming that the prediction for AAPL is 242.42, put the output in the order you would see it from the previous code:

    sending the request
    the request has been sent
    received response
    AAPL 242.42

There are several ways we can reduce the amount of code from the above example, which also illustrate more common JavaScript coding practices. Let's take a look at a first group of refinements. The first thing we can do is use an anonymous function instead of writing and declaring a function that is only used in one place. Why clog up the namespace with things that do not need to be there?
The next thing to do is to make use of promise chaining. Instead of assigning a promise to a variable, we can attach the then method to the original function that generates the promise (fetch in this case), and when the result of a function called inside then is itself a promise, we can just attach another then, like this: fetch(...).then(...).then(...). This is quite compact, and promise chaining seems very clean once you get used to it.

5.11.2. Async / Await

Promises and using then and catch can still be very confusing, as we are all used to writing code that runs from top to bottom in order, and even experienced coders can get confused sometimes when writing asynchronous code. In 2017 the ES2017 (ES8) standard introduced the keywords async and await, which make writing asynchronous code feel a lot more like the synchronous code we are used to writing.

The async keyword is used before a function definition and guarantees that the function will return a Promise. It also allows you to use the await keyword inside the function. To say that sentence in another way: you cannot use the await keyword outside of an async function; not from the top level of a script tag, and not from the top level of a .js file. The await keyword pauses the execution of the async function until the promise is resolved. When this happens, the function resumes from the point it left off, and the await expression evaluates to the result value of the promise. More on this later! Let's see how this simplifies our example.
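Since the original interactive example is not reproduced in this extract, here is a minimal self-contained sketch of the same async/await pattern, using a timer instead of a network request:

```javascript
// A hand-made promise that resolves with a value after a short delay.
function delayedValue(value, ms) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function demo() {
  // Execution of demo() pauses here; the rest of the program keeps running.
  const x = await delayedValue(42, 10);
  console.log(x); // 42
  return x;
}

// demo() itself returns immediately -- with a Promise.
demo();
console.log("demo() has been called, but the value is not ready yet");
```

Note that the second console.log prints before the 42: awaiting inside demo only suspends demo itself, never the main program.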
Promise.all allows you to have a number of different promises active and then wait for all of them to complete. In fact, this interface is used in the activecode widget in this very book. When a page loads and you press the load history button, several asynchronous tasks are kicked off. But if you press the Run button before all of them are done, the history can get out of order. So the Run button is disabled, and when all the tasks are complete it is re-enabled. Let's look at an example of Promise.all in action by kicking off requests for predictions for all of our stocks.

5.11.3. Promises in Depth

In this section you will learn how to make your own promises (and hopefully not break them!). The Promise constructor takes a function as an argument. That function in turn takes two parameters, each of them functions: one function to call when the promise is resolved and another for when the promise is rejected. Most often the function you pass is an anonymous function, as it will be called immediately as the new Promise is being made. Let's look at a fun example. Write a function to generate the nth Fibonacci number. If the number is odd we will resolve the promise, and if the number is even we'll reject it.
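One possible solution sketch for that exercise (my own; it assumes a 1-indexed Fibonacci sequence starting 1, 1, 2, 3, 5, ...):

```javascript
function fibPromise(n) {
  return new Promise((resolve, reject) => {
    // Compute the nth Fibonacci number iteratively.
    let a = 0, b = 1;
    for (let i = 1; i < n; i++) [a, b] = [b, a + b];
    if (b % 2 === 1) resolve(b); // odd  -> promise fulfilled
    else reject(b);              // even -> promise rejected
  });
}

fibPromise(5)
  .then(v => console.log(`fib(5) = ${v}: odd, so the promise resolved`))
  .catch(v => console.log(`fib(5) = ${v}: even, so the promise was rejected`));
```

Notice that the executor function passed to the constructor runs immediately and synchronously; only the then/catch handlers are deferred until the current run of the event loop finishes.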
https://runestone.academy/runestone/books/published/webfundamentals/Javascript/asyncJavascript.html
Details

- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: JRuby 1.4
- Component/s: Core Classes/Modules
- Labels: None
- Environment: Windows/MacOS/Linux
- Number of attachments:

Description

Ruby 1.8.x has the following behavior:

    irb(main):006:0> File.open("c:/bla/foo", "wb")
    Errno::ENOENT: No such file or directory - c:/bla/foo
            from (irb):6:in `initialize'
            from (irb):6:in `open'
            from (irb):6

JRuby instead gives the following:

    irb(main):001:0> File.open("c:/bla/foo", "wb")
    IOError: Das System kann den angegebenen Pfad nicht finden
            from (irb):2:in `initialize'
            from (irb):2:in `open'
            from (irb):2

(The German message is Windows' "The system cannot find the path specified".)

The problem is that IOError is thrown in JRuby, while Errno::ENOENT is thrown in Ruby MRI. I found this problem while using Radiant 0.8.1, which has rack-cache on board. Rack-cache rescues only Errno::ENOENT and therefore fails to run Radiant. Here is the code from rack-cache (GEM_HOME\gems\radiant-0.8.1\vendor\rack-cache\lib\rack\cache\metastore.rb):

    def write(key, entries)
      path = key_path(key)
      File.open(path, 'wb') do |io|
        Marshal.dump(entries, io, -1)
      end
    rescue Errno::ENOENT
      Dir.mkdir(File.dirname(path), 0755)
      retry
    end

I'll write a spec and apply a patch soon.

Activity

I thought about fixing JRuby on this, to at least provide one patch instead of issues. But if you have a patch, feel free to provide it. I will take care of the spec stuff. I meant fixing JRuby, not Radiant. Radiant can be fixed by editing rack-cache as a workaround. I will inform the Radiant team about a possible solution.

Michael: got it, thanks! Meanwhile, here's the patch (in my dev repo): I'll wait for your spec/tests and then commit to the official master repository.

P.S. And I should read bug reports more carefully, so that I wouldn't miss at first reading the last line of your report about the desire to work on the patch; sorry about that.

Here's a patch with a new test spec.

Vladimir, I attached a patch file with a test. Is this the correct way to provide patches?
Any further readings regarding the contribution process?

The test patch is good. Thanks! Applied the fix and the test in rev. b87f3cd. Michael, as for reading, take a look at the section "Getting Involved". Typically, the process is simple and straightforward: pick up a bug to work on, fix it, most preferably with an accompanying test, and attach a patch to the JIRA issue. That's all. As for formatting, just follow the existing sources (no TABs). Alternatively, there is the RubySpec project; please consider contributing there as well, with all the Ruby specification issues. That way, these tests will be used by multiple implementations (C-Ruby, JRuby, Rubinius, MacRuby, etc.).

Yes, this is a JRuby incompatibility. Will fix.

Michael, when you say "write a spec and apply a patch soon", what do you mean? A rubyspec for this particular case, right? And the patch: do you mean for JRuby or for Radiant? I have a preliminary patch for this issue as well, but it would be really good to have the specs/tests for that.
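The expected MRI behavior under discussion can be reproduced with a short standalone script; the directory path below is a deliberately nonexistent placeholder:

```ruby
# On MRI, opening a file under a nonexistent directory raises
# Errno::ENOENT -- the behavior this JRuby fix restores.
raised = false
begin
  File.open("/no_such_dir_8f3a2c/foo", "wb")
rescue Errno::ENOENT
  raised = true
end
puts raised ? "Errno::ENOENT raised" : "no Errno::ENOENT raised"
```

A script like this is essentially what the rack-cache rescue/retry logic quoted above relies on: Errno::ENOENT, not a generic IOError, must be raised when the parent directory is missing.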
http://jira.codehaus.org/browse/JRUBY-4380?focusedCommentId=203789&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Explicit type conversions often mask errors related to a change of a pointer type. One such error is the casting of a pointer to 32-bit objects into a pointer to 64-bit ones. Let us look at one example received from the users of our tool PVS-Studio (Viva64). The error shows up after porting the code to 64-bit Windows:

    void ProcessTime(long * pTime) {
      time((time_t *)pTime);
    }

In a 32-bit program, the 32-bit version of the type time_t was used. On a 64-bit system, only the 64-bit version of the time_t type is available. This is explicitly defined in the header file crtdefs.h:

    #ifdef _USE_32BIT_TIME_T
      #ifdef _WIN64
        #error You cannot use 32-bit time_t (_USE_32BIT_TIME_T) with _WIN64
      #endif
    #endif

    #ifdef _USE_32BIT_TIME_T
      typedef __time32_t time_t;
    #else
      typedef __time64_t time_t;
    #endif

The explicit type conversion we have demonstrated can lead to unpredictable program behavior or a crash. It is difficult to diagnose such errors, as the construct of the explicit type conversion suppresses compiler warnings (see the note "Search of explicit type conversion errors in 64-bit programs"). The diagnostic warning "V114. Dangerous explicit type pointer conversion", generated by the PVS-Studio (Viva64) code analyzer when checking 64-bit projects, helps detect such errors.

Besides the example given above, the diagnostic warning V114 can detect a similar error related to a change of an array type:

    int array[4] = { 1, 2, 3, 4 };
    size_t *sizetPtr = (size_t *)(array);
    cout << sizetPtr[1] << endl;

Result on the 32-bit system: 2
Result on the 64-bit system: 17179869187
http://www.viva64.com/en/b/0034/
Getting Started

The SAMPLES namespace includes two DeepSee samples. One is the DeepSee.Study.Patient class and related classes. This sample is meant for use as the basis of a DeepSee model; it does not initially contain any data. The DeepSee.Model package includes sample cubes, subject areas, KPIs, pivot tables, and dashboards, for use as reference during this tutorial.

The first sample is intended as a flexible starting point for working with DeepSee. You use this sample to generate as much data or as little data as needed, and then you use the DeepSee Architect to create a DeepSee model that explores this data. You can then create DeepSee pivot tables, KPIs, and dashboards based on this model. The sample contains enough complexity to enable you to use the central DeepSee features and to test many typical real-life scenarios. This book presents hands-on exercises that use this sample.

DeepSee uses SQL to access data while building the cube, and also when executing detail listings. If your model refers to any class properties that are SQL reserved words, you must enable support for delimited identifiers so that DeepSee can escape the property names. For a list of reserved words, see the "Reserved Words" section in the Caché SQL Reference. For information on enabling support for delimited identifiers, see the chapter "Identifiers" in Using Caché SQL.

Be sure to consult the online InterSystems Supported Platforms document for this release for information on system requirements for DeepSee.

Getting Started

Most of the tools that you will use are contained in the Management Portal. To log on:

- Click the InterSystems Launcher and then click Management Portal. Depending on your security, you may be prompted to log in with a Caché username and password.
- Switch to the SAMPLES namespace as follows:
  - Click Switch.
  - Click SAMPLES.
  - Click OK.

Regenerating Data

The tutorial uses a larger, slightly more complex set of data than is initially provided in SAMPLES.
To generate data for this tutorial:

- In the Terminal, switch to the SAMPLES namespace:

      zn "SAMPLES"

- Execute the following command:

      do ##class(DeepSee.Populate).GenerateAll()

- In the Management Portal, switch to the SAMPLES namespace, as described earlier.
- Click System Explorer > SQL.
- Click the Execute Query tab.
- Click Actions and then Tune All Tables. The system then displays a dialog box where you select a schema and confirm the action.
- For Schema, select the DeepSee_Study schema.
- Click Finish.
- Click Done.

The system then runs the Tune Table facility in the background.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=D2DT_CH_SETUP
Back to: C#.NET Tutorials For Beginners and Professionals

Introduction to Collections in C#

In this article, I am going to give you a brief introduction to Collections in C#. Please read our previous article, where we discussed the advantages and disadvantages of arrays in C# with examples. As part of this article, we are going to discuss the following pointers in detail. So, let's first understand the problems with the traditional array in C#, and then we will discuss how to overcome those problems using collections.

What are Arrays and their disadvantages in C#?

In simple words, we can say that arrays in C# are simple data structures used to store similar types of data items in sequential order. Although arrays in C# are commonly used, they have some limitations. For example, you need to specify the array's size while creating the array. If you want to modify it at execution time, that is, increase or decrease the size of the array, then you need to do it manually by creating a new array, or by using the Array class's Resize method, which internally creates a new array and copies the existing array's elements into the new array.

Following are the limitations of arrays in C#:

- The array size is fixed. Once the array is created, we can never increase its size. If we want, we can do it manually by creating a new array and copying the old array's elements into the new array, or by using the Array class's Resize method, which does the same thing: it creates a new array, copies the old array's elements into it, and then destroys the old array.
- We can never insert an element into the middle of an array.
- We cannot delete or remove elements from the middle of an array.

To overcome the above problems, Collections were introduced in C# 1.0.

What is a Collection in C#?
The Collections in C# are a set of predefined classes, present in the System.Collections namespace, that provide greater capabilities than traditional arrays. The collections in C# are reusable, more powerful, more efficient, and, most importantly, they have been designed and tested to ensure quality and performance. So, in simple words, we can say a collection in C# is a dynamic array. That means the collections in C# can store multiple values, with the following features:

- The size can be increased dynamically.
- We can insert an element into the middle of a collection.
- They also provide the facility to remove or delete elements from the middle of a collection.

The collections that shipped with .NET Framework 1.0 are called simply collections, or non-generic collections, in C#. These collection classes are present inside the System.Collections namespace. Examples include Stack, Queue, LinkedList, SortedList, ArrayList, HashTable, etc.

Auto-resizing of collections: The capacity of a collection increases dynamically, i.e. as we keep adding new elements, the size of the collection keeps increasing automatically. Every collection class has three constructors, and the behavior of a collection depends on which constructor created it:

- Default constructor: Initializes a new instance of the collection class that is empty and has a default initial capacity of zero, which becomes four after adding the first element; whenever more room is needed, the current capacity doubles.
- Collection(int capacity): Initializes a new instance of the collection class that is empty and has the specified initial capacity; here too, the capacity doubles whenever more room is needed.
- Collection(Collection): Initializes a new instance of the collection class that contains elements copied from the specified collection and has an initial capacity equal to the number of elements copied; here too, the capacity doubles whenever more room is needed.

In the next article, I am going to discuss the ArrayList in C# with examples. Here, in this article, I gave you a brief introduction to Collections in C#. I hope this article helps you with your needs. I would like to have your feedback. Please post your feedback, questions, or comments about this article.
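To tie the pieces above together, here is a minimal sketch (the values and variable names are my own, not from the article) showing an ArrayList doing the three things a plain array cannot do without manual copying: growing on demand, inserting into the middle, and removing from the middle.

```csharp
using System;
using System.Collections;

// A plain array needs a fixed size up front; ArrayList grows as we Add.
ArrayList list = new ArrayList();   // default constructor: initial capacity 0

list.Add(10);                       // capacity becomes 4 on the first Add
list.Add(20);
list.Add(30);

list.Insert(1, 15);                 // insert into the middle: 10, 15, 20, 30
list.RemoveAt(2);                   // remove from the middle: 10, 15, 30

Console.WriteLine(list.Count);      // 3
foreach (object item in list)
    Console.Write(item + " ");      // 10 15 30
Console.WriteLine();
```

Because ArrayList stores everything as object, value types are boxed on the way in; that is one reason the later generic List&lt;T&gt; is usually preferred when type safety matters.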
https://dotnettutorials.net/lesson/collections-csharp/
ES6 for Django Lovers! The Django community is not one to fall to bitrot. Django supports every new release of Python at an impressive pace. Active Django websites are commonly updated to new releases quickly, and we take pride in providing stable, predictable upgrade paths. We should be as adamant about keeping up that pace with our frontends as we are with all the support Django and Python put into the backend. I think I can make the case that ES6 is part of that natural forward pace, and help you get started upgrading the frontend half of your projects today. The Case for ES6 As a Django developer, and likely someone who prefers command lines, databases, and backends, you might not be convinced that ES6 and other Javascript language changes matter much. If you enjoy the concise expressiveness of Python, then ES6's improvements over earlier Javascript should matter a lot to you. If you appreciate the organization and structure that Django's common layouts for projects and applications provide, then ES6's module and import system is something you'll want to take advantage of. If you benefit from the wide variety of third-party packages the Python Package Index makes available just a pip install away, then you should be reaching out to the rich ecosystem of packages NPM has available for frontend code, as well. For all the reasons you love Python and Django, you should love ES6, too! Well Structured Code for Your Whole Project In any Python project, you take advantage of modules and packages to break up a larger body of code into sensible pieces. It makes your project easier to understand and maintain, both for yourself and other developers trying to find their way around a new codebase. If you're like many Python web developers, the gap between your clean, organized Python code and your messy, spaghetti Javascript code is something that bothers you.
ES6 introduces a native module and import system, with a lot of similarities to Python's own modules.

import React from 'react';
import Dispatcher from './dispatcher.jsx';
import NoteStore from './store.jsx';
import Actions from './actions.jsx';
import {Note, NoteEntry} from './components.jsx';
import AutoComponent from './utils.jsx'

We don't benefit only from organizing our own code, of course. We derive untold value from a huge and growing collection of third-party libraries available in Python, often specifically for Django. Django itself is distributed in concise releases through PyPI and is available to your project thanks to the well-organized structure and distribution service PyPI provides. Now you can take advantage of the same thing on the frontend. If you prefer to trust a stable package distribution for Django and the other dependencies of your project, then it is a safe bet that you are frustrated when you have to "install" a Javascript library by just unzipping it and committing the whole thing into your repository. Our Javascript code can feel unmanaged and fragile by comparison to the rest of our projects. NPM has grown into the de facto home of Javascript libraries and grows at an incredible pace. Consider it a PyPI for your frontend code. With tools like Browserify and Webpack, you can wrap all the NPM-installed dependencies for your project, along with your own organized tree of modules, into a single bundle to ship with your pages. These work in combination with ES6 modules to give you the scaffolding of modules and package management to organize your code better. A Higher Baseline This new pipeline allows us to take advantage of the language changes in ES6. It exposes the wealth of packages available through NPM. We hope it will raise the standard of quality within our front-end code. This raised bar puts us in a better position to continue pushing our setup forward.
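Beyond modules, a few ES6 features map almost one-to-one onto things Python developers already lean on. A small illustrative sketch (the names and data here are mine, not from the post):

```javascript
// Block-scoped `const`/`let` replace function-scoped `var`
const names = ['ada', 'bella', 'cora'];

// Arrow functions read like Python lambdas
const shout = name => name.toUpperCase();

// Template literals behave much like Python's f-strings
const greet = name => `Hello, ${shout(name)}!`;

// Destructuring is the analogue of tuple unpacking
const [first, ...rest] = names;

console.log(greet(first));  // Hello, ADA!
console.log(rest.length);   // 2
```

None of this requires a framework; a transpiler like Babel (via Browserify or Webpack, as described below) makes it usable in older browsers today.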
How Caktus Integrates ES6 With Django Combining a Gulp-based pipeline for frontend assets with Django's runserver development web server turned out to be straightforward once we inverted the usual setup. Instead of teaching Django to trigger the asset pipeline, we embedded Django into our default gulp task. Now, we set up livereload, which reloads the page when CSS or JS has changed. We build our styles and scripts, transforming our Less and ES6 into CSS and Javascript. The task will launch Django's own runserver for you, passing along --address and --port parameters. The rebuild() task delegated to below will continue to monitor all our frontend source files for changes and automatically rebuild them when necessary.

// Starts our development workflow
gulp.task('default', function (cb) {
  livereload.listen();

  rebuild({
    development: true,
  });

  console.log("Starting Django runserver http://" + argv.address + ":" + argv.port + "/");

  var args = ["manage.py", "runserver", argv.address + ":" + argv.port];
  var runserver = spawn("python", args, {
    stdio: "inherit",
  });

  runserver.on('close', function(code) {
    if (code !== 0) {
      console.error('Django runserver exited with error code: ' + code);
    } else {
      console.log('Django runserver exited normally.');
    }
  });
});

Integration with Django's collectstatic for Deployments Options like Django Compressor make integration with common Django deployment pipelines a breeze, but you may need to consider how to combine ES6 pipelines more carefully. By running our Gulp build task before collectstatic and including the resulting bundled assets — both Less and ES6 — in the collected assets, we can make our existing Gulp builds and Django work together very seamlessly. References - GulpJS - ES6 Features - Django Project Template, maintained by Caktus
https://www.caktusgroup.com/blog/2016/05/02/es6-django-lovers/
Hey Jesse,

It should be just this. This displays the number 4!

#include "Nextion.h"

NexText t0 = NexText(0, 1, "t0");

char buffer[100] = {0};
int Mypins[4][5] = {{0,1,2,3,4},{5,6,7,8,9},{10,11,12,13,14},{15,16,17,18,19}};

NexTouch *nex_listen_list[] = {
    NULL
};

void setup(void) {
    nexInit();
}

void loop(void) {
    nexLoop(nex_listen_list);

    uint16_t number;
    number = Mypins[0][4];

    memset(buffer, 0, sizeof(buffer));
    itoa(number, buffer, 10);
    t0.setText(buffer);
}

Rod

It works. I tested it! Did you put a text box on the screen? Make sure the name is t0 for the text box. Now just to be sure: you are using an Arduino to send this info to the screen?

Hi Jesse,

Ok, this is how it works. You need to help us help you. When somebody asks you what you are using and whether you did something, tell us. We aren't mind readers. The best way to help is to attach the files you are using, e.g. the HMI file for the screen and the Arduino ino. Tell us exactly what your setup is, wiring detail, screen size, etc., e.g. Arduino Uno, Arduino connected to pins 2 and 3. Add a photo of your setup. Tell us what libraries you are using on the Arduino, and what errors (or not) you are getting. Give us something to work with!

Rod

Cool. I had frustration with it too. Have you checked out the smart fishtank tutorial? It uses an RTC and has a lot of code you might be able to use.

Bugger. I hate it when libraries don't work together. Send us an email. I'll see what I can do. parsosmail@yahoo.com.au

Jesse lyman
http://support.iteadstudio.com/support/discussions/topics/11000001039
Hi There

The last task in the exercise here wants you to initialise the hobbies array with a length of zero in the constructor, to prevent an exception when ViewProfile is called and hobbies have not been set.

"If you call ViewProfile() before calling SetHobbies(), you'll get an error because the hobbies field isn't set to any value. Fix the class so that you can call ViewProfile() without adding hobbies."

In the constructor of Profile, I've tried to initialise the hobbies array with a length of zero with no luck. Can anyone help with what I'm doing incorrectly? As it is below, I get an error saying that the value for hobbies must be a compile-time constant. How does one initialise an array with length 0 in the Profile constructor?

namespace DatingProfile
{
    class Profile
    {
        private string name;
        private int age;
        private string city;
        private string country;
        private string pronouns;
        private string hobbies;

        public Profile(string name, int age, string city, string country, string pronouns = "they/them", string[] hobbies = new string[0])
        {
            this.name = name;
            this.age = age;
            this.city = city;
            this.country = country;
            this.pronouns = pronouns;
            this.hobbies = hobbies;
        }
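For anyone landing here with the same error: C# only allows compile-time constants as default parameter values, and new string[0] isn't one. The usual fix is to default the parameter to null (which is a valid constant) and substitute the empty array inside the constructor body. Note also that the hobbies field needs to be string[], not string, for the assignment to compile. A trimmed sketch of the relevant part (I've dropped the other fields; ViewProfile/SetHobbies are assumed from the exercise):

```csharp
using System;

class Profile
{
    private string[] hobbies;   // must be string[], not string

    // `new string[0]` is not a compile-time constant, but `null` is,
    // so default to null and swap in the empty array in the body.
    public Profile(string[] hobbies = null)
    {
        this.hobbies = hobbies ?? new string[0];
    }

    public int HobbyCount
    {
        get { return hobbies.Length; }
    }
}

class Program
{
    static void Main()
    {
        // No hobbies passed: hobbies is an empty array rather than null,
        // so iterating over it (as ViewProfile would) cannot throw.
        Console.WriteLine(new Profile().HobbyCount);                      // 0
        Console.WriteLine(new Profile(new[] { "chess" }).HobbyCount);     // 1
    }
}
```

The same pattern applies with the full parameter list from your constructor; only the hobbies parameter and field need to change.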
https://discuss.codecademy.com/t/the-object-of-your-affection/436280
Introduction

FTP, or File Transfer Protocol, is a common way to transfer files. For FTP, Python has a built-in package called ftplib. There is also the SSH File Transfer Protocol (SFTP), which uses SSH to encrypt the communication. We will focus just on traditional FTP in this guide. For SFTP you can check out the Paramiko package.

In this guide, we will look at some basic operations like:

- Connect and login to an FTP server
- List directories and files
- Upload and download files

Note: these examples all use Python 3.

Connect to an FTP server

The first thing to learn is how to connect to an FTP server. We'll look at how to connect both anonymously and with credentials.

- Port 21 - Default FTP port

Anonymous connection

This first example shows how to use a with context manager to login to an FTP server. The connection will automatically be closed. The code will print out the welcome banner just to ensure it connected.

# Anonymous FTP login
from ftplib import FTP

with FTP('') as ftp:
    ftp.login()  # No credentials means an anonymous login
    print(ftp.getwelcome())

Authenticated login

If you want to authenticate, you can simply pass the user and passwd parameters to the FTP() constructor, or you can call connect() and login() yourself. This example shows how to login using both methods.

from ftplib import FTP

# Connect and login at once
with FTP(host='', user='me', passwd='secret') as ftp:
    print(ftp.getwelcome())

# Or connect and login as separate steps
ftp = FTP()
ftp.connect('', 21)
ftp.login('me', 'secret')
print(ftp.getwelcome())

Connect with SSL/TLS

Use the ftplib.FTP_TLS class instead. Note, this is not SFTP, which uses SSH over port 22; this is FTP with SSL/TLS over port 21. If your provider offers this option, always use it over plaintext FTP. Then make sure you call ftplib.FTP_TLS.prot_p(), which will set up the secure data connection.

from ftplib import FTP_TLS

# Connect to server using TLS; port 21 is the default.
# This is not SFTP over SSH
with FTP_TLS('', 'user', 'secret') as ftp:
    ftp.prot_p()  # Secure the data connection
    print(ftp.getwelcome())

Work with directories

Let's look first at how to do some basic operations with directories, like:

- printing your current working directory
- changing directories
- creating a directory
- removing a directory

Print current working directory

Once you are connected, you will first want to know where you are in the directory structure. The pwd() function on the ftplib.FTP object provides this data.

from ftplib import FTP

with FTP('', 'user', 'secret') as ftp:
    print(ftp.pwd())  # Usually the default is /

Create a directory

You can make a directory using ftplib.FTP.mkd() and pass it the name of the directory. It will return a string containing the name of the directory created.

from ftplib import FTP

with FTP('', 'user', 'secret') as ftp:
    ftp.mkd('my_dir')

Remove a directory

To remove a directory, just use rmd() on your FTP object. A directory must be empty to delete it.

from ftplib import FTP

with FTP('', 'user', 'secret') as ftp:
    ftp.rmd('my_dir')

Change current working directory

To switch to a different directory, use ftplib.FTP.cwd().

from ftplib import FTP

with FTP('', 'user', 'secret') as ftp:
    print(ftp.cwd('other_dir'))  # Change to `other_dir/`

List directory contents

The next basic task you will probably want to do is list the files in a directory. The ftplib.FTP.dir() command will list all the files in your current directory. It does not just provide the filename though; it provides a string that contains the permissions, whether it is a directory, byte size, modification timestamp, owner, and group information. It is formatted just like the output from an ls command. Since the output is a string, you will have to parse out the information from it manually using split() or regular expressions.
from ftplib import FTP

with FTP('', 'user', 'secret') as ftp:
    # List files
    files = []

    # dir() takes a callback for each line of the listing
    ftp.dir(files.append)

    for f in files:
        print(f)

An example file listing might look similar to:

drwxr-xr-x 3 dano dano 4096 Mar 12 23:15 www-null

Work with files

Now that we have learned how to navigate directories, it is time to learn how to:

- upload files
- download files
- get the size of a file
- rename a file
- delete a file

Upload a file

You may not be able to upload a file on every server, especially if you are only logged in anonymously. If you do have the permissions though, this example will show you how to upload a file. For text files use storlines() and for binary files use storbinary().

from ftplib import FTP

with FTP('', 'user', 'pass') as ftp:
    # For text or binary files, always open in `rb` mode
    with open('test.txt', 'rb') as text_file:
        ftp.storlines('STOR test.txt', text_file)

    with open('image.png', 'rb') as image_file:
        ftp.storbinary('STOR image.png', image_file)

Get the size of a file

To check the size of a file on a remote FTP server, you can simply use the size() function as demonstrated in the following example. Depending on whether you want to check a text file or a binary file, you need to tell the FTP server which type to use. Use sendcmd() and pass the type: either TYPE I for image/binary data or TYPE A for ASCII text. The size() function will return the size of the file in bytes.

from ftplib import FTP, all_errors

with FTP('', 'user', 'pass') as ftp:
    try:
        ftp.sendcmd('TYPE I')  # "Image" or binary data
        print(ftp.size('image.png'))  # Get size of 'image.png' on server
    except all_errors as error:
        print(f"Error checking image size: {error}")

    try:
        ftp.sendcmd('TYPE A')  # "ASCII" text
        print(ftp.size('test.txt'))
    except all_errors as error:
        print(f"Error checking text file size: {error}")

Rename a file

To rename a file on a remote FTP server, use the rename() function and pass the original filename and the new filename.
from ftplib import FTP, all_errors

with FTP('', 'user', 'pass') as ftp:
    try:
        ftp.rename('test.txt', 'my_file.txt')
    except all_errors as error:
        print(f'Error renaming file on server: {error}')

Download a file

To download a file you can use retrlines() or retrbinary() on the FTP object.

FTP.retrlines(cmd, callback=None)
FTP.retrbinary(cmd, callback, blocksize=8192, rest=None)

For the callback, we'll use write() on the file object so each chunk is written to the file we have open.

from ftplib import FTP

with FTP('', 'user', 'pass') as ftp:
    # For text files
    with open('local_test.txt', 'w') as local_file:  # Open local file for writing
        # Download `test.txt` from the server and write it to `local_file`
        # Pass an absolute or relative path
        response = ftp.retrlines('RETR test.txt', local_file.write)

        # Check the response code
        if response.startswith('226'):  # Transfer complete
            print('Transfer complete')
        else:
            print('Error transferring. Local file may be incomplete or corrupt.')

    # For binary files use `retrbinary()`
    with open('image.png', 'wb') as local_file:
        ftp.retrbinary('RETR image.png', local_file.write)

Delete a file

To delete a remote file, use the delete() function and give it the filename you want to delete. You cannot delete directories with this. For directories, you must use rmd() as shown in the example earlier.

from ftplib import FTP, all_errors

with FTP('', 'user', 'pass') as ftp:
    try:
        ftp.delete('test.txt')
    except all_errors as error:
        print(f'Error deleting file: {error}')

Error checking

So far, most of the examples include little exception handling. There are a few exceptions that may be thrown if the server returns an error or a malformed response. To catch everything, wrap your code with a try statement and catch all FTP exceptions with ftplib.all_errors.
Some of the exceptions to watch for are:

- ftplib.error_reply - unexpected server reply
- ftplib.error_temp - temporary error
- ftplib.error_perm - permanent error
- ftplib.error_proto - malformed server reply
- ftplib.all_errors - catches all of the above errors that a server can return, plus OSError and EOFError

from ftplib import FTP, all_errors

with FTP('', 'user', 'secret') as ftp:
    try:
        print(ftp.getwelcome())
    except all_errors as e:
        print(f'Error with FTP: {e}')

Conclusion

After reading this guide, you should understand how to connect to an FTP server with or without TLS and login anonymously or as an authenticated user. You should also understand how to move around directories, create and delete directories, list files, and upload and download files over FTP.
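Circling back to the directory-listing section: dir() hands you raw ls-style lines, so here is one way to pull fields out with split(). The column positions assume the common UNIX server format shown earlier; other servers format LIST output differently, so treat this as a sketch rather than a robust parser.

```python
def parse_list_line(line):
    """Split an ls-style FTP LIST line into a small dict.

    Assumes the common UNIX layout:
    permissions, link count, owner, group, size, month, day, time/year, name
    """
    parts = line.split(maxsplit=8)  # keep filenames with spaces intact
    return {
        'permissions': parts[0],
        'owner': parts[2],
        'group': parts[3],
        'size': int(parts[4]),
        'name': parts[8],
        'is_dir': parts[0].startswith('d'),
    }

line = 'drwxr-xr-x 3 dano dano 4096 Mar 12 23:15 www-null'
info = parse_list_line(line)
print(info['name'], info['size'], info['is_dir'])  # www-null 4096 True
```

For servers that support it, the MLSD command (ftplib.FTP.mlsd()) returns machine-readable listings and avoids this kind of manual parsing altogether.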
https://www.devdungeon.com/content/python-ftp-client-tutorial
In the Client app create a new folder named Components. This is not a special name; we can choose any name we wish. Once you've created the new Components folder, create a file within it named MyFirstComponent.razor and enter the following mark-up.

<div>
    <h2>This is my first component</h2>
</div>

Now edit the Index.razor file. At this point we can either reference the component with a fully qualified name:

<CreatingAComponent.Client.Components.MyFirstComponent/>

Or edit /_Imports.razor and add @using CreatingAComponent.Client.Components. The using statements here are cascaded into all Razor views, which means the mark-up to use the new component in /Pages/Index.razor no longer needs a namespace.

@page "/"

<h1>Hello, world!</h1>

<MyFirstComponent/>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

Now run the app, and the heading from MyFirstComponent will render on the home page.
https://blazor-university.com/components/creating-a-component/
100 Days of VR: Day 6 Survival Shooter Tutorial II Entry posted by Josh Chang · Today in day 6, we're going to finish the rest of the Survival Shooter tutorial and finally move on to developing a simple game of my own! Today, we learn more about: - Creating the UI - Attacking and moving for the player and the enemies - Raycasting - Creating more animations - …And more! So let's get started! Health HUD In the next part of the video series, we went on to create the health UI for the game, for when the enemy attacks us. Creating the Canvas parent The first thing we want to do is to create a new Canvas object in the hierarchy. We called it HUDCanvas. We add a Canvas Group component to our Canvas. According to the documentation, anything we set in a Canvas Group also applies to its children. Specifically, we want to uncheck Interactable and Blocks Raycasts; we don't want the UI to do either of those things. Adding the Health UI Container Next, we create an Empty GameObject as a child of our HUDCanvas. This will be the parent container for our Health UI. We'll call it HealthUI. What's interesting to note is that, because it's a child of the Canvas, it also has a Rect Transform component attached. Click on the Rect Transform and position HealthUI in the bottom left corner of the game. Remember to hold alt + shift to move both the anchor and the position! Adding the Health Image Next up, we create an Image UI element as a child of HealthUI. In the Image (Script) component, we just need to attach the provided Heart.png image. You should see something like this in our scene tab: And it should look something like this in our game tab: Creating our UI Slider Next up, we need to create the HP bar that we use to indicate the HP that our player has. We do that by creating a Slider UI GameObject as a child of our canvas. The Slider will come with child objects of its own. Delete everything except for Fill Area. Next, we want to set up our HP values.
In the Slider GameObject, set Max Value to 100 and set Value to 100 as well.

Note: I was not able to get the slider to fit perfectly like the video did in the beginning. If you weren't able to do so either, go to the Rect Transform of the slider and play with the positioning.

Adding a Screen Flicker When the Player Gets Hit

Next, we created an Image UI called DamageImage that's a child of the HUDCanvas. We want to make it fill the whole canvas. This can be accomplished by going to Rect Transform, clicking the positioning box, and then clicking the stretch width and height button while holding alt + shift.

We also want to make the color fully transparent to start. We can do that by clicking on Color and moving the A (alpha) value to 0.

When you're done with everything, your HUDCanvas should look something like this.

Player Health

Now that we have our Player Health UI created, it's time to use it. We attached an already created PlayerHealth script to our Player GameObject. Here's the code:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;
using UnityEngine.SceneManagement;

public class PlayerHealth : MonoBehaviour
{
    public int startingHealth = 100;
    public int currentHealth;
    public Slider healthSlider;
    public Image damageImage;
    public AudioClip deathClip;
    public float flashSpeed = 5f;
    public Color flashColour = new Color(1f, 0f, 0f, 0.1f);

    Animator anim;
    AudioSource playerAudio;
    PlayerMovement playerMovement;
    //PlayerShooting playerShooting;
    bool isDead;
    bool damaged;

    void Awake ()
    {
        // Set up the references.
        anim = GetComponent <Animator> ();
        playerAudio = GetComponent <AudioSource> ();
        playerMovement = GetComponent <PlayerMovement> ();
        //playerShooting = GetComponentInChildren <PlayerShooting> ();

        // Set the initial health of the player.
        currentHealth = startingHealth;
    }

    void Update ()
    {
        // If the player has just been damaged...
        if(damaged)
        {
            // ... set the colour of the damageImage to the flash colour.
            damageImage.color = flashColour;
        }
        // Otherwise...
        else
        {
            // ... transition the colour back to clear.
            damageImage.color = Color.Lerp (damageImage.color, Color.clear, flashSpeed * Time.deltaTime);
        }

        // Reset the damaged flag.
        damaged = false;
    }

    public void TakeDamage (int amount)
    {
        // Set the damaged flag so the screen will flash.
        damaged = true;

        // Reduce the current health by the damage amount.
        currentHealth -= amount;

        // Set the health bar's value to the current health.
        healthSlider.value = currentHealth;

        // Play the hurt sound effect.
        playerAudio.Play ();

        // If the player has lost all its health and the death flag hasn't been set yet...
        if(currentHealth <= 0 && !isDead)
        {
            // ... it should die.
            Death ();
        }
    }

    void Death ()
    {
        // Set the death flag so this function won't be called again.
        isDead = true;

        // Turn off any remaining shooting effects.
        //playerShooting.DisableEffects ();

        // Tell the animator that the player is dead.
        anim.SetTrigger ("Die");

        // Set the audiosource to play the death clip and play it (this will stop the hurt sound from playing).
        playerAudio.clip = deathClip;
        playerAudio.Play ();

        // Turn off the movement and shooting scripts.
        playerMovement.enabled = false;
        //playerShooting.enabled = false;
    }

    public void RestartLevel ()
    {
        // Reload the level that is currently loaded.
        SceneManager.LoadScene (0);
    }
}

Like before, the video commented out some of the code because we haven't reached that point yet. It's important to note how the functions have been separated into modules that specify what everything does, instead of stuffing everything inside Update(). Some things to note from our script:

Looking at Update()

Inside Update() we create the damage-flicker effect. If the player has just been damaged (the damaged Boolean is true), we set the DamageImage to the flash colour; at the end of the frame we set damaged back to false. On the following frames, Update() lerps the colour back from the damaged red to fully transparent over time.

Taking Damage

How does damaged get set to true? From TakeDamage()! Notice the public in:

public void TakeDamage (int amount)

We've seen this before in the previous tutorial. As you recall, this means that we can call this function whenever we have access to the script component.

Attaching the Components to the Script

The rest of the code is pretty well documented, so I'll leave it to you to read through the comments. Before we move on, we have to attach the components to our script.

Creating the Enemy Attack script

It was mentioned earlier that we have a public TakeDamage() function that other scripts can call.
The question then is, which script calls it? The answer: the EnemyAttack script. Already provided for us, just attach it to the Enemy GameObject (it's the enemy that does the attacking). The code looks something like this:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    public float timeBetweenAttacks = 0.5f;
    public int attackDamage = 10;

    Animator anim;
    GameObject player;
    PlayerHealth playerHealth;
    EnemyHealth enemyHealth;
    bool playerInRange;
    float timer;

    void Awake ()
    {
        // Set up the references.
        player = GameObject.FindGameObjectWithTag ("Player");
        playerHealth = player.GetComponent <PlayerHealth> ();
        enemyHealth = GetComponent <EnemyHealth> ();
        anim = GetComponent <Animator> ();
    }

    void OnTriggerEnter (Collider other)
    {
        // If the entering collider is the player...
        if(other.gameObject == player)
        {
            // ... the player is in range.
            playerInRange = true;
        }
    }

    void OnTriggerExit (Collider other)
    {
        // If the exiting collider is the player...
        if(other.gameObject == player)
        {
            // ... the player is no longer in range.
            playerInRange = false;
        }
    }

    void Update ()
    {
        // Add the time since Update was last called to the timer.
        timer += Time.deltaTime;

        // If the timer exceeds the time between attacks, the player is in range and this enemy is alive...
        if(timer >= timeBetweenAttacks && playerInRange && enemyHealth.currentHealth > 0)
        {
            // ... attack.
            Attack ();
        }

        // If the player has zero or less health...
        if(playerHealth.currentHealth <= 0)
        {
            // ... tell the animator the player is dead.
            anim.SetTrigger ("PlayerDead");
        }
    }

    void Attack ()
    {
        // Reset the timer.
        timer = 0f;

        // If the player has health to lose...
        if(playerHealth.currentHealth > 0)
        {
            // ... damage the player.
            playerHealth.TakeDamage (attackDamage);
        }
    }
}

Like before, some things aren't commented in yet; however, the basic mechanic is:

- The enemy gets near the player, causing OnTriggerEnter() to fire, and we switch the playerInRange Boolean to true.
- In our Update() function, if it's time to attack and the enemy is in range, we call the Attack() function, which then calls TakeDamage() if the player is still alive.
- Afterwards, if the player has 0 or less HP, we set the animation trigger so the player plays the death animation.
- Otherwise, if the player outruns the zombie and exits the collider, OnTriggerExit() is called and playerInRange is set to false, preventing any attacks.

With that, we have everything for the game to be functional… or at least in the sense that we can only run away and get killed by the enemy.

Note: If the monster doesn't chase you, make sure you tagged the Player object with the Player tag, otherwise the script won't be able to find the Player object.

Harming Enemies

In the previous video, we made the enemy hunt down and kill the player. We currently have no way of fighting back.
We’re going to fix this in the next video by giving HP to the enemy. We can do that by attaching the EnemyHealth script to our Enemy GameObject. Here’s the); } } In a way, this is very similar to the PlayerHealth script that we have. The biggest difference is that when the player dies, the games ends, however when the enemy dies, we need to somehow get them out of the game. The flow of this script would go something like this: - We initialize our script in Awake() - Whenever the enemy takes damage via our public function: TakeDamage(), we play our special effects to show the enemy received damage and adjust our health variable - If the enemy’s HP ends up 0 or below, we run the death function which triggers the death animation and other death related code. - We call StartSinking() which will set the isSinking Boolean to be true. - You might notice that StartSinking() isn’t called anywhere. That’s because it’s being called as an event when our enemy animation finishes playing its death clip. You can find it under Events in the Animations for the Zombunny. - After isSinking is set to be true, our Update() function will start moving the enemy down beneath the ground. Moving to the Player Our enemy has HP now. The next thing we need to do is to do is to make our player character damage our enemy. The first thing we need to do is some special effects. We need to copy the particle component on the GunParticles prefab… and pass that into the GunBarrelEnd Game Object which is the child of Player Next, still in GunBarrelEnd, we add a Line Renderer component. This will be used to draw a line, which will be our bullet that gets fired out. For a material, we use the LineRendereredMaterial that’s provided for us. We also set the width of our component to 0.05 so that the line that we shoot looks like a small assault rifle that you might see in other games. Make sure to disable the renderer as we don’t want to show this immediately when we load. 
Next we need to add a Light component. We set it to be yellow.

Next up, we attach the player gunshot clip as the AudioSource on our gun. Finally, we attach the PlayerShooting script that was provided for us to shoot the gun. Here it is:

using UnityEngine;

public class PlayerShooting : MonoBehaviour
{
    public int damagePerShot = 20;
    public float timeBetweenBullets = 0.15f;
    public float range = 100f;

    float timer;
    Ray shootRay = new Ray ();
    RaycastHit shootHit;
    int shootableMask;
    ParticleSystem gunParticles;
    LineRenderer gunLine;
    AudioSource gunAudio;
    Light gunLight;
    float effectsDisplayTime = 0.2f;

    void Awake ()
    {
        // Create a layer mask for the Shootable layer.
        shootableMask = LayerMask.GetMask ("Shootable");

        // Set up the references.
        gunParticles = GetComponent <ParticleSystem> ();
        gunLine = GetComponent <LineRenderer> ();
        gunAudio = GetComponent <AudioSource> ();
        gunLight = GetComponent <Light> ();
    }

    void Update ()
    {
        // Add the time since Update was last called to the timer.
        timer += Time.deltaTime;

        // If the Fire1 button is being pressed and it's time to fire...
        if(Input.GetButton ("Fire1") && timer >= timeBetweenBullets)
        {
            // ... shoot the gun.
            Shoot ();
        }

        // If the timer has exceeded the proportion of timeBetweenBullets that the effects should be displayed for...
        if(timer >= timeBetweenBullets * effectsDisplayTime)
        {
            // ... disable the effects.
            DisableEffects ();
        }
    }

    public void DisableEffects ()
    {
        // Disable the line renderer and the light.
        gunLine.enabled = false;
        gunLight.enabled = false;
    }

    void Shoot ()
    {
        // Reset the timer.
        timer = 0f;

        // Play the gun shot audioclip.
        gunAudio.Play ();

        // Enable the light.
        gunLight.enabled = true;

        // Stop the particles from playing if they were, then start the particles.
        gunParticles.Stop ();
        gunParticles.Play ();

        // Enable the line renderer and set its first position to be the end of the gun.
        gunLine.enabled = true;
        gunLine.SetPosition (0, transform.position);

        // Set the shootRay so that it starts at the end of the gun and points forward from the barrel.
        shootRay.origin = transform.position;
        shootRay.direction = transform.forward;

        // Perform the raycast against gameobjects on the shootable layer and if it hits something...
        if(Physics.Raycast (shootRay, out shootHit, range, shootableMask))
        {
            // Try and find an EnemyHealth script on the gameobject hit.
            EnemyHealth enemyHealth = shootHit.collider.GetComponent <EnemyHealth> ();

            // If the EnemyHealth component exists...
            if(enemyHealth != null)
            {
                // ... the enemy should take damage.
                enemyHealth.TakeDamage (damagePerShot, shootHit.point);
            }

            // Set the second position of the line renderer to the point the raycast hit.
            gunLine.SetPosition (1, shootHit.point);
        }
        // If the raycast didn't hit anything on the shootable layer...
        else
        {
            // ... set the second position of the line renderer to the fullest extent of the gun's range.
            gunLine.SetPosition (1, shootRay.origin + shootRay.direction * range);
        }
    }
}

The flow of our script is:

- Awake() initializes our variables
- In Update() we wait for the user to left click to shoot, which calls Shoot()
- In Shoot() we create a Raycast that goes straight forward until it either hits an enemy or a structure, or reaches the max distance we set. From there, we create the length of our LineRenderer from the gun to the point we hit.
- After a couple more frames in Update(), we disable the LineRenderer to give the illusion that we're firing something out.

At this point, we have to do some cleanup work. We have to go back to the EnemyMovement script and uncomment the code that stops the enemy from moving when either the player or the enemy dies. The changes are highlighted:

using UnityEngine;
using System.Collections;

public class EnemyMovement : MonoBehaviour
{
    Transform player;
    PlayerHealth playerHealth;
    EnemyHealth enemyHealth;
    UnityEngine.AI.NavMeshAgent nav;

    void Awake ()
    {
        player = GameObject.FindGameObjectWithTag ("Player").transform;
        playerHealth = player.GetComponent <PlayerHealth> ();
        enemyHealth = GetComponent <EnemyHealth> ();
        nav = GetComponent <UnityEngine.AI.NavMeshAgent> ();
    }

    void Update ()
    {
        if(enemyHealth.currentHealth > 0 && playerHealth.currentHealth > 0)
        {
            nav.SetDestination (player.position);
        }
        else
        {
            nav.enabled = false;
        }
    }
}

After all of this is done, we have a playable game!

Note: if you start playing the game and try shooting the enemy and nothing happens, check that the enemy's Layer is set to Shootable.

Scoring Points

At this point we have a complete game! So what's next? As you can guess from the next video, we're creating a score system.
We end up doing something similar to what has been done in the previous 2 video tutorials, where we put a UI Text on the screen.

Anchor

With that being said, we create a UI Text in our HUDCanvas. We set the RectTransform to be the top. This time we want to just set the anchor by clicking without holding shift + ctrl.

Font

Next, in the Text component, we want to change the Font Style to LuckiestGuy, which was a font asset that was provided for us.

Add Shadow Effect

Next up we attach the shadow component to our text to give it a cool little shadow. I've played around with some of the values to make it look nice.

Adding the ScoreManager

Finally, we need to add a script that will keep track of our score. To do that we'll have to create a ScoreManager script, like the one provided for us:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ScoreManager : MonoBehaviour
{
    public static int score;    // the score, readable from anywhere

    Text text;

    void Awake ()
    {
        text = GetComponent <Text> ();
        score = 0;
    }

    void Update ()
    {
        text.text = "Score: " + score;
    }
}
```

This code is pretty straightforward. We have a score variable and we display that score in Unity, in every Update() call. So where will score be updated? It won't be in the ScoreManager, it'll be whenever our enemy dies. Specifically, that'll be in our EnemyHealth script, where we add the enemy's score value to ScoreManager.score when it dies.

And that's it! Now we can get a grand total score of… 1. But we'll fix that in the next video when we add more enemies.

Creating a prefab

Before we move on to the next video, we made a prefab of our enemy. Like we saw in previous videos, prefabs can be described as a template of an existing GameObject you make. They're handy for making multiple copies of the same thing… like multiple enemies!

Spawning

In this upcoming video, we learned how to create multiple enemies that would chase after the player. The first thing to do was to create the Zombear. If you have enemy models that share similar animations, like the Zombear and Zombunny, you can re-use the same animations. However, I was not able to see any animation clips for the Zombear so… I decided to just skip this part. Then at that point I got into full-blown laziness and decided to skip the Hellephant too.
However, one important thing to note is that if we have different models that share the same types of animation, we can create an AnimatorOverrideController that takes in an AnimatorController and re-uses the same animation clips.

EnemyManager

So after our… brief attempt at adding multiple types of enemies, we have to somehow create a way to spawn an enemy. To do this, we create an empty object which we'll call EnemyManager in our hierarchy. Then we attach the EnemyManager script provided to us:

```csharp
using UnityEngine;

public class EnemyManager : MonoBehaviour
{
    public PlayerHealth playerHealth;
    public GameObject enemy;
    public float spawnTime = 3f;
    public Transform[] spawnPoints;

    void Start ()
    {
        InvokeRepeating ("Spawn", spawnTime, spawnTime);
    }

    void Spawn ()
    {
        if(playerHealth.currentHealth <= 0f)
        {
            return;
        }

        int spawnPointIndex = Random.Range (0, spawnPoints.Length);

        Instantiate (enemy, spawnPoints[spawnPointIndex].position, spawnPoints[spawnPointIndex].rotation);
    }
}
```

The flow of this code is:

- In Start(), we call InvokeRepeating to call the method "Spawn" starting in spawnTime and then repeating every spawnTime, with spawnTime being 3 seconds
- Inside Spawn(), we randomly pick one of the spawnPoints and create an enemy there. In this case, we only have 1 location; it was made into an array for re-usability purposes.

And that's it! But before we move on, we have to create the spawn point. We created a new empty object, Zombunny Spawn Point, and I set it at:

- Position: (-20.5, 0, 12.5)
- Rotation: (0, 130, 0)

And then from there, just drag the Zombunny Spawn Point to the spawnPoints field inside the EnemyManager script to add the GameObject to our array.

If we followed the video perfectly, we'd have multiple spawn points that would be hard to tell apart. Unity has an answer for that.
We can add a label by clicking on the colored cube in the Inspector of your Game Object and selecting a color.

Play the game and now you should see an endless wave of Zombunnies coming at you! Now we're really close to having a full game!

Gameover

In the final video in this tutorial we create a more fluid game over state for the player. Currently, when the player dies, all that happens is that we reload the game and the player starts over. We're going to do better and add some nifty UI effects!

The first thing we want to do is create an Image UI that we'll call ScreenFader. We set the color of the Image to be black and the alpha to be 0. Later on we create a transition to change the alpha of the Image so that we'll have the effect of fading into the game.

Next we created a Text UI called GameOverText to show the player that the game is over.

At this point, we have to make sure that we have this ordering inside our HUDCanvas:

- HealthUI
- DamageImage
- ScreenFader
- GameOverText
- ScoreText

It's important that we have this ordering, as the top element in the list will be placed on the screen first. If we were to stack everything on top of each other, our HealthUI would be at the bottom and the ScoreText would be on the top.

Creating an animation

Now that we have all the UI elements in place, we want to create a UI animation. The first thing we need to do is go to Unity > Window > Animation with HUDCanvas selected, to create a new animation using the objects that are attached to HUDCanvas. Click Create a new clip and make a new clip called GameOverClip.

Click Add Property and select:

- GameOverText > Rect Transform > Scale
- GameOverText > Text > Color
- ScoreText > Rect Transform > Scale
- ScreenFader > Image > Color

This will add these 4 properties to our animation. How animation works is that you start at some initial value, represented by a diamond. When you double click in the timeline of the effects, you create a diamond for a property.
When you move the white line slider to a diamond and select it, you can change, in the Inspector, the value that the property will have at that specific time in the animation. Essentially the animation will make gradual changes from the 1st diamond to the 2nd diamond, or from the original value to the diamond. For example: if at 0:00 the X scale is 1 and at 0:20 the X scale is 2, then at 0:10 the X scale will be 1.5.

So follow what was done in the above picture:

- GameOverText : Scale — We want to create a popping text, where the text appears, disappears, and then pops back.
  - 0:00 scales are all 1
  - 0:20 scales are all 0
  - 0:30 scales are all 1
- GameOverText : Text.Color — We want to create white text that gradually fades in.
  - 0:00 color is white with alpha at 0
  - 0:30 color is white with alpha at 255
- ScoreText : Scale — We want the score to shrink a bit.
  - 0:00 scales are all 1
  - 0:30 scales are all 0.8
- ScreenFader : Image.Color — We want to gradually make a black background show up.
  - 0:00 color is black with alpha 0
  - 0:30 color is black with alpha 255

When we create an animation, Unity will already create an Animator Controller named after the object we created the animation for (HUDCanvas).

Setting up our HUDCanvas Animator Controller

In the HUDCanvas animator controller, we create 2 new states. One will act as the main transition state and the other we'll name GameOver. We also create a new trigger called GameOver. We make the New State our default state. From there we create a transition from New State to GameOver when the trigger GameOver is fired. We should have something like this when you're done:

Save our work and then we're done!

Note: When we create an Animation from HUDCanvas, Unity will add the animator controller to it. If it doesn't, manually add an Animator component to HUDCanvas and attach the HUDCanvas Animator Controller.
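The interpolation Unity performs between two diamonds is plain linear interpolation, exactly like the example above (X scale 1 at 0:00, 2 at 0:20, hence 1.5 at 0:10). Here's a small illustrative Python sketch of the idea; this is not the Unity API, just the math:

```python
def keyframe_value(t, keyframes):
    """Linearly interpolate a property between (time, value) keyframes."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]   # before the first diamond: hold its value
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]  # after the last diamond: hold its value
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            # fraction of the way from diamond (t0, v0) to diamond (t1, v1)
            frac = (t - t0) / (t1 - t0)
            return v0 + (v1 - v0) * frac

# X scale is 1 at frame 0 and 2 at frame 20, so halfway it is 1.5
print(keyframe_value(10, [(0, 1.0), (20, 2.0)]))  # -> 1.5
```

The same function handles the popping GameOverText: with keyframes (0:00, 1), (0:20, 0), (0:30, 1) the scale shrinks to 0 and grows back.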
Creating a GameOverManager to use our Animation

Finally, in the last step, we need to create some code that will use the animation we just created when the game is over. To do this, we just add the provided GameOverManager script to our HUDCanvas. Here's the code:

```csharp
void Update ()
{
    // If the player has run out of health...
    if(playerHealth.currentHealth <= 0)
    {
        // ... tell the animator the game is over.
        anim.SetTrigger ("GameOver");

        // .. increment a timer to count up to restarting.
        restartTimer += Time.deltaTime;

        // .. if it reaches the restart delay...
        if(restartTimer >= restartDelay)
        {
            // .. then reload the currently loaded level.
            Application.LoadLevel (Application.loadedLevel);
        }
    }
}
```

The basic flow of the code is:

- We initialize our Animator by grabbing the Animator component that is attached to our game object inside Awake()
- Inside Update(), we always check whether the player is alive. If he's not, we play the GameOver animation and run a timer so that once our clip is over, we restart the game.

Conclusion

Phew, this has really drawn out long past 2 days. The only reason why I decided to follow through is:

- There's a lot of good learning that happens when you have to write
- Most likely this will be the last of the long articles. From now on, I'll be going off on my own to create a simple game, and progress will be much slower as I try to Google for my answers.

There were a lot of things that we saw again, but even more things that we learned.

We saw a lot of things that we already knew, like:

- The UI system
- Colliders
- Raycasts
- Navigating Unity

…And then we saw a lot more things that we had never seen before, like:

- Character model animations
- Management scripts to control the state of the game
- Creating our own UI animation
- Using Unity's built-in AI

It's only day 6 of our 100 days of VR. Please end me now. I'm going to collapse on my bed now. I'll see you back for day 7, where I start trying to develop my own simple game.
See the original Day 6 here: https://www.gamedev.net/blogs/entry/2263668-100-days-of-vr-day-6-survival-shooter-tutorial-ii/
Directory structure:

```
├── _build
├── assets
├── config
├── deps
├── lib
│   ├── hello
│   ├── hello.ex
│   ├── hello_web
│   └── hello_web.ex
├── priv
└── test
```

We will go over those directories one by one:

- _build - a directory created by the mix command line tool that ships as part of Elixir that holds all compilation artefacts. As we have seen in "Up and Running", mix is the main interface to your application. We use Mix to compile our code, create databases, run our server, and more. This directory must not be checked into version control and it can be removed at any time. Removing it will force Mix to rebuild your application from scratch
- assets - a directory that keeps everything related to front-end assets, such as JavaScript, CSS, static images and more. It is typically handled by the npm tool. Phoenix developers typically only need to run npm install inside the assets directory. Everything else is managed by Phoenix
- config - a directory that holds your project configuration. The config/config.exs file is the main entry point for your configuration. At the end of the config/config.exs, it imports environment specific configuration, which can be found in config/dev.exs, config/test.exs, and config/prod.exs
- deps - a directory with all of our Mix dependencies. You can find all dependencies listed in the mix.exs file, inside the def deps do function definition. This directory must not be checked into version control and it can be removed at any time. Removing it will force Mix to download all deps from scratch
- lib - a directory that holds your application source code. This directory is broken into two subdirectories, lib/hello and lib/hello_web. The lib/hello directory will be responsible for hosting all of your business logic and business domain. It typically interacts directly with the database - it is the "Model" in Model-View-Controller (MVC) architecture. lib/hello_web is responsible for exposing your business domain to the world, in this case, through a web application. It holds both the View and Controller from MVC.
We will discuss the contents of these directories with more detail in the next sections.

- priv - a directory that keeps all assets that are necessary in production but are not directly part of your source code. You typically keep database scripts, translation files, and more in here
- test - a directory with all of our application tests. It often mirrors the same structure found in lib

The lib/hello directory

The lib/hello directory hosts all of your business domain. Since our project does not have any business logic yet, the directory is mostly empty. You will only find two files:

```
lib/hello
├── application.ex
└── repo.ex
```

The lib/hello/application.ex file defines an Elixir application named Hello.Application. That's because, at the end of the day, Phoenix applications are simply Elixir applications. The Hello.Application module defines which services are part of our application:

```elixir
children = [
  # Start the Ecto repository
  Hello.Repo,
  # Start the Telemetry supervisor
  HelloWeb.Telemetry,
  # Start the PubSub system
  {Phoenix.PubSub, name: Hello.PubSub},
  # Start the Endpoint (http/https)
  HelloWeb.Endpoint
  # Start a worker by calling: Hello.Worker.start_link(arg)
  # {Hello.Worker, arg}
]
```

If it is your first time with Phoenix, you don't need to worry about the details right now. For now, suffice it to say our application starts a database repository, a pubsub system for sharing messages across processes and nodes, and the application endpoint, which effectively serves HTTP requests. These services are started in the order they are defined and, whenever shutting down your application, they are stopped in the reverse order.

You can learn more about applications in Elixir's official docs for Application.

In the same lib/hello directory, we will find a lib/hello/repo.ex. It defines a Hello.Repo module which is our main interface to the database.
If you are using Postgres (the default), you will see something like this:

```elixir
defmodule Hello.Repo do
  use Ecto.Repo,
    otp_app: :hello,
    adapter: Ecto.Adapters.Postgres
end
```

And that's it for now. As you work on your project, we will add files and modules to this directory.

The lib/hello_web directory

The lib/hello_web directory holds the web-related parts of our application. It looks like this when expanded:

```
lib/hello_web
├── channels
│   └── user_socket.ex
├── controllers
│   └── page_controller.ex
├── templates
│   ├── layout
│   │   └── app.html.eex
│   └── page
│       └── index.html.eex
├── views
│   ├── error_helpers.ex
│   ├── error_view.ex
│   ├── layout_view.ex
│   └── page_view.ex
├── endpoint.ex
├── gettext.ex
├── router.ex
└── telemetry.ex
```

All of the files which are currently in the controllers, templates, and views directories are there to create the "Welcome to Phoenix!" page we saw in the "Up and running" guide. The channels directory is where we will add code related to building real-time Phoenix applications. By looking at the templates and views directories, we can see Phoenix provides features for handling layouts and error pages out of the box.

Besides the directories mentioned, lib/hello_web has four files at its root. lib/hello_web/endpoint.ex is the entry-point for HTTP requests. Once the browser accesses your application, the endpoint starts processing the data, eventually leading to the router, which is defined in lib/hello_web/router.ex. The router defines the rules to dispatch requests to "controllers", which then use "views" and "templates" to render HTML pages back to clients. We explore these layers in length in other guides, starting with the "Request life-cycle" guide coming next.

Through telemetry, Phoenix is able to collect metrics and send monitoring events for your application. The lib/hello_web/telemetry.ex file defines the supervisor responsible for managing the telemetry processes. You can find more information on this topic in the Telemetry guide.
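The endpoint, router, controller, view chain described above can be caricatured in a few lines of framework-free Python. All names here are invented for illustration; Phoenix's real pipeline is built from plugs and is far richer than this:

```python
# A toy request pipeline: the endpoint receives a request, the router
# dispatches it to a controller, and a view/template renders the result.

def page_controller(conn):
    return {"template": "index", "assigns": {"title": "Welcome to Phoenix!"}}

ROUTES = {("GET", "/"): page_controller}

def router(method, path):
    return ROUTES[(method, path)]

def render(result):
    return "<h1>%s</h1>" % result["assigns"]["title"]

def endpoint(method, path):
    controller = router(method, path)   # the router picks the controller
    return render(controller({}))       # controller output goes to the view

print(endpoint("GET", "/"))  # -> <h1>Welcome to Phoenix!</h1>
```

The point is only the direction of flow: endpoint.ex sits in front, router.ex decides where a request goes, and controllers hand data to views/templates for rendering.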
Finally, there is a lib/hello_web/gettext.ex file which provides internationalization through Gettext. If you are not worried about internationalization, you can safely skip this file and its contents.
Typicon image files have all-or-nothing transparency when loaded

When I load a Typicon file, e.g. the house picture, all the partial transparency (antialiasing) is removed.

This is how it looks once I'm done with it:

and this is how it should look:

Here is my code:

```python
from PIL import Image as ImageP
import io
import base64
import clipboard
import ui

appsize = (120, 120)
typsize = (96, 96)
offset = [(appsize[i] - typsize[i])/2 for i in range(2)]

master = ImageP.new('RGBA', (120, 120), (50, 50, 50, 255))
home = ImageP.open("Typicons96_Home")
home.show()  # it is broken here, even before converting.
home = home.convert('RGBA')
r, g, b, a = home.split()
home = ImageP.merge("RGB", (r, g, b))
mask = ImageP.merge("L", (a,))
master.paste(home, tuple(offset), mask)

with io.BytesIO() as bIO:
    master.save(bIO, 'PNG')
    img = ui.Image.from_data(bIO.getvalue())

bytes = img.to_png()
clipboard.set(base64.b64encode(bytes))
```

Curiously, this seems to happen only with the Typicon files and nothing else.

Fixed it. I just load it as a ui.Image first and then convert it to a PIL image. At the moment I'm using the clipboard, which is a pretty hacky solution, so I'm open to ideas….

You can convert from a ui.Image to a PIL Image without using the clipboard like this:

```python
import ui
import Image
from io import BytesIO

ui_img = ui.Image.named('Typicon96_Home')
data = BytesIO(ui_img.to_png())
img = Image.open(data)
img.show()
```

Thank you! I was using that to convert the other way around, I'm surprised I didn't think of it earlier.
My new code:

```python
from PIL import Image as PILImage
from ui import Image as UIImage
import io

appsize = (120, 120)
typsize = (96, 96)
offset = [(appsize[i] - typsize[i])/2 for i in range(2)]

def makegradient(c1, c2, size):
    img = PILImage.new('RGB', size, c1)
    d = tuple(c2[i]-c1[i] for i in range(3))
    pixels = img.load()
    h = appsize[1]
    for i in range(h):
        c = tuple(c1[a] + d[a]*i/h for a in range(3))
        for j in range(appsize[0]):
            pixels[j, i] = c
    return img

def composite(top, bottom, offset):
    bottom = bottom.copy()
    top = top.convert('RGBA')
    r, g, b, a = top.split()
    top = PILImage.merge("RGB", (r, g, b))
    mask = PILImage.merge("L", (a,))
    bottom.paste(top, tuple(offset), mask)
    return bottom

def makeicon(c1, c2, name):
    gradient = makegradient(c1, c2, appsize)
    # hack to support partial transparency
    uii = UIImage.named(name)
    data = io.BytesIO(uii.to_png())
    top = PILImage.open(data)
    icon = composite(top, gradient, offset)
    data.close()
    with io.BytesIO() as bIO:
        icon.save(bIO, 'PNG')
        img = UIImage.from_data(bIO.getvalue())
    return img
```
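For what it's worth, the paste-with-"L"-mask trick above is per-pixel "over" compositing. The math itself can be shown in plain Python; this is purely illustrative (not Pythonista or PIL internals), but it makes clear why partial alpha gives smooth antialiased edges while a 0-or-255 mask gives the jagged result:

```python
def over(top, bottom):
    """Alpha-composite one RGBA pixel over another (the "over" operator).

    Channels are 0-255 ints. Partial alpha in `top` blends the colors;
    an all-or-nothing alpha (only 0 or 255) never blends, which is
    exactly the jagged-edge symptom described in this thread.
    """
    r1, g1, b1, a1 = top
    r2, g2, b2, a2 = bottom
    fa = a1 / 255.0
    out_a = a1 + a2 * (1 - fa)
    if out_a == 0:
        return (0, 0, 0, 0)
    out = tuple(
        int(round((c1 * a1 + c2 * a2 * (1 - fa)) / out_a))
        for c1, c2 in zip((r1, g1, b1), (r2, g2, b2))
    )
    return out + (int(round(out_a)),)

# A half-transparent white edge pixel over black blends to mid-gray:
print(over((255, 255, 255, 128), (0, 0, 0, 255)))  # -> (128, 128, 128, 255)
```

With the broken loader, that edge pixel's alpha would have been forced to 0 or 255, so the output would be pure black or pure white instead of the smooth gray.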
Name | Synopsis | Description | Parameters | Errors | Examples | Environment Variables | Attributes | See Also

Synopsis

    #include <slp.h>

    SLPError SLPUnescape(const char *pcInBuf, char** ppcOutBuf,
         SLPBoolean isTag);

Description

    The SLPUnescape() function processes the input string in pcInbuf and unescapes any SLP reserved characters. If the isTag parameter is SLPTrue, then look for bad tag characters and signal an error if any are found with the SLP_PARSE_ERROR code. No transformation is performed if the input string is an opaque.

Parameters

    ppcOutBuf
        Must be freed using SLPFree(3SLP) when the memory is no longer needed.

    isTag
        When true, the input buffer is checked for bad tag characters.

Errors

    This function or its callback may return any SLP error code. See the ERRORS section in slp_api(3SLP).

Examples

    The following example decodes the representation for ",tag,":

        char* pcOutBuf;
        SLPError err;

        err = SLPUnescape("\\2c tag\\2c", &pcOutbuf, SLP_TRUE);

Environment Variables

    When set, use this file for configuration.

Attributes

    See attributes(5) for descriptions of the following attributes:

See Also

    slpd(1M), SLPFree(3SLP), slp_api(3SLP)
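Not part of the man page, but for readers unfamiliar with the scheme: SLP escapes a reserved character as a backslash followed by two hex digits of its character code, so "\2c" stands for the comma (0x2C). A rough Python illustration of the transformation SLPUnescape() performs (this is not the actual library implementation):

```python
import re

def slp_unescape(s):
    """Decode SLP escapes of the form \\2c into their characters."""
    return re.sub(
        r"\\([0-9A-Fa-f]{2})",
        lambda m: chr(int(m.group(1), 16)),
        s,
    )

print(slp_unescape("\\2c tag\\2c"))  # -> ", tag,"
```

This mirrors the EXAMPLES section: the escaped input "\2c tag\2c" decodes to ", tag,".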
Subject: Re: [OMPI users] No output from mpirun From: Varun R (nigen7_at_[hidden]) Date: 2007-12-31 08:42:49 Yes, the 'mpirun' is the one from OpenMPI. And btw mpich worked perfectly for me. It's only ompi that's giving me these problems. Do I have to setup ssh or something? Because I remember doing that for mpich. On Dec 31, 2007 4:15 AM, Reuti <reuti_at_[hidden]> wrote: > Hi, > > Am 26.12.2007 um 10:08 schrieb Varun R: > > > I just installed Openmpi 1.2.2 on my new openSUSE 10.3 system. All > > my programs(C++) compile well with 'mpic++' but when I run them > > with 'mpirun' i get no output and I immediately get back the > > prompt. I tried the options '--verbose' and got nothing. When I > > tried '--debug-daemons' I get the following output: > > > > Daemon [0,0,1] checking in as pid 6308 on host suse-nigen > > [suse-nigen:06308] [0,0,1] orted: received launch callback > > [suse-nigen:06308] [0,0,1] orted_recv_pls: received message from > > [0,0,0] > > [suse-nigen:06308] [0,0,1] orted_recv_pls: received exit > > > > > > Also when I simply run the executable without mpirun it gives the > > right output. I also tried inserting a long 'for' loop in the > > program to check if it's getting executed at all and as I suspected > > mpirun still returns immediately to the prompt. Here's my program: > > is the mpirun the one from Open MPI? > > -- Reuti > > > #include <iostream> > > #include <mpi.h> > > > > using namespace std; > > > > int main(int argc,char* argv[]) > > { > > int rank,nproc; > > cout<<"Before"<<endl; > > > > MPI_Init(&argc,&argv); > > MPI_Comm_rank(MPI_COMM_WORLD,&rank); > > MPI_Comm_size(MPI_COMM_WORLD,&nproc); > > cout<<"Middle"<<endl; > > MPI_Finalize(); > > > > int a = 5; > > for(int i=0; i< 100000; i++) > > for(int j=0; j<10000; j++) > > a += 4; > > > > if(rank == 0) > > cout<<"Rank 0"<<endl; > > > > cout<<"Over"<<a<<endl; > > > > return 0; > > } > > > > I also tried version 1.2.4 but still no luck. 
Could someone please > > tell me what could be wrong here? > > > > Thanks, > > Varun > > > > > > _______________________________________________ > > users mailing list > > users_at_[hidden] > > > > _______________________________________________ > users mailing list > users_at_[hidden] > >
So I'm attempting to make a hangman game for practice. I've written the function to get the random word and a function to pair those characters up with their index. Wondering, if the user guesses a correct character, is there a way to reference the dictionary and output the character to an empty list at the correct index?

Code I have so far:

```python
import random
import sys

def hangman():
    print("Welcome to Hangman\n")
    answer = input("Would you like to play? yes(y) or no(n)")
    if answer is "y":
        print("Generating Random Word...Complete")
        wordlist = []
        with open('sowpods.txt', 'r') as f:
            line = f.read()
            line = list(line.splitlines())
            word = list(random.choice(line))
            Key = matchUp(word)
    else:
        sys.exit()

def matchUp(word):
    list = []
    for x in word:
        list.append(word.index(x))
    newDict = {}
    newDict = dict(zip(word, list))
    return newDict

hangman()
```

So like this? You can skip the whole dictionary thing... (note: the display needs to be a list, not a string, so item assignment works)

```python
a = ["_"] * len(word)

def letter_check(letter):
    if letter in word:
        a[word.index(letter)] = letter
        # possibly print 'a' here
    else:
        pass  # function for decrement tries here
```

EDIT: Ooops... I forgot about potential repeated letters... um... how about this:

```python
a = ["_"] * len(word)

def letter_check(letter):
    if letter in word:
        for x, y in enumerate(word):
            if letter == y:
                a[x] = letter
    else:
        pass  # function for decrement tries here
```
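Putting the EDIT version together into one self-contained function (the names below are mine, not from the thread): a list of blanks plus enumerate() handles repeated letters, which str.index() alone cannot, since it only ever returns the first occurrence.

```python
def apply_guess(word, display, letter):
    """Reveal every occurrence of `letter` in `display`; return whether it hit."""
    hit = False
    for i, ch in enumerate(word):
        if ch == letter:
            display[i] = letter
            hit = True
    return hit

word = "hello"
display = ["_"] * len(word)
apply_guess(word, display, "l")
print("".join(display))  # -> __ll_
```

When apply_guess returns False, that's where you would decrement the remaining tries.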
This code has a problem and I think I know what it is: when I set "node = null;", node is being passed by value instead of reference. I thought all objects in Java were passed by reference and all primitive data was passed by value. So why is ListNode being passed by value? (It's an object.)

```java
public class Solution {
    public ListNode removeNthFromEnd(ListNode head, int n) {
        ListNode node = head;
        int size = 0;
        while(node != null) {
            node = node.next;
            size++;
        }
        node = head;
        for(int i = 0; i < size - n; i++) {
            node = node.next;
        }
        if(node.next == null)
            node = null;
        else if(node.next.next == null)
            node.next = node.next.next;
        return head;
    }
}
```
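For anyone hitting the same wall: Java passes object references by value. So `node = null` only rebinds the local parameter/variable and the list never notices, while `node.next = ...` mutates the shared node and is visible to the caller. To actually delete a node you must set the *previous* node's next pointer, not null out your local variable. Python has the same semantics, which makes for a quick demo (a dict stands in for ListNode here):

```python
def reassign(node):
    node = None              # rebinds the local name only; caller unaffected

def unlink_next(node):
    node["next"] = None      # mutates the shared object; caller sees this

head = {"val": 1, "next": {"val": 2, "next": None}}
reassign(head)
print(head is None)          # -> False: head still points at the first node
unlink_next(head)
print(head["next"])          # -> None: the mutation is visible
```

The same distinction explains the posted code: the `node = null;` branch silently does nothing to the list, whereas the `node.next = node.next.next;` branch really removes a node.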
Just a quick post to note a problem I found with the above-mentioned security policy. This policy should enable mutual, or two-way, https; but you will find that if you deploy this service to what appears to be a properly configured server, it will fail:

```java
@WebService
@Policy(uri="policy:Wssp1.2-2007-Https-ClientCertReq.xml")
public class HelloTwoWay {

    public String sayHello(String name) {
        return "Hello " + name;
    }
}
```

You need another step, compared with the other https policies, to have this work. You need to go to Servers -> [ServerName] -> SSL -> Advanced and under "Two Way Cert Behaviour" you need at least "Client Certs Requested". You can go for the enforced option if you want to use mutual everywhere; but in that case you can use the more general https policies, so it doesn't really make sense.
The DTD should start with one long comment that lists lots of metadata about the DTD. Generally, this would start with a title specifying the XML application the DTD describes. For example:

MegaBank Account Statement DTD, Version 1.1.2

This would normally be followed with some copyright notice. For example:

Copyright 2003 MegaBank

Alternately, you could use the copyright symbol:

© 2003 MegaBank

Do not use a c inside parentheses:

(c) 2003 MegaBank

This is not recognized by the international treaty that establishes copyright law in most countries. For similar reasons do not use both the word Copyright and the symbol, like this:

Copyright © 2003 MegaBank

Neither of these forms is legally binding. I wouldn't want to rely on the difference between © and (c) in a defense against a claim of copyright infringement, but as a copyright owner I wouldn't want to count on them being considered the same either. If the genuine © symbol is too hard to type because you're restricted to ASCII, just use the word Copyright with a capital C instead.

Now that the DTD has been copyrighted, the next question is, who do you want to allow to use the DTD and under what conditions? If the DTD is purely for internal use and you don't want to allow anyone else to use it, a simple © 2003 MegaBank statement may be all you need. However, by default such a statement makes the DTD at least technically illegal for anyone else to use, copy, or distribute without explicit permission from the copyright owner. If it is your intention that this DTD be used by multiple parties, you should include language explicitly stating that. For example, this statement would allow third parties to reuse the DTD but not to modify it:

This DTD may be freely copied and redistributed.

If you want to go a little further, you can allow other parties to modify the DTD. For example, one of the most liberal licenses common among DTDs is the W3C's.

Often some authorship information is also included.
As well as giving credit where credit is due (or assigning blame), this is important so that users know who they can ask questions of or report bugs to. Depending on circumstances the contact information may take different forms. For instance, a private DTD inside a small company might use a personal name and a phone extension, while a large public DTD might provide the name and URL of a committee. Whichever form it takes, there should be some means of contacting a party responsible for the DTD. For example:

Prepared by the MegaBank Interdepartment XML Committee
Joseph Quackenbush, editor <jquackenbush@megabank.com>

If you modify the DTD, you should add your name and contact information and indicate how the DTD has been modified. However, you should also retain the name of the original author. For example:

International Statement DTD prepared for MegaBank France
to satisfy EEC banking regulations
by Stefan Hilly <shilly@megabank.fr>

Original prepared by the MegaBank Interdepartment XML Committee
Joseph Quackenbush, editor <jquackenbush@megabank.com>

Following the copyright and authorship information, the next thing is normally a brief description of the XML application the DTD describes. For example, the bank statement application might include something like this:

This is the DTD for MBSML, the MegaBank Statement Markup Language. It is used for account statements sent to both business and consumer customers at the end of each month. Each document represents a complete statement for a single account. It handles savings, checking, CD, and money market accounts. However, it is not used for credit cards or loans.

This is often followed by useful information about the DTD that is not part of the DTD grammar itself. For example, the following comment describes the namespace URI, the root element, the public ID, and the customary system ID for this DTD.

All elements declared by this DTD are in the namespace.
Documents adhering to this DTD should have the root element Statement. This DTD is identified by these PUBLIC and SYSTEM identifiers:

PUBLIC "-//MegaBank//DTD Statement//EN"
SYSTEM ""

The system ID may be pointed at a local copy of the DTD instead. For example,

<!DOCTYPE Statement PUBLIC "-//MegaBank//DTD Statement//EN"
                           "statement.dtd">

The internal DTD subset should *not* be used to customize the DTD.

Some DTDs also include usage instructions or detailed lists of changes in the current version. There's certainly nothing wrong with this, but I normally prefer to point to some canonical source of documentation for the application. For example:

For more information see

This pretty much exhausts the information that's customarily stored in the header.
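Putting the pieces from this section together, a complete header comment might read as follows (assembled only from the examples above):

```xml
<!-- MegaBank Account Statement DTD, Version 1.1.2

     Copyright 2003 MegaBank
     This DTD may be freely copied and redistributed.

     Prepared by the MegaBank Interdepartment XML Committee
     Joseph Quackenbush, editor <jquackenbush@megabank.com>

     This is the DTD for MBSML, the MegaBank Statement Markup Language.
     It is used for account statements sent to both business and
     consumer customers at the end of each month.

     Documents adhering to this DTD should have the root element
     Statement and may identify the DTD with the public identifier
     "-//MegaBank//DTD Statement//EN".
-->
```

Title first, then copyright and license, then authorship and contact, then the application description and identifiers, in the order discussed above.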
Python threads in PSSE

I tried to implement a wxPython application with an additional thread for calculations and to update a wxPython progress dialog within that thread. But PSSE crashes all the time. So after that I tried to run a really simple multithreaded Python script in PSSE and it also caused a PSSE crash.

Here is the simple multithreaded PSSE script:

```python
import threading
import time

class TestThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        for i in range(0, 60, 3):
            print "T2: ", i
            time.sleep(3)

TestThread().start()

for i in range(0, 60, 2):
    print "T1: ", i
    time.sleep(1)
```

So my question is: does PSSE allow the use of threads in a Python script?
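I can't speak to PSSE's internals, but this kind of crash is typical of GUI toolkits (wxPython included) being called from a worker thread. A common workaround, sketched below with only the standard library and nothing PSSE-specific (Python 3 shown; the module is spelled Queue in Python 2), is to keep all GUI and PSSE API calls on the main thread and have the worker hand progress back through a queue:

```python
import queue
import threading

def worker(updates):
    """Background calculation; it never touches the GUI directly."""
    for pct in (25, 50, 75, 100):
        # ... do a slice of the heavy calculation here ...
        updates.put(pct)       # hand a progress value to the main thread
    updates.put(None)          # sentinel: the work is finished

updates = queue.Queue()
threading.Thread(target=worker, args=(updates,)).start()

progress = []
while True:
    pct = updates.get()        # main thread; a GUI would poll with a timer
    if pct is None:
        break
    progress.append(pct)       # this is where the dialog would be updated

print(progress)  # -> [25, 50, 75, 100]
```

In a real wxPython app you would replace the blocking loop with a timer (or wx.CallAfter) so the main thread stays responsive while draining the queue.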
Hi! Just discovered KICS two days ago and love it. Quick question though - is there a way to call KICS and only scan the commits within a PR - e.g. between two git hashes? I saw that we can set the path - however, I was unable to just scan the files that have been updated within the PR. So I wasn't sure if I then just have to call KICS multiple times for each file that has changed? Thanks for the feedback!

So my team isn't really using GitHub Actions, and instead Concourse. As a quick fix I added something like:

```shell
cat pull-request/.git/resource/changed_files | while read changedfile
do
  echo "./kics scan -p pull-request/$changedfile -t Kubernetes -o . --report-formats \"json,sarif,html\" --ci"
  ./kics scan -p pull-request/$changedfile -t Kubernetes -o . --report-formats "json,sarif,html" --ci
  cat results.sarif >> final_results.sarif
done
```

Does that make sense? Or is there a way to call KICS once and point it to multiple files?

Also, I attempted to upload the SARIF to GitHub and got an error, and was wondering if an enterprise edition is required to view the results in GitHub?

```shell
curl \
  -X POST \
  -H 'Authorization: token ${TOKEN}' \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/${GIT_ORG}/${GIT_REPO}/code-scanning/sarifs \
  -d '{"commit_sha":"$COMMIT_SHA","ref":"${REF}","sarif":"${KICS_SARIF}","tool_name":"KICS"}'
{
  "message": "Advanced Security must be enabled for this repository to use code scanning.",
  "documentation_url": ""
}
```

Regarding your first question, we want to enhance our -p flag to support the user providing it multiple times, or in a comma-separated string. You're invited to contribute to our project. Regarding the second question, GitHub enterprise edition is required for private repositories. Github docs

terraform rules to run?

Good Morning! I was just introduced to KICS this week, and since we started trying out moving stuff to Terraform (or in some places using Terraform on Azure to begin with), I gave it a try.
Now I ran into a high severity issue that I cannot wrap my head around, either because I may lack the knowledge on Azure or because I simply do not understand the meaning of the heuristic query. The query in question is "Trusted Microsoft Services Not Enabled", and the documentation leads me to the network rules section of the Azure storage module here: I tried some combinations like "default: deny" and "bypass: AzureServices", but I do not want to just trial-and-error my way around; I want to understand what is going on. Is there someone here who may give me a pointer?

Ah, okay, so after going through the code and the tests for that query I found what I did wrong. Here is my original section of the main.tf:

resource "azurerm_storage_account" "infra_storage_account" {
  depends_on = [azurerm_resource_group.infra_resource_group]
  # ...
  network_rules {
    default_action = "Deny"
    bypass         = "AzureServices"
  }
}

where it should have been:

resource "azurerm_storage_account" "infra_storage_account" {
  depends_on = [azurerm_resource_group.infra_resource_group]
  # ...
  network_rules {
    default_action = "Deny"
    bypass         = ["AzureServices"]
  }
}

Agreed: this is rather a syntax error in Terraform than a security finding, and I could probably have found it earlier had I not run KICS first. It would have helped me to have a more talkative issue description, like "OK, this is wrong here because a is missing or b is configured wrong".

Neither terraform validate nor tflint could point out the type mismatch in the variable attribution, even when I placed an integer for example, which makes sense since those parameters are provider dependent. I strongly recommend running terraform plan before, so that we get the validation from the Azure API. You're welcome to open a bug if you confirm this is an issue. Meanwhile, we'll try to improve our remediation texts in the following sprints. Also, you're more than welcome to contribute if you feel like helping us.
Hey team, just wanted to say thank you so much for the awesome project. I maintain an OSS project called Kubernetes Goat, an intentionally vulnerable Kubernetes cluster to learn and practice Kubernetes security. I have recently scanned the resources of the project with KICS and the results are pretty amazing and useful. I have added them to the documentation as well, so other users of Kubernetes Goat also get benefits from KICS.

package Cx

import data.preknowledge

CxPolicy[result] {
  document := input.document[i]
  kind := document.kind
  k8sLib.checkKind(kind, listKinds)
  metadata = document.metadata
  metadata.namespace == "default"
  result := {
    "documentId": input.document[i].id,
    "issueType": "IncorrectValue",
    "searchKey": sprintf("metadata.name={{%s}}.namespace", [metadata.name]),
    "keyExpectedValue": "metadata.namespace is not default",
    "keyActualValue": "metadata.namespace is default",
  }
}

Hi KICS community - I'm trying to use the download / install script but it's getting a 404. I added some extra debug to the script:

KiCS Example> cat script.out | bash -s -- -d
Checkmarx/kics info platform is linux/amd64
Checkmarx/kics info checking GitHub for latest tag
Checkmarx/kics debug http_download
Checkmarx/kics info tag is v1.3.2
Checkmarx/kics info version is 1.3.2
Checkmarx/kics info found version: 1.3.2 for v1.3.2/linux/amd64
Checkmarx/kics debug downloading files into /tmp/tmp.Gsk8zYXfGC
Checkmarx/kics debug http_download
Checkmarx/kics debug http_download_curl received HTTP status 404

This is on an Ubuntu 18.04: Linux kics-example 5.4.0-1041-gcp #44~18.04.1-Ubuntu SMP Mon Mar 29 19:16:50 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

There isn't a kics_1.3.2_linux_amd64.tar.gz on GitHub (but I notice the nightly ones have a linux_amd64.tar.gz variant). Should I log an issue?

Hi @dohnalv, thank you for the question. In some instances, we have different queries (rules) grouped into a single rego file.
The reason for this is to enable us to reuse boilerplate rego code for different rules we're trying to catch. Example:

-> API spec query "Invalid Schema External Documentation URL" for Swagger and OpenAPI 3.0 are grouped in this same rego file
-> Several query ports that are being scanned are grouped into a single file

You can get the queries distribution list by running:

pip3 install -r .github/scripts/metrics/requirements.txt
python3 .github/scripts/metrics/get-metrics.py

Currently, these are our numbers:

::group::Queries Metrics
| Platform               | Count |
|------------------------+-------|
| total                  |  1704 |
| cloudformation_queries |   465 |
| openapi_queries        |   288 |
| ansible_queries        |   235 |
| k8s_queries            |    80 |
| common_queries         |     2 |
| dockerfile_queries     |    53 |
| terraform_queries      |   581 |
::endgroup::
::set-output name=total_queries::1704
::group::Rego File Metrics
| Platform            | Count |
|---------------------+-------|
| total               |  1148 |
| cloudformation_rego |   223 |
| openapi_rego        |   197 |
| ansible_rego        |   201 |
| k8s_rego            |    80 |
| common_rego         |     2 |
| dockerfile_rego     |    53 |
| terraform_rego      |   392 |
::endgroup::

Let me know if you have any more questions.

Thanks for the question. I'm glad you asked, as this is a good opportunity to clarify. KICS uses sentry ( ) to track crashes of the software. What is being tracked is the source go file and the line number that caused the crash. That's it. This gives the developers a lead on what they should investigate if/when a crash happens. In this context, the environment variable you asked about is confusing and we'll change that. Do you want to report an issue or should I ?
https://gitter.im/kics-io/community?at=60c3e2124fc7ad136ac6c64b
FSM In Game Introduction

Since the early days of video games, finite state machines (FSM) have been a common instrument to imbue a game agent with the illusion of intelligence. Descriptively, a finite state machine is a device that has a finite number of states it can be in at any given time, and that can operate on input either to make transitions from one state to another or to cause an output or action to take place. The idea behind a FSM is to decompose an object's behavior into easily manageable states.

There are a number of ways of implementing finite state machines. A typical naive approach is to use a switch statement like the following.

public void update(int state) {
    switch (state) {
    case 0: // Wander
        wander();
        if (seeEnemy()) state = MathUtils.randomBoolean(0.8f) ? 1 : 2;
        if (isDead()) state = 3;
        break;
    case 1: // Attack
        attack();
        if (isDead()) state = 3;
        break;
    case 2: // Run away
        runAway();
        if (isDead()) state = 3;
        break;
    case 3: // Dead
        slowlyRot();
        break;
    }
}

The code above is a legitimate state machine; however, it has some serious weaknesses:

- The state changes (also known as transitions) are poorly regulated.
- States are of type int and would be more robust and debuggable as classes or enums.
- The omission of a single break keyword would cause hard-to-find bugs.
- Redundant logic appears in multiple states.
- There is no way to tell that a state has been entered or exited.

The right solution to these issues is to provide a more structured approach. Having performance in mind, we have chosen to implement FSMs through _embedded rules_, thus hard-coding the rules for the state transitions within the states themselves. This architecture is known as the state design pattern and provides an elegant and powerful way of implementing state-driven behavior with minimal overhead.

The state machine implementation provided by gdx-ai is mainly based on the approach described by Mat Buckland in his book _"Programming Game AI by Example"_. This same approach, with minor variations, has been supported by hundreds of articles over the years.

IMPORTANT NOTE: You don't have to restrict the use of state machines to agents.
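To make the embedded-rules idea concrete, here is an illustrative sketch (not from gdx-ai) that reworks the naive switch example with the state design pattern: each state is an enum constant that owns its own logic and transition rules. The Agent class and its method names are hypothetical, and the random attack/flee choice is omitted to keep the sketch deterministic.

```java
import java.util.ArrayList;
import java.util.List;

// Each state computes and returns the next state; there is no shared
// switch statement and no break keyword to forget.
enum NaiveState {
    WANDER {
        @Override NaiveState update(Agent a) {
            a.log("wandering");
            if (a.isDead()) return DEAD;
            return a.seesEnemy() ? ATTACK : WANDER;
        }
    },
    ATTACK {
        @Override NaiveState update(Agent a) {
            a.log("attacking");
            return a.isDead() ? DEAD : ATTACK;
        }
    },
    DEAD {
        @Override NaiveState update(Agent a) {
            a.log("rotting");
            return DEAD;
        }
    };

    abstract NaiveState update(Agent a);
}

// Hypothetical agent that owns a current state and delegates updates to it.
class Agent {
    private NaiveState state = NaiveState.WANDER;
    private boolean enemyVisible, dead;
    final List<String> logLines = new ArrayList<>();

    void setEnemyVisible(boolean v) { enemyVisible = v; }
    void kill() { dead = true; }
    boolean seesEnemy() { return enemyVisible; }
    boolean isDead() { return dead; }
    void log(String s) { logLines.add(s); }
    void update() { state = state.update(this); }
    NaiveState state() { return state; }
}
```

Because the transition rules live inside the states, adding or removing a state touches only that state's own code rather than one ever-growing switch.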
The state design pattern is also useful for structuring the main components of your game flow. For example, you could have a menu state, a save state, a paused state, an options state, a run state, etc.

The State Interface

The states of the FSM are encapsulated as objects and contain the logic required to facilitate state transitions. All state objects implement the State interface, which defines the following methods:

- enter(entity) will execute when the state is entered
- update(entity) is called on the current state of the FSM on each update step
- exit(entity) will execute when the state is exited
- onMessage(entity, telegram) executes if the entity receives a message from the message dispatcher while it is in this state

Actually, the enter and exit methods are only called when the FSM changes state. When a state transition occurs, the StateMachine.changeState method first calls the exit method of the current state, then it assigns the new state to the current state, and finishes by calling the enter method of the new state (which is now the current state).

IMPORTANT NOTES

States as Java enumeration: Conceptually speaking, each concrete state should be implemented as a singleton object in order to ensure that there is only one instance of each state, which agents share. Using singletons makes the design more efficient because they remove the need to allocate and deallocate memory every time a state change is made. In the real world, concrete states are typically implemented as a _Java enumeration_, since enums are an easy and versatile way to implement early-loading singletons, as we'll see in the examples below. However, using singletons (or enums) has one drawback: because they are shared between clients, singleton states are unable to make use of their own local, agent-specific data.
For instance, if an agent uses a state that, when entered, should move it to an arbitrary position, the position cannot be stored in the state itself, because the position may be different for each agent that is using the state. Instead, it would have to be stored somewhere externally and be accessed by the state through the agent. Anyway, it's worth noticing that the framework doesn't force you to use the singleton design. This is just the recommended approach, and you should stick to it as long as there are no well-founded reasons to allocate a new state on each transition.

Global State: Often, when designing finite state machines, you'll end up with code that is duplicated in every state. When that happens, it's convenient to create a global state that is called every time the FSM is updated. That way, all the logic for the FSM is contained within the states and not in the agent class that owns the FSM.

State Blip: Occasionally it will be convenient for an agent to enter a state with the condition that when the state is exited, the agent returns to its previous state. This behavior is called a _state blip_. For instance, in the Far West example below, the agent Elsa can visit the bathroom at any time; afterwards she always returns to her prior state.

The StateMachine Interface

All the state-related data and methods are encapsulated into a state machine object. This way an agent can own an instance of a FSM and delegate the management of current states, global states, and previous states to it. Also, a state machine can be explicitly delegated by its owner to handle the messages it receives. A FSM instance implements the following StateMachine interface.
public interface StateMachine<E, S extends State<E>> extends Telegraph {
    public void update();
    public void changeState(S newState);
    public boolean revertToPreviousState();
    public void setInitialState(S state);
    public void setGlobalState(S state);
    public S getCurrentState();
    public S getGlobalState();
    public boolean isInState(S state);
    public boolean handleMessage(Telegram telegram);
}

All an agent has to do is own an instance of a StateMachine and implement a method to update the state machine to get full FSM functionality.

DefaultStateMachine

The DefaultStateMachine class provided by the framework is the default implementation of the StateMachine interface. The handleMessage method of the DefaultStateMachine first routes the telegram to the current state. If the current state does not deal with the message, it's routed to the global state (if any). Specifically, the boolean value returned by the onMessage method of the State interface indicates whether or not the message has been handled successfully, and enables the state machine to route the message accordingly.

This technique is rather interesting. For instance, what if you want a global event response to the message MSG_DEAD in every state but the state STATE_DEAD? The solution is to override the message response for MSG_DEAD within STATE_DEAD. Since messages are sent first to the current state, you can consume the message by returning true, so as to prevent it from being sent to the global state.

StackStateMachine

The StackStateMachine implements a pushdown automaton. It actually inherits from DefaultStateMachine and mostly behaves the same. The only difference is the behavior of revertToPreviousState(). While the default implementation will always change back and forth between the same two states when it is called multiple times, the StackStateMachine will instead keep track of all past states, store them in a stack-like manner, and is able to revert to those past states in a "last in, first out" (LIFO) order.
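As an illustrative sketch (not the actual gdx-ai implementation), the stack-based bookkeeping behind this behavior can be reduced to a few lines; the generic parameter S stands in for the state type, and the enter/exit callbacks are omitted to keep the idea visible.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical, simplified sketch of stack-based state reverting.
class StackMachineSketch<S> {
    private S currentState;
    private final Deque<S> stateStack = new ArrayDeque<>();

    StackMachineSketch(S initialState) {
        this.currentState = initialState;
    }

    void changeState(S newState) {
        stateStack.push(currentState); // keep the whole history, not just one previous state
        currentState = newState;
    }

    boolean revertToPreviousState() {
        if (stateStack.isEmpty()) {
            return false; // no more "previous" state: leave the current state unchanged
        }
        currentState = stateStack.pop(); // LIFO: last entered, first reverted to
        return true;
    }

    S getCurrentState() {
        return currentState;
    }
}
```

With hypothetical menu states MAIN -> OPTIONS -> GRAPHICS, two calls to revertToPreviousState() walk back to OPTIONS and then MAIN, and a third call returns false without changing anything.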
This is especially useful when using a state machine for hierarchical menu structures. Let's assume we have just a single MenuScreen to handle all menus. Each menu would be a State, for example MainMenuState, OptionsMenuState, GraphicsOptionsMenuState and InputOptionsMenuState. Usually the user would start with the main menu, then navigate to the options menu and choose the graphics options. When done, it is common to navigate this hierarchical menu structure backwards by just pressing ESC. With the stack implementation this can now easily be done by just calling stateMachine.revertToPreviousState() whenever the ESC button is pressed. When there is no more "previous" state, the revert method will not change the state and will return false.

A Simple Example

Imagine a Troll class that has member variables for attributes such as health, anger, stamina, etc., and public methods to query and adjust those values. A Troll can be given the functionality of a FSM by adding a member variable stateMachine.

public class Troll {

    // An instance of the state machine class
    public StateMachine<Troll, TrollState> stateMachine;

    public Troll() {
        stateMachine = new DefaultStateMachine<Troll, TrollState>(this, TrollState.SLEEP);
    }

    public void update(float delta) {
        stateMachine.update();
    }

    /* OTHER METHODS OMITTED FOR CLARITY */
}

When the update method of a Troll is called, it in turn calls the update method of its FSM, which in turn calls the update method of the current state. The current state may then use the Troll interface to query its owner, to adjust its owner's attributes, or to effect a state transition. In other words, how a Troll behaves when updated can be made completely dependent on the logic in its current state. This is best illustrated with an example, so let's create a couple of states to enable a troll to run away from enemies when it feels threatened and to sleep when it feels safe.
public enum TrollState implements State<Troll> {

    RUN_AWAY() {
        @Override
        public void update(Troll troll) {
            if (troll.isSafe()) {
                troll.stateMachine.changeState(SLEEP);
            } else {
                troll.moveAwayFromEnemy();
            }
        }
    },

    SLEEP() {
        @Override
        public void update(Troll troll) {
            if (troll.isThreatened()) {
                troll.stateMachine.changeState(RUN_AWAY);
            } else {
                troll.snore();
            }
        }
    };

    @Override
    public void enter(Troll troll) {
    }

    @Override
    public void exit(Troll troll) {
    }

    @Override
    public boolean onMessage(Troll troll, Telegram telegram) {
        // We don't use messaging in this example
        return false;
    }
}

As you can see, when updated, a troll will behave differently depending on its current state. Both states are encapsulated as a Java enumeration and both provide the rules effecting state transitions.

A Complete Example with Messaging

As a practical example of how to create agents using finite state machines and messaging, we are going to look at a game environment set in the Far West. Well, actually, you'll have to use your imagination, since the example has no graphics at all. Any state changes or output from state actions will be sent as text to the logging system. This simple approach demonstrates clearly the mechanism of a finite state machine without adding the code clutter of a more complex environment.

In our imaginary country in the Far West there are 2 characters, Bob the miner and his wife Elsa:

- Bob always moves between 4 locations: a gold mine, a bank where Bob can deposit any nuggets he finds, a saloon in which he can quench his thirst, and his home where he can sleep. Exactly where he goes, and what he does when he gets there, is determined by Bob's current state. He will change states depending on member variables like thirst, fatigue, and how much gold he has found hacking away down in the gold mine.
- Elsa doesn't do much; she's mainly busy with cleaning the shack, cooking food and emptying her bladder (she drinks way too much).
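Elsa's bathroom visits are the state blip mentioned earlier: change to a state, then return to wherever she was before. The single previous-state bookkeeping that makes revertToPreviousState() work in a default (non-stack) machine can be sketched in a few lines; this is illustrative glue code, not the gdx-ai source, and the string state names are placeholders.

```java
// Illustrative sketch of "state blip" support: remembering exactly one
// previous state, as a default (non-stack) machine does.
class BlipMachineSketch<S> {
    private S currentState;
    private S previousState;

    BlipMachineSketch(S initialState) {
        this.currentState = initialState;
    }

    void changeState(S newState) {
        previousState = currentState; // remember where we came from
        currentState = newState;
    }

    // Revert to the state we were in before the last change, if any.
    boolean revertToPreviousState() {
        if (previousState == null) {
            return false;
        }
        changeState(previousState); // note: this also swaps previousState
        return true;
    }

    S getCurrentState() {
        return currentState;
    }
}
```

With only one remembered state, repeated reverts ping-pong between the same two states, which is exactly the limitation StackStateMachine was introduced to lift.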
Of course Bob and Elsa will communicate in certain situations. They only have two messages they can use:

- HI_HONEY_I_M_HOME used by Bob to let Elsa know he's back at the shack.
- STEW_READY used by Elsa to let herself know when to take dinner out of the oven, and for her to communicate to Bob that food is on the table.

To run the example you have to launch the StateMachineTest class from the gdx-ai tests. The other classes of the example are inside the package com.badlogic.gdx.tests.ai.fsm. Here is a sample of the output from the test program.

Bob: All mah fatigue has drained away. Time to find more gold!
Bob: Walkin' to the goldmine
Elsa: Walkin' to the can. Need to powda mah pretty li'lle nose
Elsa: Ahhhhhh! Sweet relief!
Elsa: Leavin' the Jon
Bob: Pickin' up a nugget
Elsa: Makin' the bed
Bob: Depositing gold. Total savings now: 3
Bob: Leavin' the bank
Bob: Walkin' to the goldmine
Elsa: Walkin' to the can. Need to powda mah pretty li'lle nose
Elsa: Ahhhhhh! Sweet relief!
Elsa: Leavin' the Jon
Elsa: Washin' the dishes
Bob: Depositing gold. Total savings now: 4
Bob: Leavin' the bank
Bob: Walkin' to the goldmine
Elsa: Makin' the bed
Bob: Pickin' up a nugget
Elsa: Makin' the bed
Elsa: Moppin' the floor
Bob: Pickin' up a nugget
Bob: Ah'm leavin' the goldmine with mah pockets full o' sweet gold
Bob: Goin' to the bank. Yes siree
Elsa: Washin' the dishes
Bob: Depositing gold. Total savings now: 5
Bob: WooHoo! Rich enough for now. Back home to mah li'lle lady
Bob: Leavin' the bank
Bob: Walkin' home
Message handled by Elsa at time: 11213152138
Elsa: Hi honey. Let me make you some of mah fine country stew
Elsa: Putting the stew in the oven
Elsa: Fussin' over food
Message received by Elsa at time: 11213889030
Elsa: StewReady! Lets eat
Message handled by Bob at time: 11214221965
Bob: Okay Hun, ahm a comin'!
Bob: Smells Reaaal goood Elsa!
Elsa: Puttin' the stew on the table
Bob: Tastes real good too!
Bob: Thankya li'lle lady. Ah better get back to whatever ah wuz doin'
Elsa: Washin' the dishes
Bob: ZZZZ...
Elsa: Moppin' the floor
Bob: ZZZZ...
Elsa: Makin' the bed
Bob: ZZZZ...
Elsa: Makin' the bed
Bob: ZZZZ...
Elsa: Makin' the bed

As you have seen, the use of messaging combined with state machines gives you the illusion of intelligence, and the output of the program looks like the interactions of two real people. What's more, this is only a very simple example.

Divide and Conquer

The unbounded growth of states is one of the main problems plaguing state machines. Using a global state reduces this problem greatly, but the real solution is to be disciplined and not let too many states exist within a single FSM. This can be achieved by using 2 techniques:

Hierarchical State Machines: Rather than combining all the logic into a single state machine, we can separate it into several by dividing an AI's tasks into independent chunks that can each become a self-contained state machine. For instance, your game agent may have the states _Explore_, _Combat_, and _Patrol_. In turn, the _Combat_ state may own a state machine that manages the states required for combat, such as _Dodge_, _ChaseEnemy_, and _Shoot_.

Simultaneous State Machines: Another way to limit the size of state machines is to use several different state machines at the same time. For example, you can imagine an AI that has a master FSM to make global decisions and other FSMs that deal with movement, gunnery, or conversations.

Properly combining and structuring these two techniques is an extremely powerful way to limit the complexity of individual state machines.

Limitations of State Machines

Even with all the common techniques and extensions mentioned above, state machines are still pretty limited. The trend these days in game AI is more toward exciting things like behavior trees and planning systems. If complex AI is what you're interested in, all this chapter has done is whet your appetite.
This doesn't mean finite state machines, pushdown automata, and other simple systems aren't useful at all; they remain a good fit for many simpler problems, from the game-flow and menu handling seen above to network protocols.
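To close, the hierarchical technique from the "Divide and Conquer" section can be sketched by nesting one machine inside a state of another: the outer COMBAT state simply delegates its update to an inner machine with combat-only sub-states. Everything below is hypothetical glue code (plain switches for brevity), not gdx-ai API; in real code each state would be a State object as shown earlier.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a hierarchical FSM: the outer machine's COMBAT
// state owns and updates an inner machine.
class HierarchicalSketch {
    enum Outer { EXPLORE, COMBAT }
    enum Combat { CHASE_ENEMY, SHOOT }

    Outer outer = Outer.EXPLORE;
    Combat combat = Combat.CHASE_ENEMY;
    final List<String> trace = new ArrayList<>();
    boolean enemyVisible, enemyInRange;

    void update() {
        switch (outer) {
        case EXPLORE:
            trace.add("explore");
            if (enemyVisible) outer = Outer.COMBAT;
            break;
        case COMBAT:
            updateCombat(); // delegate one tick to the inner machine
            if (!enemyVisible) outer = Outer.EXPLORE;
            break;
        }
    }

    // The inner machine never needs to know about exploring or patrolling.
    private void updateCombat() {
        switch (combat) {
        case CHASE_ENEMY:
            trace.add("chase");
            if (enemyInRange) combat = Combat.SHOOT;
            break;
        case SHOOT:
            trace.add("shoot");
            if (!enemyInRange) combat = Combat.CHASE_ENEMY;
            break;
        }
    }
}
```

The payoff is that each machine stays small: the outer machine decides *what* the agent is doing, while the inner one decides *how* combat is carried out.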
https://segmentfault.com/a/1190000004854961