text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
I've been playing with ASP.NET since Beta 1, and I have to admit that I love it! What used to take days in traditional ASP can now be done in an afternoon. One of the few complaints that I have is ASP.NET's implementation of validators. While I certainly believe that validation controls extremely are time saving, I'm not really thrilled with the ones provided by Microsoft. They work great if you only have a few items to validate, but can become a chore to maintain, when there are many fields that need validation. What I wanted was a single control that you could drop on to your form, that would handle all the validation for you. This way, you have one validator to maintain, not 10, or whatever. So I wrote this custom Validator control. Included in the download is the Validator control itself, its source code (don't run away screaming now, it's in VB.NET...), and 3 JavaScript files. I couldn't find an elegant way to use Microsoft's JavaScript files, and they didn't seem to support the DOM specs, so I wrote my own files. These need to be placed in your \inetpub\wwwroot\_vti_script directory. Validation.js is the main JavaScript file, which sets up some prototype functions, then links to one of the other two files depending on your browsers capability. ValidationDOM.js is, of course, for DOM compliant browsers (NS6.2+, IE5+). ValidationIE.js is for earlier versions of IE. I chose not to support early versions of Netscape, because there doesn't seem to be an elegant version of document.getElementById or document.all, and so Validation.js simply forces early versions of Netscape to use server side validation only. Validator document.getElementById document.all That being said, an explanation of its use is in order. The Validator control is derived from System.Web.UI.Webcontrols, and so has the standard properties of a web control. It also has 4 additional properties: System.Web.UI.Webcontrols HeaderText As String The text that is displayed at the beginning of the error summary report. ListStyle As Enum The style of bulleting that the error summary will use. ClientSideScript As Boolean Produces code to allow validation at the client side. UniqueErrors As Boolean Ensures that all strings in the errors collection are unique. Once the Validator has been added to a page, you need to tell it what to validate. By either right-mouse clicking on the control, or looking at the designer verbs area just below the property window, you'll notice a menu/verb item marked 'Edit Fields'. A dialog then appears (as shown in the screenshot) allowing you to modify the control's collection of validations. Clicking on one of the buttons, adds a validator to the list, while clicking Remove will, of course, remove the selected item. All validation options share some common properties: ControlToValidate As String The ID of the control to validate. The Validator control automatically appends any namespace on, so you just need the controls ID. ErrorMessage As String The message to display if this validation proves invalid ErrorMessageTarget As String The ID of the control where you want the ErrorMessage to be displayed. This is useful when you want the error message to appear somewhere on your form other than the summary. ErrorMessage ShowInSummary As Boolean Determines whether or not the ErrorMessage should be displayed in the summary. If this is false, and you set the ErrorMessageTarget property, then the ErrorMessage only appears at the ErrorMessageTarget. false ErrorMessageTarget Next is a summary of the additional properties of each of the validation types: ReqFieldValidator InitialValue As String If the controls value equals this when validated, then the validation is considered invalid. RegExFieldValidator RegularExpression As String The regular expression to use for validation. CaseSensitive As Boolean If True, then case will matter. So, with a pattern of "[A-Z]{3}", AAA will be valid, but AaA will not. True AAA AaA MultiLine As Boolean Allows pattern matching to span more than one line of an HTML TextArea field. TextArea RangeFieldValidator Minimum As String The minimum value. This is a string, because the Validator supports characters as well as numbers. Maximum As String The maximum value. Type As String The type of value that you'll be comparing. Possible values are String, Integer, Double, and DateTime String Integer Double DateTime IgnoreCase As Boolean Whether to ignore case or not. If IgnoreCase is true, Minimum is A, and Maximum is Q, then f will be considered valid. IgnoreCase true Minimum A Maximum Q f CompareFieldValidator Operator As Enum Determines what kind of comparison to perform. Possible values include (but are not limited to) DataTypeMatch, GreaterThan, LessThan, etc. DataTypeMatch GreaterThan LessThan The type of value that you'll be comparing. Possible values are String, Integer, Double, and DateTime. ControlToCompare As String The control to compare against. ValueToCompare As String The value to compare against. If ControlToCompare and ValueToCompare both have values, then ControlToCompare takes precedence. ControlToCompare ValueToCompare Whether to ignore case or not. CustomFieldValidator ClientSideFunction As String The name of the client side function to use in validating. This function must take a single parameter that is the control specified by the ControlToValidate property, and must return true for valid and false for invalid. ControlToValidate Name As String The Validator control raises a single event when server side validation takes place. This property allows you to distinguish between different custom validations. In order to use server side validation for the CustomFieldValidator, you need to handle the CustomFieldValidation event of the Validator object. All CustomFieldValidator items that you've added to the control will route through this same event, so you need a way to distinguish between them. This is where the Name property of the CustomFieldValidator comes in hand. It is sent in the CustomFieldValidatorEventArgs class to the event handler. CustomFieldValidation Name CustomFieldValidatorEventArgs Public Sub Validator1_CustomFieldValidation(ByVal sender As Object, _ ByVal E As CP.Validator.CustomFieldValidatorEventArgs) _ Handles Validator.CustomFieldValidation Select Case E.Name Case "IsEven" If CInt(DirectCast(E.ControlToValidate, _ TextBox).Text) Mod 2 = 0 Then E.IsValid = True Else E.IsValid = False End If End Select End Sub To add a validation to the control at run time, use the Validators collection of the control: Validators Validator1.Validators.Add(New ReqFieldValidator("txtName", "Enter a name", "")) I've tested it every way that I could think of, and it seems to be stable. However, since this is the control's first release to the public, you should expect bugs. If you run across any, please either post them here, or E-mail me directly at jamie.nordmeyer@mcgnw.com, and I'll do my best to fix it. Also, if you have any comments, concerns, or suggestions, let me know. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) General News Suggestion Question Bug Answer Joke Rant Admin Man throws away trove of Bitcoin worth $7.5 million
http://www.codeproject.com/Articles/2889/Single-Control-Validation-Solution?PageFlow=FixedWidth
CC-MAIN-2013-48
refinedweb
1,176
57.37
Table of Contents It should be obvious that most, if not all, programs are devoted to gathering data from outside, processing it, and providing results back to the outside world. That is, input and output are key. Haskell's I/O system is powerful and expressive. It is easy to work with and important to understand. Haskell strictly separates pure code from code that could cause things to occur in the world. That is, it provides a complete isolation from side-effects in pure code. Besides helping programmers to reason about the correctness of their code, it also permits compilers to automatically introduce optimizations and parallelism. We'll begin this chapter with simple, standard-looking I/O in Haskell. Then we'll discuss some of the more powerful options as well as provide more detail on how I/O fits into the pure, lazy, functional Haskell world. Let's get started with I/O in Haskell by looking at a program that looks surprisingly similar to I/O in other languages such as C or Perl. -- file: ch07/basicio.hs main = do putStrLn "Greetings! What is your name?" inpStr <- getLine putStrLn $ "Welcome to Haskell, " ++ inpStr ++ "!" You can compile this program to a standalone executable, run it with runghc, or invoke main from within ghci. Here's a sample session using runghc: $ runghc basicio.hsGreetings! What is your name? JohnWelcome to Haskell, John! That's a fairly simple, obvious result. You can see that putStrLn writes out a String, followed by an end-of-line character. getLine reads a line from standard input. The <- syntax may be new to you. Put simply, that binds the result from executing an I/O action to a name. [15] We use the simple list concatenation operator ++ to join the input string with our own text. Let's take a look at the types of putStrLn and getLine. You can find that information in the library reference, or just ask ghci: ghci> :type putStrLnputStrLn :: String -> IO () ghci> :type getLinegetLine :: IO String Notice that both of these types have IO in their return value. That is your key to knowing that they may have side effects, or that they may return different values even when called with the same arguments, or both. The type of putStrLn looks like a function. It takes a parameter of type String and returns value of type IO (). Just what is an IO () though? Anything that is type IO is an I/O action. You can store it and nothing will happen. I could say something writefoo = putStrLn "foo" and nothing happens right then. But if I later use writefoo in the middle of another I/O action, the writefoo action will be executed when its parent action is executed -- I/O actions can be glued together to form bigger I/O actions. The () is an empty tuple (pronounced “unit”), indicating that there is no return value from putStrLn. This is similar to void in Java or C.[16] Let's look at this with ghci: ghci> let writefoo = putStrLn "foo" ghci> writefoofoo In this example, the output foo is not a return value from putStrLn. Rather, it's the side effect of putStrLn actually writing foo to the terminal. Notice one other thing: ghci actually executed writefoo. This means that, when given an I/O action, ghci will perform it for you on the spot. The type of getLine may look strange to you. It looks like a value, rather than a function. And in fact, that is one way to look at it: getLine is storing an I/O action. When that action is performed, you get a String. The <- operator is used to "pull out" the result from performing an I/O action and store it in a variable. main itself is an I/O action with type IO (). You can only perform I/O actions from within other I/O actions. All I/O in Haskell programs is driven from the top at main, which is where execution of every Haskell program begins. This, then, is the mechanism that provides isolation from side effects in Haskell: you perform I/O in your IO actions, and call pure (non-I/O) functions from there. Most Haskell code is pure; the I/O actions perform I/O and call that pure code. do is a convenient way to define a sequence of actions. As you'll see later, there are other ways. When you use do in this way, indentation is significant; make sure you line up your actions properly. You only need to use do if you have more than one action that you need to perform. The value of a do block is the value of the last action executed. For a complete description of do syntax, see the section called “Desugaring of do blocks”. Let's consider an example of calling pure code from within an I/O action: -- file: ch07/callingpure.hs name2reply :: String -> String name2reply name = "Pleased to meet you, " ++ name ++ ".\n" ++ "Your name contains " ++ charcount ++ " characters." where charcount = show (length name) main :: IO () main = do putStrLn "Greetings once again. What is your name?" inpStr <- getLine let outStr = name2reply inpStr putStrLn outStr Notice the name2reply function in this example. It is a regular Haskell function and obeys all the rules we've told you about: it always returns the same result when given the same input, it has no side effects, and it operates lazily. It uses other Haskell functions: (++), show, and length. Down in main, we bind the result of name2reply inpStr to outStr. When you're working in a do block, you use <- to get results from IO actions and let to get results from pure code. When used in a do block, you should not put in after your let statement. You can see here how we read the person's name from the keyboard. Then, that data got passed to a pure function, and its result was printed. In fact, the last two lines of main could have been replaced with putStrLn (name2reply inpStr). So, while main did have side effects—it caused things to appear on the terminal, for instance— name2reply did not and could not. That's because name2reply is a pure function, not an action. Let's examine this with ghci: ghci> :load callingpure.hs[1 of 1] Compiling Main ( callingpure.hs, interpreted ) Ok, modules loaded: Main. ghci> name2reply "John""Pleased to meet you, John.\nYour name contains 4 characters." ghci> putStrLn (name2reply "John")Pleased to meet you, John. Your name contains 4 characters. The \n within the string is the end-of-line (newline) character, which causes the terminal to begin a new line in its output. Just calling name2reply "John" in ghci will show you the \n literally, because it is using show to display the return value. But using putStrLn sends it to the terminal, and the terminal interprets \n to start a new line. What do you think will happen if you simply type main at the ghci prompt? Give it a try. After looking at these example programs, you may be wondering if Haskell is really imperative rather than pure, lazy, and functional. Some of these examples look like a sequence of actions to be followed in order. There's more to it than that, though. We'll discuss that question later in this chapter in the section called “Is Haskell Really Imperative?” and the section called “Lazy I/O”. As a way to help with understanding the differences between pure code and I/O, here's a comparison table. When we speak of pure code, we are talking about Haskell functions that always return the same result when given the same input and have no side effects. In Haskell, only the execution of I/O actions avoid these rules. In this section, we've discussed how Haskell draws a clear distinction between pure code and I/O actions. Most languages don't draw this distinction. In languages such as C or Java, there is no such thing as a function that is guaranteed by the compiler to always return the same result for the same arguments, or a function that is guaranteed to never have side effects. The only way to know if a given function has side effects is to read its documentation and hope that it's accurate. Many bugs in programs are caused by unanticipated side effects. Still more are caused by misunderstanding circumstances in which functions may return different results for the same input. As multithreading and other forms of parallelism grow increasingly common, it becomes more difficult to manage global side effects. Haskell's method of isolating side effects into I/O actions provides a clear boundary. You can always know which parts of the system may alter state and which won't. You can always be sure that the pure parts of your program aren't having unanticipated results. This helps you to think about the program. It also helps the compiler to think about it. Recent versions of ghc, for instance, can provide a level of automatic parallelism for the pure parts of your code -- something of a holy grail for computing. For more discussion on this topic, refer to the section called “Side Effects with Lazy I/O”. So far, you've seen how to interact with the user at the computer's terminal. Of course, you'll often need to manipulate specific files. That's easy to do, too. Haskell defines quite a few basic functions for I/O, many of which are similar to functions seen in other programming languages. The library reference for System.IO provides a good summary of all the basic I/O functions, should you need one that we aren't touching upon here. You will generally begin by using openFile, which will give you a file Handle. That Handle is then used to perform specific operations on the file. Haskell provides functions such as hPutStrLn that work just like putStrLn but take an additional argument—a Handle—that specifies which file to operate upon. When you're done, you'll use hClose to close the Handle. These functions are all defined in System.IO, so you'll need to import that module when working with files. There are "h" functions corresponding to virtually all of the non-"h" functions; for instance, there is hPrint for printing to a file. Let's start with an imperative way to read and write files. This should seem similar to a while loop that you may find in other languages. This isn't the best way to write it in Haskell; later, you'll see examples of more Haskellish approaches. -- file: ch07/toupper-imp.hs import System.IO ineof <- hIsEOF inh if ineof then return () else do inpStr <- hGetLine inh hPutStrLn outh (map toUpper inpStr) mainloop inh outh Like every Haskell program, execution of this program begins with main. Two files are opened: input.txt is opened for reading, and output.txt is opened for writing. Then we call mainloop to process the file. mainloop begins by checking to see if we're at the end of file (EOF) for the input. If not, we read a line from the input. We write out the same line to the output, after first converting it to uppercase. Then we recursively call mainloop again to continue processing the file.[17] Notice that return call. This is not really the same as return in C or Python. In those languages, return is used to terminate execution of the current function immediately, and to return a value to the caller. In Haskell, return is the opposite of <-. That is, return takes a pure value and wraps it inside IO. Since every I/O action must return some IO type, if your result came from pure computation, you must use return to wrap it in IO. As an example, if 7 is an Int, then return 7 would create an action stored in a value of type IO Int. When executed, that action would produce the result 7. For more details on return, see the section called “The True Nature of Return”. Let's try running the program. We've got a file named input.txt that looks like this: This is ch08/input.txt Test Input I like Haskell Haskell is great I/O is fun 123456789 Now, you can use runghc toupper-imp.hs and you'll find output.txt in your directory. It should look like this: THIS IS CH08/INPUT.TXT TEST INPUT I LIKE HASKELL HASKELL IS GREAT I/O IS FUN 123456789 Let's use ghci to check on the type of openFile: ghci> :module System.IO ghci> :type openFileopenFile :: FilePath -> IOMode -> IO Handle FilePath is simply another name for String. It is used in the types of I/O functions to help clarify that the parameter is being used as a filename, and not as regular data. IOMode specifies how the file is to be managed. The possible values for IOMode are listed in Table 7.2, “Possible IOMode Values”. FIXME: check formatting on this table for final book; openjade doesn't render it well While we are mostly working with text examples in this chapter, binary files can also be used in Haskell. If you are working with a binary file, you should use openBinaryFile instead of openFile. Operating systems such as Windows process files differently if they are opened as binary instead of as text. On operating systems such as Linux, both openFile and openBinaryFile perform the same operation. Nevertheless, for portability, it is still wise to always use openBinaryFile if you will be dealing with binary data. You've already seen that hClose is used to close file handles. Let's take a moment and think about why this is important. As you'll see in the section called “Buffering”, Haskell maintains internal buffers for files. This provides an important performance boost. However, it means that until you call hClose on a file that is open for writing, your data may not be flushed out to the operating system. Another reason to make sure to hClose files is that open files take up resources on the system. If your program runs for a long time, and opens many files but fails to close them, it is conceivable that your program could even crash due to resource exhaustion. All of this is no different in Haskell than in other languages. When a program exits, Haskell will normally take care of closing any files that remain open. However, there are some circumstances in which this may not happen[18], so once again, it is best to be responsible and call hClose all the time. Haskell provides several tools for you to use to easily ensure this happens, regardless of whether errors are present. You can read about finally in the section called “Extended Example: Functional I/O and Temporary Files” and bracket in the section called “The acquire-use-release cycle”. When reading and writing from a Handle that corresponds to a file on disk, the operating system maintains an internal record of the current position. Each time you do another read, the operating system returns the next chunk of data that begins at the current position, and increments the position to reflect the data that you read. You can use hTell to find out your current position in the file. When the file is initially created, it is empty and your position will be 0. After you write out 5 bytes, your position will be 5, and so on. hTell takes a Handle and returns an IO Integer with your position. The companion to hTell is hSeek. hSeek lets you change the file position. It takes three parameters: a Handle, a SeekMode, and a position. SeekMode can be one of three different values, which specify how the given position is to be interpreted. AbsoluteSeek means that the position is a precise location in the file. This is the same kind of information that hTell gives you. RelativeSeek means to seek from the current position. A positive number requests going forwards in the file, and a negative number means going backwards. Finally, SeekFromEnd will seek to the specified number of bytes before the end of the file. hSeek handle SeekFromEnd 0 will take you to the end of the file. For an example of hSeek, refer to the section called “Extended Example: Functional I/O and Temporary Files”. Not all Handles are seekable. A Handle usually corresponds to a file, but it can also correspond to other things such as network connections, tape drives, or terminals. You can use hIsSeekable to see if a given Handle is seekable. Earlier, we pointed out that for each non-"h" function, there is usually also a corresponding "h" function that works on any Handle. In fact, the non-"h" functions are nothing more than shortcuts for their "h" counterparts. There are three pre-defined Handles in System.IO. These Handles are always available for your use. They are stdin, which corresponds to standard input; stdout for standard output; and stderr for standard error. Standard input normally refers to the keyboard, standard output to the monitor, and standard error also normally goes to the monitor. Functions such as getLine can thus be trivially defined like this: getLine = hGetLine stdin putStrLn = hPutStrLn stdout print = hPrint stdout Earlier, we told you what the three standard file handles "normally" correspond to. That's because some operating systems let you redirect the file handles to come from (or go to) different places—files, devices, or even other programs. This feature is used extensively in shell scripting on POSIX (Linux, BSD, Mac) operating systems, but can also be used on Windows. It often makes sense to use standard input and output instead of specific files. This lets you interact with a human at the terminal. But it also lets you work with input and output files—or even combine your code with other programs—if that's what's requested.[19] As an example, we can provide input to callingpure.hs in advance like this: $ echo John|runghc callingpure.hsGreetings once again. What is your name? Pleased to meet you, John. Your name contains 4 characters. While callingpure.hs was running, it did not wait for input at the keyboard; instead it received John from the echo program. Notice also that the output didn't contain the word John on a separate line as it did when this program was run at the keyboard. The terminal normally echoes everything you type back to you, but that is technically input, and is not included in the output stream. So far in this chapter, we've discussed the contents of the files. Let's now talk a bit about the files themselves. System.Directory provides two functions you may find useful. removeFile takes a single argument, a filename, and deletes that file.[20] renameFile takes two filenames: the first is the old name and the second is the new name. If the new filename is in a different directory, you can also think of this as a move. The old filename must exist prior to the call to renameFile. If the new file already exists, it is removed before the rename takes place. Like many other functions that take a filename, if the "old" name doesn't exist, renameFile will raise an exception. More information on exception handling can be found in Chapter 19, Error handling. There are many other functions in System.Directory for doing things such as creating and removing directories, finding lists of files in directories, and testing for file existence. These are discussed in the section called “Directory and File Information”. Programmers frequently need temporary files. These files may be used to store large amounts of data needed for computations, data to be used by other programs, or any number of other uses. While you could craft a way to manually open files with unique names, the details of doing this in a secure way differ from platform to platform. Haskell provides a convenient function called openTempFile (and a corresponding openBinaryTempFile) to handle the difficult bits for you. openTempFile takes two parameters: the directory in which to create the file, and a "template" for naming the file. The directory could simply be "." for the current working directory. Or you could use System.Directory.getTemporaryDirectory to find the best place for temporary files on a given machine. The template is used as the basis for the file name; it will have some random characters added to it to ensure that the result is truly unique. It guarantees that it will be working on a unique filename, in fact. The return type of openTempFile is IO (FilePath, Handle). The first part of the tuple is the name of the file created, and the second is a Handle opened in ReadWriteMode over that file. When you're done with the file, you'll want to hClose it and then call removeFile to delete it. See the following example for a sample function to use. Here's a larger example that puts together some concepts from this chapter, from some earlier chapters, and a few you haven't seen yet. Take a look at the program and see if you can figure out what it does and how it works. -- file: ch07/tempfile.hs import System.IO import System.Directory(getTemporaryDirectory, removeFile) import System.IO.Error(catch) import Control.Exception(finally) -- The main entry point. Work with a temp file in myAction. main :: IO () main = withTempFile "mytemp.txt" myAction {- The guts of the program. Called with the path and handle of a temporary file. When this function exits, that file will be closed and deleted because myAction was called from withTempFile. -} myAction :: FilePath -> Handle -> IO () myAction tempname temph = do -- Start by displaying a greeting on the terminal putStrLn "Welcome to tempfile.hs" putStrLn $ "I have a temporary file at " ++ tempname -- Let's see what the initial position is pos <- hTell temph putStrLn $ "My initial position is " ++ show pos -- Now, write some data to the temporary file let tempdata = show [1..10] putStrLn $ "Writing one line containing " ++ show (length tempdata) ++ " bytes: " ++ tempdata hPutStrLn temph tempdata -- Get our new position. This doesn't actually modify pos -- in memory, but makes the name "pos" correspond to a different -- value for the remainder of the "do" block. pos <- hTell temph putStrLn $ "After writing, my new position is " ++ show pos -- Seek to the beginning of the file and display it putStrLn $ "The file content is: " hSeek temph AbsoluteSeek 0 -- hGetContents performs a lazy read of the entire file c <- hGetContents temph -- Copy the file byte-for-byte to stdout, followed by \n putStrLn c -- Let's also display it as a Haskell literal putStrLn $ "Which could be expressed as this Haskell literal:" print c {- This function takes two parameters: a filename pattern and another function. It will create a temporary file, and pass the name and Handle of that file to the given function. The temporary file is created with openTempFile. The directory is the one indicated by getTemporaryDirectory, or, if the system has no notion of a temporary directory, "." is used. The given pattern is passed to openTempFile. After the given function terminates, even if it terminates due to an exception, the Handle is closed and the file is deleted. -} withTempFile :: String -> (FilePath -> Handle -> IO a) -> IO a withTempFile pattern func = do -- The library ref says that getTemporaryDirectory may raise on -- exception on systems that have no notion of a temporary directory. -- So, we run getTemporaryDirectory under catch. catch takes -- two functions: one to run, and a different one to run if the -- first raised an exception. If getTemporaryDirectory raised an -- exception, just use "." (the current working directory). tempdir <- catch (getTemporaryDirectory) (\_ -> return ".") (tempfile, temph) <- openTempFile tempdir pattern -- Call (func tempfile temph) to perform the action on the temporary -- file. finally takes two actions. The first is the action to run. -- The second is an action to run after the first, regardless of -- whether the first action raised an exception. This way, we ensure -- the temporary file is always deleted. The return value from finally -- is the first action's return value. finally (func tempfile temph) (do hClose temph removeFile tempfile) Let's start looking at this program from the end. The withTempFile function demonstrates that Haskell doesn't forget its functional nature when I/O is introduced. This function takes a String and another function. The function passed to withTempFile is invoked with the name and Handle of a temporary file. When that function exits, the temporary file is closed and deleted. So even when dealing with I/O, we can still find the idiom of passing functions as parameters to be convenient. Lisp programmers might find our withTempFile function similar to Lisp's with-open-file function. There is some exception handling going on to make the program more robust in the face of errors. You normally want the temporary files to be deleted after processing completes, even if something went wrong. So we make sure that happens. For more on exception handling, see Chapter 19, Error handling. Let's return to the start of the program. main is defined simply as withTempFile "mytemp.txt" myAction. myAction, then, will be invoked with the name and Handle of the temporary file. myAction displays some information to the terminal, writes some data to the file, seeks to the beginning of the file, and reads the data back with hGetContents.[21] It then displays the contents of the file byte-for-byte, and also as a Haskell literal via print c. That's the same as putStrLn (show c). Let's look at the output: $ runhaskell tempfile.hsWelcome to tempfile.hs I have a temporary file at /tmp/mytemp8572.txt My initial position is 0 Writing one line containing 22 bytes: [1,2,3,4,5,6,7,8,9,10] After writing, my new position is 23 The file content is: [1,2,3,4,5,6,7,8,9,10] Which could be expressed as this Haskell literal: "[1,2,3,4,5,6,7,8,9,10]\n" Every time you run this program, your temporary file name should be slightly different since it contains a randomly-generated component. Looking at this output, there are a few questions that might occur to you: Why is your position 23 after writing a line with 22 bytes? Why is there an empty line after the file content display? Why is there a \n at the end of the Haskell literal display? You might be able to guess that the answers to all three questions are related. See if you can work out the answers for a moment. If you need some help, here are the explanations: That's because we used hPutStrLn instead of hPutStr to write the data. hPutStrLn always terminates the line by writing a \n at the end, which didn't appear in tempdata. We used putStrLn c to display the file contents c. Because the data was written originally with hPutStrLn, c ends with the newline character, and putStrLn adds a second newline character. The result is a blank line. The \n is the newline character from the original hPutStrLn. As a final note, the byte counts may be different on some operating systems. Windows, for instance, uses the two-byte sequence \r\n as the end-of-line marker, so you may see differences on that platform. So far in this chapter, you've seen examples of fairly traditional I/O. Each line, or block of data, is requested individually and processed individually. Haskell has another approach available to you as well. Since Haskell is a lazy language, meaning that any given piece of data is only evaluated when its value must be known, there are some novel ways of approaching I/O. One novel way to approach I/O is the hGetContents function.[22] hGetContents has the type Handle -> IO String. The String it returns represents all of the data in the file given by the Handle.[23] In a strictly-evaluated language, using such a function is often a bad idea. It may be fine to read the entire contents of a 2KB file, but if you try to read the entire contents of a 500GB file, you are likely to crash due to lack of RAM to store all that data. In these languages, you would traditionally use mechanisms such as loops to process the file's entire data. But hGetContents is different. The String it returns is evaluated lazily. At the moment you call hGetContents, nothing is actually read. Data is only read from the Handle as the elements (characters) of the list are processed. As elements of the String are no longer used, Haskell's garbage collector automatically frees that memory. All of this happens completely transparently to you. And since you have what looks like—and, really, is—a pure String, you can pass it to pure (non-IO) code. Let's take a quick look at an example. Back in the section called “Working With Files and Handles”, you saw an imperative program that converted the entire content of a file to uppercase. Its imperative algorithm was similar to what you'd see in many other languages. Here now is the much simpler algorithm that exploits lazy evaluation: -- file: ch07/toupper-lazy1.hs import System.IO import Data.Char(toUpper) main :: IO () main = do inh <- openFile "input.txt" ReadMode outh <- openFile "output.txt" WriteMode inpStr <- hGetContents inh let result = processData inpStr hPutStr outh result hClose inh hClose outh processData :: String -> String processData = map toUpper Notice that hGetContents handled all of the reading for us. Also, take a look at processData. It's a pure function since it has no side effects and always returns the same result each time it is called. It has no need to know—and no way to tell—that its input is being read lazily from a file in this case. It can work perfectly well with a 20-character literal or a 500GB data dump on disk. You can even verify that with ghci: ghci> :load toupper-lazy1.hs[1 of 1] Compiling Main ( toupper-lazy1.hs, interpreted ) Ok, modules loaded: Main. ghci> processData "Hello, there! How are you?""HELLO, THERE! HOW ARE YOU?" ghci> :type processDataprocessData :: String -> String ghci> :type processData "Hello!"processData "Hello!" :: String This program was a bit verbose to make it clear that there was pure code in use. Here's a bit more concise version, which we will build on in the next examples: -- file: ch07/toupper-lazy2.hs import System.IO import Data.Char(toUpper) main = do inh <- openFile "input.txt" ReadMode outh <- openFile "output.txt" WriteMode inpStr <- hGetContents inh hPutStr outh (map toUpper inpStr) hClose inh hClose outh You are not required to ever consume all the data from the input file when using hGetContents. Whenever the Haskell system determines that the entire string hGetContents returned can be garbage collected —which means it will never again be used—the file is closed for you automatically. The same principle applies to data read from the file. Whenever a given piece of data will never again be needed, the Haskell environment releases the memory it was stored within. Strictly speaking, we wouldn't have to call hClose at all in this example program. However, it is still a good practice to get into, as later changes to a program could make the call to hClose important. Haskell programmers use hGetContents as a filter quite often. They read from one file, do something to the data, and write the result out elsewhere. This is so common that there are some shortcuts for doing it. readFile and writeFile are shortcuts for working with files as strings. They handle all the details of opening files, closing files, reading data, and writing data. readFile uses hGetContents internally. Can you guess the Haskell types of these functions? Let's check with ghci: ghci> :type readFilereadFile :: FilePath -> IO String ghci> :type writeFilewriteFile :: FilePath -> String -> IO () Now, here's an example program that uses readFile and writeFile: -- file: ch07/toupper-lazy3.hs import Data.Char(toUpper) main = do inpStr <- readFile "input.txt" writeFile "output.txt" (map toUpper inpStr) Look at that—the guts of the program take up only two lines! readFile returned a lazy String, which we stored in inpStr. We then took that, processed it, and passed it to writeFile for writing. Neither readFile nor writeFile ever provide a Handle for you to work with, so there is nothing to ever hClose. readFile uses hGetContents internally, and the underlying Handle will be closed when the returned String is garbage-collected or all the input has been consumed. writeFile will close its underlying Handle when the entire String supplied to it has been written. By now, you should understand how lazy input works in Haskell. But what about laziness during output? As you know, nothing in Haskell is evaluated before its value is needed. Since functions such as writeFile and putStr write out the entire String passed to them, that entire String must be evaluated. So you are guaranteed that the argument to putStr will be evaluated in full.[24] But what does that mean for laziness of the input? In the examples above, will the call to putStr or writeFile force the entire input string to be loaded into memory at once, just to be written out? The answer is no. putStr (and all the similar output functions) write out data as it becomes available. They also have no need for keeping around data already written, so as long as nothing else in the program needs it, the memory can be freed immediately. In a sense, you can think of the String between readFile and writeFile as a pipe linking the two. Data goes in one end, is transformed some way, and flows back out the other. You can verify this yourself by generating a large input.txt for toupper-lazy3.hs. It may take a bit to process, but you should see a constant—and low—memory usage while it is being processed. You learned that readFile and writeFile address the common situation of reading from one file, making a conversion, and writing to a different file. There's a situation that's even more common than that: reading from standard input, making a conversion, and writing the result to standard output. For that situation, there is a function called interact. The type of interact is (String -> String) -> IO (). That is, it takes one argument: a function of type String -> String. That function is passed the result of getContents—that is, standard input read lazily. The result of that function is sent to standard output. We can convert our example program to operate on standard input and standard output by using interact. Here's one way to do that: -- file: ch07/toupper-lazy4.hs import Data.Char(toUpper) main = interact (map toUpper) Look at that—one line of code to achieve our transformation! To achieve the same effect as with the previous examples, you could run this one like this: $ runghc toupper-lazy4.hs < input.txt > output.txt Or, if you'd like to see the output printed to the screen, you could type: $ runghc toupper-lazy4.hs < input.txt If you want to see that Haskell output truly does write out chunks of data as soon as they are received, run runghc toupper-lazy4.hs without any other command-line parameters. You should see each character echoed back out as soon as you type it, but in uppercase. Buffering may change this behavior; see the section called “Buffering” later in this chapter for more on buffering. If you see each line echoed as soon as you type it, or even nothing at all for awhile, buffering is causing this behavior. You can also write simple interactive programs using interact. Let's start with a simple example: adding a line of text before the uppercase output. -- file: ch07/toupper-lazy5.hs import Data.Char(toUpper) main = interact (map toUpper . (++) "Your data, in uppercase, is:\n\n") Here we add a string at the beginning of the output. Can you spot the problem, though? Since we're calling map on the result of (++), that header itself will appear in uppercase. We can fix that in this way: -- file: ch07/toupper-lazy6.hs import Data.Char(toUpper) main = interact ((++) "Your data, in uppercase, is:\n\n" . map toUpper) This moved the header outside of the map. Another common use of interact is filtering. Let's say that you want to write a program that reads a file and prints out every line that contains the character "a". Here's how you might do that with interact: -- file: ch07/filter.hs main = interact (unlines . filter (elem 'a') . lines) This may have introduced three functions that you aren't familiar with yet. Let's inspect their types with ghci: ghci> :type lineslines :: String -> [String] ghci> :type unlinesunlines :: [String] -> String ghci> :type elemelem :: (Eq a) => a -> [a] -> Bool Can you guess what these functions do just by looking at their types? If not, you can find them explained in the section called “Warming up: portably splitting lines of text” and the section called “Special string-handling functions”. You'll frequently see lines and unlines used with I/O. Finally, elem takes a element and a list and returns True if that element occurs anywhere in the list. Try running this over our standard example input: $ runghc filter.hs < input.txtI like Haskell Haskell is great Sure enough, you got back the two lines that contain an "a". Lazy filters are a powerful way to use Haskell. When you think about it, a filter—such as the standard Unix program grep—sounds a lot like a function. It takes some input, applies some computation, and generates a predictable output. You've seen a number of examples of I/O in Haskell by this point. Let's take a moment to step back and think about how I/O relates to the broader Haskell language. Since Haskell is a pure language, if you give a certain function a specific argument, the function will return the same result every time you give it that argument. Moreover, the function will not change anything about the program's overall state. You may be wondering, then, how I/O fits into this picture. Surely if you want to read a line of input from the keyboard, the function to read input can't possibly return the same result every time it is run, right? Moreover, I/O is all about changing state. I/O could cause pixels on a terminal to light up, to cause paper to start coming out of a printer, or even to cause a package to be shipped from a warehouse on a different continent. I/O doesn't just change the state of a program. You can think of I/O as changing the state of the world. Most languages do not make a distinction between a pure function and an impure one. Haskell has functions in the mathematical sense: they are purely computations which cannot be altered by anything external. Moreover, the computation can be performed at any time—or even never, if its result is never needed. Clearly, then, we need some other tool to work with I/O. That tool in Haskell is called actions. Actions resemble functions. They do nothing when they are defined, but perform some task when they are invoked. I/O actions are defined within the IO monad. Monads are a powerful way of chaining functions together purely and are covered in Chapter 14, Monads. It's not necessary to understand monads in order to understand I/O. Just understand that the result type of actions is "tagged" with IO. Let's take a look at some types: ghci> :type putStrLnputStrLn :: String -> IO () ghci> :type getLinegetLine :: IO String The type of putStrLn is just like any other function. The function takes one parameter and returns an IO (). This IO () is the action. You can store and pass actions in pure code if you wish, though this isn't frequently done. An action doesn't do anything until it is invoked. Let's look at an example of this: -- file: ch07/actions.hs str2action :: String -> IO () str2action input = putStrLn ("Data: " ++ input) list2actions :: [String] -> [IO ()] list2actions = map str2action numbers :: [Int] numbers = [1..10] strings :: [String] strings = map show numbers actions :: [IO ()] actions = list2actions strings printitall :: IO () printitall = runall actions -- Take a list of actions, and execute each of them in turn. runall :: [IO ()] -> IO () runall [] = return () runall (firstelem:remainingelems) = do firstelem runall remainingelems main = do str2action "Start of the program" printitall str2action "Done!" str2action is a function that takes one parameter and returns an IO (). As you can see at the end of main, you could use this directly in another action and it will print out a line right away. Or, you can store—but not execute—the action from pure code. You can see an example of that in list2actions—we use map over str2action and return a list of actions, just like we would with other pure data. You can see that everything up through printitall is built up with pure tools. Although we define printitall, it doesn't get executed until its action is evaluated somewhere else. Notice in main how we use str2action as an I/O action to be executed, but earlier we used it outside of the I/O monad and assembled results into a list. You could think of it this way: every statement, except let, in a do block must yield an I/O action which will be executed. The call to printitall finally executes all those actions. Actually, since Haskell is lazy, the actions aren't generated until here either. When you run the program, your output will look like this: Data: Start of the program Data: 1 Data: 2 Data: 3 Data: 4 Data: 5 Data: 6 Data: 7 Data: 8 Data: 9 Data: 10 Data: Done! We can actually write this in a much more compact way. Consider this revision of the example: -- file: ch07/actions2.hs str2message :: String -> String str2message input = "Data: " ++ input str2action :: String -> IO () str2action = putStrLn . str2message numbers :: [Int] numbers = [1..10] main = do str2action "Start of the program" mapM_ (str2action . show) numbers str2action "Done!" Notice in str2action the use of the standard function composition operator. In main, there's a call to mapM_. This function is similar to map. It takes a function and a list. The function supplied to mapM_ is an I/O action that is executed for every item in the list. mapM_ throws out the result of the function, though you can use mapM to return a list of I/O results if you want them. Take a look at their types: ghci> :type mapMmapM :: (Monad m) => (a -> m b) -> [a] -> m [b] ghci> :type mapM_mapM_ :: (Monad m) => (a -> m b) -> [a] -> m () Why a mapM when we already have map? Because map is a pure function that returns a list. It doesn't—and can't—actually execute actions directly. mapM is a utility that lives in the IO monad and thus can actually execute the actions.[25] Going back to main, mapM_ applies (str2action . show) to every element in numbers. show converts each number to a String and str2action converts each String to an action. mapM_ combines these individual actions into one big action that prints out lines. do blocks are actually shortcut notations for joining together actions. There are two operators that you can use instead of do blocks: >> and >>=. Let's look at their types in ghci: ghci> :type (>>)(>>) :: (Monad m) => m a -> m b -> m b ghci> :type (>>=)(>>=) :: (Monad m) => m a -> (a -> m b) -> m b The >> operator sequences two actions together: the first action is performed, then the second. The result of the computation is the result of the second action. The result of the first action is thrown away. This is similar to simply having a line in a do block. You might write putStrLn "line 1" to test this out. It will print out two lines, discard the result from the first >> putStrLn "line 2" putStrLn, and provide the result from the second. The >>= operator runs an action, then passes its result to a function that returns an action. That second action is run as well, and the result of the entire expression is the result of that second action. As an example, you could write getLine , which would read a line from the keyboard and then display it back out. >>= putStrLn Let's re-write one of our examples to avoid do blocks. Remember this example from the start of the chapter? -- file: ch07/basicio.hs main = do putStrLn "Greetings! What is your name?" inpStr <- getLine putStrLn $ "Welcome to Haskell, " ++ inpStr ++ "!" Let's write that without a do block: -- file: ch07/basicio-nodo.hs main = putStrLn "Greetings! What is your name?" >> getLine >>= (\inpStr -> putStrLn $ "Welcome to Haskell, " ++ inpStr ++ "!") The Haskell compiler internally performans a translation just like this when you define a do block. Earlier in this chapter, we mentioned that return is probably not what it looks like. Many languages have a keyword named return that aborts execution of a function immediately and returns a value to the caller. The Haskell return function is quite different. In Haskell, return is used to wrap data in a monad. When speaking about I/O, return is used to take pure data and bring it into the IO monad. Now, why would we want to do that? Remember that anything whose result depends on I/O must be within the IO monad. So if we are writing a function that performs I/O, then a pure computation, we will need to use return to make this pure computation the proper return value of the function. Otherwise, a type error would occur. Here's an example: -- file: ch07/return1.hs import Data.Char(toUpper) isGreen :: IO Bool isGreen = do putStrLn "Is green your favorite color?" inpStr <- getLine return ((toUpper . head $ inpStr) == 'Y') We have a pure computation that yields a Bool. That computation is passed to return, which puts it into the IO monad. Since it is the last value in the do block, it becomes the return value of isGreen, but this is not because we used the return function. Here's a version of the same program with the pure computation broken out into a separate function. This helps keep the pure code separate, and can also make the intent more clear. -- file: ch07/return2.hs import Data.Char(toUpper) isYes :: String -> Bool isYes inpStr = (toUpper . head $ inpStr) == 'Y' isGreen :: IO Bool isGreen = do putStrLn "Is green your favorite color?" inpStr <- getLine return (isYes inpStr) Finally, here's a contrived example to show that return truly does not have to occur at the end of a do block. In practice, it usually is, but it need not be so. -- file: ch07/return3.hs returnTest :: IO () returnTest = do one <- return 1 let two = 2 putStrLn $ show (one + two) Notice that we used <- in combination with return, but let in combination with the simple literal. That's because we needed both values to be pure in order to add them, and <- pulls things out of monads, effectively reversing the effect of return. Run this in ghci and you'll see 3 displayed, as expected. These do blocks may look a lot like an imperative language. After all, you're giving commands to run in sequence most of the time. But Haskell remains a lazy language at its core. While it is necessary to sequence actions for I/O at times, this is done using tools that are part of Haskell already. Haskell achieves a nice separation of I/O from the rest of the language through the IO monad as well. Earlier in this chapter, you read about hGetContents. We explained that the String it returns can be used in pure code. We need to get a bit more specific about what side effects are. When we say Haskell has no side-effects, what exactly does that mean? At a certain level, side-effects are always possible. A poorly-written loop, even if written in pure code, could cause the system's RAM to be exhausted and the machine to crash. Or it could cause data to be swapped to disk. When we speak of no side effects, we mean that pure code in Haskell can't run commands that trigger side effects. Pure functions can't modify a global variable, request I/O, or run a command to take down a system. When you have a String from hGetContents that is passed to a pure function, the function has no idea that this String is backed by a disk file. It will behave just as it always would, but processing that String may cause the environment to issue I/O commands. The pure function isn't issuing them; they are happening as a result of the processing the pure function is doing, just as with the example of swapping RAM to disk. In some cases, you may need more control over exactly when your I/O occurs. Perhaps you are reading data interactively from the user, or via a pipe from another program, and need to communicate directly with the user. In those cases, hGetContents will probably not be appropriate. The I/O subsystem is one of the slowest parts of a modern computer. Completing a write to disk can take thousands of times as long as a write to memory. A write over the network can be hundreds or thousands of times slower yet. Even if your operation doesn't directly communicate with the disk—perhaps because the data is cached—I/O still involves a system call, which slows things down by itself. For this reason, modern operating systems and programming languages both provide tools to help programs perform better where I/O is concerned. The operating system typically performs caching—storing frequently-used pieces of data in memory for faster access. Programming languages typically perform buffering. This means that they may request one large chunk of data from the operating system, even if the code underneath is processing data one character at a time. By doing this, they can achieve remarkable performance gains because each request for I/O to the operating system carries a processing cost. Buffering allows us to read the same amount of data with far fewer I/O requests. Haskell, too, provides buffering in its I/O system. In many cases, it is even on by default. Up till now, we have pretended it isn't there. Haskell usually is good about picking a good default buffering mode. But this default is rarely the fastest. If you have speed-critical I/O code, changing buffering could make a significant impact on your program. There are three different buffering modes in Haskell. They are defined as the BufferMode type: NoBuffering, LineBuffering, and BlockBuffering. NoBuffering does just what it sounds like—no buffering. Data read via functions like hGetLine will be read from the OS one character at a time. Data written will be written immediately, and also often will be written one character at a time. For this reason, NoBuffering is usually a very poor performer and not suitable for general-purpose use. LineBuffering causes the output buffer to be written whenever the newline character is output, or whenever it gets too large. On input, it will usually attempt to read whatever data is available in chunks until it first sees the newline character. When reading from the terminal, it should return data immediately after each press of Enter. It is often a reasonable default. BlockBuffering causes Haskell to read or write data in fixed-size chunks when possible. This is the best performer when processing large amounts of data in batch, even if that data is line-oriented. However, it is unusable for interactive programs because it will block input until a full block is read. BlockBuffering accepts one parameter of type Maybe: if Nothing, it will use an implementation-defined buffer size. Or, you can use a setting such as Just 4096 to set the buffer to 4096 bytes. The default buffering mode is dependent upon the operating system and Haskell implementation. You can ask the system for the current buffering mode by calling hGetBuffering. The current mode can be set with hSetBuffering, which accepts a Handle and BufferMode. As an example, you can say hSetBuffering stdin (BlockBuffering Nothing). For any type of buffering, you may sometimes want to force Haskell to write out any data that has been saved up in the buffer. There are a few times when this will happen automatically: a call to hClose, for instance. Sometimes you may want to instead call hFlush, which will force any pending data to be written immediately. This could be useful when the Handle is a network socket and you want the data to be transmitted immediately, or when you want to make the data on disk available to other programs that might be reading it concurrently. Many command-line programs are interested in the parameters passed on the command line. System.Environment.getArgs returns IO [String] listing each argument. This is the same as argv in C, starting with argv[1]. The program name ( argv[0] in C) is available from System.Environment.getProgName. The System.Console.GetOpt module provides some tools for parsing command-line options. If you have a program with complex options, you may find it useful. You can find an example of its use in the section called “Command line parsing”. If you need to read environment variables, you can use one of two functions in System.Environment: getEnv or getEnvironment. getEnv looks for a specific variable and raises an exception if it doesn't exist. getEnvironment returns the whole environment as a [(String, String)], and then you can use functions such as lookup to find the environment entry you want. Setting environment variables is not defined in a cross-platform way in Haskell. If you are on a POSIX platform such as Linux, you can use putEnv or setEnv from the System.Posix.Env module. Environment setting is not defined for Windows. [15] You will later see that it has a more broad application, but it is sufficient to think of it in these terms for now. [17] Imperative programmers might be concerned that such a recursive call would consume large amounts of stack space. In Haskell, recursion is a common idiom, and the compiler is smart enough to avoid consuming much stack by optimizing tail-recursive functions. [19] For more information on interoperating with other programs with pipes, see the section called “Extended Example: Piping”. [21] hGetContents will be discussed in the section called “Lazy I/O” [23] More precisely, it is the entire data from the current position of the file pointer to the end of the file.
http://book.realworldhaskell.org/read/io.html
CC-MAIN-2017-26
refinedweb
9,239
65.52
The first interesting thing to know is how to write comments. In VB .NET, you write a comment by writing an apostrophe ' or writing REM. This means the rest of the line will not be taken into account by the compiler. 'This entire line is a comment Dim x As Integer = 0 'This comment is here to say we give 0 value to x REM There are no such things as multiline comments 'So we have to start everyline with the apostrophe or REM One interesting thing is the ability to add you own comments into Visual Studio Intellisense. So you can make your own written functions and classes self-explanatory. To do so, you must type the comment symbol three times the line above your function. Once done, Visual Studio will automatically add an XML documentation : ''' <summary> ''' This function returns a hello to your name ''' </summary> ''' <param name="Name">Your Name</param> ''' <returns></returns> ''' <remarks></remarks> Public Function Test(Name As String) As String Return "Hello " & Name End Function After that, if you type in your Test function somewhere in your code, this little help will show up : In VB.NET, every variable must be declared before it is used (If Option Explicit is set to On). There are two ways of declaring variables: Functionor a Sub: Dim w 'Declares a variable named w of type Object (invalid if Option Strict is On) Dim x As String 'Declares a variable named x of type String Dim y As Long = 45 'Declares a variable named y of type Long and assigns it the value 45 Dim z = 45 'Declares a variable named z whose type is inferred 'from the type of the assigned value (Integer here) (if Option Infer is On) 'otherwise the type is Object (invalid if Option Strict is On) 'and assigns that value (45) to it See this answer for full details about Option Explicit, Strict and Infer. Classor a Module: These variables (also called fields in this context) will be accessible for each instance of the Class they are declared in. They might be accessible from outside the declared Class depending on the modifier ( Public, Private, Protected, Protected Friend or Friend) Private x 'Declares a private field named x of type Object (invalid if Option Strict is On) Public y As String 'Declares a public field named y of type String Friend z As Integer = 45 'Declares a friend field named z of type Integer and assigns it the value 45 These fields can also be declared with Dim but the meaning changes depending on the enclosing type: Class SomeClass Dim z As Integer = 45 ' Same meaning as Private z As Integer = 45 End Class Structure SomeStructure Dim y As String ' Same meaning as Public y As String End Structure Modifiers are a way to indicate how external objects can access an object's data. Means any object can access this without restriction Means only the declaring object can access and view this Means only the declaring object and any object that inherits from it can access and view this. Means only the delcaring object, any object that inherits from it and any object in the same namespace can access and view this. Public Class MyClass Private x As Integer Friend Property Hello As String Public Sub New() End Sub Protected Function Test() As Integer Return 0 End Function End Class A function is a block of code that will be called several times during the execution. Instead of writing the same piece of code again and again, one can write this code inside a function and call that function whenever it is needed. A function : Private Function AddNumbers(X As Integer, Y As Integer) As Integer Return X + Y End Function A Function Name, could be used as the return statement Function sealBarTypeValidation() as Boolean Dim err As Boolean = False If rbSealBarType.SelectedValue = "" Then err = True End If Return err End Function is just the same as Function sealBarTypeValidation() as Boolean sealBarTypeValidation = False If rbSealBarType.SelectedValue = "" Then sealBarTypeValidation = True End If End Function Named Types Dim someInstance As New SomeClass(argument) With { .Member1 = value1, .Member2 = value2 '... } Is equivalent to Dim someInstance As New SomeClass(argument) someInstance.Member1 = value1 someInstance.Member2 = value2 '... Anonymous Types (Option Infer must be On) Dim anonymousInstance = New With { .Member1 = value1, .Member2 = value2 '... } Although similar anonymousInstance doesn't have same type as someInstance Member name must be unique in the anonymous type, and can be taken from a variable or another object member name Dim anonymousInstance = New With { value1, value2, foo.value3 '... } ' usage : anonymousInstance.value1 or anonymousInstance.value3 Each member can be preceded by the Key keyword. Those members will be ReadOnly properties, those without will be read/write properties Dim anonymousInstance = New With { Key value1, .Member2 = value2, Key .Member3 = value3 '... } Two anonymous instance defined with the same members (name, type, presence of Key and order) will have the same anonymous type. Dim anon1 = New With { Key .Value = 10 } Dim anon2 = New With { Key .Value = 20 } anon1.GetType Is anon2.GetType ' True Anonymous types are structurally equatable. Two instance of the same anonymous types having at least one Key property with the same Key values will be equal. You have to use Equals method to test it, using = won't compile and Is will compare the object reference. Dim anon1 = New With { Key .Name = "Foo", Key .Age = 10, .Salary = 0 } Dim anon2 = New With { Key .Name = "Bar", Key .Age = 20, .Salary = 0 } Dim anon3 = New With { Key .Name = "Foo", Key .Age = 10, .Salary = 10000 } anon1.Equals(anon2) ' False anon1.Equals(anon3) ' True although non-Key Salary isn't the same Both Named and Anonymous types initializer can be nested and mixed Dim anonymousInstance = New With { value, Key .someInstance = New SomeClass(argument) With { .Member1 = value1, .Member2 = value2 '... } '... } Arrays Dim names = {"Foo", "Bar"} ' Inferred as String() Dim numbers = {1, 5, 42} ' Inferred as Integer() Containers ( List(Of T), Dictionary(Of TKey, TValue), etc.) Dim names As New List(Of String) From { "Foo", "Bar" '... } Dim indexedDays As New Dictionary(Of Integer, String) From { {0, "Sun"}, {1, "Mon"} '... } Is equivalent to Dim indexedDays As New Dictionary(Of Integer, String) indexedDays.Add(0, "Sun") indexedDays.Add(1, "Mon") '... Items can be the result of a constructor, a method call, a property access. It can also be mixed with Object initializer. Dim someList As New List(Of SomeClass) From { New SomeClass(argument), New SomeClass With { .Member = value }, otherClass.PropertyReturningSomeClass, FunctionReturningSomeClass(arguments) '... } It is not possible to use Object initializer syntax AND collection initializer syntax for the same object at the same time. For example, these won't work Dim numbers As New List(Of Integer) With {.Capacity = 10} _ From { 1, 5, 42 } Dim numbers As New List(Of Integer) From { .Capacity = 10, 1, 5, 42 } Dim numbers As New List(Of Integer) With { .Capacity = 10, 1, 5, 42 } Custom Type We can also allow collection initializer syntax by providing for a custom type. It must implement IEnumerable and have an accessible and compatible by overload rules Add method (instance, Shared or even extension method) Contrived example : Class Person Implements IEnumerable(Of Person) ' Inherits from IEnumerable Private ReadOnly relationships As List(Of Person) Public Sub New(name As String) relationships = New List(Of Person) End Sub Public Sub Add(relationName As String) relationships.Add(New Person(relationName)) End Sub Public Iterator Function GetEnumerator() As IEnumerator(Of Person) _ Implements IEnumerable(Of Person).GetEnumerator For Each relation In relationships Yield relation Next End Function Private Function IEnumerable_GetEnumerator() As IEnumerator _ Implements IEnumerable.GetEnumerator Return GetEnumerator() End Function End Class ' Usage Dim somePerson As New Person("name") From { "FriendName", "CoWorkerName" '... } If we wanted to add Person object to a List(Of Person) by just putting the name in the collection initializer (but we can't modify the List(Of Person) class) we can use an Extension method ' Inside a Module <Runtime.CompilerServices.Extension> Sub Add(target As List(Of Person), name As String) target.Add(New Person(name)) End Sub ' Usage Dim people As New List(Of Person) From { "Name1", ' no need to create Person object here "Name2" }
https://sodocumentation.net/vb-net/topic/3997/introduction-to-syntax
CC-MAIN-2022-27
refinedweb
1,348
51.58
I have the following two models Ads and Listings. class Listing < ActiveRecord::Base belongs_to :ads end class Ad < ActiveRecord::Base has_many :listings end In my view to create a new ad, I used a listing partial, because I would like to use that listing partial in other parts of my project. So the view looks like: <%= error_messages_for ‘ad’ %> Title Ad <%= text_field 'ad', 'title' %> Text <%= text_area 'ad', 'text' %> <%= render (:partial => “listing/listing”, :object => @listing) %> The view code of the listing partial is: Startdatum <%= select_day (Time.now.day, :prefix => “listing”) %> <%= select_month (Time.now.month, :prefix => “listing”) %> <%= select_year (Time.now.year, :prefix => “listing”, :start_year => Time.now.year, :end_year => Time.now.year+1) %> Aantal weken <%= select (:listing, :number_weeks, [['1 Week', 1], ['2 Weken', 2], ['3 Weken', 3], ['4 Weken', 4], ['5 Weken', 5], ['6 Weken', 6], ['7 Weken', 7], ['8 Weken', 8], ['9 Weken', 9], ['10 Weken', 10]]) %> Adding a valid ad, with a correct listing works fine. The problem is when I try to add an ad with an invalid start date. It only save the data of the ad to the database and not that of the listing. I guessed it would raise an error and not add any data to the database. In my listing model I have added validation to validate the correctness of the date, like so: def validate if Date::valid_civil?(year.to_i, month.to_i, day.to_i) == nil errors.add(:year, day.to_s + “-” + month.to_s + “-” + year.to_s + “is geen geldige datum.”) else self.start_date = year.to_s + “-” + month.to_s + “-” + day.to_s end end The validation works perfect if I only test the listing model without the integration in the ad model. In my ad controller I create the ad like: def create @ad = Ad.new(params[:ad]) @listing = Listing.new(params[:listing]) @listing.listing_type = “ads” if @ad.save @ad.listings << @listing flash[:notice] = 'Ad was successfully created.' redirect_to :action => 'list' else render :action => 'new' end end When I save my ad, how can I get Rails to also check for the correctness of the start date and raise an error if it is not valid? Hope somebody can give some pointers. Kind regards, Nick
https://www.ruby-forum.com/t/partial-and-model-errors/49616
CC-MAIN-2018-47
refinedweb
355
65.32
We need an alternate method of detecting forks on that platform. The logic could be the following : - In C_Initialize, record the PID to a global. - Change the CHECK_FORK macro to call getpid() and compare it to the global variable. This is less efficient than using pthread_atfork, so it should be done conditionally only for s8 and s9. We can continue to use the current method on s10+. The reason this is P1 is that an app can dlopen softoken, call C_Initialize, call C_Finalize, then dlclose softoken , then fork. On s8 and s9, the child will crash calling into an address that's no longer in memory. I tested this scenario as part of bug 331096 and the behavior was OK on s10. I didn't test s9 and s8, and they are broken. One possible fix is to link softoken with -z nodelete. I don't particularly like the idea of leaking memory especially since we can't make that conditional for s8/s9. I suggest that you create a new check fork FUNCTION, and have the macro call it, on S8 and S9. The function can also check to see if global value is zero, and initialize it with an initial call to getPID if so. Also, C_Finalize should reset this global value. Nelson, I think resetting the state in C_Finalize is something unrelated to the crash due to the fork handler still being registered, and should be dealt with separately. I will email you separately about that issue. When C_Finalize succeeds, it must clear the PID. Otherwise, the following sequence will always fail: parent C_Initialize C_Finalize fork child C_Initialize There is nothing wrong with that behavior. That behavior should always work. Of course, what I wrote for the PID also goes for the "forked" flag, on platforms that use that technique. In today's meeting, I believe there was consensus that NSS must not disallow children to call C_Initialize, even if NSS was initialized before the fork. NSS must not be more strict that the PKCS#11 spec allows it to be. Some went further and expressed the desire to allow C_Finalize and other functions related to the releasing of resources (such as C_CloseSession, C_CloseAllSessions, C_Logout, and maybe C_FindObjectsFinal, C_CancelFunction) but there was not clear consensus about that. Nelson, Thanks for clarifying by email. Your comment 2 was not clear about which situations you wanted C_Finalize to reset the global value. I agree that C_Finalize should reset the state, ie. the global PID, in the case of a successful C_Finalize, ie. not in the case where a fork was detected. This will allow the situation in comment 4 to work, which I agree we must support. Re: comment 5, IMO, the same does not go for the "forked" flag. The meaning of the "forked" flag is that the parent process forked after C_Initialize. The forked flag is set only in the child, never in the parent. It's currently not possible for the child to do a successful C_Finalize in that case. The child always is hosed, period - there is currently nothing it can do to recover - not C_Initialize, nor C_Finalize. The only thing that *might* currently work would be for the app to dlclose() the softoken module, and that would take care of clearing the forked flag at the same time. I will file a separate bug/RFE about allowing the child to call C_Initialize after the parent forked, since I think it is unrelated to this fork handler crash. Created attachment 345628 [details] [diff] [review] Use PID checks on Solaris <10 This is my first draft. Very little testing has been done at this point. So far, I have only verified with the printfs that the Solaris version check works correctly on s9 and s10 machines. I haven't tried to fork any process yet with either scheme. Created attachment 345666 [details] [diff] [review] Use PID checks on Solaris < 10 . Update. This patch has been tested on s9 and s10. Both the PID check and fork handler methods were verified. There was a bug in the previous patch for the PID check. myPid was being reset in NSC_Finalize , where it was supposed to be in nsc_CommonFinalize. The change was not effective for FIPS mode. Now it is. The other change is that I added an environment variable called NSS_STRICT_NOFORK . When this variable is set to 1, the softoken will assert and dump core in every PKCS#11 function if the parent forked while softoken was initialized. This functionality is present in debug builds only. I found it very helpful to detect and fix the aforementioned bug, so I think it should stay. Created attachment 345667 [details] [diff] [review] Add fork tests to pk11mode This patch adds 5 optional fork tests to the pk11mode PKCS#11 test program on Unix platforms. They are triggered by the -F argument (for fork). The test cases are : 1) fork before softoken is loaded or initialized 2) fork with softoken loaded, but not initialized 3) fork with softoken both loaded and initialized 4) fork with softoken still loaded, but de-initialized 5) fork with softoken unloaded Status of the tests is verified by means of process exit code, which is an arbitrary value in tests 1 and 5, CKR_CRYPTOKI_NOT_INITIALIZED in tests 2 and 5, and CKR_DEVICE_ERROR in test 3. I believe this covers all the fork cases we need to worry about, but if there are others, feel free to suggest some more. I was able to find the bug in NSC_Finalize my initial patch using these tests. Comment on attachment 345666 [details] [diff] [review] Use PID checks on Solaris < 10 . Update. r+. Please make these changes. 1. Remove the commented-out debug printfs. 2. Change the sense of the sense of the NSS_STRICT_NOFORK envariable so that the default (in its absence) is to be strict. The variable should make it be less strict. Oh, wait until the tinderbox tree goes green before committing this.. It's orange now. Comment on attachment 345667 [details] [diff] [review] Add fork tests to pk11mode Is this patch missing a piece? It adds options to pk11mode, but I don't see any changes to any test script called from all.sh to actually use these new options. Nelson, You are right, there is no change to the test script. I was going to let Slavo do that part. We only needs to add the -F argument to pk11mode on Unix platforms. Created attachment 345829 [details] [diff] [review] Update with Nelson's feedback (checked in) - Removed debug printf. - Change the default to assert when a PKCS#11 call is made in a child after fork. This can be turned off by setting NSS_STRICT_NOFORK=0 in the environment. I will check this in when the tree is green. Created attachment 345830 [details] [diff] [review] Test update Changed the default to run fork tests. The -F argument will now turn off the fork tests. This is easier than trying to figure out the platform in the shell scripts and set -F only on Unix platforms. With this update to pk11mode, no script change is needed. Comment on attachment 345829 [details] [diff] [review] Update with Nelson's feedback (checked in) An observation: The expression: ( (!forkAssert) || ( forkAssert && (0 == strcmp(forkAssert, "1"))) ) Is exactly equivalent to the shorter and more efficient ( (!forkAssert) || (0 == strcmp(forkAssert, "1") ) Comment on attachment 345829 [details] [diff] [review] Update with Nelson's feedback (checked in) I checked in this fix to the trunk. Checking in pkcs11.c; /cvsroot/mozilla/security/nss/lib/softoken/pkcs11.c,v <-- pkcs11.c new revision: 1.153; previous revision: 1.152 done Checking in softoken.h; /cvsroot/mozilla/security/nss/lib/softoken/softoken.h,v <-- softoken.h new revision: 1.18; previous revision: 1.17 done I might have been too quick to mark this resolved. The bug is fixed, but I would still like a review on the additional test cases so we don't run into this again. Comment on attachment 345830 [details] [diff] [review] Test update >+#if defined(XP_UNIX) && !defined(NO_PTHREADS) >+#include <unistd.h> >+#include <sys/wait.h> >+#endif I suggest that you add a #define in that block that defines a symbol that can be used as a "feature test macro" below. IOW, add something like #define DO_FORK_CHECK 1 and then below, just do #ifdef DO_FORK_CHECK rather than repeating that more complex #if. This has the side benefit that if you ever decide to change that more complex #if, you only need to change it in one place, not in many. >+CK_RV PKM_ForkCheck(int expected, CK_FUNCTION_LIST_PTR fList); Please #ifdef all references to ForkCheck. I don't want this program to be calling ForkCheck on non-Unix platforms. I also don't want to be doing unnecessary putenv calls on Windows. Yes I know it does nothing on them, but still. >- PLOptState *opt = PL_CreateOptState(argc, argv, "nvhf:d:p:"); >+ PLOptState *opt = PL_CreateOptState(argc, argv, "nvhf:Fd:p:"); This code adds "F" unconditionally to the list of known option characters. But below, the case that handles 'F' is conditinally compiled. That means that adding -F on Windows will cause a failure but adding -F on Unix will not. I suggest you fix that by removing the ifdef from the code below. Go ahead set set doForkTests to false. That just won't mean anything on Windows. >+#if defined(XP_UNIX) && !defined(NO_PTHREADS) >+ case 'F': /* disable fork tests */ >+ doForkTests = PR_FALSE; >+ break; >+#endif Please ifdef the block below, and all similar new blocks that begin with if (doForkTests) >+ if (doForkTests) >+ { >+ /* first, try to fork without softoken loaded to make sure >+ * everything is OK */ >+ crv = PKM_ForkCheck(123, NULL); >+ if (crv != CKR_OK) >+ goto cleanup; >+ } The convention used in this file seems to be that "case" statements are indented at the same level of indentation as the switch that encloses them. Please use that convention here also, and unindent the entire contents of this new switch by one level. >+ switch (child) { >+ case -1: >+ PKM_Error("Fork failed.\n"); >+ crv = CKR_DEVICE_ERROR; >+ break; >+ case 0: >+ if (fList) { >+ /* If softoken is loaded, make a PKCS#11 call in the child. Finally, regarding the new code in the parent that tries to manage the child after the fork, I don't see any code there that deals with early returns from wait() due to EINTR. Also, does the code need to take precautions against SIGCHLD interrupts? Created attachment 346798 [details] [diff] [review] Second test update (checked in) 1) I added the DO_FORK_CHECK macro . 2) I did not wait to pepper the main functions with #ifdef . The entire source has very few ifdef's. I prefer to have just one #ifdef inside PKM_ForkCheck. I moved the putenv calls into PKM_ForkCheck, and added one extra argument to PKM_ForkCheck. So now we really don't do anything on Windows regardless of the state of the doForkChecks variable. 3) I made the option available on all platforms, but it will have no effect on non-Unix platforms. I added the word "Unix" to the usage for this option. 4) I think this was the same request as 2), and I did not want to do add unnecessary ifdef's. 5) I fixed the indentation in the switch statement. 6) I'm not sure if any extra code is needed. It has always worked in all my tests so far. I believe wait() is implemented by waiting for SIGCHILD, so we shouldn't worry about that signal interruption. Maybe other signals. Comment on attachment 346798 [details] [diff] [review] Second test update (checked in) ok, r=nelson, thanks Thanks for the review, Nelson. I think we can finally mark this resolved. Checking in pk11mode.c; /cvsroot/mozilla/security/nss/cmd/pk11mode/pk11mode.c,v <-- pk11mode.c new revision: 1.19; previous revision: 1.18 done I completely forgot the tree was frozen. I will check the tinderbox detailed logs to see if this causes any new problem. Unfortunately, I have to reopen this once more. The last test - fork after C_Finalize and dlclose() - caused a crash on AIX. So, this problem is unfortunately not limited to Solaris 8/9. I am very surprised by this, because I reported having manually verified that things were OK for this case on AIX in . The HP-UX tests haven't finished running. The issue might exist there too. Perhaps we should take a more conservative approach. Rather than determining by experiments whether dlclose unregisters the fork handlers defined in the shared library it is unloading, we should ask the OS vendors for an authoritative answer. In the absence of an affirmative answer from the OS vendor, we should default to the getpid approach. I agree with comment 25, the default needs to be changed. It looks like pk11mode causes an infinite loop on HP-UX. I still don't know why. I am trying to debug with printf because I don't see a working debugger on the system. The debugger on HP-UX is gdb, and it's installed in /opt/langtools/bin. There are gdb32 for 32-bit apps and gdb64 for 64-bit apps. Thanks, Wan-Teh. The problem on HP-UX is also in the last fork test - after the PKCS#11 module is finalized and unloaded. After the fork, the child goes into an infinite loop. I can't even debug it with gdb. ctrl-c won't work. I think this calls for using the getpid() method on HP-UX as well. It looks like pthread_atfork only works as we expect on s10 and Linux :( I will prepare a new patch to reflect these findings. The default will be getpid. Created attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms This makes the getpid method the default for Unix platforms. Linux uses pthread_atfork since we have since it succeeed on all our machines. Solaris still has the runtime check to switch between getpid and pthread_atfork. This patch also contains a small fix to pk11mode - skip the last test if the PKCS#11 library was not unloaded. Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms I have tested this patch on AIX, HP-UX, Solaris 8, 9, and even Windows to make sure it compiled. Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms Cancelling review request. I didn't test linux before submitting the patch. And of course there is a problem there :( Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms This patch is OK after all even on Linux. I just had an environment problem. Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms What is this NO_PTHREADS macro? This MXR search shows it's not defined: Instead of using LINUX, you should define a macro to indicate it's okay to use pthread_atfork. This makes it easier to enable pthread_atfork for a new platform in the future. You can use PR_BEGIN_MACRO and PR_END_MACRO instead of "do {" and "} while (0)". I prefer the shorter #ifdef FOO to #if defined(FOO) You should add the sftk_ prefix to extern variables 'forked', 'myPid', and 'usePthread_atfork'. 'usePthread_atfork' looks strange. How about 'usePthreadAtFork'? Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms You may want to ask Nelson to review this patch instead because he reviewed previous versions of this fix. Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms ForkedChild should be declared static because it is used in only one file (lib/softoken/pkcs11.c). Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms In the Solaris definition of CHECK_FORK, I think it is better to write the test as (usePthread_atfork && forked) || (!usePthread_atfork && myPid != getpid()) (In reply to comment #36) > (usePthread_atfork && forked) || (!usePthread_atfork && myPid != getpid()) Testing the same variable twice can be avoided by rewriting the expression as (usePthread_atfork ? forked : (myPid != getpid())) Since last patch related to this bug (pk11mode.c) was integrated, pk11mode is failing on AIX. I see it was already mentioned in comment 24, but rather to have it clear: fips.sh: Run PK11MODE in FIPSMODE ----------------- pk11mode -d ../fips -p fips- -f ../tests.fipspw.602244 Loaded FC_GetFunctionList for FIPS MODE; slotID 0 Loaded FC_GetFunctionList for FIPS MODE; slotID 0 Loaded FC_GetFunctionList for FIPS MODE; slotID 0 FIPS MODE PKM_Error: Child misbehaved. Loaded FC_GetFunctionList for FIPS MODE; slotID 0 Child return status : 0. **** Total number of TESTS ran in FIPS MODE is 92. **** fips.sh: #175: Run PK11MODE in FIPS mode (pk11mode) . - Core file is detected - FAILED Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms r=wtc. Before you check this in, please make as many of the suggested changes as you see fit. I have some more suggested changes. If DEBUG is defined, you should define FORK_ASSERT() as a function to avoid code duplication from macro expansion. In FORK_ASSERT(), we have if ( (!forkAssert) || \ ( forkAssert && (0 == strcmp(forkAssert, "1"))) ) { \ You can rewrite the Boolean expression as if ( (!forkAssert) || (0 == strcmp(forkAssert, "1")) ) { \ In the default definition of CHECK_FORK(), we have if (myPid && myPid != getpid()) { \ FORK_ASSERT(); \ return CKR_DEVICE_ERROR; \ } \ You may be able to rewrite the Boolean expression as if (myPid != getpid()) { \ In NSC_ModuleDBFunc, we have #if defined(XP_UNIX) && !defined(NO_PTHREADS) if (forked) return NULL; #endif Should that be removed, or replaced by a macro that is similar to CHECK_FORK() but returns NULL instead of CKR_DEVICE_ERROR? Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms cmd/pk11mode/pk11mode.c also has one instance of NO_PTHREADS left. I wonder if that should be removed. Comment on attachment 347026 [details] [diff] [review] Fix crash on other Unix platforms In nsc_CommonFinalize, we have > if (!usePthread_atfork) { > myPid = 0; /* allow reinitialization */ > } >+#elif defined(XP_UNIX) && !defined(LINUX) >+ myPid = 0; /* allow reinitialization */ > #endif I don't understand why we need to reset myPid to 0 to allow reinitialization. The initialization code doesn't check myPid to be 0 before setting it to getpid(). If it's necessary, should we also reset 'forked' to PR_FALSE? Wan-Teh, Thanks for the review. Re: comment 33, the use of the NO_PTHREADS macro comes from a request from Bob Relyea to not have a dependency on pthreads in softoken on Linux. See Apparently, my latest patch broke that case, since it assumes that Linux always uses pthread_atfork . Previously, defining NO_PTHREADS resulted in disabling fork checks. But now it can result in switching to the getpid() method. I will provide an updated patch for that case and address your comment as well. This should be a feature macro. Re: comment 35, I agree. ForkedChild wasn't introduced or changed in the latest patch, but it belongs as a static. Re: comment 36 and comment 37, I will settle for Nelson's solution . Re: comment 39, I am not at all concerned about code duplication from macro expansion in DEBUG code. Is there any reason that we should be ? I will shorten the FORK_ASSERT macro as well as the default CHECK_FORK macro. Good catch about NSC_ModuleDBFunc. I completely forgot about that case. This is the one exception to the rule - a pseudo-PKCS#11 call that unfortunately doesn't return a CK_RV . I have some mixed feelings about that. I don't want to write a second macro. I will come up with a solution using an intermediate function that calls CHECK_FORK . Re: comment 40, yes, that particular instance of NO_PTHREADS can be removed since the fork check is now implemented in a non pthread-dependent fashion when using getpid(). Given the amount of changes suggested/requested, I unfortunately feel that a new patch with review is necessary. Re: comment 41, The code that assigns getpid() to myPid is invoked as part of C_Initialize . Currently, there is a CHECK_FORK in that function, just like every other PKCS#11 call. CHECK_FORK checks that myPid is either zero, or that its value is the same as getpid(). Ie. the macro checks that either softoken is not initialized, or we are still in the same process that did the initialization. If we don't reset myPid to zero in C_Finalize(), the process may fork after C_Finalize, and then a subsequent initialization in the child process would fail, since myPid would not match getpid(). Believe it or not, this is another case we have encountered. And I don't have a test for it in pk11mode yet. Sigh. On the other hand, the "forked" global variable is only ever set by the pthread_atfork child handler, and should never be reset, since the softoken cannot currently clean up after itself after a fork in the child process, or in any of its grandchildren. Created attachment 347902 [details] [diff] [review] Patch update This makes of the changes requested as discussed. You now get your choice of build macros to get the desired behavior. - defining NO_CHECK_FORK will turn off all fork checks - defining CHECK_FORK_MIXED will choose the Solaris "mixed" implementation - defining CHECK_FORK_PTHREAD will choose the pthread_atfork implementation - defining CHECK_FORK_GETPID will choose the getpid implementation If none of them are defining, softoken.h tries to figure out the best one for the platform according to our current knowledge. That is mixed for solaris, pthread for Linux, and getpid for every other Unix. I had to keep the check for myPid && myPid != getpid() in CHECK_FORK, otherwise the very first C_Initialize would fail since myPid is initially 0. I tested this on Solaris, Linux, AIX and Windows. I still need to write one more test for pk11mode but it won't be today and it can be reviewed separately. Comment on attachment 347902 [details] [diff] [review] Patch update You attached the wrong file. Created attachment 348090 [details] [diff] [review] Correct file Comment on attachment 348090 [details] [diff] [review] Correct file r=wtc. I still think the global variables 'forked', 'myPid', 'usePthread_atfork' should have a prefix such as sftk_ or nsc_. >+#error Incorrect fork method. "fork method" => "fork check method". > if (!usePthread_atfork) { > myPid = 0; /* allow reinitialization */ > } >+#elif defined(XP_UNIX) && !defined(LINUX) >+ myPid = 0; /* allow reinitialization */ > #endif You may want to change "allow reinitialization" to something like "allow the CHECK_FORK() in NSC_Initialize to succeed.". My previous confusion was that I thought "reinitialization" meant the reinitialization of myPid. You can also solve the reinitialization problem by removing the CHECK_FORK() from NSC_Initialize. >+ * This section should be updated as more platforms get pthread fixes for >+ * dlclose. You may want to say what the "pthread fixes" are: "unregister fork handlers in dlclose". >+#elif defined(LINUX) >+ >+/* Linux uses pthread_atfork */ This comment can be removed now. >+ if (usePthread_atfork ? forked : myPid && myPid != getpid() ) { \ You may want to add parentheses () around "myPid && myPid != getpid()". >+/* non-Unix platforms, or fork check disabled */ > > #define CHECK_FORK() > >+#define NO_FORK_CHECK This should be protected with #ifndef NO_FORK_CHECK, otherwise you may get a macro redefinition error if NO_FORK_CHECK is defined by the build system. Comment on attachment 348090 [details] [diff] [review] Correct file It's not clear why NSC_ModuleDBFunc needs a fork check. NSC_ModuleDBFunc is not a PKCS #11 function. You don't need to be logged in to use it. Created attachment 348293 [details] [diff] [review] Incorporate Wan-Teh's feedback. Add one more pk11mode test. Wan-Teh, I made most of your suggested changes, except those around variable naming. Re: the need for CHECK_FORK NSC_ModuleDBFunc, even though it is not a PKCS#11 entry point, and it doesn't appear to use locks currently, it is still part of the softoken module. With the current state of the code, we don't support the softoken module at all in the child if the parent initialized before forked. So, I think it makes sense for this check to be in all entry points. We had already discussed the same thing for C_Initialize as part of bug 331096 also. If we improve the softoken's fork-safety, we can remove the CHECK_FORK from some or all of the entry points. The main thing to review here is the additional test I made in pk11mode to attempt to fork and initialize in the child, after the parent has shutdown NSS. This is one of the cases we need to support. The DS does that. Comment on attachment 348293 [details] [diff] [review] Incorporate Wan-Teh's feedback. Add one more pk11mode test. r=wtc. > CK_RV PKM_ForkCheck(int expected, CK_FUNCTION_LIST_PTR fList, >- PRBool forkAssert); >+ PRBool forkAssert, CK_C_INITIALIZE_ARGS_NSS* initArgs); Nit: be consistent with the position of *. Use the "type *var" style in this file. > CK_RV PKM_ForkCheck(int expected, CK_FUNCTION_LIST_PTR fList, >- PRBool forkAssert) >+ PRBool forkAssert, CK_C_INITIALIZE_ARGS_NSS* initArgs) Same as above. >+ if (!initArgs) { Nit: In an if-else statement, I like to see the condition to have a positive sense, i.e., if (initArgs). >+ * it fails with CKR_CRYPTOKI_NOT_INITIALIZED . and >+ * kick in, and make it return CKR_DEVICE_ERROR . Nit: no space before the period (.). Same thing in the 'else' block. Not checking for fork in NSC_ModuleDBFunc allows us to remove the special ForkCheck function. NSC_ModuleDBFunc logically belongs to the pk11wrap layer. It has to live in the softoken after we made dbm an internal component of the softoken. So NSC_ModuleDBFunc doesn't need to follow the PKCS #11 rule on forking. Thanks for the review, Wan-Teh. I checked this in to the trunk. I hope this is finally resolved. Checking in cmd/pk11mode/pk11mode.c; /cvsroot/mozilla/security/nss/cmd/pk11mode/pk11mode.c,v <-- pk11mode.c new revision: 1.20; previous revision: 1.19 done Checking in lib/softoken/pkcs11.c; /cvsroot/mozilla/security/nss/lib/softoken/pkcs11.c,v <-- pkcs11.c new revision: 1.154; previous revision: 1.153 done Checking in lib/softoken/softoken.h; /cvsroot/mozilla/security/nss/lib/softoken/softoken.h,v <-- softoken.h new revision: 1.20; previous revision: 1.19 done
https://bugzilla.mozilla.org/show_bug.cgi?id=462293
CC-MAIN-2017-30
refinedweb
4,277
65.93
12 September 2013 17:33 [Source: ICIS news] TORONTO (ICIS)--The crude oil shipped on the train that derailed and exploded in ?xml:namespace> Don Ross, an investigator for the Transportation and Safety Board (TSB), said that samples the TSB took from the oil in the railcars showed that the oil should have been described as a “packing group II” flammable liquid, rather than a less flammable “packing group III” liquid. “It was shipped as a packing group III flammable liquid, which is the least hazardous type of liquid, but the results of our testing show that [the product] was packing group II,” he said. Packing group II has a lower flash point than group III, “so, as far as awareness of anybody who needs to handle the products or get into contact with them it would have been helpful" to have the products described and classified correctly, he said. Ross would not speculate about possible legal consequences the mislabelling may have for the rail carriers or other parties involved. The oil on the 72-railcar train that derailed and exploded on 6 July at Lac Megantic, “Our understanding of the Canadian regulations is that when you are dealing with an international shipment it is the importer that’s the person that’s going to be responsible to make sure that they comply with Canadian legislation,” he added. Ross said that the TSB’s investigation was ongoing. As part of the probe, the agency is also reviewing relevant regulations and company operating practices. He added that the TSB had long argued for stronger tank railcars to be used in shipping group I and II flammable liquids. In a separate statement, the TSB said that product characteristics such as flammability were one of the factors in selecting a container. As such, the mislabelling would “also brings into question the adequacy of Class 111 tank cars for use in transporting large quantities of low-flash flammable liquids”, it said. The TSB investigates accidents and makes recommendations to improve transport safety, but it is not a regulator. In related news on Thursday,
http://www.icis.com/Articles/2013/09/12/9705520/fatal-canada-oil-rail-shipment-was-mislabelled-agency.html
CC-MAIN-2015-14
refinedweb
349
53.34
Hi We have a network with 3 sites running DFS on 3 Server 2003 r2 machines. A server at one site was replaced by a new server due to lack of disk space. The new server was added to the site and 6 replication groups were replciated to the new server from the old server. 5 replications completed successfully with the appropriate events however the staging folders of the old server grew too large during this process (very low diskspace) and there seems to have been a corruption of the last replication group. The event log shows initial replication for this group completed successfully but when looking at the folder size it was about 40gb short. I then fixed the space issue on the old server and took a backup of the folder on the old server for prestaging. Removed the new server from the replication group and restored the data to the new server. Then added the server back into the replication group from a DFS server located in the authorative site outside this site. The problem I am having now is that the new server does not go through an inital replication process for this group but seems to be recognised as fully replication partner and some files on servers outside this site are then move into the Conflicted and Deleted rather than any conflicting files going into the Preexisting folder on the new server. I have to stop the replication immediately due to possible loss of recent files. I need to be able to remove info of the new server from the replication group and ensure the process of initial replication gone through after the server is readded. The other 5 replication groups are all functioning normally and are published to an single namespace. Due to the amount of data and WAN links, starting all groups from scratch is not really an attractive option. Any help with this would be appriciated. Hi, I suggest that let's use robocopy or similar steps to initial replication on all the referral targets intead of only restore the new server, as data on the new server is earlier than other referral targets after restore, which will be overwritten by replication process. Or you can have a test on the following steps to reset the Primary member: DFSRADMIN Membership Set /RGName:<replication group name> /RFName:<replicated folder name> /MemName:<member you want to be primary> /IsPrimary:True DFSRADMIN Membership Set /RGName:RG1 /RFName:DATA /MemName:NA-DC-01 /IsPrimary:True 4. Force AD Replication with: REPADMIN /syncall /Aed 5. Force DFSR to poll AD on all servers with: DFSRDIAG POLLAD /mem:<servers> Microsoft is conducting an online survey to understand your opinion of the Technet Web site. If you choose to participate, the online survey will be presented to you when you leave the Technet Web site. Would you like to participate?
http://social.technet.microsoft.com/Forums/windowsserver/en-US/b4549a27-3fb1-44a9-ac53-30c274fb499e/dfs-replication-no-inital-replication?forum=windowsserver2008r2branchoffice
CC-MAIN-2014-15
refinedweb
481
56.08
At the Unite Nordic and Nordic Gaming Conference, we showcased how a BlackBerry 10 smartphone can so easily turn into a console gaming system with your favorite gamepad and a micro HDMI cable hooked up to a TV. Unity developers loved it and wondered how their games can leverage that. This blog posts shows you how easy it is to add gamepad support for BlackBerry 10. Background The BlackBerry 10 Native SDK provides a native Gamepad API that currently supports the following gamepads Support for more gamepads are being added as we speak and native games utilizing this API does not have to worry about changing their code to support them. The Unity SDK uses the Gamepad API to provide out-of-the-box support to the gamepads listed above through its Input system API. The API allows accessing the connected gamepad’s names, axis values and currently pressed buttons. Lets see how to setup and leverage BlackBerry 10’s gamepad support in your Unity games. Step 1: Configure gamepad’s analog joysticks The first thing you need to do is setup the Input settings from the Project Settings as shown below: If your game would benefit from analog joysticks, the setup involves specifying names to axes of analog joysticks on your gamepad. For example, a top down shooter game with one analog joystick controlling the player movement and the other controlling direction of fire would require 4 axes to be setup: Axis 1 – corresponds to the horizontal movements of the right joystick (X axis) Axis 2 – corresponds to the vertical movements of the right joystick (Y axis) Axis 3 – corresponds to the horizontal movements of the left joystick (3rd axis) Axis 4 – corresponsds to the vertical movements of the left joystick (4th axis) The Input settings below shows a sample configuration: The cool part is that 2 joysticks can be paired to a BlackBerry 10 smartphone at once. If your game supports on-device multi-player feature, you could simply start by adding another 4 axes with Joy Num pointing to Joystick 2. Step 2: Configure Gamepad Buttons Actually, there is no configuration required here. The gamepad keys are already pre-mapped to KeyCode that your game could start listening to and react on. For example: the KeyCode mapping of a Moga Pro and SteelSeries Free Gamepad that are currently supported for BlackBerry 10 are shown below: As you can see from the above mappings, other gamepads with conventional layouts will report the same KeyCode for a button in same/similar position. For example, pressing up on a DPad up should report JoystickButton8 on any gamepad with a DPad. Step 3: Reading the Gamepad Input The following C# script shows how straightforward it is to integrate the gamepad support: using UnityEngine; using System.Collections; using System; public class PlayerJoystickClass : MonoBehaviour { private Transform originalTransform; private string currentButton; private float[] axisInput = new float[4]; // Use this for initialization void Start () { for(int i = 0; i < axisInput.Length; i++) axisInput[i] = 0.0f; } // Update is called once per frame void Update () { // Get the Gamepad Analog stick’s axis data axisInput[0] = Input.GetAxisRaw("Axis 1"); axisInput[1] = Input.GetAxisRaw("Axis 2"); axisInput[2] = Input.GetAxisRaw("Axis 3"); axisInput[3] = Input.GetAxisRaw("Axis 4"); // Get the currently pressed Gamepad Button name var values = Enum.GetValues(typeof(KeyCode)); for(int x = 0; x < values.Length; x++) { if(Input.GetKeyDown((KeyCode)values.GetValue(x))){ currentButton = values.GetValue(x).ToString(); } } // Transform the object. transform.Translate(0, 0, axisInput[1] * 0.05f); transform.Rotate(0, 0, axisInput[2]*3); if (currentButton.CompareTo("Joystick1Button0")) { // Fire something } } // Show some data void OnGUI() { GUI.TextArea(new Rect(0, 0, 250, 40), "Current Button : " + currentButton); GUI.TextArea(new Rect(0, 50, 250, 40), "Axis 1 : " + axisInput[0]); GUI.TextArea(new Rect(0, 100, 250, 40), "Axis 2 : " + axisInput[1]); GUI.TextArea(new Rect(0, 150, 250, 40), "Axis 3 : " + axisInput[2]); GUI.TextArea(new Rect(0, 200, 250, 40), "Axis 4 : " + axisInput[3]); } } First, we declare an array of floats to listen to 4 axes input values from the analog joysticks and a string to identify currently pressed button. To obtain an axis value, I simply call the Input.GetAxisRaw(<Axis name>) and then use it to transform my object. The currently pressed button can be obtained by simply checking for the KeyCode corresponding to the above gamepad mapping. Note: Many gamepads still do not have support for analog joysticks (for example: Wii Remote, Gametel etc.) and it is worthwhile adding core game controls via DPad if relevant. Step 4: Switching between touch control and gamepad Almost all games have touch controls but with a gamepad in reach gamers would love to player your game with it instead. Those with an appetite for long and non-stop hours of gameplay would very much appreciate an on-the-fly switch from touch controls to gamepad. Assuming your Bluetooth is turned on and the gamepad is in the pairing mode, you should be able to detect a gamepad by simply checking Input.GetJoystickNames() in your script’s update() for “BlackBerry Gamepad”. When paired, you could disable the touch gamepad altogether and use real gamepad controls to put your game in a “console mode”. Note: “BlackBerry Gamepad” is the currently returned identifier for all BlackBerry supported gamepads in BlackBerry 10 Add-on Open Beta. I recommend checking again for any changes in the returned gamepad identifier after Full release of the Unity BlackBerry 10 Add-on. There you have it folks. This is all it takes to add gamepad support to your Unity games. Awesome thing is we are committed to adding support all the cool gamepads out there and you may never have to change a line of code to support them! For any questions, comments, concerns or kudos feel free to reach out to me directly at rmadhavan@blackberry.com or connect directly on Twitter at @rmadhavan7.
http://devblog.blackberry.com/2013/06/blackberry10-gamepad-unity3d/
CC-MAIN-2017-47
refinedweb
985
53.51
Alex A(1)+L(12)+E(5)+X(24) = 42 Dirk D(4)+I(9)+R(18)+K(11) = 42 This is an example from my text book. Both names with be found the the same bucket as their Hashcodes are the same, so to find the name you looking for you need to do an equality test. They gave us an example of how to find the name you want using the equality test but i don't understand how it finds the name you want exactly. public class HasHash { public static void main(String[] args) { } public int x; private int xValue; HasHash(int xVal) {x = xVal; } class x { private int xValue; x (int val) { xValue = val; } public int getxValue() { return xValue; } } public boolean equals(Object o) { HasHash h = (HasHash) o; if (h.x == this.x){ return true; }else{ return false; } } public int hashCode() { return (x * 17); } } Any help on this would be great! Thanks
http://www.dreamincode.net/forums/topic/306552-hashcode/page__p__1780922
CC-MAIN-2016-22
refinedweb
156
69.25
sandbox/jmf/tutorial - Introduction (basic) Basilisk - Basilisc C (a bit more) - Outputs functions - Examples using solvers Tutorial Basilisk Introduction (basic) Basilisk This tutorial is a very very basic introduction to Basilisk. We propose a slow journey into Basilisk starting from the study of the diffusion equation, doing a comparison between a classical C code and the equivalent in Basilisk. Example : solving a diffusion equation So we want to solve the diffusion equation \displaystyle {\partial A \over \partial t} = \nabla^2 A using a simple scheme in C (we have supposed that the diffusion coefficient is 1) over a rectangular squared grid of NxN points and physical side equal to 1. In terms of an algorithmic approach we have to - declare and initialize the variables - set the initial and boundary conditions - compute the discrete problem - write the solution In a sequential C code we have then C code #include <stdio.h> #include <math.h> int main () { int i,j,k; int N = 256 ; double L = 1. ; double x,y; double dx = L/N ; double dt = 0.00001; double A[N][N]; double dA[N][N]; // boundary conditions for (i = 0 ; i < N ; i++) A[i][0] = A[i][N-1] = 0. ; for (j = 0 ; j < N ; j++) A[0][j] = A[N-1][j] = 0. ; // initial conditions for (i = 0 ; i < N ; i++) { for (j = 0 ; j < N ; j++) { x = i*dx - 0.5 ; y = j*dx - 0.5 ; A[i][j] = 1./0.1*((fabs(x*x+y*y) < 0.05)) ; } } for (j = 0 ; j < N ; j++) { printf("%f \n",A[(int)N/2][j]); } printf("\n\n"); // time integration for (k = 0 ; k < 10 ; k++) { for (i = 1 ; i < N-1 ; i++) { for (j = 1 ; j < N-1 ; j++) { dA[i][j] = (A[i+1][j] + A[i-1][j] - 2. * A[i][j])/dx/dx + (A[i][j+1] + A[i][j-1] - 2. * A[i][j])/dx/dx ; } } // update for (i = 0 ; i < N ; i++) { for (j = 0 ; j < N ; j++) { A[i][j] = A[i][j] + dt* dA[i][j] ; } } } // print solution (centerline) for (j = 0 ; j < N ; j++) { printf("%f \n",A[(int)N/2][j]); } } Basilisk Code Same code using Basilisk appears more compact in a first sight #include "grid/cartesian.h" #include "run.h" scalar A[]; scalar dA[]; double dt; int main() { L0 = 1.; N = 256; run(); } // boundary conditions A[left] = 0.0 ; A[top] = 0.0 ; A[right] = 0.0 ; A[bottom] = 0.0 ; // initial conditions event init (t = 0) { foreach() A[] = 1./0.1*(fabs(x*x+y*y)<0.05); boundary ({A}); } // integration event integration (i++) { foreach() dA[] = (A[1,0] + A[-1,0] - 2. * A[])/Delta/Delta + (A[0,1] + A[0,-1] - 2. * A[])/Delta/Delta ; foreach() A[] = A[] + dt*dA[]; boundary ({A}); } // print event print (i=10) { for (double y = 0 ; y <= L0; y += 0.01){ printf("%g %g \n", y, interpolate (A, 0, y)); } } First observations Observing both piece of code we recognize some differences, about - Some reserved words (N, L0, … ) - Automatic grid setting - New types (scalar, ) with automatic memory allocation - Function run() - Boundary conditions - Position in the grid : A[1,0], A[-1,0], A[] - New “iterators” like foreach() (replacing “for ( …”) - New method “events” managing code actions Exploring the differences We list now the difference to introduce the Basilisk syntax. Reserved word Some variables (and consequently their names) are global and reserved in Basilisk, some of them are you have to learn them, in particular you could take a look to the solvers to understand what variable is global and reserved. Automatic grid Basilisk computed equation over a cartesian grid, when you declare in the code N = 256 you are setting the grid size. Attention N must be multiple of 2. Another way is using the following function init_grid (128); New types Basilisc adds some new types to the classical C types (double, float, int, …). The first we have seen is scalar scalar A[]; which do an implicit allocation. An explicit allocation and deallocation can be done like this: scalar A = new scalar; ... delete ({A}); In both cases Basilisk allocates the memory for a NxN grid (or N if in 1D or NxNxN if in 3D) for the variable A. In a finite volume approach the elementary cell has several positions to define the variables : the center, the sides and the corners. The scalar A is defined at the center of the cell. There exist two other types of fields defined in Basilisk : vector and tensor. Function run() The run() function implements a generic time loop which executes events until termination. The time t and time step dt can be accessed as global variables. This function appears in all Basilisk programs, and basically - Set the grid if the variable N is done - List all events in order until termination A typical usage is int main() { N = 128 init_grid (N); run(); } } Boundary conditions Basilisk creates stencil values outside the domain (called ghost cell values) which need to be initialised. These values can be set in order to provide the discrete equivalents of different boundary conditions. In our case we use the reserved words left, right, top or bottom to impose such values. Doing A[left] = dirichlet(0.); we impose the value zero to the left column in the matrix A doesn’t matter the gird size. This is equivalent to the C code for (j = 0 ; j < N ; j++) A[0][j] = 0.0 ; We can also use a given function of spatial and temporal variables as A[left] = dirichlet(y * cos(2 Pi t); Ghost values usually depend on the values inside the domain (for example when using symmetry conditions). It is necessary to update them when values inside the domain are modified. This can be done by calling boundary ({A}); which sets all boundary conditions defined in the code. Normally we must to update the boundary conditions after each change in the stencil. Field values over a stencil Stencils are used to access field values and their local neighbours. By default Basilisk guarantees consistent field values in a 3x3 neighbourhood (in 2D). This can be represented like this Stencil 3x3 When you are inside of the loop foreach() you are every time at [0,0] (the center of the stencil) and you can access to all values over the stencil only calling them by their local position. The neighboring values, necessary to define integration schema, are accessed directly using the indexing scheme of the Figure. Note that A[] is a shortcut of A[0,0]. As an example we can compute a centered spatial 1st derivative (A[-1,0]+A[1,0])/Delta as well as the 2nd one (A[-1,0]+A[1,0]-2. * A[])/Delta/Delta Iterators As observed before foreach() iterates over the whole grid, in 2D the double loop over i and j in a C code is for (i = 1 ; i < N-1 ; i++) { for (j = 1 ; j < N-1 ; j++) { … } } becomes now in Basilisk foreach() { … } Note that inside iterators some variables are implicitly defined : double x, y; // coordinates of the center of the stencil double Δ; // size of the stencil cell Others iterators are also defined : foreach_dimension(), foreach_face() and foreach_vertex(). Events Numerical simulations need to perform actions (inputs or outputs for example) at given time intervals. Basilisk C provides events to manage all actions. The overall syntax of events is event name (t = 1; t <= 5; t += 1) { ... } where name is the user-defined name of the event, is this case t = 1 specifies the starting time, t <= 5 is the condition which must be verified for the event to carry on and t += 1 is the iteration operator. We can use both the specified times t or a specified number of time steps i using a C syntax, like event othername (i++) { ... } which means do it at every iteration Basilisc C (a bit more) Now we go inside Basilisk syntax a bit more deep Types and stencils Vector and tensor fields are used in a similar way. Vector fields are a collection of D scalar fields and tensor fields are a collection of D vector fields. - access - Each of the components of the vector or tensor fields are accessed using the x, y or z field of the corresponding structure. vector v[]; tensor t = new tensor; ... foreach() { v.x[] = 1.; t.x.x[] = (v.x[1,0] - v.x[-1,0])/Δ; t.y.x[] = (v.y[1,0] - v.y[-1,0])/Δ; } When we write numerical scheme we need often a special arrangements of discretisation variables relative to grid (this is sometimes called variable staggering). Basilisk provides support for the three most common types of staggering: - centered staggering (default case), - face staggering - vertex staggering The following Figure shows the three staggering Figure : Types of Staggering In this case we have defined Important : some operations performed by Basilisk (such as interpolation and boundary conditions) need to know that these fields are staggered, you need then to know the kind of variable you are using. - list - A new concept in Basilisk is list which can combine elements of different types (e.g. scalar fields and vector fields) is a single row. By the way an automatic list of scalars can be declared and allocated like this: scalar * list = {a,b,c,d}; Lists are used to do a repetitive things, for example to iterate over all the elements of a list use scalar * list = {a,b,c,d}; ... for (scalar s in list) dosomething (s); or setting boundary conditions boundary({a,b,c,d}) which update all defined boundary conditions for scalars a,b,c,d. Boundary conditions The default boundary condition is symmetry for all the fields : scalars, vectors or tensors. There exists somme reserved conditions for the boundary condition as the classical neumann or dirichlet Scalars Boundary conditions can be changed for scalar fields using the following syntax: A[top] = a[]; where a[top] is the ghost value of the scalar field a immediately outside the top (respectively bottom, right, left as stated above) boundary. This corresponds to a Neumann condition (i.e. a condition on the normal derivative of field a) which can be written as A[top] = neumann(0.0); Vectors and tensor); Periodic Periodic boundary conditions can be imposed on the right/left and top/bottom boundaries using for example Where all existing fields and all the fields allocated after the call will be periodic in the right/left direction. Boundary conditions on specific fields can still be set to something else. For example, one could use to impose a pressure gradient onto an otherwise periodic domain. Boundary Internal Domain (bid) In Basilsik, the simulation domain is by default a square box with right, left, top and bottom associated boundary conditions. It is possible to define domains of arbitrary shape, with an arbitrary number of associated boundary conditions, using the mask() function. This function associates a given boundary condition to each cell of the grid. For example, to turn the domain into a rectangle with the variable y between 0 and 0.5 mask (y > 0.5 ? top : none); Function mask - The argument of the function is the value of the boundary condition to assign to each cell. In this example, all grid points of our new domain will be assigned the (pre-defined) top boundary condition. - the boundary condition of all other grid points will be unchanged (the none value is just ignored). More complex boundary conditions can be done using the Boundary Internal Domain (or bid) by defining where circle is a user-defined identifier. For example for a no-slip boundary condition for a vector field u could be defined using u.t[circle] = dirichlet(0); mask (sq(x - 0.5) + sq(y - 0.5) < sq(0.5) ? circle : none); Outputs functions Later when you manage correctly the Basilisk solvers you will need only to know the output functions. You can write yourself your own output function using the standard C as we have done in the 1st Basilsik code event print (i=10) { for (double y = 0 ; y <= L0; y += 0.01){ printf("%g %g \n", y, interpolate (A, 0, y)); } } The function interpolate()is very useful to do slices over data, it does a bilinear interpolation over the grid and the syntax is interpolate(A,x,y). The following output code write every 10 time units the x,y position of the grid together with the x and y components of the velocity field the double blank line printf(“\n\n”); is useful for using the block notion in Gnuplot to graphic vectors, like gnuplot> plot "field" index 1:10 using 1:2:3:4 with vector Basilisk includes several output function, we present some of them output_field(): regular grid in a text format Does interpolation over multiple fields on a regular grid in a text format. - This function interpolates a list of fields on a n x n regular grid. - The resulting data are written in text format in the file pointed to by fp. - The correspondance between column numbers and variables is summarised in the first line of the file. - The data are written row-by-row and each row is separated from the next by a blank line. This format is compatible with the splot command of gnuplot i.e. one could use something like gnuplot> set pm3d map gnuplot> splot 'fields' u 1:2:4 The arguments and their default values are: - list : is a list of fields to output. Default is all. - fp : is the file pointer. Default is stdout. - n : is the number of points along each dimension. Default is N. - linear : use first-order (default) or bilinear interpolation. event output (t = 5) { char name[80]; sprintf (name, "pressure.dat", nf); FILE * fp = fopen (name, "w"); output_field ({p}, fp, linear = true); fclose (fp); output_ppm(): Portable PixMap (PPM) image output Given a field, this function outputs a colormaped representation as a Portable PixMap image. If ImageMagick is installed on the system, this image can optionally be converted to any image format supported by ImageMagick. The arguments and their default values are: - f : is a scalar field (compulsory). - fp : is a file pointer. Default is stdout. - n : is number of pixels. Default is N. - file : sets the name of the file used as output for ImageMagick. For example, one could use C output_ppm (f, file = “f.png”); to get a PNG image. You can use output_ppm()to generate movies, like event movie (t += 0.2; t <= 30) { static FILE * fp = popen ("ppm2mpeg > vort.mpg", "w"); scalar ω[]; vorticity (u, ω); output_ppm (ω, fp, linear = true); } where we supposed that exist a C function vorticiy()which computers the vorticity from the velocity field. output_vtk - Write data in a VTK format VTK format are used in softwares as paraviewor visit The arguments and their default values are: - list : is a list of fields to output. Default is all. - fp : is a file pointer. Default is stdout. - n : is number of pixels. Default is N. - linear : a boolean for linear or bilinear interpolation The syntax is output_vtk (scalar * list, int n, FILE * fp, bool linear) Examples using solvers Basilisk provides an ensemble of solvers (Saint Venant, Navier-Stokes, diffusion, etc) that could be used to solve simple and more complex systems by adding them. Now what’s a solver and how do you use it? - A solver is a C file which contains variable definitions and functions for solving a specific general problem. - When you include a solver file some variables as well as functions are reserved - You need then first to read the solver file to know the reserved variables and functions. - And you need to know the inputs for the solver!! (still) diffusion equation We come back to the diffusion equation \displaystyle \partial_t A = \nabla^2 A which is a particular case of a reaction–diffusion equation \displaystyle \theta\partial_tf = \nabla\cdot(D\nabla f) + \beta f + r where \beta f + r is a reactive term, D is the diffusion coefficient and \theta could be a kind of density term. Including the diffusion solver into the program as #include "diffusion.h" you include an implicit solver fro the reaction–diffusion equation, for a scalar field f, scalar fields r and \beta defining the reactive term, the time step dt and a face vector field containing the diffusion coefficient D. By the way a complete calling of the solver is diffusion (C, dt, D, r, β); which solves the diffusion-reaction problem for a scalar C, with a diffusion coefficient D, r and \beta as defined. In particular - If Dor \theta are omitted they are set to one. - If \beta is omitted it is set to zero. - Both Dand \beta may be constant fields. Then for \displaystyle \partial_t A = \nabla^2 A the syntax for the diffusion() function is diffusion (A, dt); For information using a time-implicit backward Euler discretisation, our equation can be written as \displaystyle \frac{A^{n+1} - A^{n}}{dt} = \nabla^2 A^{n+1} Rearranging the terms we get \displaystyle \nabla^2 A^{n+1} + \frac{1}{dt} A^{n+1} = - \frac{1}{dt}A^{n} This is a Poisson–Helmholtz problem which can be solved with a multigrid solver. We can now re-write the Basilsik program #include "grid/cartesian.h" #include "run.h" #include "diffusion.h" scalar A[]; scalar dA[]; double dt; int main() { L0 = 1.; N = 256; run(); } event init (t = 0) { foreach() A[] = 1./0.1*(fabs(x*x+y*y)<0.05); boundary ({A}); } event integration (i++) { diffusion(A,dt); boundary ({A}); } event print (i=10) { for (double y = 0 ; y <= L0; y += 0.01){ printf("%g %g \n", y, interpolate (A, 0, y)); } } Shallow water equation For conservation of mass and momentum in the shallow-water context we solve \displaystyle \partial_t \mathbf{q} + \nabla \mathbf{f} = 0 for the conserved vector \mathbf{q} and flux function \mathbf{f}(\mathbf{q}), explicitly \displaystyle \mathbf{q} = \left(\begin{array}{c} h\\ hu_x\\ hu_y \end{array}\right), \;\;\;\;\;\; \mathbf{f} (\mathbf{q}) = \left(\begin{array}{cc} hu_x & hu_y\\ hu_x^2 + \frac{1}{2} gh^2 & hu_xu_y\\ hu_xu_y & hu_y^2 + \frac{1}{2} gh^2 \end{array}\right) where \mathbf{u} is the velocity vector, h the water depth and z_b the height of the topography. The primary fields are the water depth h, the bathymetry z_b and the flow speed \mathbf{u}. \eta is the water level i.e. z_b + h. Note that the order of the declarations is important as z_b needs to be refined before h and h before \eta. scalar zb[], h[], eta[]; vector u[]; The only physical parameter is the acceleration due to gravity G. Cells are considered dry when the water depth is less than the dry parameter (very small number). double G = 1.; double dry = 1e-10; ~c #include “saint-venant.h” int LEVEL = 9; We define a new boundary for the cylinder. bid cylinder; int main() { size (5.); G = 9.81; origin (-L0/2., -L0/2.); init_grid (1 << LEVEL); run(); } We impose height and velocity on the left boundary. #define H0 3.505271526 #define U0 6.29033769408481 h[left] = H0; eta[left] = H0; u.n[left] = U0; event init (i = 0) { The geometry is defined by masking and the initial step function is imposed. mask (sq(x - 1.5) + sq(y) < sq(0.5) ? cylinder : none); mask (y > 2.-x*0.2 ? top : y < -2.+x*0.2 ? bottom : none); foreach() { h[] = (x <= -1 ? H0 : 1.); u.x[] = (x <= -1 ? U0 : 0.); } } event logfile (i++) { stats s = statsf (h); fprintf (ferr, "%g %d %g %g %.8f\n", t, i, s.min, s.max, s.sum); } We generate movies of depth and level of refinement. c event movie (t += 0.005; t < 0.4) { static FILE * fp = popen (“ppm2mpeg > depth.mpg”, “w”); output_ppm (h, fp, min = 0.1, max = 6, map = cool_warm, n = 400, linear = true); } event adapt (i++) { astats s = adapt_wavelet ({h}, (double[]){1e-2}, LEVEL); fprintf (ferr, “# refined %d cells, coarsened %d cells”, s.nf, s.nc); } The movie of the depth of water ## Navier-Stokes equations We simulate the lid-driven cavity problem using the centered solver We wish to approximate numerically the incompressible, variable-density Navier-Stokes equations \displaystyle \partial_t\mathbf{u}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u}) = \frac{1}{\rho}\left[-\nabla p + \nabla\cdot(\mu\nabla\mathbf{u})\right] + \mathbf{a} \displaystyle \nabla\cdot\mathbf{u} = 0 When we analyze the solver (file centered.h) we learn that 1. reserved words scalar p[]; vector u[]; vector g[]; are reserved variables (all centered, the pressure p and the vectors \mathbf{u} an \mathbf{g}) and also scalar pf[]; face vector uf[]; the auxiliary face velocity field \mathbf{u}_f and the associated centered pressure field p_f. 2. parameters a. In the case of variable density, the user will need to define both the face and centered specific volume fields (\alpha and \alpha_c respectively) i.e. 1/\rho. If not specified by the user, these fields are set to one i.e. the density is unity. b. Viscosity is set by defining the face dynamic viscosity \mu; default is zero. c. The face field \mathbf{a} defines the acceleration term; default is zero. d. If stokes (a boolean variable) is set to true, the velocity advection term \nabla\cdot(\mathbf{u}\otimes\mathbf{u}) is omitted. The code is C #include “grid/multigrid.h” #include “navier-stokes/centered.h” int main() { // coordinates of lower-left corner origin (-0.5, -0.5); // number of grid points init_grid (64); // viscosity const face vector muc[] = {1e-3,1e-3}; μ = muc; // maximum timestep DT = 0.1; // CFL number CFL = 0.8; run(); } // boundary condition u.t[top] = dirichlet(1); u.t[bottom] = dirichlet(0); u.t[left] = dirichlet(0); u.t[right] = dirichlet(0); event outputfile (i += 100) { output_matrix (u.x, stdout, N, linear = true); } event movie (i += 4; t <= 15.) { static FILE * fp = popen (“ppm2mpeg > norm.mpg”, “w”); scalar norme[]; foreach() norme[] = norm(u); boundary ({norme}); output_ppm (norme, fp, linear = true); } ~ We generate a mpeg file of the norm of the velocity field ## Navier Stokes : Flow over a cylinder An example of 2D viscous flow around a simple solid boundary. Fluid is injected to the left of a channel bounded by solid walls with a slip boundary condition. The Reynolds number is set to 160. C #include “navier-stokes/centered.h” The domain is eight units long, centered vertically..); u.n[right] = neumann(0.); p[right] = dirichlet(0.); pf[right] = dirichlet(0.); We add a new boundary condition for the cylinder. The tangential velocity on the cylinder is set to zero. To make a long channel, we set the top boundary for y > 0.5 and the bottom boundary for y < -0.5. The cylinder has a radius of 0.0625. We set a constant viscosity corresponding to a Reynolds number of 160, based on the cylinder diameter (0.125) and the inflow velocity (1). We also set the initial velocity field and tracer concentration. We check the number of iterations of the Poisson and viscous problems. ~c event logfile (i++) fprintf (stderr, “%d %g %d %d”, i, t, mgp.i, mgu.i); event movies (i += 4; t <= 15.) { static FILE * fp = popen (“ppm2mpeg > vort.mpg”, “w”); scalar vorticity[]; foreach() vorticity[] = (u.x[0,1] - u.x[0,-1] - u.y[1,0] + u.y[-1,0])/(2.*Delta); boundary ({vorticity}); output_ppm (vorticity, fp, box = {{-0.5,-0.5},{7.5,0.5}}, min = -10, max = 10, linear = true); } ~ We generate a mpeg file of the vorticity
http://basilisk.fr/sandbox/jmf/tutorial
CC-MAIN-2020-24
refinedweb
3,940
63.49
Opened 4 years ago Closed 2 years ago Last modified 21 months ago #21414 closed Cleanup/optimization (fixed) Remove django.db.models.related.RelatedObject Description Currently related fields have two similar attributes, self.rel and self.related. The first is ForeignObjectRel subclass, the second is RelatedObject. Both of these do almost the same thing, and it doesn't seem necessary to have both rel and related. It is confusing to try to remember which one does what. In the proposed patch at the RelatedObject usage is removed. The idea is to make ForeignObjectRel to work exactly like RelatedObject worked and provide the same instance from field.rel and field.related. I've opted for deprecation path where RelatedObject can still be used, and so can field.related, too. The field.related attribute is actually ForeignObjectRel, but by usage of __instancecheck__ isinstance(field.related, RelatedObject) answers yes. This should make this change easy for 3rd party apps. Change History (12) comment:1 Changed 4 years ago by comment:2 Changed 4 years ago by I am no sure about docs. This is completely private API... comment:3 Changed 4 years ago by By docs I meant docs/internals/deprecation.txt, dunno if this document is meant for public consumption or for internal housekeeping. comment:4 Changed 4 years ago by It's only for public APIs. If we know that a private API is used in the wild, we might mention it there, but that's an exception. comment:5 Changed 4 years ago by All good, I wrongly assumed it was a TODO list for when the new dev branch is created. So yes, it's not needed here since this is very much private APIs. comment:6 Changed 3 years ago by Anssi, do you think there is much work to do to incorporate this in 1.8? Might it simply the _meta refactor or should we try to merge that branch first? comment:7 Changed 3 years ago by I'm trying to update this to apply cleanly. comment:8 Changed 3 years ago by comment:9 Changed 2 years ago by comment:10 Changed 2 years ago by Found two apps failing as they are using RelatedObject, django-taggit and django-import-export. To fix, would I be right in thinking it is a case of replacing from django.db.models.fields.related import ForeignObjectRel with from django.db.models.related import RelatedObject and all instances of RelatedObject with ForeignObjectRel ? comment:11 Changed 2 years ago by Yes Looking pretty good. I left a couple of comments on commit 8d63a8e (linking to it since these are lost on rebase). Dunno if it's an omission or still on the todo list, but it's missing the deprecation docs.
https://code.djangoproject.com/ticket/21414
CC-MAIN-2017-26
refinedweb
461
66.33
LWP_NEWSTK(3L) LWP_NEWSTK(3L) NAME lwp_checkstkset, lwp_stkcswset, CHECK, lwp_setstkcache, lwp_newstk, lwp_datastk, STKTOP - LWP stack management SYNOPSIS #include <<lwp/lwp.h>> #include <<lwp/check.h>> #include <<lwp/lwpmachdep.h>> #include <<lwp/stackdep.h>> CHECK(location, result) int lwp_checkstkset(tid, limit) thread_t tid; caddr_t limit; int lwp_stkcswset(tid, limit) thread_t tid; caddr_t limit; int lwp_setstkcache(minstksz, numstks) int minstksz; int numstks; stkalign_t *lwp_newstk() stkalign_t *lwp_datastk(data, size, addr) caddr_t data; int size; caddr_t *addr; STKTOP(s) DESCRIPTION Stacks are problematical with lightweight processes. What is desired is that stacks for each thread are red-zone protected so that one thread's stack does not unexpectedly grow into the stack of another. In addition, stacks should be of infinite length, grown as needed. The process stack is a maximum-sized segment (see getrlimit(2).) This stack is redzone protected, and you can even try to extend it beyond its initial maximum size in some cases. With SunOS 4.x, it is possible to efficiently allocate large stacks that have red zone protection, and the LWP library provides some support for this. For those systems that do not have flexible memory management, the LWP library provides assis- tance in dealing with the problems of maintaining multiple stacks. The stack used by main() is the same stack that the system allocates for a process on fork(2V). For allocating other thread stacks, the client is free to use any statically or dynamically allocated memory (using memory from main()'s stack is subject to the stack resource limit for any process created by fork()). In addition, the LASTRITES agent message is available to free allocated resources when a thread dies. The size of any stack should be at least MINSTACKSZ * sizeof (stkalign_t), because the LWP library will use the client stack to exe- cute primitives. For very fast dynamically allocated stacks, a stack cacheing mechanism is available. lwp_setstkcache() allocates a cache of stacks. Each time the cache is empty, it is filled with numstks new stacks, each containing at least minstksz bytes. minstksz will auto- matically be augmented to take into account the stack needs of the LWP library. lwp_newstk() returns a cached stack that is suitable for use in an lwp_create() call. lwp_setstkcache() must be called (once) prior to any use of lwp_newstk. If running under SunOS 4.x, the stacks allo- cated by lwp_newstk() will be red-zone protected (an attempt to refer- ence below the stack bottom will result in a SIGSEGV event). Threads created with stacks from lwp_newstk() should not use the NOLAS- TRITES flag. If they do, cached stacks will not be returned to the cache when a thread dies. lwp_datastk() also returns a red-zone protected stack like lwp_newstk() does. It copies any amount of data (subject to the size limitations imposed by lwp_setstkcache) onto the stack above the stack top that it returns. data points to information of size bytes to be copied. The exact location where the data is stored is returned in the reference parameter addr. Because lwp_create() only passes simple types to the newly-created thread, lwp_datastk() is useful to pass a more complex argument: Call lwp_datastk() to get an initialized stack, and pass the address of the data structure (addr) as an argument to the new thread. A reaper thread running at the maximum pod priority is created by lwp_setstkcache. It's action may be delayed by other threads running at that priority, so it is suggested that the maximum pod priority not be used for client-created threads when lwp_newstk() is being used. Altering the maximum pod priority with pod_setmaxpri() will have the side effect of increasing the reaper thread priority as well. The stack address passed to lwp_create() represents the top of the stack: the LWP library will not use any addresses at or above it. Thus, it is safe to store information above the stack top if there is room there. For stacks that are not protected with hardware redzones, some protec- tion is still possible. For any thread tid with stack boundary limit made part of a special context with lwp_checkstkset(), the CHECK macro may be used. This macro, if used at the beginning of each procedure (and before local storage is initialized (it is all right to declare locals though)), will check that the stack limit has not been violated. If it has, the non-local location will be set to result and the proce- dure will return. CHECK is not perfect, as it is possible to call a procedure with many arguments after CHECK validates the stack, only to have these arguments clobber the stack before the new procedure is entered. lwp_stkcswset() checks at context-switch time the stack belonging to thread tid for passing stack boundary limit. In addition, a checksum at the bottom of the stack is validated to ensure that the stack did not temporarily grow beyond its limit. This is automated and more efficient than using CHECK, but by the time a context switch occurs, it's too late to do much but abort(3) if the stack was clobbered. To portably use statically allocated stacks, the macros in <<lwp/stack- dep.h>> should be used. Declare a stack s to be an array of stkalign_t, and pass the stack to lwp_create() as STKTOP(s). RETURN VALUES lwp_checkstkset() and lwp_stkcswset() return 0. lwp_setstkcache() returns the actual size of the stacks allocated in the cache. lwp_newstk() and lwp_datastk() return a valid new stack address on suc- cess. On failure, they return 0. SEE ALSO getrlimit(2), abort(3) WARNINGS lwp_datastk() should not be directly used in a lwp_create() call since C does not guarantee the order in which arguments to a function are evaluated. BUGS C should provide support for heap-allocated stacks at procedure entry time. The hardware should be segment-based to eliminate the problem altogether. 21 January 1990 LWP_NEWSTK(3L)
http://modman.unixdev.net/?sektion=3&page=CHECK&manpath=SunOS-4.1.3
CC-MAIN-2017-17
refinedweb
977
54.22
A Guide to Testing in Django. I'll assume you've never done any testing before but that you're comfortable with Python & Django. We'll be walking through adding tests to the perennial tutorial Django app. To make it easier to follow along, I've uploaded the code to Github with tags for the major steps & to show how the code changes over time. Before we dive into code, let's introduce some basic concepts & talk about how to think/go about testing. Why Should You Test Your Code? "Code without tests is broken by design." - Jacob. When you first get started, writing tests is a scary task that sounds like extra work. But simple tests are easy to write and having some tests is better than no tests at all. And as you add new tests, your suite (and your confidence) grows with it. This is not to say that tests solve everything. There will always be bugs in software. Maybe the tests miss a codepath or a user will use something in an unexpected way. But tests give you better confidence & a safety net. Types Of Testing There are many different types of testing. The prominent ones this series will cover are unit tests and integration tests. Unit tests cover very small, highly-specific areas of code. There's usually relatively few interactions with other areas of the software. This style of testing is very useful. Tooling Within the Python world, there are a wide variety of tools to test your code. Some popular options include: unittest/ unittest2 doctest nose This guide won't dive into doctests or nose tests, sticking to unittest. This is because tests written in unittest run the fastest when testing Django apps (thanks to some fun transactional bits). I'd encourage you to go investigate the other options, if only to expand your knowledge of what's available. You should not confuse unittest (the library) with unit testing (the approach of testing small chunks of contiguous code). You'll often use the unittest library for both unit & **integration`` tests. What To Test? Another common setback for developers/designers new to testing is the question of "what should (or shouldn't) I test?" While there are no hard & fast rules here that neatly apply everywhere, there are some general guidelines I can offer on making the decision: - If the code in question is a built-in Python function/library, don't test it. Examples like the datetimelibrary. - If the code in question is built into Django, don't test it. Examples like the fields on a Modelor testing how the built-in template.Noderenders included tags. - If your model has custom methods, you should test that, usually with unit tests. - Same goes for custom views, forms, template tags, context processors, middleware, management commands, etc. If you implemented the business logic, you should test your aspects of the code. Another upfront question is "how far down do you go?" Again, there's no right answer here, save for "where am I comfortable?" If you start mumbling "yo dawg..." under your breath or humming the tune of the INCEPTION theme, you know you've probably gone too far. :D When Should You Test? Another point of decision is deciding whether to do test-first (a.k.a. Test Driven Development) or test-after. Test-first is where you write the necessary tests to demonstrate. Something that is always appropriate, regardless of general style, is when you get a bug report. ALWAYS create a test case first & run your tests. Make sure it demonstrates the failure, THEN go fix the bug. If your fix is correct, that new test should pass! It's an excellent way to sanity check yourself & is a great way to get started with testing to boot. Let's Get To It! Now that we've got a solid foundation on the why, what & when of testing, we're going to start diving into code. Most people's first experiences with Django involve the classic "polls" app (introduced in Django's tutorial docs). Since the tutorial never adds or mentions tests for that application, we'll use it as a starting point. You should clone the repository ( git clone) to follow along. Whenever a decent chunk of code or new concept is introduced, I will mention the tag you should check out ( git co <tagname>). Tag: 01-initial Our starting point is the completed "polls" app (with a few bugs intentionally added for demonstration). The first thing we'll do is run the test suite, even though we haven't added any tests yet. Run the following command: python manage.py test You'll get a large number of .s then the following output: ---------------------------------------------------------------------- Ran 307 tests in 5.763s OK What? How are there so many tests and all of them already passing? The answer is that the various Django contrib apps that are included in the INSTALLED_APPS all have tests that run as part of the suite. Since we trust that Django is working right, we'll run tests only for our application ( polls). Run the following command: python manage.py test polls This limits the tests run only to those within the polls app. You should get something more reasonable, like: . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK That's better, though we still have two unaccounted-for tests. When you run python manage.py startapp <appname>, this automatically creates a tests.py file within your app. This file has two basic, kinda-useless tests included. Something like: """) Since 1 + 1 = 2 doesn't really test anything meaningful, we're going to get rid of these & replace them with something more useful. We'll start with the easiest area of Django to add new tests: views. Adding Tests To Views Step one is to nuke the existing tests. Select everything in tests.py & delete it all. Much better. Tag: 02-first-test Let's replace it with the simplest meaningful test we can do: from django.test import TestCase class PollsViewsTestCase(TestCase): def test_index(self): resp = self.client.get('/polls/') self.assertEqual(resp.status_code, 200) This code sets up a new test case ( PollsViewsTestCase), which you can think of as a collection of related tests. Any method name starting with test will be run automatically & its output will be included in the testing output when we run the command. The test itself is simple. We ask the Client ( self.client) built-in to Django's TestCase to fetch the URL /polls/ using GET. We store that response (an HttpResponse in resp, then perform tests on it. In this case, we do a simple check on what status code did we get back. Since successful HTTP GET requests result in a 200, we do an assertEqual to make sure resp.status_code = 200. When we run our tests, we get: . ---------------------------------------------------------------------- Ran 1 test in 0.114s OK Much better. And cheers, because we know that our index view works! Or does it? We know that the user will get a successful response, but we don't know what content the user will get. Fortunately, we got back that HttpResponse we stashed in resp. As you (hopefully) know, HttpResponse objects include the content the user should get back. We could test that content against a known string. However, that'd be comparing the full rendered content of the page, which could have other elements involved (template tags, design changes, etc.) that could make our tests fail when there's nothing wrong. Fortunately, there's a better way. The HttpResponse you get back has a number of additional properties on it that will make it easier for us to test. In particular, the useful ones are: resp.status_code resp.context resp.templates resp[<header name>] Since the context should be very consistent between runs, let's use it to make sure things are on the up & up: from django.test import TestCase class PollsViewsTestCase(TestCase): def test_index(self): resp = self.client.get('/polls/') self.assertEqual(resp.status_code, 200) self.assertTrue('latest_poll_list' in resp.context) self.assertEqual([poll.pk for poll in resp.context['latest_poll_list']], [1]) We add a check to make sure the latest_poll_list key is seen in the context. Then we make sure that the only Poll we have in our database is in that list of latest polls. You might be asking "why use a list comprehension"? The answer is that, without using a list comprehension, what you'll actually get back out of the context is that list of the most recent five Polls. Since you're evaluating the list when you run assertEqual, the Poll objects will each return their __unicode__ method, which could change over time. By checking the pk, we make sure that out tests don't randomly fail in the future. Let's run our tests: python manage.py test polls Uh-oh. This doesn't look good: F ====================================================================== FAIL: test_index (polls.tests.PollsViewsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/daniel/Desktop/guide_to_testing/polls/tests.py", line 9, in test_index self.assertEqual([poll.pk for poll in resp.context['latest_poll_list']], [1]) AssertionError: Lists differ: [] != [1] Second list contains 1 additional elements. First extra element 0: 1 - [] + [1] ? + ---------------------------------------------------------------------- Ran 1 test in 0.100s FAILED (failures=1) What happened? Our . was replaced with an F & we got a traceback saying AssertionError: Lists differ: [] != [1]. For some reason, the list of primary keys we were expecting wasn't there! The reason is that tests don't run against the database you have in your settings.py. This could lead to destroying real data. Instead, Django runs your tests against a test-only database. When Django creates that database, it's completely empty, hence, the Poll that's present in our "live" database isn't there. To fix this, we can manually recreate the data as part of the test. import datetime from django.test import TestCase from polls.models import Poll, Choice class PollsViewsTestCase(TestCase): def test_index(self): poll_1 = Poll.objects.create( question='Are you learning about testing in Django?', pub_date=datetime.datetime(2011, 04, 10, 0, 37) ) choice_1 = Choice.objects.create( poll=poll_1, choice='Yes', votes=0 ) choice_2 = Choice.objects.create( poll=poll_1, choice='No', votes=0 ) resp = self.client.get('/polls/') self.assertEqual(resp.status_code, 200) self.assertTrue('latest_poll_list' in resp.context) self.assertEqual([poll.pk for poll in resp.context['latest_poll_list']], [1]) Now, if we run our tests, we get: ---------------------------------------------------------------------- Ran 1 test in 0.020s OK Much better. However, that was a lot of work to create that data & having to do that a lot could get verbose, when what you want is for a test to be a concise as possible. There's a general solution to this problem, in the form of fixtures. Adding fixtures Fixtures are serialized data that are easy to load. And one of the best uses if for test data within test cases. There are several options for creating them: python manage.py dumpdata - By hand - Applications like testmaker For now, because we're lazy & want to use what's included with Django, we'll take our "live" database & dump that data. Run the following command: mkdir polls/fixtures python manage.py dumpdata polls --indent=4 > polls/fixtures/polls_views_testdata.json This gives us a new directory ( fixtures) & drops some nicely formatted JSON data in polls_views_testdata.json. Let's use this new fixture to run our tests. Warning - Fixture names are "project-wide", so make sure your fixtures have a unique name, otherwise you may get unexpected data. Tag: 03-better-index First, we need to modify our code to use this new fixture. We'll remove the model creation bits we added & use the TestCase.fixtures attribute to tell Django's testing facilities what data we want to use when running the tests.]) When we run our tests, we see that they still pass correctly: . ---------------------------------------------------------------------- Ran 1 test in 0.115s OK This is great! Let's add a few more tests to make sure the question & choices are right as well.]) poll_1 = resp.context['latest_poll_list'][0] self.assertEqual(poll_1.question, 'Are you learning about testing in Django?') self.assertEqual(poll_1.choice_set.count(), 2) choices = poll_1.choice_set.all() self.assertEqual(choices[0].choice, 'Yes') self.assertEqual(choices[0].votes, 1) self.assertEqual(choices[1].choice, 'No') self.assertEqual(choices[1].votes, 0) We do some tests to make sure all the data we're expecting is there. Again, run the tests & receive an all-clear: . ---------------------------------------------------------------------- Ran 1 test in 0.065s OK Another point of note is that the individual test methods are what count toward the overall test results, not each assertion. So while we've fleshed out our test_index method, it's still only counted as one test. Tag: 04-second-test To introduce, more coverage, let's test the detail view. Add the following method to PollsViewsTestCase: def test_detail(self): resp = self.client.get('/polls/1/')/') self.assertEqual(resp.status_code, 404) It uses all the same things we've already introduced, which is a very common pattern for views that simply display data. As a new twist, we're also checking to make sure that if a non-existent Poll primary key is requested, that we're correctly serving an Http404. Running the tests gives us: E. ====================================================================== ERROR: test_detail (polls.tests.PollsViewsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): # ... lots here... TemplateDoesNotExist: 404.html ---------------------------------------------------------------------- Ran 2 tests in 0.418s FAILED (errors=1) Oops! We made the common mistake of forgetting to add the 404.html and 500.html templates (since the test suite usually runs with settings.DEBUG = False). We'll create those files (empty is fine for now) & we're back to passing tests: .. ---------------------------------------------------------------------- Ran 2 tests in 0.097s OK Since the results view is largely the same (at least right now), tests for that a largely a duplicate of the test_detail ones: def test_results(self): resp = self.client.get('/polls/1/results/')/results/') self.assertEqual(resp.status_code, 404) Running the tests gives us: ... ---------------------------------------------------------------------- Ran 3 tests in 0.079s OK This is actually very important, even though those tests are trivial, because those views could change in the future & having test coverage helps ensure that things aren't broken. As we refactor parts of the app in the next installment, you'll see how this can be important. What's Next? We've covered getting up & started with testing, going from no tests to ensuring that the display aspects of our application work properly. Things we'll cover next time: - Testing POST requests - Testing forms - Testing models Updates - Fixed Github link - Thanks @prometheus. - Removed doctests from the autogenerated test code, as that's no longer there in Django 1.3. Thanks @alex_gaynor. - Fixed references from test_to just test. Thanks @notanumber. - Fixed broken URLs in the final tests. Thanks to Laura Creighton.
http://toastdriven.com/blog/2011/apr/10/guide-to-testing-in-django/
CC-MAIN-2014-35
refinedweb
2,479
68.36
Posted my first official article to the CodeProject. We'll see how it goes. You can view it here: Model-View-Controller using ASP.NET WebForms View Engine. Here is a local post for your viewing pleasure, but if you like it, I would ask that you head over to the CodeProject and let them know with your vote! Thanks! Introduction With the release of the ASP.NET MVC (Model-View-Controller) view engine, many developers are rejoicing that the old pattern has finally made it to the .NET world. In reality, there have been MVC view engines in existence for ASP.NET for quite some time. Contrary to popular belief, the common WebForms view engine, which is the default ASP.NET rendering engine so many developers have been familiar with, readily supports the model-view-controller paradigm. The purpose of this article is to demonstrate a sample framework that uses the MVC pattern successfully with the traditional ASP.NET WebForms engine. Background Model-View-Controller has been around for quite some time and is attributed to the SmallTalk language/platform. You'll find like many design patterns that actual explanations and implementation of the MVC pattern may vary. It is not my intent to provide a "pure" solution adhering to some golden standard, but to present a variation of MVC that works well with ASP.NET WebForms and explain what and why it is being used. There are three intrinsic parts to MVC, and these are described as follows: The Model The model is often referred to as the "domain entity" and is essentially data. It is a container for information that is moving through your system. Typically, a model should contain attributes and properties but little to no function nor operation, short of constructors and possibly validation. An example of a model would be contact information or a set of credentials. The View The view takes the data from the model and presents it to the end user as meaningful information. A view might render a list as a bulleted list, a drop down, a multi-select box or event a paged grid. There might be a view for a console that emits text and another view for a rich web interface that contains 3D graphics controls. I consider the following rules important to follow for a view. - A view only understands the model and its own "rendering space" (web, console, WinForm, etc). - The view is primarily concerned with mapping the model to the render space, and possibly receiving changes from that render space (i.e. user input). - Views should only manipulate data to the extent that it is necessary to pull from the model or place back within the model. Business logic belongs elsewhere (we'll discuss this in a minute with the controller) - Views do not understand how to write data to databases (or retrieve it) and are unaware of complex logic such as security concerns or calculations. Instead, views expose properties that allow their conditions to be set by another mechanism that understands these. - Views receive information via exposed properties and methods. - Views are like most people in the movie, "The Matrix" ... they are completely unaware that they are being controlled and have absolutely ZERO affinity to a controller. - Because views are ignorant of being controlled, if a view requires additional information or is responding to a command from the UI such as "delete" or "update," the view is only able to marshall these values into a container and raise an event. The view raises the event and forgets about it. The separation of the view from the business logic is important. In a rapid prototyping environment when we need UI for proof of concept, we don't have time to wire in the backend services. A view that is ignorant of control can be easily set up with a "mock" controller that simply pushes static data for the prototype. On the flipside, in a production environment there may be a controller that manages lists of entities but must present these differently based on the user. In this case, multiple views can be invoked with the same controller. On a large development team, a view that is cleanly separated from its control can be developed independently of the controller itself, allowing for example the mobile device team to build the UI on the device while the Silverlight experts build their own rich control. The Controller Finally, we have the controller. The controller is responsible for handling all of the business logic, fetching data, processing changes, responding to events, and making it all work. The controller is aware of both the view and the model. However, I believe a controller should also follow a set of its own rules to fit solidly within an enterprise software environment. - A controller should never understand which concrete instance of a view it is working with but only interact with a view interface - The controller should never, ever concern itself with view logic. This means controllers do not emit HTML fragments, do not understand drop downs and don't have to worry about JavaScript. The controller simply deals with lists of models and events that contain models and nothing else. A true test of a controller is that it functions perfectly whether the view is console output, a WinForm, or a web interface. - Controllers should never try to interact with another controller's views. Instead, controllers can raise their own events for consumption by other controllers. - Controllers communicate to views by setting attributes or calling methods on the view, and by handling events raised by the view. - Controllers should never talk directly to another controller, only to a controller interface (therefore a "super" controller might have sub-controllers, but again, it is interacting with interfaces, not concrete instances). By following these rules, you can have a flexible yet powerful controller architecture. One controller can generate the static models needed for a prototype. Another controller might set up some mock objects to run unit tests, while the production controller interacts with services to pull lists of models and manipulate the models by responding to events. Traditional, out-of-the-box ASP.NET seems to violate these principles. While the code-behind separates the code from the display elements, there is a strong affinity between the two. In fact, most pages must embed a specific, static reference to user controls in order for them to function. This creates a dependency that doesn't allow the flexibility we discussed. Furthermore, doing things like taking a text field value and moving it into the model and sending to the service violates the rules of the controller not understanding the nuances of the view or the view not knowing its controller. Fortunately, through the use of a framework to define the views as controls and controllers, along with dynamic user controls, we can build a platform that adheres to the tenants of MVC. Dynamic user controls are very important so that the controller can deal with the view interface and a factory or other mechanism can invoke the concrete instance of the view being used at runtime. The included application presents this framework and provides a proof-of-concept for showing a controller and two views that are controller-ignorant yet can still interact through the use of events. Using the Code The included code is a proof-of-concept implementation of the MVC pattern using WebForms. The source application includes a full-blown framework to support the application. The "model" exists in our domain projects with the purpose to describe the business data. In our example, this is a set of reserved keywords for the C# language. The KeywordModel contains a unique identifier for the keyword and the value which is the keyword itself. The code is straightforward and should compile "out of the box". You can right-click on the Default.aspx page to view. You'll see a very plain page with two columns: one with a "currently selected" keyword and a dropdown, another with a list of all of the keywords, and finally a submit button. When you select a keyword, it should dynamically show next to the "selected" label, and the corresponding keyword in the right column will become bold. When you click submit, the last selected keyword will render red in the right column to demonstrate the server-side actions that "remember" the recent selection (there is no query string nor forms parsing in the application). I created an Interface.Data to describe interactions with persisted data. There is a load and a list function. To save time, instead of wiring into a full-blown database, I simply embedded an XML resource and used a concrete class called KeywordDataAccess to pull values from the XML. In my applications, I try to keep the data layer as focused as possible (reads, writes, updates). Any business logic such as calculations and sorts are in the service layer that sits on top of the data layer. You'll find that everything above the data layer references an IDataAccess<KeywordModel>, not a concrete instance. The service layer simply invokes the data layer to grab the data. Note that we use the factory pattern to grab our concrete instance. This will allow us to stub out "mock" data layers for unit tests or even swap the XML data layer with a SQL, Access, or other data layer without having to change anything else but the concrete class for the data layer and the instance that the factory returns. For a larger application, of course, a Dependency Injection framework would help wire these in — hence the constructor that takes the data access reference for constructor injection. In this case, the only "extra" handling of the domain model that the service is required to do is to perform a sort of the data prior to presenting it (see the sort in the List method). Finally we get to the presentation layer, serviced as a web application. We've already discussed the model that contains the business data. Now we have two more pieces: the view, and the controller. Let's talk about the controller first. For the application to truly scale, the controller should function independently of the specific view it needs to control. A controller is simply of type T (where T is a model) and can operate on any view of type T. The following diagram illustrates the controller interface and implementation. Note that the controller itself simply has a constructor that takes in the view it will be managing. Everything else is handled by the base class, DynamicController<T> that implements IController<T>. There are a few things we have wired into our base controller: - Context — a guid is generated to manage state through callbacks and postbacks. - The controller knows the type (T) it is managing, so our model allows us to simply go to the ServiceFactoryand get the default service for T. Note that a SetServicemethod is also exposed for injection of the service for things like unit tests using mocked services. - The controller keeps track of its view ( _control) and exposes events for other controllers. Remember our rule: controllers talk to controllers and their own control, controls only raise events and are ignorant of being controlled. - There is a list of models along with a "current" model - The controller responds to a "select" event. If we were performing CRUD or deletes, etc, we would process this in the controller and then pass it through. In our sample, we just pass it through so any higher-level controllers can respond to the event as well. - Finally, we have an Initializemethod that is meant to be invoked the first time the controller is created. Subsequent postbacks and callbacks do not call this method, and internal lists are managed via state. More on that when we discuss the controls. The interface is simple: using System; using Interface.Domain; namespace Interface.Controller { /// <summary> /// Interface for a controller /// </summary> public interface IController<T> where T : IKey, new() { /// <summary> /// Called the first time to initialize the controller /// </summary> void Initialize(); /// <summary> /// Raised when a model is selected /// </summary> event EventHandler<EventArgs> OnSelect; /// <summary> /// Current active or selected model /// </summary> T CurrentModel { get; set; } /// <summary> /// Context (state management) /// </summary> Guid CurrentContext { get; set; } } } ...and the base controller class: /// <summary> /// Controller base class /// </summary> /// <typeparam name="T">The type to control</typeparam> public class DynamicController<T> : IController<T> where T : IKey, new() { /// <summary> /// Manages the state of the controller and control /// </summary> public Guid CurrentContext { get { return _control.CurrentContext; } set { _control.CurrentContext = value; } } /// <summary> /// Inject the control and grab the default service /// </summary> /// <param name="control"></param> public DynamicController(IControl<T> control) { _control = control; _service = ServiceFactory.GetDefaultService<T>(); _control.NeedData += _ControlNeedData; _control.OnSelect += _ControlOnSelect; } /// <summary> /// Fired when the child control raises the select event /// </summary> /// <param name="sender">Sender</param> /// <param name="e">Args</param> private void _ControlOnSelect(object sender, EventArgs e) { if (OnSelect != null) { OnSelect(this, e); } } /// <summary> /// Needs the list again /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void _ControlNeedData(object sender, EventArgs e) { _control.ControlList = _service.List(); } /// <summary> /// Fired when a model is selected from the list /// </summary> public event EventHandler<EventArgs> OnSelect; /// <summary> /// The control this controller will work with /// </summary> protected IControl<T> _control; /// <summary> /// Service related to the entity /// </summary> protected IService<T> _service; /// <summary> /// Current active or selected model /// </summary> public virtual T CurrentModel { get { return _control.CurrentModel; } set { _control.CurrentModel = value; } } /// <summary> /// Allows injection of the service /// </summary> /// <param name="service">The service</param> public virtual void SetService(IService<T> service) { _service = service; } /// <summary> /// Called the first time to initialize the controller /// </summary> public virtual void Initialize() { _control.Initialize(_service.List()); } } Now that we have a decent understanding of the controller, let's move on to the controls. The project defines two controls for the same model: KeywordDropdownView and KeywordListView. Both of these views implement IControl<T>, the view for the KeywordModel entity. The controls are a bit more in depth. You'll note the controls contain an IControlCache which allows the control to persist internal state. For example, instead of going out to the service (and hence, data) layers to request a list each time, the control can store these lists in the cache. When the cache expires, the control simply raises the NeedData event and the controller will supply a new list. Two examples of the IControlCache implementation are included to give you some ideas of how to code this. One, SimpleCache, simply shoves the objects in the Session object. The other takes advantage of the ASP.Net Cache object and stores the items in cache for 5 minutes. Each control is wired to receive a context: the unique GUID that identifies the instance. This solves a common problem. Many developers are happy to store objects in session with generic keys, like this: ... Session["ControlList"] = ControlList; ... The problem is that if you open multiple tabs in the same browser, each tab now fights for the same session variable and you can collide across pages. Generating a GUID in the page and using the guid for caching and session storage ensures that each instance in the browser, even when they share the same session, is managed appropriately. The view contract: /// <summary> /// Interface for a generic control /// </summary> /// <typeparam name="T"></typeparam> public interface IControl<T> where T : IKey, new() { /// <summary> /// Initializes the control with the list of items to manage /// </summary> /// <param name="list"></param> void Initialize(List<T> list); /// <summary> /// Raised when something is selected from the list /// </summary> event EventHandler<EventArgs> OnSelect; /// <summary> /// Raised when the control needs data again /// </summary> event EventHandler<EventArgs> NeedData; /// <summary> /// The list for the control /// </summary> List<T> ControlList { get; set; } /// <summary> /// The current active model /// </summary> T CurrentModel { get; set; } /// <summary> /// Caching mechanism for the control to save/load state /// </summary> IControlCache ControlCache { get; set; } /// <summary> /// Context (state management) for the control /// </summary> Guid CurrentContext { get; set; } } And the view base class: /// <summary> /// A dynamic control /// </summary> /// <typeparam name="T"></typeparam> public abstract class DynamicControl<T> : UserControl, IControl<T> where T : IKey, new() { /// <summary> /// Context for this control /// </summary> private Guid _context = Guid.Empty; /// <summary> /// Cache for the control /// </summary> public IControlCache ControlCache { get; set; } /// <summary> /// Unique context for storing state /// </summary> public Guid CurrentContext { get { if (_context.Equals(Guid.Empty)) { _context = Guid.NewGuid(); } return _context; } set { _context = value; LoadState(); } } /// <summary> /// List that the control works with /// </summary> public virtual List<T> ControlList { get; set; } /// <summary> /// The current selected model /// </summary> public virtual T CurrentModel { get; set; } /// <summary> /// Initializes the control with the list of items to manage /// </summary> /// <param name="list"></param> public virtual void Initialize(List<T> list) { ControlList = list; } /// <summary> /// Allow override of tis event /// </summary> public virtual event EventHandler<EventArgs> OnSelect; /// <summary> /// Need data? /// </summary> public virtual event EventHandler<EventArgs> NeedData; /// <summary> /// Load event - allow things to wire and settle before trying to bring in state /// </summary> /// <param name="e"></param> protected override void OnLoad(EventArgs e) { base.OnLoad(e); LoadState(); } /// <summary> /// Last thing to do is save state /// </summary> /// <param name="e"></param> protected override void OnUnload(EventArgs e) { SaveState(); base.OnUnload(e); } /// <summary> /// Save the control state /// </summary> public virtual void SaveState() { if (ControlCache != null) { object[] savedState = new object[] {ControlList ?? new List<T>(), CurrentModel}; ControlCache.Save(CurrentContext, savedState); } } /// <summary> /// Load the control state /// </summary> public virtual void LoadState() { if (ControlCache != null) { object[] loadedState = ControlCache.Load(CurrentContext) as object[]; if (loadedState != null && loadedState.Length == 2) { ControlList = loadedState[0] as List<T> ?? ControlList; CurrentModel = (T) loadedState[1]; } } } } The KeywordDropdownView view simply takes the list of models from the controllers and renders them into a drop down. It contains some JavaScript code (embedded as a resource, this is covered in more detail in JavaScript and User Controls 101) that responds to a selection. The example demonstrates how a control can be responsive on both the client and server sides. When a new keyword is selected in the dropdown, it raises an event on the client that other controls can subscribe to called keywordDropdownChanged. It then does a callback to the control on the server side, which raises the OnSelect event for server-side management. We'll examine these events in more detail. Instead of my favorite JQuery library add-on, I decided to use traditional JavaScript that is cross-browser compatible (I tested on IE 6, IE 7, IE 8, and FireFox). This will give you some examples of DOM manipulation on the fly as well as one way to wire in a callback. The KeywordListView uses a simple repeater to list the keywords as labels, which render as DIV tags on the client. Again, this is a proof of concept to show two interactions. First, on the client, the control registers for the keywordDropdownChanged event. When the event is raised, it searches through its own rendered list of keywords and changes the target to bold. You can see this happen on the client as an example of two controls talking to each other without any knowledge of their existance nor implementation. The second pieces is that the main page itself acts as a "master" controller. It responds to the OnSelect event from the dropdown control, and sends the selected keyword to the list control. The list control persists this information and colors the keyword red when it renders a new list. You can see this behavior by selecting a keyword and then clicking submit. Notice that all of the interactions are done through events, not some awkward mechanism that is parsing form data and responding by calling various controls directly. The final piece that really ties the MVC concept together is the main page. Here you'll see we are only dealing with abstractions ( IController<KeywordModel>). This is where I feel my framework breaks with many traditional models I've seen. Most attempts at MVC in WebForms still require a strong affinity between the page and the control. How many times have you found yourself embedding a user control by using the <%Register%> tag to point to a control? This leaves little room for flexibility (for example, grabbing a different view if the user is on a PDA, but using the same controller). In our example, we simply put placeholders in the page where the views can render. We use a view factory to grab the view. There is a default view mapped to the dropdown and a more specific view requested for the repeater. The same controller manages both views, proof of true abstraction of the controller from the view it is controlling. The page, as a "master" controller, generates a context that both controllers can share, and coordinates the controllers by taking an event from one and pushing the value to the other. The controllers themselves are ignorant both of each other's existance as well as the actual implementation of the view they manage — despite the dropdown and list, the controller still simply works with an IControl<KeywordModel>. This is the "skeleton" for the page: <%@ Page <html xmlns="" > <head runat="server"> <title>Dynamic Control Example</title> </head> <body> <h1>Dynamic Control Example</h1> <form id="_form1" runat="server"> <asp:ScriptManager <asp:HiddenField <table> <tr valign="top"> <td> <asp:Panel </td> <td> <asp:Panel </td> <td> <asp:Button </td> </tr> </table> </form> </body> </html> And here is the "master controller" code behind. Notice how we reference the interface and the factory. We wouldn't even reference control at all except to manage the event args, this would normally be hidden inside a "master controller." using System; using System.Web.UI; using Domain; using DynamicControls.Control; using DynamicControls.Factory; using Interface.Controller; namespace DynamicControls { /// <summary> /// Default page /// </summary> public partial class Default : Page { /// <summary> /// Controller for the page /// </summary> private IController<KeywordModel> _ctrlrLeft; /// <summary> /// Another controller /// </summary> private IController<KeywordModel> _ctrlrRight; private Guid _context = Guid.Empty; /// <summary> /// Init /// </summary> /// <param name="e"></param> protected override void OnInit(EventArgs e) { base.OnInit(e); _ctrlrLeft = ControllerFactory.GetDefaultController(ViewFactory.GetDefaultView<KeywordModel>(_pnlLeft)); _ctrlrRight = ControllerFactory.GetDefaultController(ViewFactory.GetKeywordListView(_pnlRight)); _ctrlrLeft.OnSelect += _CtrlrLeftOnSelect; } /// <summary> /// This is when viewstate is first available to us to wire in the appropriate context /// </summary> /// <param name="e"></param> protected override void OnPreLoad(EventArgs e) { base.OnPreLoad(e); // bind a global context so the controllers and controls all can talk to each other with the same // instance of state if (string.IsNullOrEmpty(_hdnGlobalContext.Value)) { _context = Guid.NewGuid(); _hdnGlobalContext.Value = _context.ToString(); } else { _context = new Guid(_hdnGlobalContext.Value); } _ctrlrLeft.CurrentContext = _context; _ctrlrRight.CurrentContext = _context; } /// <summary> /// Let the right hand know what the left hand is doing /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void _CtrlrLeftOnSelect(object sender, EventArgs e) { KeywordSelectArgs args = e as KeywordSelectArgs; if (args != null) { _context = args.CurrentContext; _ctrlrRight.CurrentContext = _context; _ctrlrRight.CurrentModel = args.SelectedModel; } } /// <summary> /// Page load event /// </summary> /// <param name="sender"></param> /// <param name="e"></param> protected void Page_Load(object sender, EventArgs e) { if (!IsCallback && !IsPostBack) { _ctrlrLeft.Initialize(); _ctrlrRight.Initialize(); } } } } I purposefully used a straight script reference for the list control (instead of an embedded resource) so you can get a feel for the differences between the two options. Note how the state management works in the list control. It shoves the list and the selected model into an array of objects and then sends these to the IControlCache. The GUID controls the keys for the cache. The master page sets this initially. When the dropdown fires the callback, it passes the context to the server, so the server can reset the controls with the context and make sure they are pulling the correct state. Otherwise, the control would have no "idea" of what the list was or what the currently selected model is. That explains the sample application and framework. You can play with a working copy of it at. This is a bare bones proof-of-concept, no fancy graphics or fonts. Points of Interest I always like to view the source and see just how much is being rendered: what and why. Of course, the first thing you'll notice is the view state. Because we are managing our own state via the guid that is embedded in the page (just view source and search for _hdn), we could actually turn viewstate off. ASP.NET wires in the __doPostBack function which is bound to controls like the submit button. After this function, you can see the JavaScript for our custom controls. The first embedded one references WebResource.axd which is responsible for pulling the JavaScript out of the assembly and rendering it to the browser. The next is a straight reference to KeywordListView.js because we did not embed it. This is followed by a few includes that wire up the AJAX framework, and then we're in the midst of our actual controls. Note the wired "onchange" on the select box. We bound this on the server side and emitted the context as well as the client IDs of the various controls. We do this because the code must work no matter how nested the control is. Each control wires its respective init function, and the last code emitted is the initialization for the AJAX framework itself. To extend this framework, you will want to create different types of views (IListControl, IUpdateControl, etc) that perform different functions (the most fun I had with my own company was creating the controller and control for grids that manage large data sets and paging). You can even experiment with unit tests and building "test" controls that don't inherit from UserControl and therefore don't require HttpContext to work. There are lots of possibilities but hopefully this example forms a decent foundation for you to work from.
https://csharperimage.jeremylikness.com/2009/06/model-view-controller-using-aspnet.html
CC-MAIN-2017-39
refinedweb
4,291
53.21
There's been a lot of talk recently about the various language proposals that people have made, or debated, or agree with, or detest more than their mother using spit to wipe something off their mouth when they were kids. But I think that some very important language features have been under-represented in all of the discussion, and I'd like to make sure that they make it into the language change death match. Here are my pet language proposals. You may recognize some of these as features from other languages whose absence from Java has long been a mystery. How many times have you wanted to refer to a line or block of code, in a presentation or a review, or just a conversation in a bar (otherwise known as a "pickup line"), and you end up saying something like: "in the line that starts with "if (ventedCelebrium == MAX_VENTAL_SIZE)" or "in the for() loop ... no, the other for() loop ... no, the one before that, right after the call to the creakleFrantic() method"? Isn't it time that we had line numbers in Java? Then you could simply say, "On line 42" or "in lines 8709-8752" and be done with it. Obviously, time has moved on since line numbers were first used in other computer languages, back in the 16th century or so, and we're all a lot smarter now. So I would like to propose a small tweak to this proposal and demand that the line numbers be in floating-point units. Besides having all of the additional power of floating-point processing over the more typical, and therefore much dumber, integers, think of the utility of being able to insert code without changing the surrounding line numbers. Imagine: Original: 1 for (int i = 0; i < 100; ++i) {2 System.out.println("fart");3 } Improved: 1 for (int i = 0; i < 100; ++i) {2 System.out.println("fart");2.5 System.out.println("Oops! Excuse me!"); 3 } An alternate proposal on the table is to allow imaginary numbers as well, but I find that idea irrational. Goto Once we have line numbers, it's an obvious step to go ahead and shovel in the keyword "goto", which has been waiting in the wings lo these many years, like the scrawny, bespectacled youth waiting to be picked for a kickball team. After all, wouldn't we goto justification; goto justification; justification: love to have teleportation in our own lives? Why should we deny our code that very capability when it's just lying there like a goto metaphor; goto metaphor; metaphor: Twinkie under the bleachers, waiting for us to pick it up and cram it in our mouths? goto end; goto end; end: Whatever happened to that great capability in older languages of having meaning embedded in the spaces? You couldn't just type code willy-nilly wherever you wanted, but instead had to start the code in certain columns. Sure, we get this fantastic capability from Makefiles, but today's make systems are hidden behind IDE interfaces, and we rarely get to work directly with the beauty and elegance of Makefile directly. Isn't the significance of space just a metaphor for life (at least when you're brain is chemically altered) where it's really the space and silence that count the most? It's the silence in scary movies that really build the suspense. It's the quiet times that you spend with your kids that make you forget the screaming matches. It's the geographical space between relatives that makes family relationships function at all. And it's the space characters in code that really give the code meaning and quality. I would like to introduce column significance into Java, and make us all think more about where we put our code. I've had enough of the code style wars of indentation amounts, tabs-vs-spaces, and wide-vs-narrow line lengths. Let's start enforcing code style by making the compiler puke when it finds code in the wrong place. Have you ever noticed when you get an email from someone in ALL CAPITAL LETTERS that the case actually changes the way you interpret the information? For one thing, you feel confident that the sender is an idiot who isn't aware of the caps-lock key. But more importantly, you feel as though they are SHOUTING AT YOU through the computer (a real need in society which has since been addressed by Skype). I think that the code that we write is cool enough, powerful enough, and significant enough that it should be screaming itself at the readers. How else will they understand how awesome the code is? Did Martin Luther King just casually mention "Oh, by the way, I have a dream" to whoever happened to be nearby? No: he used a microphone to proclaim this great speech to as wide an audience as possible. Did Black Sabbath speak their lyrics to the audience, like a folk singing group in a coffee shop? Of course not: they screamed them out through bat entrails to an audience so deaf from the amplifiers that nothing less than an ear-splitting banshee wail would penetrate the din. Did Prince Charles proclaim his love for Camilla in a quiet manner, over a private telephone call? Actually, yes: it was the press that illegally tapped the call and then relayed the sordid content through huge headlines, in a way that only the British tabloids could do. Of course, we are free use capital letters in our code today. But this approach is mixed at best, with various keywords expected in lower case and a lack of convention and consistency defeating the system overall. We need compiler-enforced capitalization to make sure that our code intention comes through LOUD AND CLEAR. There are many more classic language features out there, dying to resurface in a modern language. They worked well for previous generations of programmers, I'm sure they would work for us too. Ignoring all of these features would be a sin tax. I want all of these features for Java 7 ! Very funny :D Posted by: adiguba on August 23, 2007 at 06:51 AM Chet, you have a real talent to make something useful sound like a complete nonsense. Posted by: euxx on August 23, 2007 at 07:18 AM continue; Posted by: francisdb on August 23, 2007 at 08:05 AM Re: line numbers. I would go further and suggest using irrational numbers, too. Don't you want to have a line with sqrt(2), or Pi.. Now line sorting might become an issue with lines like 'e pow pi' and 'pi pow e'.. Dmitri Posted by: trembovetski on August 23, 2007 at 09:54 AM No, Dmitri, you're thinking to much in the box. By allowing floats, you can already have irrational line numbers. But the leap that needs to be made is allowing static constants within your code. Then it's easy, and Very Java. 1 public static void main(String args[]) { Math.E System.out.println("eeeeee!"); Math.PI System.out.println("I like pi!"); 4 } See, it's so intuitive. And with Static Imports, it's even BETTER! Think of the flexibility! By using static constants, you can change the implementing class and reorder your code without have to actually change it in the source file! Imagine this: CodeSequence.java: public class CodeSequence { public static final int MAINSTART = 1 public static final int FIRST = 10; public static final int SECOND = 20; public static final int MAINEND = 100 } LineNumberzR00L.java: import static CodeSequence.*; public class LineNumberzR00L { MAINSTART public static void main(String args[]) { FIRST System.out.println("I'm first!"); SECOND System.out.println("I'm second!"); MAINEND } Just think -- change CodeSequence.java setting FIRST to 20 and SECOND to 10, and rebuild. Voila! Resequenced code! This is some powerful stuff -- groundbreaking even I think. Only Java can take something as mundane as line numbers to a new level. Posted by: whartung on August 23, 2007 at 10:29 AM Since when did this become a C++ standards discussion board? Oh, whoops, sorry. Rgds Damon Posted by: damonhd on August 23, 2007 at 12:52 PM A constant war between developers: where should the braces go? Same line? New line? How about this new language feature: require TWO braces, one on the same line AND one on the next line. Everybody wins! Posted by: prunge on August 23, 2007 at 03:40 PM "An alternate proposal on the table is to allow imaginary numbers as well, but I find that idea irrational." :D Good one. I don't agree with your view chet. Yeah, they're making too much fuss about adding more and more language features and blah blah blah, but I think that closures are THE language proposal that should be taken into consideration. Just see how much time and clarity you gain by using them. Neal Gafter is really doing a great job. For you out there that are still not convinced, I suggest that we should go back to Java 1.1. Oh wait, it supports inner classes... OK then let's stick with 1.0 ! Posted by: jaxer on August 23, 2007 at 04:13 PM Chet this gave me a good laugh, but on a serious note the one thing that I miss from C++ is multiple inheritance. Just curious, but does anyone else miss this feature? Sometimes interfaces just are not enough to avoid code duplication, when what you really want is to inherit from to different classes. Maybe this could be allowed only when inheriting from an abstract class and a concrete class? This would still leave the ability to implement any needed interfaces. Would you say this is a reasonable proposal or just more nonsense? Posted by: badapple on August 23, 2007 at 08:47 PM hello badapple, Remeber that Object oriented Lanaguage designted for reallworld purpose. The Java is mainly designed for really world purpose. As you told your missing Multiple inheritence in Java . The reason is as good as we cannot have a child with parents Dog and Cat. Class itself defines particular cateogery Dog is on one cateogery Cat is on one cateogery now I two classes Dog and Cat. But In really I cannot have Child from DOG and Cat as parents This means my class cannot inherit more than one class. If java Designers provide Mutliple inheritence in Java then it will be like this. public class Child extends DOG,CAT{ } do you feel this would be good. So now the word Interface comes into account. where you can put certain properties . and your class which implements these interfaces can use it. Posted by: dhilshuk on August 23, 2007 at 11:16 PM You have forgotten one important keyword - assembly followed by the sequence of bytecode ... because everybody knows that we are much smarter then compiler ever can be ... ;) Posted by: rah003 on August 24, 2007 at 12:35 AM What about native assembly and then we can wrap up our platform-specific asm in a Java wrapper giving the illusion of portabilty and security while having neither! Way to go! Posted by: damonhd on August 24, 2007 at 12:40 AM Absolutely. These great ideas will certainly add clarity to Java source code. Any chance of squeezing them into Java 6 u4, along with the Consumer JRE stuff? Another thing I think it's important to consider is that, currently, it can be very confusing when lines of code spill over to more than one line of text. For example, consider the following line of java: i = j + k + l + m + n + o + p + q + r + s + t + u + v + w; That is almost totally unreadable. However, if Java were to borrow the idea of "statement continuation" from FORTAN, then code would be much more readable when statements span multiple lines. As with your line numbers idea, we could use the power of modern CPUs to expand on the concept to also say how many lines the statement is spread over. For example, the following code shows explicitly that the statement is spread over three lines, and shows which is the first line, and which is the second line. i = j + k + l + m + $$1/3 n + o + p + q + r + $$2/3 s + t + u + v + w; Much more readable. Another advantage of this idea is that the specific tagging of line continuations will significantly speed up compilation because of the extra hints given to the compiler. Obviously, with Java's reputation for slowness, this could be a useful performance boost. Obviously, the "$$X/Y" syntax is only a suggestion. The idea could work equally well by embedding fragments of XML to describe the continuation. Posted by: psynixis on August 24, 2007 at 06:06 AM CHET, LINE NUMBERS ARE TOTALLY IRRELEVENT NOW THAT WE HAVE xml.... WE CAN ACCOMPLISH YOUR OBJECTIVE BY DELINEATING ANY DESIRED BLOCK OF CODE WITH A LABEL. <LABEL id="first">for (i =0; i<100; i++) { <g;/LABEL> system.out.println("XML rules"); } ISN'T THAT MUCH BETTER? -JohnR Posted by: johnreynolds on August 24, 2007 at 06:49 AM I wonder why he did not mention the $ before Strings that is used for example in PHP Sorry Chet, but goto, break, continue were long time considered to be error provocating - and thats why most developers dont useit! And using complex numbers in a for loop, who on earth came up with this? A loop is used to do multiple instructions at a time and integers are considered as the fastest way to perform loops! This cannot really be the future of Java programming because its going backwards to the age of FORTRAN, COBOL, and these things. Posted by: alexanderschunk on August 24, 2007 at 07:19 AM Another good suggestion used i.g. in BASIC or MAPLE. The step keyword in for loops. This is clearly a great innovation for professional developers. Anyway, i dont see any groud breaking innovations concerning the language here at least at this low-level of the language. Rather i would improve high-level featuers and getting rid of some OOP stuff a few people only use. Posted by: alexanderschunk on August 24, 2007 at 07:26 AM I want the noreturn keyword from languages like DATABUS. For those unfamiliar with of the concept, it means. "don't return to the method that called me, return to the method that called the method that called me". This way I can just call noreturn when I get an exception and want to cancel the method that called it. It would allow for all sorts of nice optimizations! Oh yeah and a native jump table syntax! int x = 4; invoke x with callMethod1(), callMethod2(), callMethod3(); Isn't that just the coooolest! Posted by: aberrant on August 24, 2007 at 07:50 AM OMG... What are you smoking? Line numbers...Check your IDE I'm sure you can turn those on. Goto... Are you kidding!?! You want spaghetti code!?! Use AOP, or Byte code 167. Line Spacing.. Well if you would line up your braces you wouldn't need that!! Putting a brace at the end of the line instead of lining them up was done to save space in books. Everyone thinks that it's a good way to format code. WRONG. CAPITALS... EVER HEARD OF COMMENTING YOUR CODE!!! We have enough PHP/Ruby programmers flying by the seat of their pants. Honestly I never liked the idea of making the java compiler smarter so that it could allow more poorly constructed code. Microsoft cut VB for a reason, why do we need to pickup developers that have no sense of code design and style. There are reasons we don't have GOTO, there are reasons why there are no line numbers, there are reasons why syntax is not based on whitespace. Have we not learned from history, or did schools stop teaching these lessons. If you want to program like a then use a willy-nilly language and suffer the consequences, but don't mess up a perfectly good language. Just because there are more fly-by-seat programmers doesn't mean that the masses have the best ideas. -Sfitz Posted by: sfitzjava on August 24, 2007 at 08:02 AM Oh, man, it took me a while, but now I realize that Jonathan Schwartz must be picking your brain for ideas for taking Java to the next level. It's all starting to make nonsense. Posted by: detorres on August 24, 2007 at 08:48 AM When you were talking about line numbers.. I couldn't actually believe you were reintroducing the BASIC way of doing things. I thought you were talking about having a __LINE__ preprocessor constant that would allow you to pinpoint where an error happened (a good feature for debugging). While I have no use for line numbers a la BASIC, C's feature of allowing you to specify the exact line and file name of where an error happened is good for debugging. Java' s exceptions don't always get you the correct information. Posted by: dog on August 24, 2007 at 09:06 AM I think the ternary operator is just a tiny step in the right direction. After all, why waste keystrokes typing long words like "if" and "else" when you can save several keystrokes with "?" and ":"? I'd love to see a quaternary operator (and not just because "quaternary" is fun to say), in place of the for loop: int i=0 ? i<10 : i++ ! System.out.println(i); Isn't that much more readable than: for (int i=0; i<10; i++) System.out.println(i); Posted by: atripp on August 24, 2007 at 09:09 AM I'd also like an IDE that ignores everything after the first less-than symbol, as most web posting software seems to do. It tends to cut way down on the amount of code that people have to read. Posted by: atripp on August 24, 2007 at 09:11 AM I love the comments that think you are being serious. It just goes to show how crazy the language proposals have become where people actually believe that these are actual suggestions. I would love to see whitespace have meaning in programming languages again. Maybe we can take some tips from the programming language Whitespace Posted by: mtnsurfer on August 24, 2007 at 09:30 AM ahahahah.. for the ones looking for such advanced features, just check it out. ahahahahaha Posted by: felipegaucho on August 24, 2007 at 09:40 AM You forgot the most important language addition: Getting rid of those superfluous semicolons to terminate a statement. A statement latest ends at the end of the punch card. Ups, that is, at the EOL for those modern kids with interactive terminals. Imagine how many semicolon lifes could have been saved wouldn't it have been for such eggheads like Nicklaus Wirth. Eggheads who didn't understand that it's over when it's over. There is also an imminent need to add more control constructs. if, for, while are nice, but only for professionals. The proposed goto? Blah, to complex. until? Useless. Those whining beginners who don't know what to code badly need maybe, otherwise, sometimes, never and often. And don't forget that FORTRAN had the greatest if ever. arithmetic IF. That combined with computed GOTO ist a must-have for every language. IF (X*Y-Z) 100,200,300 C NOTHING HAPPENS HERE C CALCULATION RESULT IS LOWER THAN 0 100 I=4 J=1 GOTO 400 C CALCULATION RESULT IS EQUAL 0 200 I=2 J=2 GOTO 400 C CALCULATION RESULT IS GREATER THAN 0 300 I=1 J=1 400 GOTO(500,400,200,300) I*J 500 STOP END And guess what? Any of these is better than closures. Posted by: ewin on August 24, 2007 at 09:48 AM Now, i actually get what he is trying to say: This code is complete nonse. Hopefully Chet is not working on the OpenJDK compiler :). Posted by: alexanderschunk on August 24, 2007 at 10:43 AM Chet, Nice! I wonder if these things have been implemented in the kitchen sink project? It seems scripting languages solve things too (rhino, jython, etc), therefore maybe there should be a proposal for JAppleSoftBasic where they accept line numbers. or JFortran. -Carl Posted by: carldea on August 24, 2007 at 11:43 AM What about the IMP language th at al lo ws sp aces with in token s? So useful! Posted by: damonhd on August 24, 2007 at 01:33 PM I'm surprised that all of you brilliant guys are so low level and none of you has suggested the *right* thing to bring Java to become a 6-th generation language! Why stick with plain text? Java 1.0 introduced sources in UTF-8. But this was ten years ago. We live now in the XXI century! We need a powerfull language exploiting the whole power a graphical UI in source code! The first step in the right direction is to use multiple fonts. For instance, class names must use Arial 14 Bold. Why use extra characteres to delimit comments? Comments in Java 7 should be written in italic. Font size can be used to provide hints to the compiler: a small font means that HotSpot must not spend a lot of time trying to optimize this part of the code because that it's not really that important... Colors could be used too. Code in red means that it is buggy, so the compiler could try to find the bugs... Combined with XML and imaginary line numbers, I feel we start to have a really usefull language... Posted by: genepi on August 24, 2007 at 02:18 PM Chet, I would like you to seriously consider the concept of openures. These would require the new uhh keyword and would be block-level constructs, but would also helpfully reuse varargs syntax and preserve the best of the myriad closure proposal syntax out there: int numberOfBlogPostingsDevotedToClosureProposals = [{#@!? ( ( ( { | x | ...uhh...; Another strong language enhancement consideration should IMHO be indeterminate class completion, denoted by the keyword yada: public class DefaultAbstractNotifyingFactoryMechanizationHandlerBrokerPersistenceFactory { yada } A weaker assert statement, perhaps? pray x != null; And finally, while strictly speaking not a language change, I do believe we could use some standardization in the annotation camp. Forthwith I submit the @Enterprise annotation, which can be particularly powerful when used with indeterminate class completion: @Enterprise public class D6501 { // XXX why doesn't this work? ask jerry // comented out bc amir said to -- cjw 11/2/97 // public Object brillant = "paula"; yada } Cheers, Laird Posted by: ljnelson on August 24, 2007 at 07:02 PM Following on from the above @Enterprise, I think we need a MakeItSo operator, to be used in a vaguely regal manner to fill in specification gaps. Posted by: damonhd on August 25, 2007 at 04:50 AM And continuing the obscure Star Trek references punning on @Enterprise... I think we need a new access modifier: outtherethataway void doSomeObscureCommandInAnExternalLibrary(); The new access modifier is to be used in the same way as "extern" in C - specifying that there's a method "out there thataway" but that we really don't know the implementation of so the program can barf at runtime when the library containing it is not found! Obviously a vast improvement on the NoSuchMethodException, since we actually take the time to prototype the method, right? (The MakeItSo WAS a Star Trek reference, right? ;-)) Posted by: ekolis on August 25, 2007 at 06:26 AM Yes to StarTrek allusions, but also to Dilbertian PHB management-by-fiat! Two meanings for the price of one keyword: can't say fairer than that... Posted by: damonhd on August 25, 2007 at 07:49 AM we want REAL XML syntax for Java. Everyone knows XML is easy to read, so by adopting XML syntax Java becomes easy for everyone... Just think of it: Posted by: jwenting on August 25, 2007 at 11:11 AM That should be: <class name="NullPointerException" package="java.lang"> <extends class="RuntimeException" package="java.lang"> <constructor> <![CDATA[super();]]> </constructor> </class> Posted by: jwenting on August 25, 2007 at 12:21 PM The CDATA is genius! Posted by: damonhd on August 26, 2007 at 11:34 AM Don't laugh but I know a guy the religiously uses break and goto labels in the existing java! I didn't even know that they were in the language and had to verify how they functioned. It's shameful that they remain there! Posted by: alski on August 27, 2007 at 03:55 AM @jwenting: I actually have worked before at a place where there ARE Java + XML files, and they look exactly like you've satirized above (I mean exactly--down to the CDATA section). I wish I were joking. Posted by: ljnelson on August 27, 2007 at 07:17 AM The only thing funnier than this post are the replies from people taking it literally. This is genius, Chet! @ljnelson: I wish you were kidding. Posted by: coffeejolts on August 28, 2007 at 05:49 AM It is very difficult to contain my contempt for the omission of unsigned integers and longs in Java. The decision was obviously made by computer scientists who don’t understand hardware and the occasional need for unsigned arithmetic. I’m sorry if adding it would be “hard” because of the lack of bytecodes to support it. But it was a very poor decision to leave it out. I would like to request that future versions of Java include native support for unsigned. All modern computer processors have support for unsigned. Java should too. And since all processors support it, it won’t even break Java’s mantra of “run anywhere.” Take a quick look on google about unsigned and Java and you’ll find lots of “what a stupid idea” opinions and frustrated programmers and nowhere “what a brilliant idea” comments. I’m sure Java developers are having a wonderful time with new projects. But it would be hugely beneficial if they added some of the low-level support that is so lacking in the language. Posted by: biggunzclub on August 29, 2007 at 03:10 PM Chet, do you do a book on humo[u]r bypasses? Posted by: damonhd on August 31, 2007 at 02:47 PM This is, without a doubt, the best discussion yet on future language features. It seems to me that the time wasted discussing closures in Java is probably several orders of magnitude greater than the time that would be saved if we actually had closures. Line numbers, on the other hand, would speed all kinds of things up, not least of which would be the pickup process described. Posted by: grlea on September 16, 2007 at 07:28 PM Nobody here got the point. Next Java features will all be inspired by GOTO++. Here is the GOTO++doc. Javadoc is only a pale copycat of this. Posted by: syrion on October 17, 2007 at 05:03 AM:10: cgspender on June 19, 2008 at 10:03 PM _ Posted by: cgspender on June 19, 2008 at 10:04!!
http://weblogs.java.net/blog/chet/archive/2007/08/code_complete_n.html
crawl-002
refinedweb
4,575
70.94
I've been trying to get a SyncAdapter to work. Man, what a ridiculously complicated mess. I don't even know where to begin, so I guess I'll just dump all the related code. If I put breakpoints pretty much everywhere in the code below, the following stuff actually gets called at startup: 1: StubContentProvider.OnCreate() 2: AndroidApp.InitSyncService() (called explicitly when my app starts up). And that's it. Nothing else ever gets called. AndroidApp.cs public static class AndroidApp { public static string ACCOUNT_TYPE = "georgesmobile.android.backgroundservice"; public static string ACCOUNT = "mobile"; public static string AUTHORITY = "georgesmobile.android.provider"; public static Account SyncAccount { get; private set; } public static void InitSyncService(Context context) { if (SyncAccount == null) { CreateSyncAccount(context); ContentResolver.SetIsSyncable(AndroidApp.SyncAccount, AndroidApp.AUTHORITY, 1); ContentResolver.SetSyncAutomatically(AndroidApp.SyncAccount, AndroidApp.AUTHORITY, true); Bundle bund = new Bundle(); ContentResolver.AddPeriodicSync(AndroidApp.SyncAccount, AndroidApp.AUTHORITY, bund, 2); } } private static Account CreateSyncAccount(Context context) { SyncAccount = new Account(ACCOUNT, ACCOUNT_TYPE); AccountManager accountManager = (AccountManager)context.GetSystemService(Context.AccountService); return SyncAccount; } } StubContentProvider.cs [ContentProvider(new[] { "georgesmobile.android.provider" }, Name = "georgesmobile.android.backgroundservice.StubContentProvider", Exported = false, Syncable = true)] public class StubContentProvider : ContentProvider { public override int Delete(global::Android.Net.Uri uri, string selection, string[] selectionArgs) { return 0; } public override string GetType(global::Android.Net.Uri uri) { return ""; } public override global::Android.Net.Uri Insert(global::Android.Net.Uri uri, ContentValues values) { return null; } public override bool OnCreate() { return true; } public override global::Android.Database.ICursor Query(global::Android.Net.Uri uri, string[] projection, string selection, string[] selectionArgs, string sortOrder) { return null; } public override int Update(global::Android.Net.Uri uri, ContentValues values, string selection, string[] selectionArgs) { return 0; } } GenericAccountService.cs [Service(Name = "georgesmobile.android.backgroundservice.GenericAccountService")] [IntentFilter(new string[] { "android.accounts.AccountAuthenticator" })] [MetaData("android.accounts.AccountAuthenticator", Resource = "@xml/authenticator")] public class GenericAccountService : Service { private Authenticator _authenticator; public static Account GetAccount(string accountType) { return new Account(AndroidApp.ACCOUNT, AndroidApp.ACCOUNT_TYPE); } public override void OnCreate() { base.OnCreate(); _authenticator = new Authenticator(this); } public override IBinder OnBind(Intent intent) { return _authenticator.IBinder; } } SyncAdapter.cs public class SyncAdapter : AbstractThreadedSyncAdapter { public override void OnPerformSync(global::Android.Accounts.Account account, Bundle extras, string authority, ContentProviderClient provider, SyncResult syncResult) { BackgroundDataSync.Synchronize(); } } SyncService.cs [Service(Name = "georgesmobile.android.backgroundservice.SyncService", Exported = true)] [MetaData("android.content.SyncAdapter", Resource = "@xml/syncadapter")] public class SyncService : Service { private static SyncAdapter _syncAdapter; private static object _syncLock = new object(); public override void OnCreate() { base.OnCreate(); lock (_syncLock) { _syncAdapter = new SyncAdapter(ApplicationContext, true); } } public override IBinder OnBind(Intent intent) { return _syncAdapter.SyncAdapterBinder; } public override StartCommandResult OnStartCommand(Intent intent, StartCommandFlags flags, int startId) { return base.OnStartCommand(intent, flags, startId); } } syncadapter.xml <?xml version="1.0" encoding="utf-8" ?> <sync-adapter xmlns: authenticator.xml <?xml version="1.0" encoding="utf-8" ?> <account-authenticator xmlns: relevant AndroidManifest.xml (as generated by the compiler) <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <uses-sdk android: <uses-permission android: <uses-permission android: <uses-permission android: <uses-permission android: <uses-permission android: <uses-permission android: <application android: <receiver android: <intent-filter> <action android: </intent-filter> </receiver> <service android: <meta-data android: <intent-filter> <action android: </intent-filter> </service> <provider android: <service android: <meta-data android: </service> <activity android: <intent-filter> <action android: <category android: </intent-filter> </activity> <activity android: <provider android: <receiver android: <intent-filter> <action android: <category android: </intent-filter> </receiver> </application> <uses-permission android: </manifest> I love android, but Google's documentation on how this stuff works is a pathetic joke. I'm working on a similar issue and am not finding a lot of resources. Currently I'm stuck on the implementation of AbstractThreadedSyncAdapter. In your code, you show no constructors - how does that even compile for you? I get an error (at the context variable) when trying to build the constructor. Error: ") expected" @joedonahue.org This won't work. The basepart is to call the base constructor. To set it up properly, you'd want to do it like this: I actually do have a constructor. I removed it from the code above because all it did was get an instance of the logger from the IoC container which wasn't really relevant to the question at hand. You are correct that it would normally produce an error otherwise as AbstractThreadedSyncAdapter has no default constructor. This is the actual implementation: This is helpful, thanks. Do you have the metadata attribute? These statements were necessary for the AccountAuthenticator implementation, I'm guessing they will need to be here as well. Something like @joedonahue.org That didn't fix my problem, but it was a problem. I had the Metadata tag (Just like yours) on my SyncService instead of my SyncAdapter. Thanks for that. I similarly cannot seem to get the SyncService fired up. After InitSyncService (and elsewhere) I get: 05-22 20:32:30.050 I/mono-stdout( 4095): Home.InitSyncService - Start 05-22 20:32:30.061 D/SyncManager( 2681): setIsSyncable: Account {name=jd, type=com.myapp}, provider com.intrinsic.provider -> 1 05-22 20:32:30.061 D/SyncManager( 2681): setIsSyncable: already set to 1, doing nothing 05-22 20:32:30.061 D/SyncManager( 2681): setSyncAutomatically: , provider com.myapp.provider -> true 05-22 20:32:30.061 D/SyncManager( 2681): setSyncAutomatically: already set to true, doing nothing 05-22 20:32:30.070 I/mono-stdout( 4095): Home.InitSyncService - End 05-22 20:32:55.490 I/CalendarProvider2( 2888): Sending notification intent: Intent { act=android.intent.action.PROVIDER_CHANGED dat=content://com.android.calendar } 05-22 20:32:55.490 W/ContentResolver( 2888): Failed to get type for: content://com.android.calendar (Unknown URL content://com.android.calendar) The lack of information in logcat is truly disappointing. If I request a manual sync and it can't do it for whatever reason, it ought to log the reason. I have this method: And I call it in response to a button press, just for testing and it doesn't do squat. Nothing gets logged. That's ridiculous. I'm getting closer. I had issues earlier with the AccountAuthenticator stuff where Xamarin really doesn't like the Service clauses in the Manifest, and need to be correctly configured as Attributes in the Service. I think you were originally correct to put the Attributes in the Service class rather than the Adapter class, even though it links to the Adapter XML file. Here's the Attribute code from the SyncService. I've confirmed that the SyncService is actually firing now, although I still have more errors to work through. @PeterDavis @joedonahue.org Hey guys! I put together a sample project that should hopefully help. If you run the app and press the button, a manual sync is forced. You can verify the output via logcat - you will see a message "I have sync'd!". Some interesting things I found while putting this together: exportattribute in the <provider>element. It should show exported. ContentProviderAttributefor StubProviderwas not acting right. It was yelling at me with some strange 0 arguments exception that didn't make sense with my implementation. I will check if this is a bug and update with a report. <provider>issue above, I declared this manually in the manifest. The android:nameattribute needed to have my C# namespace prefixed instead of the package name. I should probably change my namespace to match the package to make things 1:1. Console.Writelineis not showing in the Application Ouput window because the OnPerformSyncis on a background thread; therefore I checked with adb shell logcatto see the output. I hope this helps! @John Thanks so much for the example. Still trying to get mine to work, but this is helping. I have suspected that some of my issues have to do with provider names and authorities and whether or not I'm naming them properly. I'm still pretty sure that's the issue, but I just haven't quite found the combination that makes it work... I'll keep tweaking and comparing against your example. @John, I can't get your example to work on my Nexus 7 (4.4.2). I put a breakpoint on the ContentResolver.RequestSync()call in the button.Clickevent and I see that get hit, but I have a breakpoint in SyncAdapter.OnPerformSync()and it's not getting called. There's nothing happening in logcat. I'd like to build it in 2.2 for my LG Optimus V to see if that acts any differently, but when I set the "Compile using Android Version" in the project to 2.2 (or anything other than API Level 19) and it reverts to API Level 19 after I save (Ctrl-shift-S) and then re-open the project properties. I can't seem to make enough room on my phone for the API 19 framework, so I can't test it on my phone. Tried running it in the API 10 emulator. Didn't work their either. So I kinda threw breakpoints all over the place. When the app starts up, the StubProvider.OnCreate()gets called. Then MainActivity.OnCreate()gets called, which in turn calls CreateSyncAccount()which then subscribes to the click event for the button. When you click the button, ContentResolver.RequestSync()gets called, but nothing happens. SyncService.OnCreate()and SyncService.OnBind()are never called. Same with the AuthenticatorService's OnCreate()and OnBind() @PeterDavis I don't know if breakpoints are the best way to debug this. I can set a breakpoint in OnPerformSyncand it will not get hit, but I still see my message in logcat. I also noticed that when I deployed from XS a second time, the message was not happening. I went into the Settings app on my Nexus device and looked under the Accountssection. Listed there was SyncAdapterExample. Touch that and you will see the dummyaccountlisted. The Menu had a "Cancel Sync" option available. Once touched, it now says Sync Now. Pressing that seemed to work. I am still wrapping my head around SyncAdapters. I am not sure why it had the option to "Cancel Sync", maybe something failed, or the state of the sync was broken. I am still investigating the documents. John, I don't think I'm alone when I say, as a developer, I frequently make the judgment that if a breakpoint isn't hit, the code isn't getting called. I would find it very concerning if I can't reliably expect breakpoints to be hit. But regardless, I uninstalled the app (your sample) and the account was removed. When I run it after uninstalling it, nothing happens when I click the button. No logcat messages no hit on the breakpoint. I honestly don't think it's getting called. Incidentally, if I go into accounts while the app is running and cancel the sync and then do a force sync, nothing happens there either. No logcat message, no hit on the breakpoint. @PeterDavis I agree. I will need to investigate that more to see if it's a bug in Xamarin Studio or if its related to how the SyncAdapter works. What I mean by this is how its called, via all these services and background threads. Normally break points work just fine when you are debugging. Until I learn more, I would not rely on that as an indication if the SyncAdpater is working. This is similar to what I am seeing, but the code is getting called. I noticed that when I first install the app from XS, the Sync seems to be doing something. Before pressing the button, go to your Sync settings for dummyaccount. You will notice that you can "Cancel Sync" even though the button was never pressed. I am not sure what this means yet. Sometimes it seems to take a while (even when I choose to run it immediately with ContentResolver.SyncExtrasExpedited). If you let it run for a while, does it eventually show the message? I am still learning this and trying to understand it. Thanks for your patience! John, I appreciate you working on this. I did wait for a few minutes because I know that syncs aren't necessary immediate, but it never came through. I'll continue to work on it and see if I can get anywhere. Thanks again for your efforts on this. I really appreciate it. @PeterDavis @BrendanZagaeski discovered the reason why the breakpoints are not being hit. It's likely because the SyncServiceis running on a different process id and the debugger does not know about it. I actually specified this per the documentation in the SyncServiceclass with this: Remove the Process = ":sync"attribute and SyncServicewill run on the same process id as the activity. Now the breakpoints will be hit and the Console.Writeline message will appear in the Application Output window in Xamarin Studio. That should make things easier to debug. See this and this for more context. @John That makes sense. I should have put that together. I noticed you used the Process parameter and I knew that that ran it in a separate process, but I didn't think about the implications on the breakpoints. But in retrospect, yeah, that makes perfect sense. I wasn't using the Process in mine. I've removed it from your sample and the sample seems to be working. Awesome! Thank you so much. Now it's just a matter of figuring out how modify mine to get it working, but that should be relatively easy now that I sort of see the order in which things should take place and it gives me a better idea of how to approach it. I'll keep you posted on my progress. Thanks so much, John. I really do appreciate your efforts on this! @John, Thank you again! My SyncAdapter is now working and I now have a much better understanding of why it wasn't working (several reasons). I'm now thinking that maybe it would be helpful to create a component that would encapsulate a lot of the work of creating a SyncAdapter and let the user focus on the actual work they need to get done and not spending 6 weeks just trying to get the very basics of their SyncAdapter to work Anyway, I'll add that to my to-do list. Again, Thank You! Thank You! Thank You! There are a few aspects to this that I NEVER would have figured out on my own (like the issue with the StubProvider's attributes and needing to put the info straight into the AndroidManifest.xml). My boss has been walking by all morning calling me a failure (I'm one of two people who forgot to wear a red shirt for a ridiculous thing going on here), so to have a big success on this, this morning is redemption on the red shirt catastrophe. ;-) Here's a summary of some of the key points I got from this whole thing for others that are running into issues with SyncAdapters: 1> ContentAuthorityshould be package name+ ".provider" 2> Servicenames should be package name+ .+ class name 3> The StubProvider'sAndroidManifest stuff needs to be hand-coded and not generated from attributes 4> The SyncServiceneeds Metadataattributes for the SyncAdapter(I originally, incorrectly, had the it on the SyncAdapter, not the SyncService) 5> Don't use the Processproperty of the SyncService's ServiceAttributeif you want breakpoints to work. Hello, I had posted a question about trying to replicate the Windows ScheduledTaskAgent here, @PeterDavis directed me to this thread and I'm just looking at the sample code that @John posted (thanks!). After removing 'Process = ":sync"' it does seem to be working when I hit the "Hello World, Click Me!", I see the "I have sync'd!" in the output. My question is, should this happen automatically periodically or am I missing something? I'd like it to run every x hours or so, is there somewhere I need to set this or would I need to code this myself, and if I did are there any obvious places to start? I'm new to Xamarin and indeed Android so please bear with me. Thanks @xceed If you look above at the InitSyncService() method in one of my posts, I have: The 2 means 2 seconds, I believe. Google apparently didn't feel it was important to share the units of measure for the sync period in their documentation, but I'm pretty sure it's in seconds. Thanks @PeterDavis, so looking at the code @John submitted, he doesn't seem to make use of the AddPeriodicSync call anywhere, unless I'm missing something. In fact there is no call to InitSyncService either. I'll have another look through your code and try and work out what's going on. Cheers @xceed In the sample @John did, he's using: Which basically fires off the sync immediately, which was fine for the demonstration he was preparing. If the immediate sync works, then the timed sync should work. It's merely a different type of trigger. Is there somewhere in Settings on the device where you can disable the syncing process? Just thinking back to the Windows version of scheduled tasks and you can enable and disable them. Does the periodic sync run completely independently from the app itself? I'm still worried about concurrent database access, when the user is saving records in-app and then the background sync task runs simultaneously. I remember a previous post where you mentioned using the lock mechanism. The only other way I can think of preventing this issue is to disable syncing when app is in use, re-enable when app is closed/deactivated. That's the way I do it with Windows. All I need to run in the OnPerformSync method is a couple calls to Azure using data pulled from the SQLite DB. Is there a time limit on how long once instance of the sync method can run for? Are the StubProvider and Authenticator classes absolutely necessary, I don't really understand their purpose since the methods return null/zero or aren't implemented. Thanks Sorry to continue this thread, but I wondered if either @John or @PeterDavis ever came across the error: "Failed to find provider info for com.xamarin.myexample.provider". Everything is named accordingly in my android manifest file, my AUTHORITY is set correctly, accountType is set in authenticator, syncadapter etc. The "android:name" attribute for the provider as set in AndroidManifest, does this take the form of the package name then the class name or is it the application name then class name (as seen in @John 's code)? Are namespaces used at all? What is your AUTHORITY string and what is your ContentProvider attribute? Can you post your AndroidManifest.xml (the compiled one from the "obj[[Release/Debug]]\android\" folder) and your syncadapter.xml? Something is misnamed. I've run into that before, but I can't remember exactly what caused it (I ran into just about every SyncAdapter error there is in my time trying to get it working). Thanks @PeterDavis, I had to change the name attribute in my manifest file so it wasn't the package name followed by the provider class, but the namespace first. After changing that and also the service names in SyncService and AuthenticatorService it is now working Thanks so much for your help, and also @Paul for your sample code. I just have one more question (I promise). Is my understanding correct in that the service will fire every x seconds even if the app is closed? So the first time the app runs, the syncservice/account is setup for the first time, then for subsequent launches the OS will know the service already exists so doesn't try to create it again? I believe it does if you've run the app. If you want, you can handle the BOOT_COMPLETEDaction with a BroadcastReceiver and start the service up there. See this. Just a quick follow up for anyone stumbling across this thread: I have found that it isn't required to manually write anything into the manifest as long as you let Xamarin studio create the service names. My working service classes look like this: Note that the Nameproperty is missing from the Serviceattributes. The stub content provider has its attributes set as follows: No Nameproperty either. Replace MY_CONTENT_AUTHORITYwith your actual content authority value and you should be good to go. No need to edit the manifest. When I tried to hardcode the content provider in the manifest as described earlier in this thread I was unable to find the proper value even though I tried all possible variations of package and namespace names I could think of. Omitting all service names from C# attributes and relying on Xamarin Studio's auto-generated values made the difference. Thanks to everyone who contributed to this, especially @PeterDavis and @John. Hi, I made the demo work in a huawei just need to change in the compile tab and select configuration manager and select implement Hello, I just wanted to drop by and thank everyone on this thread, particularly @PeterDavis . I spent days trying to figure out why my SyncAdapter wasn't working, when in fact all along it was working, but I was using the process property and none of my breakpoints were hitting: "5> Don't use the Process property of the SyncService's ServiceAttribute if you want breakpoints to work." I removed that attribute while debugging and everything was working all along! Thank you again! @PeterDavis : Can you please post working code which works fine in all Android versions and in which we don't have to go in Accounts and Sync it manually ? @PeterDavis @JohnMiller Thanks for your inputs, the "SyncAdapterExample" it's working as expected, Release mode it is working fine but when I run the app on the top existing app, I am getting this error. "Couldn't connect debugger. You can see more details in Xamarin Diagnostic output and the full exception on logs"
https://forums.xamarin.com/discussion/comment/70982
CC-MAIN-2020-40
refinedweb
3,662
57.67
Hide Forgot Description of problem: ncurses functions malfunction for UTF-8 locale; e.g. mvaddnstr() macro interprets its argument as byte count, not characters. This breaks some 3rd party applications, e.g. the 'tin' () newsreader. Version-Release number of selected component (if applicable): Tries with both ncurses-5.2-28 (RedHat Linux 8.0) and ncurses-5.3-4 (RedHat Linux 8.1 beta 3). How reproducible: The sample program is below; compile with gcc -o tst tst.c -lncurses : #include <ncurses.h> #include <stdio.h> int main(int argc, char *argv[]) { initscr(); mvaddnstr(15, 15, argv[1], 10); refresh(); sleep(5); endwin(); return 0; } Try running it with 1st argument containing only ASCII characters, e.g. > ./tst ThisIsALongString It outputs the 'ThisIsALon' string, just as expected. Then try to pass it any string containing multibyte characters and see it truncated to less than 10 characters (e.g. to 5 characters for cyrillic strings, with LANG=ru_RU.UTF-8) ncurses does what it is designed to. If you want UTF-8 capability, use the "--enable-widec" configure option. Try reading the installation instructions. I know, that's just what I did here locally. However, the prebuilt packages distributed by the Red Hat exhibit this misbehavior. Future releases for UTF-8 environments will be linked against ncursesw. As the current ncurses provides this, this is a bug with packages that link aginst ncurses (they should be linking against ncursesw if they're working in UTF-8), so I'm closing this.
https://partner-bugzilla.redhat.com/show_bug.cgi?id=86311
CC-MAIN-2019-43
refinedweb
250
61.83
Hello world in React JS day#1 Hello, guys welcome back today. In the previous article, we see the introduction to react js. In this, we going to write hello world in react js and we going to see how to use create react app tool. Making the first Hello World App in React JS Prerequisites To run the react app you need to have node and npm package manager in your system. I assume that you already installed the node and npm in your system. what is Create React App? Create react app is a tool that makes our life easier by setting up the react app because react needs a lot of setups to run. That setup is taken care of by the create react app so we developer don’t need to worry about setting up the react app. First, install create react app by executing the following command npm install -g create-react-app (-g) -> For installing the package in global space so we can access it from anywhere. After installing the create react app. let’s create our react project by executing the following command create-react-app rpsbattle (rpsbattle) -> Is our project name you can name it any name. It will install all the dependencies such as React, React-dom …. and other dependencies that need to run our react app. After successfully installed. Our project created at the same directory where we execute the command so move to our project directory by executing the following command cd rpsbattle Now let’s see the directory structure created by the create react app and see what is the function of each file in our project directory. Directory Structure Of React App rpsbattle ├── README.md ├── .gitignore ├── package.json ├── node_modules ├── serviceWorker.js └── setupTests.js README.md and .gitignore These files are related to git. Package.json It is a standard file found in every project. It contains information like the name of the project, versions, dependencies, etc. Whenever we install a third-party library, it automatically gets registered into this file. { "name": "rpsbattle", "version": "0.1.0", "private": true, "dependencies": { "@testing-library/jest-dom": "^5.11.9", "@testing-library/react": "^11.2.3", "@testing-library/user-event": "^12.6.2", "react": "^17.0.1", "react-dom": "^17.0.1", "react-scripts": "4.0.1", "web-vitals": "^0.2.4" }, ........... .... If you see under the dependencies. It lists all the dependencies need to run our react app. node_modules The node_modules folder has all the dependencies of our project. When we install the package using npm. It all stored in this folder. Public folder 1.favicon.ico,logo192.png and logo512.png These are just an asset for our project. index.html This is the main file where react js inject ours react component as plain Html. After running the program you can see the code in the browser by inspecting it. manifest.json The web app manifest is a JSON file that tells the browser about your Progressive Web App and how it should behave when installed on the user’s desktop or mobile device. robots.txt This is a text file webmasters create to instruct web robots. How to crawl pages on our website. src folder This folder contains the actual source code for our app. We can create our own subdirectory inside this directory. App.css This file contains the style for the app.js. App.js App.js is a sample React component called “App”. This the component that wraps our other components. App.test.js This is the test file that basically allows us to create the unit tests. index.css This stores the base styling for our application. index.js This stores our main Render call from ReactDOM.It imports the App.js component that we start with and tells React where to render it. import React from 'react'; import ReactDOM from 'react-dom'; import App from './App'; ReactDOM.render( <App />, document.getElementById('root') ); If you see the above code ReactDOM render the app component in the root element that present in the public/index.html. .................... ............ <div id="root"></div> ......... ............. logo.svg This is a Scalable Vector Graphics file that contains the logo of Reactjs. With this file, we are able to see the ReactJS logo on the browser. ServiceWorker.js As its name applies, it is important for registering a service that is generated automatically and creates Progressive Web Apps which are necessary for mobile React Native apps. setupTests.js This file used to set up the Jtest. How To Run Our React App? Execute the following commands to run our app npm run After running this command our create react app start the development server at localhost:3000 and convert our react js app to plain javascript by using babel and bundled into a single javascript file using webpack. These are happening behind the scenes. I am not covering about babel and webpack. If you like to know more check their official website. You will get the output in chrome similar to the below image. Now open the project directory in any editor you wish and open the app.js file and edit the content like below and save the changes. import logo from './logo.svg'; import './App.css'; function App() { return ( <div className="App"> <header className="App-header"> <img src={logo} <p> Hello World! </p> </header> </div> ); } export default App; Once you saved the development server automatically detected and reload the browser and you will get the output similar to the image below. Congrats! You write your first react app. What is JSX? If you noticed in the App.js file it looks like Html but actually, it is a JSX(JavaScript XML). ..... ... <div className="App"> <header className="App-header"> <img src={logo} <p> Hello World! </p> </header> </div> ......... .... JSX is a synthetic sugar used to write Html in react. It converted to react element once we run the program. The thing you need to know about JSX 1.The HTML code must be wrapped in one top-level Html element. If you write three paragraphs, you must put them inside a parent element, like a div element. const myelement = ( <div> <p>Reacct</p> <p>Angular</p> <p>Vue</p> </div> ); 2.To insert a large block of Html wrap it with parentheses like this const myelement = ( <ol> <li>Reacct</li> <li>Angular</li> <li>Vue</li> </ol> ); 3.To use an expression in JSX wrap it with a curly brace like this const myelement = <h1>CodeSpeedy is {95+5}% is awsome </h1>; 4.Every JSX element must be closed even though it is an empty element. const myelement=<input type="text"/> Conclusion Ok guys that’s enough for today. We learn about create react app, the directory structure of react, and JSX in this article. In the next article, we learn components in react. All the best guys for upcoming days keep learning.
https://www.codespeedy.com/hello-world-in-react-js-day1/
CC-MAIN-2021-17
refinedweb
1,153
59.6
so apparently me two days ago thought django was not going to be too difficult to learn… hashtag-regrets the official django documentation really is the best way to start: otherwise you have no clue what is going on… so apparently i tried to do this first - while copying pasting works, did not have a clue what was going on: let me try to make some sense of what is going on (in the second link) # create and go into the project directory mkdir proj_django_api cd proj_django_api # create a virtual environment to isolate our package dependencies locally pipenv --three pipenv install django pipenv install djangorestframework pipenv shell # enter the virtual env # set up a new project called dq django-admin startproject dq cd dq # create an app inside the project called timeseries python manage.py startapp timeseries # sync database for the first time python manage.py migrate # default uses sqlite # run the server to see what you have done so far! python manage.py runserver # you will see a rocket on the webpage now we create a simple view: in timeseries views.py, input the below: from django.http import HttpResponse def index(request): return HttpResponse("Hello, world. You're at the index page.") now we map it to a url. create a urls.py file in the timeseries directory and input the below. from django.urls import path from . import views urlpatterns = [ path('', views.index, name='index'), ] this is within the timeseries app. we need to tell the project (dq) to point to it. we will add the below in the dq directory’s urls.py file: from django.contrib import admin # add an import from django.urls import include, path urlpatterns = [ path('timeseries/', include('timeseries.urls')), #new path('admin/', admin.site.urls), # existing ] you can try python manage.py runserver again to see what has happened! remember to add /timeseries/ at the end of the URL. in dq/settings.py, add the below to INSTALLED_APPS: # this tells the project what apps are present! # TimeseriesConfig is present in the timeseries app.py file INSTALLED_APPS = ( ... 'rest_framework', ) in timeseries views.py, change to the below. you need to add the decorator, AND for me i was used a POST request (send a json over, calculate using my outlier algorithm saved in another pyscript, then return another json) from rest_framework import status from rest_framework.decorators import api_view from rest_framework.response import Response from .outlier import get_anomalies @api_view(['GET', 'POST']) def get_anoms(request): if request.method == 'POST': return Response({get_anomalies(request.body)}) else: return Response(f"please use POST") to test your server (make sure it is running), just create a random script somewhere and you can use requests to do a POST! import requests, json import pandas as pd with open('ec2_cpu_utilization_24ae8d.json') as f: json_data = json.load(f) r = requests.post("", json=json_data) #print(r.text) print(r.status_code) df = pd.read_json(json.loads(r.text)[0]) df.to_csv('received_data.csv') YOU ARE DONE CONGRATULATIONS!!!!!
https://www.yinglinglow.com/blog/2019/04/07/django
CC-MAIN-2021-49
refinedweb
494
60.11
This action might not be possible to undo. Are you sure you want to continue? Master of Science Thesis Sheikh Mahbub Habib Syed Zubair Chalmers University of Technology Division of Networks and Systems Department of Computer Science and Engineering Göteborg, Sweden, July 2009 electronically and make it accessible on the Internet. Security Evaluation of the Windows Mobile OS SHEIKH M. HABIB, SYED ZUBAIR © SHEIKH M. HABIB, SYED ZUBAIR, July 2009. Examiner: Dr. TOMAS OLOVSSON Chalmers University of Technology Department of Computer Science and Engineering SE-412 96 Göteborg Sweden Telephone + 46 (0)31-772 1000 Department of Computer Science and Engineering Göteborg, Sweden July 2009 TO MY WIFE AND PARENTS - SHEIKH MAHBUB HABUB TO MY PARENTS AND FRIENDS - SYED ZUBAIR . This thesis could not have been completed without their help. Gustavsson. he has been a great source of effective and feasible ideas. We are very grateful to Martin G. We are thankful to the Microsoft Corporation for publishing the documentation of Windows Mobile/CE operating system in MSDN. Finally. Sweden) for supporting our Master’s Project.ACKNOWLEDGEMENT We show significant and indescribable gratefulness to our supervisor. We also want to thank our university for giving all kinds of support throughout our Masters studies. From the documentation. Associate Professor Tomas Olovsson for his helpful contribution in giving encouragement.H. profound knowledge and all-time feedback to us. Product Manager/Senior Advisor. we got a thorough idea about the architecture of mobile/embedded operating systems. During the thesis work. National Security & Public Safety of Ericsson AB (Mölndal. we want to give heartiest thanks to our family and friends for inspiring us through out the thesis work. suggestions and guiding in the right direction throughout the research work. 1 operating system and its security features.Security Evaluation of the Windows Mobile OS ABSTRACT Smartphones are rapidly replacing the traditional cellular phones because of its PClike features which allow them to be used as web browsers. the need for securing the user data stored on these devices and the services that they provide becomes more and more vital. At last. . Our report analyzes the Windows Mobile 6.1 operating system has been performed to show the stability and robustness of the network stack in a hostile environment. We have analyzed its security model and looked into some of the related research work regarding the operating system’s security. The aim of this report is to create awareness in this area and to encourage further research into mobile operating system security. security issues are discussed in detail from an application’s perspective in order to show what type of (security) problems are faced by an application and what improvements are needed to counter those threats. As their usage grows. In a whole. These devices run complete operating systems that allow installation of third party applications. Mobile Operating Systems naturally become the target for security scrutiny as they are responsible for managing all the information and services. Mobile Operating Systems. We have looked into the OS architecture. penetration testing against the Windows Mobile 6. process management. file system. Keywords: Smartphones. memory management. Penetration testing. Security. editors and media players. device drivers and communication service and network stack architecture. Windows Mobile. this report identifies the strengths and weaknesses of the Windows Mobile operating system. Moreover. ..5 Organization of thesis…………………………………………..... 3. 2.....1.. 4 Chapter 2 Mobile Operating Systems………………………………….1 Core OS Architecture………………………………………….5. 1 1.2.....4 Device Drivers……………………………………………….2.2 Certificates………………………………………………………….1 Background………………………………………………………….TABLE OF CONTENTS Page Chapter 1 Introduction………………………………………………….. 2.4 Protection Mechanisms……………………………………………. Chapter 4 Security Issues………………………………………………………. 3.3 Security Policies……………………………………………………. 2.. 2. 2...3 Wi-Fi Encryption……………………………………………...4 Security Roles……………………………………………………… 3.2.2.3 Objectives…………………………………………………………..2 Windows Mobile……………………………………………… 2. 4...2.1 Windows Embedded OS Platforms……………………………….1 Core OS and Memory Architecture…………………………… 2. 2....3. 4.2 Cross Service Attacks……………………………………………… 4.1 Test Methodology………………………………………………… 5..3.2 File System Architecture……………………………………… 2.......6... 3...5.. 2.1 Access Permissions………………………………………………… 3..6 Security Services…………………………………………………… 3.......5..3 File System Architecture……………………………………… 2. 2.2.7 Summary…………………………………………………………. 3... 3....5 Communication Service and Network Stack Architecture…… 2.5 Users’ Influence…………………………………………………….1.1..4 Scope………………………………………………………..4 Additional Security Features…………………………………...1 Cryptography…………………………………………………..1 Network Scanning…………………………………………… 5 5 5 6 6 7 8 12 15 16 16 19 25 28 28 29 30 34 34 35 35 37 38 39 39 39 39 39 40 42 42 44 44 46 47 47 48 Page .3. 3 1.3 Third Party Applications…………………………………………..2 Problem Statement……………………………………………....2 Memory Architecture…………………………………………. Chapter 5 Practical Analysis of Network Stack……………………………… 5.2 Networking (Remote)………………………………….....6.... 3. 2 1.3 Implications………………………………………………………....1 Architecture………………………………….1 Windows CE…………………………………………………. 2 1..2.... 4....2 Storage Card Encryption……………………………………… 3. 3.. 2. 2. 3 1.1 Malware…………………………………………………………….3 Communication Service and Network Stack Architecture…… Chapter 3 Windows Mobile Security Model………………………………….6.3 Networking (Wireless)………………………………….6.2 Windows Mobile 6..2.1 Networking (Core)……………………………………. 2. 5.2 Vulnerability Scanning……………………………………….3.3.2 Vulnerability Scanning……………………………………….. 5.3 Summary………………………………………………………….5..3 Penetration Testing…………………………………………..1 Overview………………………………………………………….. Appendix B: IGMPv3 Exploit………………………………………….4 Other Historical Attacks…………………………….2.. Chapter 6 Case Study: Mobile VoIP Application……………………………..3 Mobile VoIP Security…………………………………………….4 Suggestions……………………………………………………….... 6...3 Penetration Testing…………………………………………. 5. 6..2 Mobile VoIP in Windows Mobile 6.. 5. 6.. 5.3.... 5.2......1. 5.2 Internal Attacks……………………………………………… 6..1.1 TCP Syn Flooding……………………………………..3.2.1…………………………….1 External Attacks…………………………………………….2 IP Options (IGMPv3 Exploit)………………. 5.2. 6. 6..3. 48 48 49 50 51 52 52 52 53 54 55 57 57 57 58 58 59 60 62 64 70 71 ....2.....2 Penetration Test Results and Discussion………………………….2... REFERENCES………………………………………………………………… APPENDICES Appendix A: List of Penetration Tools……………………………….... Chapter 7 Conclusions………………………………………………………….3 ARP Spoofing……………………………………….2.3. 5..1 Network Scanning…………………………………………… 5. We have looked into the threats to applications and vulnerabilities introduced by them. yet still able to link to the enterprise. Mobile workers are adding smartphones to their daily life because these devices offer extra functionality and services. 1 . Network Stack: Since strong connectivity is one of the most important features of smartphones. Applications: The ability of installing new applications is one of the hot features of smartphones. install third party applications for reading documents. how the processes are organized and what kind of file system is used. All these deployments indicate that these devices are given a lot of responsibility. how is the memory managed. Linking the enterprise through smartphones not only exposes the device itself to threat but also the company where the mobile worker works. smartphones are used. Ohio: “Smartphone are becoming the new laptop” This extends our definition of Smartphone. it is important to examine if smartphones have a robust and stable network stack in order to counter different attacks. Smartphones are “smart” because apart from handling voice calls. for example where voice and data communication are the only means to communicate between rescue workers. we have investigated what level of security a smartphone OS offers. what kind of kernel architecture is used. reading emails etc. In this report. we will provide a detailed discussion and highlight related work to answer these questions. Bluetooth and Infra-red are other network services which are common in almost every smartphone nowadays. devices and memory. Even in crisis management scenarios. In order to be able to examine the security of smartphones we target this question at three important levels: • Operating System: The operating system is the obvious candidate for probing because it runs the smartphone and manages all its resources like files. particularly the ones that have targetted desktop operating systems. Smartphones are being used in environments where Confidentiality and Integrity is of utmost importance. Quoting Dan Hoffman. these devices let the customers browse the web using GSM/3G and Wi-Fi connections. CTO of SMobile Systems in Columbus. Since the application of smartphone is rapidly growing it is natural that people start asking “Are Smartphones secure?”. • • In this thesis paper. The smartphone is a new addition to the family of mobile devices that combines the functionality of mobile phones and Personal Digital Assistants (PDAs). They are small enough to be carried in pockets.Chapter 1 Introduction Security is a major concern in the area of information and communication technology while dealing with computers. the Smartphone becomes a member of an IP-based network like a desktop or notebook and it may be accessible for users all over the Internet. Surprisingly. deployed/installed in this platform. Users of mobile devices have a craving for new services. Security of mobile operating systems was initially investigated by Jukka Ahonen [4] and Arto Kettula [5] in the year 2001 and 2000 respectively. file system. send and receive fax and games). These studies are based on older platforms but the studies show that most of the mobile operating systems lack of important features like permission based file access control. world clock. calendar. Perelson and R. they want rich Internet access despite channel constraints. By using the WLAN feature. processing power and services offered in these devices. personal information management features (e. Wireless and Telecom analyst Jeff Kagan (2008) expressed his concern over the above issues: “People are putting everything on them (phones). This is a major reason why security evaluations of mobile operating systems in terms of applications. we have explored Windows Mobile network stack vulnerabilities. PDAs etc. Moreover. calculator. It is interesting to see how secure an application. a scenario which was shown in enterprise networks using Blackberry devices [2]. can be. in the next 15 years the smartphone has been changed significantly in terms of operating system architecture. The architecture includes the kernel and memory architecture. open mobile OS platforms running in these devices let the users install third party applications where the Smartphone is used as man-in-the-middle bridging two networks together. attackers may be able to spawn link-level attacks via wireless access point.” Even though we can see that smartphones are equipped with lots of services similar to desktop/notebook computers. That’s why most of the smartphones are using IEEE 802. The people who mount security attacks are bound to focus on smartphones at some point. 1. In addition. S.g. address book. multi-user support and even memory protection. device driver management and communication services and the network stack. services and network stack are needed. e-mail. for example when using WEP encryption [1]. we would like to focus on the architecture of Windows Mobile operating system that runs on devices like smartphones. Botha [6] have done a thorough investigation regarding access control of mobile devices in terms of different OS platforms.1 Background The first smartphone Simon [3] which was a joint venture of IBM and Bellsouth in 1993 had some significant features such as touch screen technology. security is still neglected as most of the attention is given to the development and quick release of new functionality and services. 2 . Also.1. But we have to think that they are coming.11 WiFi interfaces to give the user the taste of high speed internet.2 Problem Statement For the issue of smartphone security. After that. we haven’t had major security events resulting from these wireless devices yet. because of vulnerabilities in the link layer and above. notepad.A. Also.0 and Symbian 9. In this thesis. A wide range of literature was studied in the area of smartphone security which includes application vulnerabilities through third party applications. Mulliner [7] is one of the most active researchers in the area of Smartphone security irrespective of any platforms.1 (Nokia. In those papers. a practical analysis of Windows Mobile 5. 4. To study and analyze security issues related to mobile operating systems published by researchers in this area. Most of the papers published earlier are concerned with Windows CE 4.2 or 5. our focus is on Windows Mobile 6. Besides this paper. It is known to be the first mobile phone-related code-injection attack. To enumerate security problems from an application’s perspective. In our research. To analyze the robustness and stability of the network stack in the Windows Mobile 6.Among the recent researchers in the field. 1. and I/O device vulnerabilities through interception of calls while using VOIP applications. third party security products will not be 3 .0 architecture. SonyEricsson) operating systems has been done in [9] and [10] respectively. the network stack of Windows Mobile 6. Also.1 operating system is tested in order to check the robustness and stability of smartphones while used in open or hostile environments. multimedia services.1 which is based on modified Windows CE 5. To explore the security model of Windows Mobile platforms.1. He showed how cross service attack exploits vulnerability through one interface (WLAN) and then ends up controlling another interface (GSM). The idea is to describe the OS architecture and its security issues along with some practical analysis and examples.3 Objectives The major objectives of this thesis are as follows. 1. focus was given to analyze the network stack’s robustness and stability of smartphones while running in real-world scenario. Prevention mechanisms of this type of attack have been proposed in this thesis as well. malware. he has given a detail investigation of security issues in different OS platforms that are running in smartphones in recent times.1 OS. In his Masters thesis [8]. The second problem was regarding software components or applications running on smartphones. The client part of the MMS user agent was compromised through a buffer overflow vulnerability to generate an MMS-based worm. two security problems were identified related to the increased capabilities of smartphones.0 based mobile OS or earlier versions of the Symbian operating system.4 Scope of the problem In our thesis we have analyzed the architecture of the Windows Mobile OS which is one of the commonly used operating systems by smartphone manufacturers today. ⇒ ⇒ ⇒ ⇒ ⇒ To get a thorough idea about the architecture of Windows Mobile operating systems including the application execution on these platforms. The first problem was related to the integration of multiple wireless interfaces into a single device. Collin R. security policies. Some general security implications related to OS architecture are mentioned here as well. Chapter 3 presents the security model used by Windows Mobile operating systems. This chapter also suggests some protective measures to be taken while running this type of application in the mobile operating systems. 4 . certificates. Chapter 4 illustrates the security issues and some protections mechanisms pointed out by different researchers Chapter 5 depicts a practical analysis of Windows Mobile 6. The chapter also includes an overview of Windows CE and Windows Mobile OS.1 operating systems. Our focus will be on the built in security mechanisms like access control. security roles. Chapter 6 discusses the security issues related to a Mobile VoIP application. The chapter includes access permissions. Chapter 7 states the conclusive words and plans for the future research. 1. security services and users’ influence on security. crypto libraries.1 operating system’s network stack.taken into consideration.5 Organization of thesis Chapter 2 describes the architecture of Windows Mobile 6. security policies. memory protection. certificates and code signing in the Windows Mobile OS. as the main objectives of the thesis is to evaluate the security issues concerned with the smartphone OS. The main reason for the difference is the energy and space constraint associated with special hardware and special requirements. Windows NT Embedded [14]. video games. Mobile Media Players.. Microsoft has changed the previous name of Windows CE 5. laptops and notebooks. In this thesis. the small display size and also the sophisticated touch-screen technology. For example. Also. the lack of full keyboard. it is possible to develop robots. industrial controllers. Windows Embedded CE and Windows Mobile platforms will be discussed. Other differences are related to the system applications rather than the OS kernel. Mobile Gaming Devices. 2. a mobile OS may require a hardware wake-up function to develop features like appointment reminders which can occur during the “sleep” mode to save energy in the device [8]. An overview of embedded operating system platforms (Windows CE and Windows Mobile) will be presented to help the reader distinguish between them with respect to their functionality.1 Windows CE …………………………………………………………………………….1. medical equipments.NET Micro Framework [16]. the OS architecture of Windows Mobile 6. Windows Embedded Standard [13].0 to Windows Embedded 5 . In the next two sections. cameras. It is an open. pointof-sale terminals. Windows Embedded for Point of Service [15] and . gas station pumps. Although the smartphones are capable of using most of the functionality that is present in desktops. With Windows CE. etc. From the latest release of Windows CE. Windows CE (Compact Edition) is an embedded 32-bit operating system that is used to build a broad range of intelligent devices. Mobile Phones and Industrial Mobile Devices. communication hubs. operating systems used in this type of devices are different from those used in personal computers. The differences in the system software arose from distinct UI (user interface) requirements of such devices. voting machines. Windows Embedded NavReady [12]. Microsoft has a lot of products under the Windows Embedded umbrella which are Windows Embedded CE [11].1 is discussed in the next few sections. Smartphones. mobile phones. 2. like introducing the thumbwheel.1 Windows Embedded OS Platforms Windows Embedded is a collection of operating systems and platform builder tools to build embedded devices with the rich set of componentized technologies. scalable and real-time operating system. Mobile OSes are used in different types of mobile devices like PDAs (Personal Digital Assistants). digital music players. Tablet PCs. we will focus on the operating system that runs in smartphones.Chapter 2 Mobile Operating Systems An operating system is a piece of software that controls the hardware and with which applications interact in order to carry out functions. interactive television. adds a Windows-like shell.1. APIs and platform extensions to generate the SDK (software development kit). In the next few sections. Professional). The major parts of the mobile OS architecture are the kernel. In this way. Windows Automotive. Classic. Then.2 Windows Mobile 6. the platform builder generates the OS and deploys it in the hardware which will have a bootloader itself. This is the point where Windows Mobile differs from Windows CE. is used in various third party devices such as Smartphones and Personal Digital Assistants (PDAs). radio interface layers for connected devices. Classic and Professional. a bunch of applications (e. The other versions are Windows CE . LAN or WAN or PAN) needed in the target device.1 was a minor upgrade of the existing Windows Mobile 6. Mobile handheld. Windows Mobile 6.CE 6. MSNTV. the responsible team chooses the components needed for Windows Mobile. Windows Mobile Standard (for Pocket PCs without touch screens) and Windows Mobile Professional (for Pocket PCs with phone functionality). Currently. build. memory management. the major parts of the OS architecture will be discussed that are relevant for the security evaluation of mobile operating systems. etc.0 kernel not the newer Windows Embedded CE 6. Based on processor architecture. It is done by using selected components like . The WinCE operating system comes with a platform builder which is used to configure. etc). an SDK can be developed using the CE platform builder to further extend the target OS platform. Even. 2.NET.0 and Windows CE 2. etc. There are three editions of Windows Mobile 5. PocketPC Phone and Smartphone.g.0 [23].1.0: PocketPC. 6 . based on Windows CE 5. Gateway. Office Mobile. file systems. deploy and debug a new OS based on selected board support packages (BSPs)[17] and the device type (e. 2. Windows CE 3.2 Windows Mobile The Windows CE platform builder kits are distributed among the different embedded developer audience and to internal teams of Microsoft and includes Windows Mobile (Standard.0 platform which is based on a modified Windows CE 5. SHx [20] and x86 [21].g. It was released on Feb 12. WinCE comes as a full OS to the manufacturers but Windows Mobile comes like an SDK. MIPS [19].NET compact framework or Windows Media and the network services (e. Industrial controller. it is possible to build an industrial robot based on Windows CE.g. four types of BSPs or processor architectures are supported which are ARM [18].1 OS Architecture Microsoft Windows Mobile 6 [22]. The SDK is built for the Original Equipment Manufacturers (OEMs) who integrate it in their mobile devices. may be that it will not have any windows-like interface or any display interface but can have its own components to function properly.0 [24] kernel. device drivers and communication services and network stack. 2007 in three different versions: Windows Mobile 6 Classic (for Pocket PCs without cellular radios). based on the hardware specific BSPs and ship the Windows Mobile based devices.0. Windows Mobile 6 also has three editions: Windows Mobile Standard.). IE Mobile. 1: General architecture of Core OS emphasizing Kernel. processes and threads are created. create separate heap 7 . Here. Kernel memory functions also allocate and deallocate virtual memory. interrupt handler and other executable handlers for file management.1 Core OS architecture Windows Mobile 6. Suppose an application makes a function call to Win32 API via the DLL interface to use a particular function. If the application has memory requirements less than 64 KB. time and scheduling are all included in the WinCE kernel architecture.2) kernel [25]. applications request to use virtual memory using kernel memory functions.2. Figure 2. In fig.1 is a 32-bit operating system based on a modified Windows CE 5.exe is the module which represents Windows CE kernel in all CE based devices. the application can use the local heap or create separate heaps. I/O device interaction and graphical user interface management. real-time application capabilities are ensured in Windows CE which is very important for time-critical systems. Otherwise. Interrupt handling and some file management functions are also provided by the kernel.1. Using the kernel process and thread functions.2. The kernel provides basic OS functionality like process.exe. Thread priority levels. the general architecture of the Core OS is shown together with kernel functionality related to application execution in Windows CE based devices. thread and memory management. manage memory on the local heap. They are also used to schedule and suspend a thread. 2. Nk.0 (version 5. terminated and synchronized. the core OS can be divided into three parts: the user interface.dll and process interface which consists of kernel handler Nk. priority inversion with inheritance. DLL interface which has Coredll. In this way. If the running application requires system calls. Processes can also use memory-mapped objects to share data.exe. Windowing and Event Subsystem) Services.0 [26] will now be discussed in more detail. 2. Both user space and kernel space is divided into several regions which is shown in Fig. Memory mapping and slot arrangements were pretty constant from Windows Mobile 2003 to Windows Mobile 6.2 (a) and (b). these devices use ROM (Read-Only Memory) and RAM (Random Access Memory). The Kernel process acts as the 32nd process in virtual memory.exe) via a wrapper function which is included in the DLL interface (Coredll. However.exe or user applications (e. Instead of hard-drive. Every system call causes an exception that is caught by the kernel. The kernel then handles this exception and determines which .exe can fulfill the request or determines the correct process to send the function call request to.0 platform which is a 32-bit operating system and it shares a lot of memory management concepts with Windows XP. a process can use the entire 2GB space of virtual memory. The user space is divided into 64 equal slots each of 32 MB.exe. myapp.2. In Windows XP.2 Memory architecture Typically the devices which run Windows CE based OS. When an application process makes a system call.1. Upper 2GB space from 8000 0000 to FFFF FFFF is Kernel space and can only be accessed by privileged processes running in kernel mode (KMode). Flash memory is another integral part of Windows Mobile data storage. As mentioned before.dll).g. This 32 MB space is shared by all running applications.exe). Slot 0 (0000 0000 to 0200 0000) is reserved for the currently executing process. The process that owns the function executes it using the same stack and register values that the original thread load in the calling process. Lower 2GB space from 0000 0000 to 7FFF FFFF (the User space) is used by User processes. don’t have a hard drive.exe (Graphical. The memory architecture of Windows CE 5.1 is based on Windows CE 5. Windows Mobile 6. it passes the call to Nk. Slot 2-32 are used to store 31 other processes running in the system such as Filesys. User code can use the memory from the static block of data to load an application.memory and allocate memory from the stack. GWES. memory mapping or slot arrangements has changed a little bit which will be discussed later in detail with a separate figure. The kernel then calls the proper server process to handle the corresponding system call. This 4 GB virtual address space is divided into two parts each of 2 GB space according to Core OS functions. Windows CE it is not similar to its desktop counterpart. In Windows Mobile 6. 2. 8 . it is passed to kernel (Nk. Slot 1 (0200 0000 to 0400 0000) is used to load all shared and XIP (execute in place) DLLs from ROM.exe (kernel) by executing a software interrupt or kernel trap. Device. A 32-bit OS can address a total of 4GB virtual memory.0. Some read/write sections of other XIP DLLs.dll.Figure 2. Heap. Stack.2: a) User space and b) Kernel space in Windows CE 5. 2. the 63rd slot is used for resource-only DLLs. If an application needs to allocate extra memory.0 When a process becomes active. An expanded look of the lower 64 MB (Slot0-1) are shown in Fig. The typical size of the stack is 64KB with 2KB reserved for overflow error control.3. All non-XIP DLLs. For every new thread a new stack is created. Variables in the static data block are referenced by an application throughout the 9 . Slot 33-62(4200 0000 to 7E00 0000) is reserved for object stores and memory-mapped files. it is cloned to slot 0. the OS maps the following DLLs and memory components in the lower 2 slots (0000 0000 to 0400 0000). the DLLs that have no code but only resource data such as strings and fonts. is loaded by WinCE with every application is called static data block. the operating system reserves a block of memory in the heap on a per-byte or a per-4-byte basis. buffers and other static values. Some execute-in-place (XIP) DLLs. These will have the same set of properties as local heap but are managed through a set of heap functions. A block of memory that contains string. An application starts with a local or default heap. A single application can create as many heaps as needed. Data section for each process in the slot assigned to the process. • • • • • • • Shared DLLs like Coredll. At the top of the large memory area. Exceeding the limit of stack by an application causes system access violation and it is shutdown immediately. When a process initializes. no confidential information should be kept in the memory-map file. As seen in fig. Douglas Boling of Boling Consulting has discussed DLL loading problem and significant changes made in Windows Mobile 6. sometimes they fill up slot 1 and approach slot 0.3: Expanded look of Slot 0 & Slot 1 of User space. Usually.execution time. So. it is done by means of memory-mapped files. Read/Write blocks and Read blocks are two types of static data blocks in Windows CE. there has always been problems with memory regarding a number of DLLs within the system. As there are so many DLLs in the system. An application must check the data read from such regions to ensure integrity before using it in any kind of processing. Figure 2. This results in reducing the already limited 10 . Object stores will be discussed in the File System section. There can be two types of memory mapped files. Memory of a writeable file can be used by any process to write in that particular file. slots 33-62 are reserved for memory-mapped files and object stores. Inter-process communication is achieved by means of these memory mapped files. DLLs are loaded from the top of slot 1 to bottom. Readable files in the memory can be read by all processes in the system. If two or more processes need to share a resource or a file.2 (a). One can be readable and the other can be writeable. 2. In Windows Mobile. Memory-mapped files need to be discussed here briefly. A portion of space between slots 33-62 in virtual memory is associated with the contents of the file.1 memory slot’s arrangement in his personal blog [27]. The operating system uses slot 59.address space for the currently active processes. The slot arrangement in user space has remained constant from Windows Mobile 2003 to Windows Mobile 6. changes were made where the DLLs are loaded. The new memory mapping looks like: Figure 2.0. This can also be a problem in the device manager process space where lots of device drivers. which is on top of the large memory area (memory mapped files). Moreover. However. to store the thread stacks for the device manager. XIP DLLs are still loaded in Slot 1.1. 11 . each with their own interrupt service thread.4: New slot arrangements of User Space in Windows Mobile 6.1. This introduction of additional Large DLL slots helps reducing the DLL strain on the slot architecture. Slot 61 is used in case slot 60 is full. with the release of Windows Mobile 6. The stacks for the device manager are no more reserved in the processes’ slot. tends to fill up most of the virtual memory. a new slot arrangement is introduced to reduce the DLL pressure and to solve the device manager process space. The non-XIP DLLs larger than 64KB are loaded in slot 60. In this way RAM is used much like in Desktop PCs while ROM is used as Hard disk or secondary memory.4.In fig. it seems that space for object stores and memory mapped files has been significantly shrunk but it is not the real scenario. a storage card will be shown as “\Storage Card”. 12 . the only difference is that new drives are mounted as folders under the root i. In Windows mobile. This setting allows you to mount external file systems as root. Finally. It also provides access to the RAM resident object store. we will introduce the internal file systems used by Windows CE then we will look into the main components of a file system in windows CE.3 File System Architecture The file system organizes the data stored on a storage device into files.exe manages the file systems and file system filters. The Object store is mounted as the root of the file system. Windows CE File System and its components Windows CE allows custom file systems. A process called Filesys. The Windows CE Internal File System [28] controls access to the ROM. which is close to 1 GB is reduced only by 128 MB.exe. Applications can store there data in the object store which resides in battery backed RAM. All the files (except the external file systems visible as folders under the root) visible under the root are stored in the object store. we will see how a file system is loaded. The Large memory area (object stores and memory-map files) size. 2. A Windows Mobile device can implement more than one file system at a same time which are unified under a single root “\” by Filesys. This setting was used by Windows Mobile before the introduction of “persistent memory”. ROM data is accessible through the same path and other file systems (external) are mounted as folders under the root. First. 2. Every file system tries to do this organization in such a way that makes finding and accessing information easy. • ROM only file system does not provide access to the object store. RAM and ROM file system provides access to ROM and Object store. Since Windows CE supports a variety of Block (storage) devices that can have partitions and each partition in turn can have a different file system.e.2. Since the file system manages all the data and provides access to it. Due to increasing awareness about security. Windows CE offers two choices for internal file systems. it will be a good idea to look into the file systems used by Windows Mobile. file paths are specified in the same way as in its Desktop counterparts. WM5 onwards use this setting. The applications can store their data in ROM or external storage like memory cards. most file systems come equipped with mechanisms like file permissions and fault tolerance in order to keep sensitive data safe and secure. ROM data is accessible through “\Windows”. RAM file system. “mspart. File System Drivers organise data into the traditional files and folders on a storage device. including FAT. They read or write to the block devices in fixed sized blocks. RAM system registry and property databases.dll” is an example of partition driver provided by Microsoft. UDFS/CDFS [29]. The Storage Manager manages all the external storage. TFAT.c” is located in the root of the RAM system and the unified root as well.Figure 2. Object store consists of three optional components. a file “\file. It deals with four main things: Storage Drivers or Block Drivers are drivers for Block devices. The ROM file system is hooked up to “\windows” folder.5: Windows CE file system layout Filesys. The RAM file system is hooked up to the unified root i. 13 . Windows CE Includes support for different file systems.e. Block devices are used as ROM (NAND/NOR) or external memory card in mobile devices. Windows CE allows multiple partitions on a device with each formatted for a different or same file system. Partition Drivers manage partitions on a storage device.exe has three components: • Object Store • Storage Manager • ROM Files system The Object Store provides non volatile storage for programs and user in RAM and for this to work the RAM has to be powered on all the time even when the device is switched off. The files in ROM system are accessible as read-only. anti-virus) of software in order to pre process the data before it is handed over to the file system. One of those applications is the Device Manager (Device. This means that this type of registry is suitable for devices with battery-backed RAM. Registry must be backed up if a device is to be switched off and it does not have batterybacked RAM.hv” in “\Documents and Settings” while the User hive contains data related to a user and is named “user.hv”. the battery can run out of power thus resulting in the loss of Object store therefore backing up will be a good idea in any case.• File System Filters stay on top of Installed file systems (file systems for ATA devices.exe resides in the ROM file system. we will look at what purpose the registry serves and how windows CE manages it. The Storage Manager then asks the partition driver about the partitions on a disk and the file system for each partition. File system filters are used by different types (compression. The partition driver takes this information from the Master Boot Record (MBR) and forwards it to the Storage Manager which then loads the required file system drivers for each partition and mounts the file system in the root. There are two types of hives. The Storage Manager registers with the Device Manager’s notification system in order to be notified about the block drivers being loaded or unloaded. This type exempts the system from backing up and restoring the registry on every switch on/off thus speeding up the boot process. Let’s have a look at how all these components come into play. Even if the RAM is kept ‘alive’ while switched off. NK. Another hive called the Boot hive contains settings that are applied during the boot 14 . applications. All this information is stored in the registry under the PROFILE key where each block device has an associated entry. The registry stores data about drivers. These files can be kept in any file system. Various applications are started by Filesys. which in turn initializes the registry from the default registry in the ROM file system. After acquiring this information. When the OS boots. Filesys. flash memory and SRAM cards) and process calls for file systems before they are processed by the file system. the Storage Manager loads the appropriate partition driver. Windows CE supports two types of registry • RAM-Based Registry resides in the object store. preferences and other configuration data.exe. The storage manager then queries the block drivers about information like the partition driver used and default file system for a device. the System hive which contains system data stored in a file usually named “system. Windows CE uses the Registry in the same way as desktop Windows uses it. Since the registry is used extensively in the process stated above and on occasions other than this therefore. user settings. encryption. Windows CE does not provide filtering mechanism for access to ROM and RAM file systems.exe) which loads the needed device drivers by reading from the registry.exe loads filesys. • Hive-Based Registry stores data inside hives (files).exe by reading from the registry. Both name and path of hives is subject to change depending on OEM. while the user region (where user’s programs and data lie) is accessible and writable through normal file read write mechanism. A file in flash memory cannot be overwritten simply. Now the ROM in mobile phones is a bit different from ROM in Desktops. NAND flash is faster to write and slower to read while NOR flash is faster to read and slower to write. Flash memory needs special care because of its slow erase and wearing away property.4 Device drivers Device drivers abstract the functionality of a physical or virtual device and manage the operation of these devices with the applications or operating systems. RAM is used similar to Desktops. Therefore NOR flash is used for XIPing that is. it must be erased and then written. This hive is in the ROM. programs and user data. A file system that can spread writes over the memory in order to use all blocks equally (wear leveling) and deals with the slow erase time of flash by writing a copy of an updated or edited data to a new block and erasing the old data when it has free time. It is faster than ROM and it takes a lot of power to be kept alive. The first process loaded by the kernel is filesys. The memory can be erased and reprogrammed in units of blocks. a file system that is specifically designed for flash memory is needed. Windows Mobile 5 onwards uses Hive-Based registry RAM and ROM Since we are talking about file system and data storing. Blocks have a life span limited to a maximum number of erase cycles after which they start wearing away. The File system is an example of a virtual device.2. timers and universal asynchronous receiver-transmitters (UARTs). This is the hive used in starting the file systems and drivers. ROM is used to store Operating System.process. Flash memory is arranged in blocks which are rather large (often in Kilobytes) in size. Examples of physical devices are network adapters. programs are executed in ROM without loading them into RAM thus saving valuable RAM space. There are two types of flash: the NAND Flash and the NOR Flash. audio/video adapters. Therefore. There are three main processes in Windows CE that load device drivers [30]. Now this is all OEM-dependent but an ideal system would have a chunk of NOR flash as well as a chunk of NAND flash where NOR flash will hold the programs and NAND flash will hold user data as it needs frequent writing which is relatively faster in NAND. In Windows CE. Once the file system is started System hive can be mounted thus the boot hive is no more needed and it is discarded. The OS region of ROM is unwritable except by ROM updates. a brief introduction of the memory that mobile phones use nowadays is important. 2.se which is a file system process. Flash ROM Flash ROM is widely used as ROM and can be either Internal (non-removable flash card) or external memory (removable flash cards). FMD (Flash media driver) is linked with FAL (Flash Abstraction Layer) to make a block driver that is then used by file systems like FAT to manage Flash memory. Mobile phones use flash memory as ROM or Hard Disk. This particular process loads the file system drivers which must correspond to the file 15 . 0 (MS CHAP V2) [37].5 Communication Services and Network Stack Architecture Windows CE provides communication and networking capabilities that enables CEbased devices to connect and communicate with other devices and people over both wireless and wired networks.5... all processes share a single virtual memory address space with the kernel and user space of the OS. Networking (Remote) and Networking (Wireless). the device drivers run in user mode and use specialized functions instead of kernel mode addresses while mapping hardware registers. Even the device drivers are not loaded in kernel mode and drivers does not directly use kernel mode addresses. The communication services and network stack architecture of Windows Mobile 6. Each process is loaded in its own 32 MB slot and slots are protected from each other and also from the kernel. Windows XP. each of the networking (core) elements will be discussed..2. TCP/IP Architecture The TCP/IP suite in Windows Mobile 6.1 has similar characteristics as the desktop versions (e. Windows Server 2003).1 supports four protocols like Windows XP and Windows Server 2003 does.1 Networking (Core) ……………………………………………………………………………………. Protected Extensible Authentication Protocol (PEAP) [36] and Microsoft Challenge-Handshake Authentication Protocol 2.exe (device manager process) and the registry are loaded after the file system. The EAP framework in Windows Mobile 6. it includes all the major services and network stacks needed to support the advanced mobile phones such as smartphones.. which are Challenge-Handshake Authentication Protocol (CHAP) [34]. Extensible Authentication Protocol (EAP) The Extensible Authentication Protocol (EAP) is an Internet Engineering Task Force (IETF) standard [33] that provides a framework for network access clients and authentication servers to host plug-in modules for different types of authentication methods. the graphics. As Windows Mobile is a part of the Windows CE family. Windows Networking API/Redirector and Windows Sockets. services and interfaces for easier communication between 16 . Networking (core) consists of EAP (Extensible Authentication Protocol).. the operating system executes the drivers’ entry points in kernel mode because they implement system calls. Transport Layer Security (TLS) [35]...1 can be divided in to three main parts which are Networking (Core).g. Device. Lastly.. In the next few sections.. Usually. TCP/IP. Windows CE-based device drivers [32] run in one of the system process address spaces. Windows Mobile 6 based smartphones have a TCP/IP stack with the dual stack architecture support and wide range of protocol elements.exe) loads display and keyboard drivers.2. In Windows CE based devices.system driver model. Internet Protocol Version 6 (IPv6). and events subsystem (gwes. windowing. 2. 2. The device manager process is responsible for loading the majority of the device drivers in the system and they all expose the stream driver interface [31]. Much of the Open Systems Interconnection (OSI) model link layer functionality is implemented in the NDIS interface.peers of different types. the Internet Control Message Protocol (ICMP).5: Communication stack for Windows CE. uses the Network Driver Interface Specification (NDIS) [38] interface to communicate with network card drivers.1. Netbios and Windows Networking API/CIFS redirector plays the role as presentation layer and session layer respectively.5 shows how TCP/IP fits into the architecture that Windows Mobile uses for communications Figure 2. TCP/IP and other networking protocols in Windows Mobile 6. Development of network card drivers gets much simpler for the NDIS interface. 17 . Winsock API acts as a service provider interface between application layer and transport layer. Moreover. The core protocols that are available in the TCP/IP suite are the Internet Protocol (IP). the Internet Group Membership Protocol (IGMP) and the Address Resolution Protocol (ARP). Figure 2. Also. Winsock Architecture Windows Sockets (Winsock) specifies a service provider interface between the application programming interface (API) and the underlying protocol stacks.2 interface only.1 to ensure the communication between IPv6 nodes and IPv4 nodes over an IPv4 network. Multicast Listener Discovery (MLD). Windows Networking API (WNetAPI/CIFS redirector) Windows Networking API/Redirector (SMB/CIFS) implementation in Windows Mobile based devices help to establish and terminate network connections.dll). Moreover.2 which provides easier access to multiple transport protocol stacks. Winsock client and server applications are used as endpoints for network applications. Winsock is based on the familiar socket interface from the University of California at Berkley. IPv6 function is accessible through the Winsock 2. WNet API communicates through the CIFS which is also called Server Message Block (SMB) redirector to the remote host. IPv4 function is accessible through the Winsock 1.dll and Tcpip6. the tunneling mechanisms such as 6to4 [39] and Intrasite Automatic Tunnel Addressing Protocol (ISATAP) [40] are available in Windows Mobile 6.1 and 2.TCP/IPv4 and TCP/IPv6 technology is implemented in two specialized functions (Tcpstk. 18 . Also. Figure 2. WNet API in Windows CE based devices is almost similar to WNet for Windows-based desktop operating systems except some minor changes.6: Winsock Architecture of Windows CE. The core protocol stack of IPv6 includes Internet Control Message Protocol version 6 (ICMPv6). Windows CE 4.1 onwards uses the Winsock 2. Applications can use the Universal Naming Convention (UNC) which is a system for naming files on a network so that a file on a computer has the same path when accessed from any other computer. and Neighbor Discovery (ND) and Dynamic Host Configuration Protocol version 6 (DHCPv6). Windows CE based devices do not support drive letter like desktops. it provides functions to access files on servers based on Common Internet File Systems (CIFS). Microsoft Windows Network is the only network provider in Windows CEbased devices.2 interfaces On the other hand. 2 Networking (Remote) …………………………………………………………………………………….2 provides protocol independent APIs that concurrently support IPv4 and IPv6 in applications. Networking (remote) in Windows Mobile based devices mainly consists of Remote API (RAPI) and Mobile Virtual private Networking (VPN). restrict or redirect its capabilities. i) Applications and Services Layer Applications and services created by Independent software vendors (ISVs). As RAPI is used by activesync services. a brief discussion is given regarding activesync here. including installable service providers for additional third-party protocols. For example. DNS support is implemented by NSPM.. Here. b) Layered Service Provider: It is used to modify the transport service provider and therefore the protocol that it implements.. Transport service providers use the NDIS (Network Device Interface Specification) to make easier communication with the network drivers.5. SSL LSP as layered service provider above the base service provider WSPM is called the provider chain [41].6 are given below. If we focus on the architecture. ii) Winsock Layer It is programming interface that provides enhanced capabilities over Winsock 1. iv) NDIS (Network Driver Interface Specification) Layer It is the standard network driver architecture for all Windows operating systems..1. Also. WSPM then sends and receives data using TCP. Moreover. 19 ..2. operators and enterprises belong to this layer. to expand.6 illustrates the architecture in Winsock 2... NDIS implements the link layer functionality to communicate with network drivers in physical layer.. connection manager and voice-over IP (VOIP) are need to be discussed in detail as they are responsible for voice call management through phone or IP network and network connections through different interfaces. the Secure Sockets Layers (SSL) LSP is in the layer above the WSPM provider which will encrypts and decrypts data before it calls into WSPM.2.. iii) Transport Service Providers Layer Two service providers are supported in Winsock 2. For example. A brief overview of the architecture components according to the figure 2. telephony API (TAPI).Figure 2. 2. networking (remote) will be discussed in detail. TCPv4 is fully implemented by the WSPM provider that comes with Windows CE.2: a) Base Service Provider: It implements a complete protocol.. we can see that application and services work on the top of Winsock and Transport service providers’ layer. Moreover. default Domain Name Service (DNS) namespace provider in Windows CE-based devices. Winsock 2. In the following sections. mapping and transferring data objects. It has two significant improvements over RAPI: The most significant change in RAPI2 is that it supports multithreaded information transfers between the desktop/laptop and the remote mobile device. Time stamps and user preferences are used by the synchronization process to track the changes on both devices and transfers the appropriate data so that both machines has the most recent versions. infrared. database manipulation and even query and modification of registry keys is possible.1. By using RAPI. ActiveSync is built on a client/server architecture that consists of a service manager (the server) and a service provider (the client).0 and some parts of TAPI 2. It performs the synchronization tasks such as. The service provider determines what data is tracked for changes by the service manager.5 is limited to one device at a time. through the desktop/laptop computer to which it is connected. detecting data changes. the Internet). Activesync 4.g. RAPI2 is the latest version which is the replacement of the previous RAPI library.establishing a connection. It uses the desktop pass-through (DTPT) [42] technology that enables Windows Mobile or Windows CE-based devices to transparently access external networks (e. Telephony API (TAPI) The telephony API implementation is a subset of the Microsoft Telephony Application Programming Interface (TAPI) 2. DTPT is enabled by default when the device is connected to the desktop/laptop running ActiveSync. file system manipulation. they should be made via the Connection Manager. Activesync is a service which uses RAPI to establish connections between desktops to enable above mentioned tasks. Bluetooth. Windows Mobile-based devices cannot use TAPI directly to make cellular connections. Activesync supports serial. USB. 20 . The service manager is a synchronization engine which is built into ActiveSync and is available on both the desktop computer and Windows CE-based device.Remote API (RAPI) The Remote API library enable applications on desktop to perform actions on remote Windows CE-based devices such as Windows Mobile-based smartphones. modem and Ethernet connections. Another enhancement includes support for multiple remote devices to be connected with the desktop/laptop. The desktop provider in the desktop and the device provider on the target device act as the service provider that performs the synchronization tasks specific to user data. resolving conflicts. TAPI simplifies and abstracts the details of making telephony connections between two or more devices. Multithreaded means that more than one application can transfer data at a time so that applications using RAPI2 do not have to wait for other applications to finish before beginning their operations. ActiveSync ActiveSync software provides support for synchronizing data between desktops/laptops and remote devices. Enhanced versions of Activesync might take advantage of this enhancement. supplementary and extended. to control external devices. On the other hand.dll in the Device. original equipment manufacturers (OEMs) and independent hardware vendors (IHVs) to add additional TSPs under TAPI. including Remote Access Service (RAS) connections using the Point-to-Point Protocol (PPP).exe. VoIP such as H. Virtual Private Network (VPN) connections. The Connection Manager handles many different types of connections. A device can connect to each of these networks through multiple connection paths. The Connection Manager tracks the connections whether they are in use 21 . Figure 2. Beyond the TSPI layer. General Packet Radio Services (GPRS) connections. such as the Internet or a corporate network. Connection Manager The Connection Manager is used in Windows Mobile-based or CellCore [43] enabled devices to manage network connections regardless of the service provider used for establishing the connection.exe process context. Windows Mobile-based devices can access many types of data networks.11) connections.dll library validates and arranges function parameters and forwards them to the specific service provider when an application calls a TAPI function. Windows CE-based devices come with default TSP. Windows CE as well as Windows Mobile-based devices support installable service providers. In the TAPI 2. Cost. such as outbound calls. which supports AT-command based modems. which enable independent software vendors (ISVs). Figure 2.3) connections and Desktop Pass Through (DTPT) connections.0 architecture. the service provider can use any system functions or other parts which are necessary to work with kernel-mode services desgined by OEMs. For example. the Unimodem service provider.dll library is loaded in the protected library Device.0 for Windows CE-based devices.dll which thunks the call to Tapi. A service provider can provide different levels of telephone service provider interface (TSPI) which are basic. it first links to the Coredll.323 and session initiation protocol (SIP) are examples of such providers.TAPI abstracts call-control functionality to allow different and incompatible communication protocols to expose a common interface to applications through its support of telephony service providers (TSPs). When an application calls a TAPI function. an extended service provider which has been developed by third-party vendor can provide inbound and outbound call management. Wired Ethernet (802. Wi-Fi (IEEE 802. security level and specific network considerations for the client application are the basis of choosing path for a particular connection. It provides a fast and transparent way of making a connection.7: Windows CE TAPI Architecture. a basic service provider might support basic telephony service. the TAPI. Proxy Server Connections. as well as standard device such as serial and parallel ports.7 shows the general architecture of TAPI 2. The Tapi. through a Hayes compatible modem. In Windows Mobile 6. the Connection Manager uses Connection Planner to determine the best connection to the target network. which means that high and low priority connections can communicate at the same time.dll acts as Connection planner chooses the best connections that serve the request for particular connection. The Connection Manager supports simultaneous voice and data communications [44]. the Connection Manager application must first determine the end-to-end paths from the device to the target network.or requested by applications. It closes unused connections and automatically disconnects connections when they have been idle for a specified period of time. Figure 2. based on some selected heuristics. . Connection Manager Application: ConnMgr. 2. When an application call places. bandwidth and latency of each path.8 illustrates the Connection Manager architecture. it decides which connection is optimal. There are three main parts of the Connection Manager: i. With the above mentioned information. It interacts with multiple device applications such as Internet Explorer and Opera Mobile to schedule and manage network connections. The Connection Planner also takes the decision about the priority of a connection whether it is highest or lowest and establishes the particular 22 ii. and finds all the paths from the device to the target network. Connection planner receives the end-to-end path information from the Connection Manager application. Also. it closes low-priority connections to open high-priority connections. Connection Manager retrieves all possible connections from the Connection Manager configuration service providers. voice and data communications are labeled as high-priority and low-priority connections respectively.8: Windows Mobile Connection Manager Architecture. Then. Connection Planner: ConnPlann. for example. Fig. their priorities and the available connection service providers for each application. the Internet or a corporate network.exe plays the role as Connection Manager in Windows Mobile OS. It then queries each Connection Manager configuration service provider to determine cost. It prepares a list of all connection requests. and terminating calls. Connection service providers are implemented as dynamic link libraries (DLLs) which are responsible for specific connection paths. A VPN connection using GPRS (either PPP over GPRS or NDIS GPRS) cannot be made if the Wi-Fi interface is already used for a VPN connection. Bind connection requests to the NDIS User Mode I/O driver. Connection planner disconnects it if a connection request for a higher-priority connection is made. A lower priority connection is re-established after higher priority connection has finished its tasks. Windows Mobile 6.connection. A connection between two peers can be established for example using the Session Initiation Protocol (SIP) protocol [46]. VOIP application use the Dialplan [47] component. For example. L2TP/IPSec enables enhanced security for VPN client connections from Windows CE-based devices to corporate servers. Connection information such as route and cost are provided to the Connection Manager application. Then. b. SIP has many functions. Connection Service Providers: The Connection Service Providers perform the following tasks: a. unless the high priority connection request is the same as the low priority request. Real Time Communication (RTC) [48] client API that use the VOIP Application Interface Layer (VAIL) [49] to abstract the lower-level interface. Voice over IP (VoIP) The Voice over Internet Protocol (VoIP) specifies the transmission and reception of audio over the internet. such as negotiating the codecs used during the call. VOIP is an integrated feature of Windows Mobile 6 onwards but depends on OEMs or providers whether it should be included or not. iii. c. and provisioning [50] steps to activate the VOIP settings to make it work in Windows Mobile 6 platform. Authentication and encryption should be included in the VOIP application to make it secure in the network. Connection Planner calculates the optimal path for the higher-priority connection and activates that connection. Mobile Virtual Private Networking (VPN) Windows CE supports Virtual Private Networking (VPN). For example. thus creating a VPN using TCP/IP based networks.1 onwards provides the Mobile VPN as virtual private network component (VPN). CSPNet is responsible for DTPT and wired network card connections while CSPRas is responsible for GPRS and PPP connections. A table which enumerates the connection service providers and shows the configuration information are given here [45]. The VPN support in Windows CE includes Layer Two Tunneling Protocol (L2TP). 23 . For example. PPTP is a network protocol that adds a security infrastructure for the transfer of data from a remote client to a private enterprise server. if a low priority connection is active. There is no in-built security with VoIP features in Windows Mobile 6 OS. IP Security Protocol (IPSec) [51] and Point-to-Point Tunneling Protocol (PPTP) [52]. Store provisioning information received from the Connection Manager Configuration Service Providers to the registry. transferring calls. 9: Mobile VPN system Architecture.1 which includes five components: i. 24 . routing. iv. as well as providing APIs for the other system components to control.exe) implements the IKEv2 [53] logic. packet filtering. parses the policy values and writes them into the registry for the IPSec VPN Policy Manager to track the changes.9 shows the Mobile VPN architecture in Windows Mobile 6. persisted connections. and get notifications from the VPN connections. ii. IPSec VPN Policy Manager: IPSec VPN Policy Manager (ipsecvpnpm. and NAT keepalives. IPSec VPN Configuration Service Provider (CSP): The IPSec VPN CSP is invoked when VPN group policies are applied to the device. iii. VPN UI is launched when the user selects the Control Panel and Settings entry. NAT timeout detections. IPSec VPN User Interface (UI): It displays the VPN status page to the user.Figure 2. the control and management logic to run the Mobile VPN connectivity and perform Connection Manager operations. query the status of. IPSec VPN IM driver: This is the NDIS intermediate driver that provides the functionality such as IPSec data-path transformations. and performance measurement. Figure 2. It has also better throughput than the GSM technology. It supports voice calls and data transfer services via Short Message Service (SMS) at the speed up to 9. In first phase. The Mobile VPN can use the MOBIKE [55] protocol to update the VPN security associations (SAs) in case of renegotiation of IKEv2. Networking (wireless) in Windows Mobile 6. 25 .. Therefore. Wireless Telecommunication Technologies Wireless telecommunication has evolved over time to provide enhanced bandwidth and additional multimedia services like TV...1 can be divided into two parts. 2.. which is similar to a dial-up modem.1. Wireless local and personal area networking technologies include Bluetooth. IPSec VPN 802. GPRS is an always-on technology where the customers can be online all the time but have to pay by the transfer volume (kilobytes) instead of connection time..6 kbit/s. multimedia message service (MMS). and allows applications to bind to it to exchange packets through the IPSec tunnel transparently. GPRS technology is the base of advanced and feature-rich data services like email. the IKE tunnel is used to negotiate the parameters that the L2TP/IPSec security associations will use. Infrared and Wi-Fi standards.5. like GSM. GSM: The Global System for Mobile Communications (GSM) [56] is an open. EDGE.. 3G/WCDMA. Mobile VPN uses the Internet Key Exchange (IKE) v2 to establish an encrypted and authenticated channel that will handle VPN traffic.. digital cellular technology standard used in most parts of the world for mobile voice and data services. High-speed internet access. EVDO for data communication. hashing algorithms. CDMA for mobile telephony and GPRS.v. In the second phase. Wireless telecommunication technologies can be divided into two categories. or the virtual miniport driver that represents the IPSec tunnel.. an IKE tunnel is established through certificate and Diffie-Hellman key negotiation. performance improvements through better usability and responsiveness and less bandwidth usage are the major benefits of this approach. Video.3 Networking (Wireless) ……………………………………………………………………………………. Faster reconnection. VPN miniport provides a virtual interface to be assigned the allocated private IP address. the wireless telecommunication technologies and the wireless local and personal area networking technologies. up to 40 kbit/s.3 miniport: This is called the virtual NIC. such as encapsulation mode. GPRS: The General Packet Radio Services [57] is a widely deployed data service and advanced data communication technology for GSM. HSPA. encryption algorithms and so on.. There are two phases involved in negotiation process.2. In the following few sections a brief overview is given regarding different technologies and standards of wireless telecommunication and wireless local and personal area networks available in Windows Mobile 6. previous generation of second generation (2G) is already replaced by current third generation (3G) technologies. NAT traversal [54] is supported by the Mobile VPN. EDGE: The Enhanced Data rates for GSM Evolution (EDGE) [58] is a superset of GPRS technology and is backwards compatible with GPRS. therefore the billing is done based on the transfer volume.6 kbit/s and can be used for any packet-switched application like high-speed internet access on the move. There are several radio interfaces.5G or 3G depending on the implemented data rate. EVDO is a 3G technology which offers datarates up to 2. Windows CE supports both GSM and CDMA radio stack. an always-on technology. therefore. It is the same as GPRS.8 Mbit/s in the uplink. There is another version of HSPA which is called HSPA+ or HSPA evolution increases the data rates up to 42 Mbit/s in the downlink and 11 Mbit/s with MIMO [65] technologies and higher order modulation. EVDO: The Evolution-Data Optimized [67] technology is a telecommunications standard typically for broadband internet access in CDMA networks. It was designed as an evolution of the CDMA2000 standard to support high data rates. It can be categorized either as 2. wireless Local Area Networks (LANs) and Personal Area Networks (PANs) have changed the perception of the mobile devices like smartphones extensively. It assigns unique codes to each communication to differentiate it from others in the same spectrum. those discussed above are all supported in the Windows Mobile devices available in the market. CDMA: The Code Division Multiple Access System (CDMA) [66] is a “spread spectrum” technology. allowing many users to occupy the same time and frequency allocations in a given band. The maximum data throughput speed is 9. Mobile broadbands are deployed around the world with the introduction of this technology. and. Multiple wireless interfaces in smartphones let the users to be connected at all time and therefore provides easier access to online resources easier 26 . CDMA2000 [62] according to ITU’s family of 3rd generation mobile communication systems. Wireless Local and Personal Area Networks During the last few years.1 Mbit/s in revision A. that extends and enhances the performance of existing WCDMA protocols. Wireless telecommunication technologies. 3G: It was developed by GSM community to migrate into 3G evolution to get higher speed than 2. EDGE theoretically supports throughput up to 473. standardized by 3GPP [64].4 Mbit/s in revision 0 and 3. UMTS (WCDMA) offers data speeds up to 384 kb/s along with voice services.6 kbit/s offered by CDMA. Windows Mobile equipped devices include any radio stack which is chosen by the OEMs based on the appropriate hardware. HSPA: High Speed Packet Access (HSPA) [63]. such as WCDMA [61]. It is based on the International Telecommunication Union (ITU) [59] family of standards under the IMT-2000 [60]. ITU has not provided a clear definition of the data rates to expect from the 3G providers. HSPA improves the user experience by increasing data rate up to 14 Mbit/s in the downlink and 5.5G. is the set of two technologies High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA). The first version of the 802. It can be used as networking client but. point-topoint data transfers between devices within visible range. IrDA specifies some of the data exchange formats which are adopted by Bluetooth and. It is often refereed to as wireless Local Area Networks (WLAN). its purpose is to interconnect the small devices. in contrast to IEEE 802. IrDA and Wi-Fi (IEEE 802. and therefore. As the communication requires a short line of sight between devices. However. The shared link key is exchanged to authenticate each other and devices can exchange data onwards.0/2. while the second version. Data is encrypted with E0 stream cipher during the Bluetooth communication.11b. By definition. Not all personal wireless technologies were developed for network access but also to replace cables by interconnecting devices. headsets etc. Bluetooth points to multiple transport and service discovery protocols. The 802. Wi-Fi (Wireless Fidelity): It is a wireless technology based on the IEEE 802. Bluetooth operate in the 2. Usually. Then. IrDA is considered secure. Windows CE as well as Windows Mobile provides support for Infrared communications but currently. Bluetooth was specified by the Bluetooth SIG (Special Interest Group) and defines many “profiles”. The most common and widely used technologies for Windows Mobile phones such as Bluetooth. It supports low-speed. up to the application level protocols. is much more than a networking infrastructure technology. discourage the OEMs to include Infrared in their devices.11) are discussed in following. Bluetooth uses a pairing mechanism where a 4-digit shared secret (PIN) is used between a pair of devices to establish a trusted relation.11 (WLAN). called 802.11 standard has evolved over time to enhance the data rates. IEEE 802. no security mechanisms are implemented. Bluetooth [68] is a personal networking (cable replacement) technology which was mainly developed to connect small devices like smartphones and mobile phones with mices. Interoperability between devices built by different manufacturers was the main motivation behind it.4 GHz ISM band. Most of the devices running Window Mobile 6. most smartphones don’t include the feature. the SAFER+ block cipher (called E21 and E22) is used for shared link key derivation which is termed as master key.11 [71] standard. To avoid interference it uses the frequency hopping spread spectrum and divides the band into 79 channels (each of 1 MHz wide) and changes channel up to 1600 times/sec.11 standard provided 2 MBit/s. A Bluetooth profile such SIM access or headset profile specifies how devices communicate from the low-level radio protocol. 27 .than ever.11 defines the physical layer and media access control layer (MAC) sub-layer for wireless communications. therefore.11 technology. keyboards.1 OS include the Microsoft Bluetooth stack 2. Infrared Communications: Infrared or IrDA [70] is an old cable replacement technology for the PDAs and mobile phones. Bluetooth is not a networking technology. Bluetooth: It is a wireless communication technology that allows devices to communicate with each other in 10-meter proximity.0+EDR (Enhanced Data Rate) [69]. same as IEEE 802. In ad hoc mode.exe).1 can run maximum 32 processes including gwes. Windows CE 5 based operating systems like Windows Mobile 6. and.0 which can be abused by a poorly coded or rogue application to enable access to the address space of another process. This makes an application execution tedious in the Windows Mobile OS when compared to desktop OS like Windows XP.1 Core OS and Memory Architecture There are no kernel mode drivers in Windows Mobile 6. 28 . The most recent version 802. All of the standards noted above operate in the 2.11g of this standard provides 54 MBit/s. a fresh Windows Mobile 6. all the available Windows Mobile 6.exe. Generally.exe.4 GHz ISM (Industrial Scientific Medical) band. there is a simple API call SetProcPermissions [74] in Windows CE 5.11 supports two operating modes: infrastructure mode and ad hoc mode. Windows Mobile OS 6. The 802. a lot of security mechanisms have been introduced. wireless clients communicate directly with each other without the use of a wireless AP or wired network.1 OS runs 14-15 processes (depends on OEM) which limits the number of running 3rd party processes. 2. Moreover.3 Implications In the previous sections. device. a detailed discussion has been given about the OS architecture of Windows Mobile 6.3. services. therefore.provided 11 MBit/s.1 smartphones support Wi-Fi interfaces with IEEE 802.1 (based on Windows CE 5.exe and the kernel process simultaneously.1. In the next few sections. Because. Currently.exe. have to compete against each other for spectrum access. Based on the discussion some implications regarding different components of the architecture would be helpful while designing applications on these platforms.2). In infrastructure mode.1 supports all the latest security mechanisms for authentication in WLANs like Wi-Fi Protected Access (WPA) [72] and Wi-Fi Protected Access v2(WPA2). the 32 process slots are protected from each other by a hardware based MMU (Memory Management Unit) but it is not obvious. any node can listen to other senders if the nodes are within range. a network access point (AP) manages network access among the associated wireless or possible wired clients. A rogue application can easily launch multiple processes to exhaust the virtual address space. and. It also supports the Wired Equivalent Privacy (WEP) encryption mechanism which is already proved to be broken [73]. All network nodes have equal control over the spectrum. therefore. filesys.11a standard provides 54 MBit/s but operates in the 5 GHz band.1. Both strong data encryption standards like Advanced Encryption Standard (AES) and Temporal Key Integrity Protocol (TKIP) are supported in Windows Mobile 6. observations and limitations regarding the OS architecture will be discussed. As the wireless spectrum is a shared medium. processing a system call for an application always need process switching in the kernel (NK. IEEE 802. Due to this problem. 2.11 b/g standard. it will be reserved in the Large Memory Area (LMA) which is also called shared memory or memory-mapped files area. A FAT file system offers poor fault tolerance. (a firm dealing in software products for risk-free mobile data) shows that nearly 80 percent of embedded product returns are due to FAT related problems such as data or driver corruption and broken MBR (Master Boot Record). For example.4 million worth of returned products per year for an OEM due to FAT file system failures. LMA or shared memory can also be used for interprocess communication. As the virtual address space in both slots are limited.exe image.2 File System Architecture Windows Mobile uses FATFS for storage cards. In Windows CE. then Process 2 loads DLL A and C where the DLL C is a larger DLL which consumes most of the address space. The upper slot of 32 MB is reserved for XIP DLLs and other shared DLLs and the lower 32 MB is reserved for RAM-based DLLs and other application specific resources like stack. In this case. heap and . Datalight calculates a $6. This ensures the validity of data in case of interruptions during file operations. For external flash memory FAT file system may be used because these devices are supposed to be used/shared with windows desktop environment which does not have TFAT implementation 29 . if process 3 now loads a small DLL and a large . FATFS has got no built-in security mechanism and it is prone to fragmentation. a large XIP DLL or a collection of small DLLs loaded by a rogue or badly coded application can leak into slot 0 to exhaust the address space of the current process. the address space will not be reserved in the 32 MB slot. Windows Mobile counters this corruption by using the TFAT (Transaction-safe FAT) File system for its internal memory on top of the block driver. In the survey. In slot 0. This area is accessible and visible to any process. suppose three processes are loading a series of DLLs. It mounts FATFS as the root. The FAT file system’s interruptible file operations also result in corruption of data. A survey [75] by Datalight Inc. In this area. In addition to the above mentioned advantages.exe file. one process can corrupt other process’s data without giving any error or warning in either process. If any application wants to reserve a large memory chunk greater than 2 MB. FATFS introduces FAT related problems as well. 2. It makes sure that transactions are completed successfully or nothing is modified at all. there is a risk it leaks into current process resources. an application can reserve a maximum of 32 MB of virtual address space. FATFS is used for its good performance over small volume. In slot 1. TFAT is designed to ensure the safety of transaction for data stored/modified on the disk. Process 1 has loaded DLL A and B. a large DLL loaded by a particular process can leak into the stack space of current process.In Windows CE 5 virtual user space.3. thus provides no security at all. portability (supported by virtually every desktop) and it’s easy to implement. there can be several problems while loading DLLs by an application. the lower two slots of 64 MB are used for ROMbased DLLs and active applications. The TCP/IP architecture of Windows Mobile 6. client identity protection and key generation. Windows Sockets (WinSock) supports third-party extensions. SMB signing can be used to prevent a third-party from modifying or replaying requests and responses in the network during transport between a client device and a server. implementation of PEAP must be setup correctly. In one attack scenario [76]. In another attack scenario [77]. Moreover. Applications in peers should not allow users to directly run an executable file on a network share since it could expose the device to potential threats. the TCP/IP architecture. The Extensible Authentication Protocol (EAP) implementation uses third-party authentication code to interact with the Point-to-Point Protocol (PPP) included in Remote Access Service (RAS). otherwise poor settings can introduce severe vulnerabilities.2. ActiveSync service.3. In the first tunnel. There are lot of attacks available in desktop OS versions which have already been patched. the attacker can see the true identity information of the STA as well as gets the ability to forge EAP control messages. but which may be and are successful against Windows Mobile 6. the attacker can launch a man-in-the-middle-attack (MITM) against a victim using valid certificates from a rogue CA server and establish a TLS tunnel. Telephony API. the channel between the authentication server and access point should be protected as the channel is not protected by the TLS tunnel.1 is similar to that of Windows XP and Windows Server 2003. In this way.0 that provide enhanced security for network communications. WNetAPI communicates via SMB (Server Message Block) and SMB is susceptible to attacks like MITM (Reflection attack) [78]. data transmitted after PEAP authentication is not encrypted. If these extensions do not use proper security and authentication procedures. In phase 2 of the PEAP session. Windows Networking API/redirector is a network file system driver that provides access to files on remote computers. SMB signing must be supported by the server. the attacker can establish two different anonymous tunnels. VoIP service and Mobile VPN are discussed quite extensively in the 30 . It ensures mutual authentication. Based on these discussions. Winsock supports Secure Sockets Layer (SSL) version 2. the attacker masquerades as the STA and in the second tunnel. some general and security implications are given below. Connection Manager Architecture. Remote API (RAPI). Windows Networking API (WNetAPI) and Windows Sockets are discussed in core networking section. As the TLS tunnel is only used for authentication. PEAP is an extension of EAP and provides secure framework with the help of Transport Layer Security (TLS).1 OS. In this case.3 Communication Services and Network Stack Architecture Details about Protected EAP (PEAP). attacker acts as the AP. as the data transmitted between peers are in readable form. There are lots of attack possibilities available against PEAP framework. Based on above cases. the attacker sets up a Windows 2000 CA server and an incorrectly formatted HTML page which might exploit the browser vulnerability to add their CA to a specific client. The first tunnel with the access point and the second tunnel with the STA (peer-station).0 and 3. they can compromise the security of a mobile device or local network. There are a couple of scenarios when a device would attempt to establish multiple types of connections using the Dual Homing feature. dll or. attackers can sniff data in wireless environments where the communication channel between the desktop and the device is open and unencrypted. in restricted mode. Open mode: Enables full access to all RAPI calls. and offers potential for bridging traffic between two networks. the particular connection’s security level should be higher or similar to VPN. Also. an application. Closed mode: All RAPI calls are disabled and access to ActiveSync is denied. dual homing introduces a security threat. Some examples are: The user walks into an area with Wi-Fi coverage while having an active GPRS connection. When a device is running in restricted mode. protected registry keys are read-only.1 security levels are listed with respect to connection types supported in Windows Mobile OS by default. While having an active GPRS connection. applications like “SDA_Application_unlock. which is a critical flaw. registry key via RAPI. Based on this discussion. The Windows Mobile OS runs RAPI in restricted mode by default. DTPT (Desktop Pass-through) technology is enabled by default to connect the device to a desktop which actually exposes the device in public network thus exposes to network threats. RAPI security policy provides the following functions. Anyone including other applications can install a DLL file using RAPI. RAPI security policy is introduced with the release of Windows Mobile 5. But.networking (remote) section. Restricted mode: RAPI calls and access to ActiveSync are enabled. The Connection Manager has a dual-homing feature enabled by default which poses a security threat to the smartphone.0. ActiveSync poses a potential security risk as the device uses a TCP/IP network. In this case. and therefore. Moreover. the user docks the device and establishes a DTPT connection. ROMbased files and system files are read-only through RAPI.exe” can use vulnerabilities in the system to unlock the security policies even if the device is running in restricted mode. 31 . the ActiveSync software can be used to mount an attack [79] against the desktop which can compromise the whole network. Also. In the Connection Manager. all traffic is handled by the VPN connection until it has been disconnected or a specific request to route traffic to a connection other than the VPN. It has implications for both RAPI and RAPI v2 functions. DTPT is enabled even if the desktop is password-protected or locked. will simply fail. that attempts to use a restricted file. In both cases. In table 2. which is a critical feature in terms of security. several general and security implications are listed below. invoke certain function to run it trusted. GSM is a widely used telecommunication technology all over the world and has several security flaws since it was designed with a moderate level of security. The development of UMTS in 3G network introduced an optional USIM which uses a longer authentication key to give higher security than GSM. which eliminates the possibility of installing a fake base station by an attacker.Table 2. [84]. eavesdropping or spawn a denial-of-service (DOS) attack by enrolling into the network. and A5/2 can be broken in real-time with a ciphertextonly attack [85]. the base station and the user are mutually authenticated in a 3G network. there are approaches to make the channel secure for VoIP applications which is termed as Secure VoIP [80] [81]. As VoIP technology itself offers no security at all. Some implications regarding these technologies related to security will be discussed in the next few sections. the user looks for a low-cost solution and Wi-Fi is the cheapest technology for VoIP. Moreover. GSM only authenticates the user to the network but not vice-versa which gives the attacker a possibility to set up a fake base station that uses no encryption. This scenario may change in the future when 4G technology will be introduced. VoIP over Wi-Fi is vulnerable in the same way as a regular WLAN environment where users connect to an access point to use data connections over the internet. 1xRTT (CDMA) Security Level Level 1 (Most Secure) Level 10 Level 100 Level 100 (Least Secure) Voice over IP (VoIP) in handheld devices or smartphones has two variants: VoIP over a cellular network (3G/GSM/CDMA-EVDO) and VoIP over Wi-Fi. In addition. Both of them were found fundamentally flawed: A5/1 found vulnerable to two possible attacks by Biryukov et.al. SIM attacks [82] and SMS-based attacks [83] are the most common in the GSM network. CDMA air interface is inherently secure compared to other cellular technologies like analog Time Division Multiple Access (TDMA) systems. Wireless telecommunication technologies and wireless local and personal area networks are discussed briefly under the wireless networking section. 3G networks use the KASUMI block crypto instead of the A5/1 stream cipher but have other serious weaknesses identified by Eli Biham et. In most cases. GSM uses the cryptographic algorithms A5/1 and A5/2. On the other hand. VoIP over a cellular network is offering some sort of security since most operators today are using NAT for connections through cellular network. Also.1: Security Level based on Connection-type Connection Type DTPT NIC and Wi-Fi CSD (Circuit Switched Data) and GPRS. it is susceptible to attacks like eavesdropping or injection of VoIP traffic. Considering these scenarios. Its security strength comes from the fact that CDMA is a spread spectrum technology that uses Walsh codes. 32 . It is very easy to find a required Wi-Fi hotspot and initiate an attack like MAC spoofing.al [86]. To avoid eavesdropping. 1 smartphones in WLAN environments and will be discussed in chapter 5. CDMA2000 includes the 1xEV-DO technology which is an evolution of 1x (1xRTT). Bluebug. Synchronization is an important part of CDMA channel to have a stable link. Besides. it is vulnerable to eavesdropping and man-in-the-middle attacks. train stations etc..\\”. Today. a thorough analysis has been done of Windows Mobile 6. A user. Moreover. In Windows Mobile 6. airports. This is the reason why security was not a big issue for the Bluetooth stack. it offers a fairly granular security infrastructure which is discussed in the next section.1 use the Microsoft Bluetooth 2. Wi-Fi is a shared medium by definition. devices like smartphones have to faced many more attacks ever before.1. Bluetooth was conceived as a wireless alternative for RS232 cables originally. an attacker can download files without user permission and upload malicious files to compromise the device. who has been authenticated.11b/g standard to communicate via WLAN. As Windows Mobile 6. In January 2009. BlueSnarf and many more attacks [89] available in the earlier versions of the Bluetooth stack. three security modes are supported: Mode 0: No security. As Wi-Fi unsecured hotspots are increasing rapidly in the public areas like hotels. In consequence. which has built-in security that protects the identity of users and makes interception quite difficult. The Bluetooth security model is based on authentication and link encryption. An attacker would only hear noise if the synchronization is not correctly handled while eavesdropping. CDMA handoff capability makes difficult for the attacker to eavesdrop in CDMA channel. browse local shared folders and download and upload files through Bluetooth in a remote WM device. The vulnerability has been found in the OBEX FTP Service which is used to share files. Mode 1: Service level security in L2CAP layer [87]. café shops. above mentioned security implications regarding the Windows Mobile OS.0 stack which doesn’t support secure simple pairing. . the authentication procedure is encrypted and an HARQ (Hybrid Automatic Repeat Request) mechanism makes it virtually impossible to identify the user and to correlate user packets. As a consequence.CDMA uses the specific spreading sequences and pseudo-random codes for the forward (BS to MS) and backward (MS to BS) link. a lot of vulnerabilities have been found in the Bluetooth stack since its invention. Alberto Moreno Tablado has shown a new vulnerability [90] in Microsoft’s Bluetooth stack which is used in most of the Windows Mobile based devices. most Windows Mobile operating systems support the IEEE 802. In addition. During the attack. can browse parent directories outside local shared folders by using “. 33 ./ or . Mode 2: Link encryption in LMP layer [88]. there were lot of historical attacks like Bluejacking. BlueSmack. This security setting is enforced with the help of security roles and certificates. There are three permission types. Normal: A normal application has full access to normal assets only. unlike desktop PCs. Privileged: A privileged application can do anything. it is not allowed to run. Registry Keys and APIs into Normal and Protected. 34 . Windows Mobile uses a concept of permissions for applications. some familiar security measures like Access Control List (ACLs) and User groups that are based on permissions granted to different users are not applicable. On top of this classification. Most applications run in Normal mode. Basically all those assets (in addition to some others) are protected against manipulation. Mobile phones. Limited resources can easily be exhausted which is an ideal situation for Denial of Service (DoS) attacks. Mobile Phones differ from desktop computers in the sense that they are used by a single user instead of multiple users logging on and off. walkman and word processing in a single device thus expanding the surface for possible attacks. mobile phones have very limited resources and processing power compared to PCs and deploying heavy weight security mechanisms is therefore not possible. It offers security policies that are used to configure a security setting suitable for the device. Consequently. It cannot call protected APIs or write to system files. Windows Mobile uses a light-weight security model that grants permissions to applications using Certificates.1 Access Permissions Windows Mobile classifies its assets like Files. Finally. Another reason for not using ACLs is the strain put on the limited resources of a mobile phone by ACLs. where an application is able to change the security policy thus compromising the security of the mobile phone [91]. It can call any API and install certificates. We will proceed by describing each of the above mentioned terms one by one and explaining how they help in securing a mobile phone. Mobile phones offer strong connectivity with more than one way to connect and are always connected to at least one network and are therefore more susceptible to eavesdropping. It can write to any file or registry key. Privileged mode is not needed by most applications. Due to their mobile nature they are more susceptible to stealing or getting lost. PDA. Confidentiality becomes a bigger challenge when it comes to mobile phones where immature protocols are used with relatively new technologies like Bluetooth. are not kept in a single place. camera. It has full access to all the protected assets. Blocked: If an application is blocked. Mobile phones/smartphones combine different features like phone.Chapter 3 Windows Mobile Security Model Today’s smartphone features play a big role in the security of mobile phones. 3. Developers submit their applications for testing and certification. Mobile2Market certificates are included in the Normal or Privileged certificate stores by almost all service providers.2 Certificates A system that runs unauthenticated applications will attract more attackers because it allows the attackers to stay anonymous. enabling it to be submitted for the Mobile2Market catalogue. The application runs in Normal mode if it is signed with a certificate from the Normal store. However if a developer wants to target a wider range of mobile phones then he/she can certify the application from Microsoft’s certification and marketing program for mobile applications known as Mobile2Market. then with what privileges. It signs its application with that certificate and submits it to the CA. when it passes the test. In this way developers can get their applications to run on mobile phones in that particular service provider’s network. This testing is done by Independent Test Labs (ITL) like NSTL. developers have to cooperate with the OEM. and if allowed to run. Mobile2Market provides Logo Certification ‘Designed for Windows Mobile’. Security policies configure security settings which are enforced with the help of certificates and security roles. The developer purchases a certificate that identifies the developer’s applications to the CA. Windows Mobile uses certificates [92] in order to authenticate applications. The CA confirms the developer’s identity from the signature and replaces the developer’s signature with one of the Mobile2Market signatures. QualityLogic and Veritest. certificates stores on mobile phones are controlled by the mobile phone service provider instead of the owner of the device. Applications are logo certified in order to ensure that they are consistent around different platforms. In order to get an application signed with a Mobile2Market signature the developer has to cooperate with a trusted Certificate Authority (CA). 35 . These certificates also determine the applications’ permissions. The application is given a Product Certification Code. Windows Mobile maintains a number of certificate stores. the service provider or any other organization trusted by the service provider in order to get their application signed with a certificate that is in target mobile phones’ Normal or Privileged certificate store. If an application is signed with a certificate stored in the privileged store then the application runs in Privileged mode. In most cases. These stores determine an application’s permissions.3 Security Policies Security policies work on top of the certificates and permissions in order to decide if an application is allowed to run or not. The Privileged and Normal certificate stores are two such stores.3. Developers have to pay for this testing. supports essential APIs and don’t cause problems for other applications or features in a mobile phone. This Mobile2Market signed application can now run on a wide variety of mobile phones. 3. Therefore. protected or normal. Once allowed to execute. In one tier security it is necessary that all applications are signed and there is no concept of access limitation. These policies are combined together to form five common security configurations shown in the table below. Windows Mobile platforms support either One or Two Tier security or both. Once allowed.0 Pocket PC Windows Mobile 6 Professional Windows Mobile 6 Classic Windows Mobile 6 Standard Yes Yes Yes Yes Some of the most important security policies [93] are: • • • Unsigned CAB policy: Whether an unsigned cab is allowed to install or not Unsigned Prompt policy: Whether a user is prompted to accept or reject an unsigned CAB.0 Smartphone Yes Windows Mobile 5.2: Security Configurations Configuration Security off One Tier Prompt Description Signed and Unsigned applications run alike with all privileges without prompting the user. A signed application runs in privileged mode with full access to all assets. User is prompted before executing an unsigned application. Allows signed applications to execute. EXE or DLL.1: Tier Support Platform One Tier Support Two Tier Support Yes (default) No No No No Yes (default) Windows Mobile 5. Unsigned Application policy: Whether an unsigned application is allowed to run or not. the unsigned application runs with restricted permissions. So. the application runs with full permissions. Table 3. Table 3. it runs in privileged mode otherwise it runs in normal mode if signed with normal certificate. Used for development and testing.0 Pocket PC Yes Phone Windows Mobile 5. User is prompted before executing unsigned application. The following table shows which configuration is supported by a platform. Same as Two Tier Prompt but unsigned applications are not allowed to execute Two Tier Prompt 3rd Party Signed 36 .A device can enforce a one tier security or a two tier security models. Allows signed applications to execute with permissions derived from their certificates. if an application is signed with a privileged certificate. In two tier security the concept of restricted permission (Normal mode) is introduced. Locked Only applications signed with OEM or Operator’s (or Enterprise) certificates are allowed to run. 37 . Table 3. The resource to role association is stored in a database called Metabase which is initially configured by OEM. This Metabase can be updated or reconfigured through Metabase Configuration Service Provider.3: Security Roles Security Roles (SECROLE_MANAGER) Manage (SECROLE_ENTERPRISE) Enterprise Description This role has unrestricted access to the system. databases and files. If a message originates from a process running in Normal code group then it is given the role SERCROLE_USER_UNAUTH. 3. Operations or messages are assigned roles on the basis of their origin or the certificates they are signed with. If a message is signed with a known certificate then its role is taken from the role associated with that certificate. Using this role with an asset allows unauthenticated users full access to that asset. IT administrators have this role to manage enterprise owned devices’ settings such as setting password policies or wiping a device. Similarly a message is given SECROLE_USER_AUTH when it originates from a process running in Trusted code group. access is denied if this condition is not satisfied.3 describes some security roles. Mobile2Market certificates are removed from the certificate stores. Table 3. If a security policy has this role then the user can change that policy. This role is equivalent to Desktop’s guest user role. Equivalent to Desktop Administrator. The role of service provider who can change device settings through Wireless Application Protocol (WAP) Trusted Provisioning Server (TPS).4 Security Roles Windows Mobile uses security roles in addition to certificates in order to protect its assets. The device owner can have this role. Exchange Administrator role. The OS makes sure that the role of an operation or a message accessing a resource is equal to or higher than the role assigned to that resource. (SECROLE_OPERATOR) Operator (SECROLE_USER_AUTH) Authenticated User (SECROLE_USER_UNAUTH) Unauthenticated User These roles are assigned to resources like registry keys. [94] gives a complete list of security roles. Role based security also supports adding new assets to the system. Equivalent to User role in Desktop. used by ActiveSync) policy that is aimed at protecting the device from desktop applications by restricting their access.3. In most cases the users have found a way around this by dismantling the security policy somehow and install unsigned applications. An example is SDA_applicationunlock. When the RAPI policy is in restricted mode. where all running applications have privileged permissions. a desktop application must not be able to modify these policies if RAPI policy is restricted but the tool mentioned above seems not to be affected by it.5 Users’ Influence Security configurations like one-tier prompt and two-tier prompt where users make the final decision about allowing an unsigned application to install and run provide attackers with an opportunity to ‘talk’ a user into running their malware. SDA_ applicationunlock. the desktop applications cannot access protected APIs or edit protected registry keys. 1006(Unsigned Application execution policy) →1 (changed to allow running). Even more interesting is the scenario where such a tool is 38 .exe dismantles the application lock by changing the following registry keys in: HKEY_LOCAL_MACHINE\Security\Policies\Policies\00001001(RAPI policy) → 1 (changed to grant full access). RAPI’s restricted mode is one of the best security practices recommended by Microsoft [95]. All these keys are protected.1 (Professional) and Windows Mobile 5 (Pocket PC) and it successfully unlocks the device. but imagine a scenario (ignoring password/PIN cracking or guessing by an attacker for the time being) where an employee wants to unlock his work phone. This puts a big question mark on Windows Mobile security model. Now there is a security policy called RAPI (Remote API. Two-tier prompt mitigates this problem by restricting the permission of such applications but One-tier prompt. 1005(Unsigned CAB installation policy) →16 (changed to allow installation). Application Locking There are operators that actually disallow unsigned application to install or run but as expected the user response has not been good. T-Mobile SDA uses Windows Mobile 5 smartphone edition which uses Two-tier prompt configuration in addition to that T-Mobile disallowed unsigned applications to run. We have tested this tool with other platforms like Windows Mobile 6. Theoretically. will not provide any resistance to malware once it is allowed by the user to run. One can argue that this attack may not be successful if password is required for synchronization. 101b (Privileged Applications policy) →1 (All applications run with privileged mode).exe which was written for TMobile’s SDA phone. It renders the device completely insecure and ready to install and run anything in privileged mode. One-tier prompt is used by most of Windows Mobile powered mobile phones today.exe is a desktop application that successfully unlocks the device through a bug in ActiveSync. Some users fall prey to social engineering and do just what an attacker wants them to do. SDA_applicationunlock. The fact that a huge amount of users want to tweak their mobile phones by installing free third party software (freeware/shareware) makes the situation more exploitable because this discourages Operators and OEMs from banning unsigned applications to install and run which results in less secure security policies. 1017(Grant Manager Policy)→ 16 (changed to elevate other roles to Manager). VPN and IPSec are also there in order to secure data in transmission. 3DES.11a/b/g wireless LANs. These features are: 39 . Internet Information Service (IIS) and Internet Explorer Mobile implement SSL.6 Security Services The following security services are also offered by Windows Mobile 3. Other features like SSL. normal conversations and other information like call logs. WEP is already broken and is therefore not a good choice. emails and send them to the attacker.6.6.3 Wi-Fi Encryption Windows Mobile also supports encryption standards like Wired Equivalent Privacy (WEP) and Wireless Protected Access (WPA. 3. turning the mobile phone into a spying agent that can record calls. Services provided by CAPI enable developers to encrypt/decrypt data using secure algorithms. This feature is not enabled by default. Storage card encryption uses AES128. This can expose the device and more importantly the corporate network. SHA-1 and RSA. Services like encryption. messages.6. 3. 192. WPA2 on the other hand provides stronger encryption with key lengths of 128 and 256. 256). It has built-in support for Virtual Private Networking (VPN) using Point to Point Tunneling Protocol (PPTP) or Layer 2 Tunneling Protocol with Ipsec (L2TP/IPSec). It can be turned on by the user.1 Cryptography Window mobile provides support for cryptography through its Cryptographic API (CAPI).6. hashing and signing are offered by CAPI. An attacker can install an application like Flexispy Pro [96]. DES.2 Storage Card Encryption Windows mobile uses encryption file system filter in order to encrypt data to the storage card. 3.resident on the employee’s computer and the employee is not aware of it.4 Additional Security Features Apart from the main security model which entails Security Policies. This can help in keeping the sensitive data confidential in case the storage card containing the data is lost or stolen. CAPI supports AES (128. Windows Mobile offers some additional features that enhance the security of the device. WPA2) for use with 802. 3. Security Roles and Certificates. Enabling storage card encryption can be made mandatory with the help of security policy. of which the device is a part to some serious threats. smart cards etc. Its strong password protection.7 Summary The Windows mobile platform provides a range of security features that help keep the device safe. In addition to passwords. Two types of passwords can be enforced with Microsoft’s default LAP: Simple PIN or an alphanumeric password. Window CE comes with a default password based LAP. Local wipe can be set to activate if the user fails to input correct password for a specified amount of time. help protecting data in case of device theft or loss. The device configuration can be tuned to enable all the favorable (to the user or to the enterprise) features. Wiping overwrites the memory with a fixed pattern which makes it difficult to recover the data. Local Authentication Plugin (LAP) is an authentication mechanism that plugs in to LASS. Features like password required. An OEM or ISV can also create a LAP that can be based on advanced authentication mechanisms. Windows mobile supports Local and Remote wipe. Unauthorized penetration of the corporate network is prevented by using strong and flexible client authentication using SSL/TLS. It also specifies minimum length for password. help keeping the devices safe from malicious software (malware) and corruption. 3. Using Authentication Events (AEs) [97] with LASS. SSL and encryption support aims at protecting the data in transmission. All the settings and user data is wiped out. password strength. VPN support. The user of the device cannot block the remote wipe. Device Wipe Due to the extensive and increasing use of mobile phones in business environments. password expiry. it provides the possibility of using sophisticated authentication mechanisms like fingerprints. wipe and password policies like password required for synchronisation. The device acknowledges remote wipe back to the administrator indicating the completion of process. as in twotier prompt. Record of the old passwords is kept so that the user chooses a fresh PIN or password. Security policies aimed at allowing only signed applications to run and restricting the permissions for most of applications to run in normal mode. All this information is kept in protected registry keys. Device Lock. Remote wipe [98] is activated by issuing a wipe command through the Exchange Activesync management interface [99]. one can specify event-based authentication policies that specify when the user should be verified. which usually have sensitive business data.LASS and LAP Local Authentication SubSystem (LASS) enables standard user authentication that is independent of application. Windows Mobile 6 onwards provides Enhanced PIN strength which prevents the user from selecting a PIN with simple patterns like 1234. Device Lock Windows mobile can lock the device after a specified time of inactivity. it is a good idea to wipe the data off the device’s memory if it is stolen. 1111. Password or PIN expiry time can also be set so that the user keeps changing the password. password history and automatic lock can be enabled. certificates or Exchange 40 . We have seen many such claims proven wrong in the past. First is Microsoft’s Windows mobile team. carving out a security policy that makes the device secure enough while keeping the required level of usability. to run but unfortunately such a device will be considered useless by a great chunk of users (Power users) who want to be able to use their mobile phones like their personal computers. Nevertheless such devices will be useful to corporate authorities that value security. What can be achieved is to make it as costly as possible for an attacker to break into a system. second OEM. Solutions like Microsoft’s Mobile Device Manager [100] that help manage mobile phones like PCs can be helpful to corporate authorities who want to control their mobile fleet. it is locked and no one uses it. There are many players involved in the Windows Mobile Security model. and make it even more secure. In such a setting it becomes easy to make mistakes like assigning a bad choice of roles to assets in the metabase or carving out a flawed security policy resulting in tools like SDA_application_unlock. WM 6 onwards can prohibit Bluetooth discovery mode in order to avoid penetration through Bluetooth. Only such a device will be perfectly secured (if no one plans to steal or destroy it) but it will not be useful. Security policies that control OTA (Over the Air) access to the device also help protecting against unauthorized penetration of the device. 41 . The most secure device will be an isolated device which offers no connection to the outside world and does not install any applications. Therefore it is necessary to come up with mechanisms that makes the device secure without losing on much of its usability. There is a tradeoff between security and usability. third operator and fourth IT administrator or User in case the device is not managed by an IT administrator. Fool-proof security is only theoretical. signed or unsigned. keep it secure and up to date from a single point of control.Activesync. The role of the user cannot be ignored even if he/she has been granted the least amount of privileges because he/she is the one in custody of the device. Windows Mobile tries to provide the infrastructure to ensure just that. All of these participants have some role to play in security settings. Windows Mobile can be completely locked by not allowing third party applications. Denial of Service: To deny servie to the rightful owner/user. The EXE is started by using the CreateProcess API. The attacker may make calls from the victim’s phone or use other features like camera. The main task of the first part is to download the second part into the device. This information can be used for spam calls in the same way as spam emails. Cross-infector viruses like Cardtrap [101] that can spread from smartphones to PCs via memory cards or activesync are also there. mobile games etc. They proceed with their worm development by dividing it into two parts. 4. The second part is composed of an EXE file and an EXE starter. In addition to sending some system related information to its author. Michael Becher et al [104] demonstrate the development of a worm for Windows Mobile 5. Proof-of-Concept: Usually demonstrated by researchers in order to prove that a certain attack is doable in mobile environment. Denial of Service (DoS) attacks and breaking into the system using buffer overflow vulnerabilities or some logical errors have been common and are demonstrated by researchers [8].Concept Worm for WM5 Researchers have worked on proving that worms and viruses can be a threat to mobile devices. Stealing Information: Stealing information like contacts or other sensitive data.Chapter 4 Security Issues Smartphones are potential victims to all sorts of attacks that have been done against desktop computers.1 Malware Proof-of. this trojan also dismantles the application installation security of Windows Mobile powered devices thus making silent installation of malware possible. Viruses and worms are as much a threat for smartphones as they are for PCs. 3. CreateProcess is a protected API and therefore the exploited 42 . the infector and the spreader. For this very purpose they have demonstrated proof-of-concept attacks like Feak worm [103]. Using a paid service like calling or sending MMS will result in overcharging the victim. Taking Control: Taking full or partial control of the device so that the attacker is able to do whatever he/she wants. 4. The infector is further divided into two parts where the first part is shell code that is injected into the system by exploiting vulnerability like a buffer overflow in an application over WLAN. 2. Smartphones can be targeted by Cross Service attacks that originate from one service/interface (WLAN) and end up using another service (GSM). microphone or GPS to spy on the owner. All these attacks may have following goals: 1. An example of a DoS attack is sending a specially crafted SMS or other network packet that triggers implementation errors thus rendering the device unresponsive. There are trojans like WinCE/InfoJack [102] that is packed inside legitimate installation files like Google Maps. The spreader. the audio/video based spyware that target smartphones mainly. Prompting the user is disabled by changing the Unsigned Prompt policy so that the EXE is run silently. Spyware steals sensitive/personal information on the device and reports it back to the attacker. Mulliner’s MMS exploit [8] is a good example of vulnerabilities in trusted code. most vulnerabilities like port stealing that were present in Windows Mobile 5 are likely to be present in Windows Mobile 6. buffer overflow vulnerabilities or other logical errors are bound to be present in the code (Microsoft. Phone BotNets are very much possible [105] and such BotNets can be used to launch DoS attacks against the cellular network. It then starts infecting other devices in the network. Port stealing is a vulnerability that has been patched in Desktop OSes but it is still present in Windows Mobile.process has to be privileged for the exploit to work in a Windows Mobile Smartphone platform. he sent an MMS containing the exploit to a device. Besides other consequences (depending on what else the worm does). this will result in overcharging of the device owner.1. It is sold with the tagline of “Catch cheating spouses”. Consider a worm sending the exploit to all the contacts in the phone via MMS. Second. This is not a problem in Windows Mobile Pocket PC edition as it uses OneTier prompt configuration. OEM or third party) used in Windows Mobile 6. emails and send them to the attacker. BotNets BotNet is a network or collection of computers running an exploit that is installed by a virus or worm.1. which is embedded in the EXE. This way he was able to run arbitrary code on the device without user interaction. most of the Windows Mobile 6 platforms (Professional and Classic) do not support Two-Tier security model which means exploiting any application will work. Audio/Video based spyware Spyware is software that is installed on devices like computers or smartphones in order to spy on the user of device. Although having Two-Tier security configuration enabled may make it a bit complicated for attacks of this sort to be successful but it does not stop attacks. especially features like camera and voice recorder have given rise to relatively new kind of spyware. as Windows Mobile vulnerablities are patched less frequently compared to its desktop counterparts. Flexispy is an example of such software. it is based on the Windows CE 5. messages. The attacker can target a vulnerabiltiy in a privileged application. This software turns the device into a spying agent that can record calls. He noticed an overflow vulnerabilty in MMS handling code. spreads by stealing NetBIOS UDP port 137. Finally. Botnets have been used by attackers in scenarios ranging from launching of DoS attacks against a site or server to email spamming. Third. Spyware has been targeting information like data stored or input from keyboard on PCs but with the advent of smartphones and the increasing capabilities of this device.1 because first. Mobile worms and viruses are a reality now. Using this vulnerability. The spreader finds the active IP addresses in the network by listening to NetBIOS events on the stolen port. Vulnerabilities like this can be used by attackers to launch attacks like worms that spread via MMS. 43 . This approach is most likely to work in Windows Mobile 6. like Windows Mobile 5. normal conversations and other information like call logs. it will be useful to give an idea of the security consequences of such applications. Even though it is a bad practice. When it comes to security. They target the primary feature of combining multiple networking technologies (WLAN. an FTP server. Voip client. Mulliner introduces a Cross Service attack. Once the vulnerability is exploited the injected shell code is able to invoke the calling APIs. is targeted via WLAN. These attacks can prove quite useful from an attacker’s perspective. Everyone knows that the first solution is a bad idea but the second solution can be equally bad or even worse if the password 44 . 4. an attacker can dial a 900 number owned by him thus crossing the service boundary (from WLAN into GSM) and charging the victim a considerable amount which goes into the attackers account. GPS) in a single platform. Bank account etc. Most applications require the user to login. It is designed to invoke video capturing in a way that it is least probable for the user to detect.3 Third Party Applications Although third party applications are not a part of the operating system but nevertheless they are allowed to install and run by the OS. Using this approach. There are applications that use encryption for sensitive information but most of the times the encryption algorithms are really weak. 4. these login credentials are then stored in clear text in the registry or in a file. Therefore. Bluetooth. These files are combined by the sender. they target the weakest link. Using the same password for multiple accounts makes life easier because one does not have to remember a whole lot of passwords. which is invoked at an appropriate time. This problem is solved by saving them in a file or using a third party password or account manager.Net. Some security conscious users use different passwords for different accounts but most have problems remembering them. Imagine the consequences if such a password is compromised by a third party application. The vulnerable application.Nan Xu et al [106] introduce a proof-of-concept stealthy video capturer that is designed to work on Windows Mobile 5/6 stealthily. most users use the same password for different accounts like email. He demonstrated his attack on Windows CE . Instant Messaging client. The captured video is compressed and stored in hidden folders in small files. The current security model of Windows Mobile does not provide a specific mechanism in order to protect from this type of attack. Users are lured into installing this spyware by accompanying it with a benign looking tic-tac-toe game. and sent to the attacker via email. He injected shell code into a device by exploiting buffer overflow vulnerability in a third party application.2 Cross Service Attacks Cross Service attacks are relatively new. For example an attacker can attack the smartphone from a relatively insecure interface like WLAN or bluetooth and then try to take control of GSM. third party applications are the ones that are blamed the most for their negligence. they can compromise the security of a secure OS by introducing new vulnerabilities into the system which provides the attackers new and probably easier ways to break into the system. Cross service attacks have better chances of success because they target a wider surface and in that. That’s why. In his work on smartphone security. The user can create a Master Key in order to unlock the details kept by Password Master. credit card numbers and other details in a safe way. The attacker knows exactly where the passwords are kept by the vulnerable application. usually the attacker does not know if the user keeps such a file or not. We would like to cite two examples from his work here so that the readers have an idea of what we have said above. Data protection and management program A program called Password Master has been examined. Virus signatures were stored in plain text. An attacker can reset the master key by deleting a specific registry key. This allows full access to the credentials. It is worse because in case of a file of passwords. ii. 45 . Users who care more about security install third party antivirus programs. Vulnerable Antivirus An antivirus program called BullGuard has been examined. and then exploit the application. an attacker can insert a signature that can match all exe and dll files thus triggering automatic deletion of all those files. It requires the user to login. A sense of insecurity is better than a false sense of security which is offered by a vulnerable antivirus program. This program requires a user account in order to update the virus definitions and account credentials are stored on the device. Vulnerabilities i.manager is vulnerable. Seth Fogie [107] has done some interesting work on the security of third party applications. These are sent (encrypted) to the server each time an update is requested. iii. A signature can be changed by an attacker thus enabling an otherwise known virus to infect the system. Because the virus definitions database is not protected. which is easier. Vulnerabilities i. On the other hand the attacker only has to find out if the user is using some known vulnerable password manager. Discloses password due to a bug in password hint feature. Antivirus programs can be useful but they can be lethal if they are vulnerable themselves. Weak encryption was used to protect the stored password. Auto delete function can prove lethal. the password hint feature discloses the password regardless of the hint being right or wrong. If the user makes an account without a hint for password. This program is used to manage information like passwords. An attacker can decrypt the existing password. He/she will have to do a thorough search of the file system in order to get hold of such a file. A user feeling insecure will be cautious but a user with false sense of security will run into dangers right away thinking that he/she is invincible. ii. He has examined a set of third party mobile applications and unveiled many vulnerabilities. Keeping in mind the evolving processing power and storage capabilities Aubrey-Derrick et al [108] propose an anomaly detection system that runs a monitoring client on the device. For example the following rule in the policy file can stop the cross service attack demonstrated by Mulliner: access wireless_nonfree deny wireless_free where: wireless-free => infrared|wifi|bluetooth_voice|bluetooth_data wireless nonfree => gsm_voice|gsm_data|gprs This rule denies process that has a label from any of the free wireless services. According to the low overhead this mechanism can be termed as light weight solution to cross-service attacks. A lot of the research is done around Intrusion Detection (IDS) and malware detection. The monitoring client extracts features like free RAM. penetration testing has been performed in order to find vulnerabilities in the Windows Mobile 6. In the next chapter. Similarly. 46 .4.4 Protection Mechanisms Along with exposing the threats to mobile security. That’s why most of the intrusion detection is done on the network level. Michael and Ralf [110] use kernel-level system call interception for Windows CE based OSes for dynamic malware analysis. Mulliner [8] proposes a Labeling mechanism in order to counter cross-service attacks. Comming up with effective Antiviruses and IDS has been a challenge in the mobile world. Markus Miettnen et al [109] propose host-based IDS to be used along with network based IDS. The use of customizable policies enables his mechanism to counter new cross service attacks as well. access to non-free wireless services. This is done by specifying in the policy file which label/s a process should not have in order to be allowed to access a particular interface or resource. He labels processes and resources on the basis of network interfaces they have come in contact with. process count etc and sends them to a remote server that runs the IDS. A policy specification file is used to determine whether a process is allowed access to a resource or interface. In this way each process or resource develops a list of labels showing the network interfaces they have been in contact with directly or indirectly. researchers have been busy coming up with suggestions to enhance the security of the mobile world. Their solution works by running in kernel mode and denies all other programs the kernel mode which also proves helpful in the sense that potential malware will always have rights inferior to the protecting program thus the malware cannot tamper with it.1 system. count of SMS sent. Huge signature databases and complex intrusion detection systems are bound to take a heavy toll on the limited resource pool of smartphones. IPhone OS.1 operating system. virus detection. Vulnerability scanning. 2. log review. 47 . the network stack of smartphones is more exposed than it is through a cellular network. integrity checkers. Android OS etc. But. Penetration testing. As the WLAN feature is new in smartphones. Windows XP. By introducing this feature. fig. vulnerability scanning. penetration testing is also needed to attack different protocols using exploit codes and known penetration tools. only Windows Mobile and IPhone OS have its predecessors (e. So. These are network scanning. the test environment will mainly focus on the network stack of the Windows Mobile 6. We followed a three-step methodology in order to analyze the network stack which are: i) ii) iii) Network scanning. Most of the cellular service providers still use the NAT functionality while giving the IP functionality to the smartphones which offers some sort of security.1.5. Some of the tests are manual where an individual has to initiate and conduct the test and some of the tests are automated and requires only a little human involvement. Section 2.) is responsible for these vulnerabilities.2. Several of the above mentioned tests are often used together to get complete assessment of the security of a network.5 shows the network stack architecture of Windows Mobile OS. Windows Mobile.11) access as a must have feature which actually makes it possible to compare these devices with the latest notebooks in terms of functionality. Network stack implementations in different operating systems (e. MAC OS) among the ordinary operating systems.g. Here. Windows Server2003.1 Test Methodology NIST [111] has developed nine useful techniques to assess the network by means of network security testing. Network scanning and vulnerability scanning might be enough for security testing for smartphones but those two scanning techniques don’t give the full security posture. In the next section. which is similar to that of Windows XP and Windows Server 2003. log review. password cracking. So. In our case. the test methodology that is followed to test the robustness and stability of the Windows Mobile 6. This similarity actually makes the attackers’ task easy to find the well-known tools related to different attacks that were used against the desktop versions. most smartphones support wireless (IEEE 802. The WLAN feature in smartphones makes it possible to access internet in high-speed to a low price or even for free in public places. integrity checkers. it is very easy to find out some insecure Wi-Fi hotspots by war driving and attack the users connecting to those hotspots. password cracking.g Symbian. war dialing. 5.Chapter 5 Penetration testing of Network Stack Today. and war dialing will be irrelevant regarding the practical analysis of the network stack. it is very normal that these devices can contain an immature network stack which can be compromised by denialof-service attacks. and penetration testing. It is therefore not easy to scan these devices directly in the cellular network and plant attacks against them. virus detection.1 network stack is discussed. war driving. but they don’t identify vulnerabilities (except some common Trojan ports). first identify active hosts in the address range specified by the user using ICMP echo and reply packets through TCP/IP stack. Vulnerabilities can only be identified by a qualified individual who analyze the scanning results. rather than manual interpretation. In order to get the full security picture of a host. 5. services. scanners help to identify out-ofdate software versions. There are some specific services which are very common in Windows Mobile devices such as netbios-ns (netbios name service). it is most likely an operating system for handheld devices. These services can be used for scanning purpose. It is very important for the vulnerability scanners to be up-to-date in order to recognize the latest vulnerabilities.2 Vulnerability Scanning Vulnerability scanners use the concept of port scanners to get a more realistic view about the target device. This type of test is very helpful for selecting the right penetration tools to test the network stack of smartphones.1 Network Scanning In the network scanning.1.g. wap-push protocol etc. In order to assess the vulnerability in a highly automated way. a port scanner is involved to identify all targeted hosts on a network. they are scanned for open TCP and UDP ports which will identify the network services running on that host. Other properties such as TCP packet sequence number generation and responses to ICMP packets (e. if a host has a TCP or UDP port 2948 (WAP-push) open. As vulnerability scanners are highly automated. These scanners are better at detecting well-known vulnerabilities in a particular surface but not for the zero-day vulnerabilities.5. penetration testing is needed to check the OS network stacks whether they are ready to face the real-world attacks. Moreover. An individual with expertise in networking and OS security must interpret the results in this case.1. the time to leave (TTL) field). The port scanners identify active hosts. a vulnerability scanner should be used against the target hosts. high false positive error rate while reporting vulnerabilities is one of the major weaknesses of these scanners. For example. After active hosts have been identified.1.3 Penetration testing Penetration testing is a security testing methodology in which experts try to evaluate the security features of a system based on their understanding of the system design 48 . Port scanners like Nmap [112]. Moreover. These types of scanners identify open ports and services on the target hosts and also provide information regarding vulnerabilities through different ports and related descriptions. applicable patches or system upgrades and possible threats to the operating systems and applications by comparing them with lists of known exposures. also provide a clue to identify the operating systems. applications and operating systems. 5. This process is called operating system fingerprinting. A comprehensive list of active ports and the services associated with them is listed through the port-scanning tool. scanners like Nmap gather some additional information through open ports to identify the target operating system. more emphasis is given to those attacks which will trigger vulnerabilities in the network stack of smartphones.1. In 49 . This gives the attacker useful information about the architecture and the potential vulnerabilites of that system. port scanning and vulnerability scanning is done to find out open ports and vulnerabilities related to them. an inside attack is done against the Windows Mobile 6. The planning phase sets the groundwork for penetration tests.and implementation. antivirus. tools and descriptions of the attacks are reported in the same table.g. According to NIST testing methodology guidance [111]. In the discovery phase. Penetration testing can be designed to simulate an inside or outside attack. Reporting is also an essential part for this phase.2 Penetration Test Results and Discussion As a part of the planning phase. in the discovery phase. In this phase. Vulnerabilities are analyzed and reported manually and also with the help of vulnerability scanners. lists of the attacks. In the attack phase. Nmap and Nessus [113] were used as network scanning and vulnerability scanning tools respectively. On the other hand. packet capture technology is a very important tool for analyzing the attacks.1 based smartphone. firewalls. In this phase. Next. there are four phases needed on the way to the successful penetration testing. In fig. IDS) into account. In our case. In the planning phase. Additional Discovery Planning Discovery Attack Reporting Figure 5. 5.1: Four phases for Penetration testing. an outside attack includes all the perimeter defense tools like firewalls. OS Fingerprinting is done in order to identify the OS running on the target device. The main objective of penetration testing is to choose the appropriate tools and techniques used by attackers to identify methods of gaining access to a target device. four phases of penetration testing are shown. real exploit code and different tools which are frequently used by attackers are taken into account 5. Inside attack doesn’t take the security software (e. antivirus etc. a complete list of attack tools along with brief descriptions are given in Appendix A. the detailed result also reveals the underlying kernel and processor architecture of the target device. a number of attack scripts and some well-known attack tools like HPing [114].1.168.0 and ARM architecture respectively.80. Table 5. The MAC address of the target device was also retrieved through ping scan. In the next few sections.23. Packit [116] were used. the OS fingerprinting technique of Nmap was used to identify the operating system running on the target. Target host was shown alive in the network. etc. In Table 5.0.the attack phase. ping scan first was used to find the target smartphone in the network.0. a well-known packet sniffing tool was used to monitor the OS responses. it is useful to try some old desktop OS based attacks against the relatively new Windows Mobile in order to see whether those attacks are countered here as well.1: Network Scanning Report Name of the scan Port Command Results Host discovery-ping scan Intense scan. according to database of Nmap. the port discovery technique shows that the system has two services running which are netbios-ns (137). The target was a HTC Touch Diamond smartphone running Windows Mobile 6. netbios-dgm (138). TCP # nmap -PE -v -p1-65535 PA21. one running Windows XP (victim) and another Ubuntu Linux (attacker).3389 -A -T4 192. initial sequence numbers. which is based on Windows CE 5. These tests gave several hints about the internal network stack architecture of the device and the services it offers.100 50 . The tests show that the Netbios-ns service discloses the netbios-name of the machine. This investigation showed that it is reasonable to assume that the device has a network stack architecture which is similar to its desktop counterparts like Windows XP and Windows Server 2003.168. Therefore. Second. A DLink access point (also used as victim) was used to isolate the attack environment from the public network and simulate a typical real-world scenario. the netbios-datagram service (netbios-dgm) offers support for connection-less file sharing activities. in order to find out the open ports in the target device.1. At last.1 Network Scanning In the network scanning phase. Wireshark [117]. window size. something which confirmed the resemblence between the Windows Mobile and desktop OS based on TTL. an exact match of the OS (Windows Mobile 6) can be found.100 Target device is found alive and MAC address shown. None of the tcp ports were open. arpspoof [115]. Moreover. both using UDP ports. All tests were initiated from two clients. 5.1.1 OS. The detailed test results are given in Table 5. results are reported according to the test methodology stated above. By examining the results of the OS fingerprinting test from Table 5. TCP and UDP intense scanning was performed respectively. all TCP ports ICMP # nmap -sP 192. All these services are enabled by default.2. firstly. CVE : CVE-1999-0621 Other references : OSVDB:13577 Nessus ID : 10150 Risk: Low Nessus ID : 10287 Risk: None Nessus ID : 19506 general/udp general/tcp Makes a traceroute to the remote host. Microsoft Windows CE 5. HTC TyTN II (Kaiser) mobile phone (Microsoft Windows Mobile 6).2.0 (ARM).2).100 MAC Address: 00:18:41:94:C7:EE (High Tech Computer) Device type: terminal|general purpose|media device| phone| specialized Running: HP Windows PocketPC/CE.UD P # nmap -O 192.168. used for resolving the IP address to device name Description The remote host listens on udp port 137 and replies to NetBIOS nbtscan requests. Risk factor & Reference ID Risk: None.0.00). the traceroute information found against the target OS is irrelevant here. Microsoft Windows Mobile 6 or Zune audio player (firmware 2. This script displays.23.168.2. the Nessus scanner was used to assess the vulnerabilities related to the open ports and services running in the device. Registered or GPL) -The version of the Nessus Engine -The port scanner(s) used -The port range scanned -The date of the scan 51 .0.1 OS in Table 5. for each tested host.SERVICE 137/udp open netbios-ns 138/udp open|filtered netbios-dgm OS fingerprinting TCP. Microsoft Windows XP SP2 5.80. Microsoft Windows 2003|PocketPC /CE|XP.100 PORT .2 Vulnerability Scanning To perform vulnerability scanning. Microsoft Windows XP Professional SP2. information about the scan itself: -The version of the plugin set -The type of plugin feed (Direct. TABLE 5. We have summarized the results from the vulnerability scanning of Windows Mobile 6. Microsoft Windows Server 2003 SP1.STATE .3389 -sU -A -T4 192. No significant vulnerabilities were reported by Nessus scanner except the traceroute information marked as (Risk: Low) for the target OS since it leaked information about the devices that could be useful for an attacker.2: Vulnerability Scanning Results Port name netbiosns(137/udp) Synopsis It is possible to obtain the network name of the remote host Traceroute i)It is used to ping the remote host. the vulnerability scanning has been performed in the internal network.Intense scan plus UDP UDP # nmap -PE -v PA21. ii)Also. Microsoft embedded OS details: HP Compaq t5520 thin client (Microsoft Windows CE 5. As. . The result is shown in Table 5. 5.. we have performed many attacks against the target.1 operating system never experienced any denial-of-service due to the attack. Some reports also suggest that the vulnerability could be used to execute arbitrary code on affected systems. We used the Linux machine as the attacker with the Windows Mobile 6...1 TCP Syn Flooding …………………………………………………………………………………….....3.3. 52 . The Windows Mobile 6. it is very natural that a target device will communicate slowly on Internet because of network congestion... For more details about the attack see Appendix B...2. It can be sent to the target system and causes it to hang until it is reset.1 based smartphone as the target in a WLAN environment where both machines were connected to internet through a DLink access point. The reason behind this behavior is that no TCP ports were open in the target device. During the flooding attack.2. the device running WM 6.-The duration of the scan -The number of hosts scanned in parallel -The number of checks done in parallel general/icmp It is possible to determine the exact time set on the remote host. This allows an attacker to know the date which is set on your machine. the “IGMP V3 DoS vulnerability” [119]. the Neptune or TCP SYN flooding attack [118]. The device hangs after receiving a specially crafted IGMP packet. Hping3 was used to carry out this attack and we used the Linux machine as the attacking host in the WLAN environment. The results are shown in Table 5.3..3. Risk: None CVE : CVE-1999-0524 Nessus ID : 10114 5.1 [121].. It is easy for an attacker to continuously send such packets to the target and render it completely useless..1 operating system’s network stack (TCP) is resistant to a well-known exploit. But. There is no patch available in Microsoft’s security bulletin website related to this attack for Windows Mobile 6. This may help him to defeat all your time based authentication protocols...2. The network stack (IP) turned out to be vulnerable against a well-known exploit. Moreover.3 Penetration Testing In the penetration testing phase.2 IP Options (IGMPv3 Exploit) ……………………………………………………………………………………. 5. a real exploit tool [120] was used after being modified to gcc compatible code to suite our Linux system. The following paragraphs show the detailed results from these tests.. The remote host answers to an ICMP timestamp request. 168.5.0.100 Result During the flooding. Soft reset needed to put the target OS into normal state. Because.. No warning message shown in the smartphone. the phone can communicate but slowly on Internet.101 ARP spoofing (gratuitous ARP reply to target itself) ARP spoofing (gratuitous ARP reply to target itself ) # arpspoof -t 192.. and one smartphone (HTC Touch Diamond) to which the gratuitous ARPs were sent. We used the arpspoof tool from the dsniff 2. No changes observed in the target.168. DoS attack was observed immediately if attacker doesn’t reply to the routed packets received from victim.0. 53 .. the attack was done with the target’s own IP and MAC address with the intention to see if we could confuse the target.168.1 -X 00:0e:35:73:02:10 -y 192.168.168.0.0...100 192.0..168. Target OS completely hanged after receiving the crafted IGMP packet. is vulnerable against a denial-of-service attack when receiving a gratuitous ARP for its own IP address.. one DLink router/gateway as a victim (whose ARP entry was being spoofed).101 -X 01:01:01:01:01:01 -y 192. Table 5.168.100 (target.101 -Y 00:18:41:94:c7:ee -i eth1 -e 00:00:00:00:00:05 -E 00:18:41:94:c7:ee -c 0 # packit -t arp -A 1 -x 192. In a second test. The attack has been done in two ways.0..168.3.101 -Y 00:18:41:94:c7:ee -i eth1 -e 00:14:6c:7b:4a:ad -E 00:18:41:94:c7:ee -c 0 ARP Spoofing request Hijacks the traffic between router (AP) and target.3.HTC Touch Diamond).101 (attacker).0. we also observed that traffic can be hijacked between the target and router/access point using forged ARP request and reply packets.3 ARP Spoofing …………………………………………………………………………………….168. The results are shown in Table 5.. 192.0.1 (victim-Access Point).1 OS.0. and 192.0. We observed that the target device running WM 6. The first attack was done using the target’s own IP address but with a MAC address not belonging to the device in order to create an address conflict. IP Options attack (IGMPv3 exploit) # .100 # packit -t arp -A 2 -x 192. none of the TCP ports were open in the device. Testing was performed on the Ubuntu Linux host as the attacker.168.168. During these tests. The hosts were 192.168./exploit 192. No changes observed in the target. It is not vulnerable to DoS and never freezes. Here.3: Penetration testing results Attack name & network stack level TCP SYN flood Command to initiate the attack # hping3 --flood --syn –randsource 192.0.2.3 suite and also Packit tool for the ARP spoofing attacks.100 192.0. . No changes observed in the target.168. No answers.101 -Y 00:18:41:94:c7:ee -i eth1 -e 00:14:6c:7b:4a:ad -E 00:18:41:94:c7:ee -c 0 Hijacks the traffic between router (AP) and target. Boink. No changes observed in the target.0. No changes observed in the target. 5.1 -X 00:0e:35:73:02:10 -y 192. No warning message shown in the smartphone.1 -X 00:0e:35:73:02:10 -y 192.0.0. Opentear. Blat.101 RARP(request) RARP(reply) Inverse ARP (request) Inverse ARP (request) #packit -t arp -A 3 -x 192.0. Newtear. DoS attack was observed immediately if attacker doesn’t reply to the routed packets received from victim.2.0. Some other historical attacks such as the Land attack.168. DoS attack was observed immediately if attacker doesn’t reply to the routed packets received from victim. Hijacks the traffic between router (AP) and target.0.1 -X 00:0e:35:73:02:10 -y 192.. No warning message shown in the smartphone.0.168. Here.101 -Y 00:18:41:94:c7:ee -i eth1 -e 00:14:6c:7b:4a:ad -E 00:18:41:94:c7:ee -c 0 #packit -t arp -A 4 -x 192. All of the attacks mentioned above were handled successfully by the Windows Mobile 6..1 network stack except the NETBIOS session request flooding attack.168. No changes observed in the target. Teardrop.1 -X 00:0e:35:73:02:10 -y 192.. No answers.ARP Spoofing reply # packit -t arp -A 2 -x 192. Ping of Death.168. since port 139 is not open. No answers. Bonk..168.102 -Y 00:18:41:94:c7:ee -i eth1 -e 00:14:6c:7b:4a:ad -E 00:18:41:94:c7:ee -c 0 #packit -t arp -A 8 -x 192.168.168.0..101 -Y 00:18:41:94:c7:ee -i eth1 -e 00:14:6c:7b:4a:ad -E 00:18:41:94:c7:ee -c 0 #packit -t arp -A 9 -x 192.0.4 Other Historical Attacks ……………………………………………………………………………………. no responses were coming from the device but TCP retransmission was occurring rapidly without specifying the NetBIOS 54 .1 -X 00:0e:35:73:02:10 -y 192.0. a lot of data traffic was sent through port 139 to crash the remote device.3. Naptha.168.168. In this attack. ARP spoofing using Broadcast # arpspoof -i eth1 192. and NETBIOS session request flooding were performed in order to test the network stack’s stability and robustness. Syndrop.0. Here...102 -Y 00:18:41:94:c7:ee -i eth1 -e 00:14:6c:7b:4a:ad -E 00:18:41:94:c7:ee -c 0 No answers..168. In this case. This problem is perhaps somewhat expected but the devices could at least give a warning when MAC or IP addresses suddenly change.1 was found vulnerable.1 smartphone experienced a denialof-service and couldn’t connect to local access point anymore. It ceased to function and could not communicate reliably on the WLAN during a SYN DoS attack where the devices were flooded with SYN packets. The WM 6. It was possible to hijack the network traffic from the device using faked ARP replies. After 2 minutes (approx. This is an attack where a malicious user conducts a spoofing attack through the Neighbor discovery protocol present in IPv6. 5. In an earlier paper [14]. The vulnerability discloses sensitive information and results in a DoS attack.). We have carefully selected tools and attack methods in order to be as successful as possible in the tests.1 based smartphones. it seems that the network stack in Windows Mobile devices contain several vulnerabilities and is not fully capable to operate in complex and hostile network environments such as on the Internet where hosts are constantly searched and scanned and many of these attacks are known to be constantly present [122]. security researchers and practitioners and third party vendors who develop firewalls.name for the destination and source. The tests show that it is easy to spawn DoS attacks against the Windows Mobile based smartphones. In another attack scenario.1 device faced DoS against a well-known IGMPv3 exploit. It can be exploited by someone adding a faked entry to the neighbor’s cache via an ND solicitation request. The target device could not be turned off and the Connection Manager service became useless. the WM 6. The vulnerability is caused due to a bug in some implementations of the Neighbor discovery protocol while processing neighbor solicitation requests. we have shown that it was also possible to do similar attack against Windows Mobile 5 devices with similar results where the devices were rendered useless. which may contain spoofed IPv6 address. The only way to solve this situation was to take out the battery and restart the OS. 55 . The Windows Mobile device was also found vulnerable against ARP spoofing. Netbios names were randomly chosen by the flooder tool. Looking at the results from the tests. We believe that these findings will be beneficial to the vendors. Since the target device has a dual stack architecture (IPv4 and IPv6). session request flooding was directed at NETBIOS protocol 139 (tcp) of the target device which completely made the device unresponsive.3 Summary In this chapter. Neighbor discovery spoofing was performed and was successfully handled by the target OS. The tests include some rather well-known attacks against vulnerabilities like session hijacking using forged ARP request and reply packets which are patched in desktop operating systems. The Windows Mobile 6. we have analyzed the stability and robustness of Windows Mobile 6. 56 . mobile VoIP application is shown as an example to point out the security issues that it will face from application and network perspective. In the next chapter. The work should also be useful for end-users and organizations who need to know what level of security and stability these devices offer.antivirus. The results here should make it possible to know in what situations these devices should or should not be used. and intrusion detection systems for Smartphones. 1 operating system includes the “Internet Calling” feature by integrating a SIP client. A number of technical forums have already published the way to enable VoIP support in Windows Mobile 6 based devices. A fairly large number of these users will use the smartphone as VoIP platform. in most cases.2 Mobile VoIP in Windows Mobile 6. subscribers can take the advantage of free hotspots and transport the call only paying the VoIP service providers’ charges.Chapter 6 Case Study: Mobile VoIP Application Mobile Voice over IP (VoIP) is a communication technology where an individual can dial and receive voice calls in handheld devices (e. subscribers of cellular network are not subjected to long distance charges from their cell operator.1 Overview Any handheld device that supports high-speed IP communications can use the mobile voice over IP (VoIP). there will be discussion on Mobile VoIP support in Windows Mobile 6. Although subscribers have to pay applicable charges. According to another prediction by Disruptive Analysis [124]. SIP server). smartphones or PDAs) via the IP network. they are very low compared to the long distance cellular charges. In case of Wi-Fi. In the next few sections. the OS vendors are integrating VoIP support such as SIP Clients in the smartphones.1 and related security issues. This is the reason why we have chosen a Mobile VoIP application as a part of our case study while evaluating the security of Windows Mobile 6. Typically. has pointed out in a report [123] that around 75 percent of corporate lines will be using VoIP in next 2 years. So. VoIP subscribers are going to increase significantly in coming years. 6. Several survey reports show that. Watchguard Technologies. a security firm. As the underlying protocol for the VoIP application has no built in security. voice calls are made with the help of a VoIP service provider (e. voice calls are made through a voice carrier’s network but in VoIP.1 The Windows Mobile 6. third party unsigned applications are becoming a part of VoIP support in Windows Mobile OS.g. Due to the VoIP technology. This strategy does not prove helpful in stopping the users from using VoIP application over cellular network or Wi-Fi. the total number of VoIP subscribers worldwide is expected to reach nearly 100 million. Calls can be transported through Wi-Fi (known as VoWLAN or Voice over WLAN) or a cellular carrier’s data network (known as Vo3G or Voice over 3G).g. Moreover. As a result. 57 . there could be serious threats towards the voice data over the air as well as in the operating system on which the application will run.1 operating system. OEMs and the cellular operators don’t enable this feature as a business strategy. 6. by the end of this year. there could be over 250 million VoIP over cellular network users by the end of 2012. This can pose more security threats. But. 6. There can be several scenarios related to DoS attacks. SPAM over Internet Telephony. Attackers can simply send crafted TCP or IP packets to the target which can interrupt the VoIP conversation. Eavesdropping As the voice traffic is sent in clear through Wi-Fi during VoIP conversation. malicious user can sniff the traffic over the air. clog voicemail boxes and even annoy the user to switch off the mobile phone which is unlikely in terms of email. SPIT can slow system performance. Major threats include ARP spoofing. First. denial-of-service. Denial-of –Service Attack This type of attack is quite common in data networks. attacks can be done over the air when VoIP is running over Wi-Fi or 3G environments. Another typical scenario is that a malicious user can inject excessive traffic to flood the network causing other VoIP users to disconnect. an attacker residing in the same network can send gratuitous ARP reply packets (spoof the MAC of AP) to the target to redirect the voice traffic to attacker’s device. In another scenario. 6. This is termed as external attacks.3 Mobile VoIP Security Security threats to Mobile VoIP come in two ways. A brief description of these attacks is listed below. eavesdropping. In section 5. Both types of attacks will be discussed in the following two sections. like junk e-mail.3. ARP Spoofing If an individual is using a public Wi-Fi access point to place a VoIP call. In section 5. 58 . Spam over Internet Telephony (SPIT) Spam over Internet Telephony (VoIP) are sent for direct monetary benefits or denialof-service which can damage recipient’s business investment in terms of missed calls or disruption of calls. Second. The attacker can inject malicious traffic or simply observe the traffic to retrieve confidential information. Windows Mobile 6.3.3.1 External Attacks Mobile VoIP over Wi-Fi faces threats similar to other applications using Wi-Fi.1 is found vulnerable to such DoS attacks. SPIT can be generated in similar ways as ordinary spam with botnets that target millions of VoIP users from the compromised systems. the penetration result shows that the Windows Mobile 6. Attackers can inject traffic into the wireless network without even connecting to the access point. an attacker can run multiple packet streams such as a lot of SIP call requests or SIP registrations to exhaust the server resources causing busy signals or disconnecting of a Mobile VoIP users. a third party application running on the Windows Mobile OS can be used to perform internal attacks. In case of Mobile VoIP. and voice phishing (Vishing) attacks.1 OS is vulnerable to this type of attack. In Windows Mobile 6. ARP spoofing. Some threats introduced by this situation are listed below In the first scenario.g. by installing a third-party SIP client in Windows Mobile 6. WLAN or Cellular Network). 6. such as user names. In this way. 59 . Most of the solutions for secure mobile voip focus on the external attacks but not the internal attacks which are related to the platform security on which the voip application runs. the rogue application may enter the wrong settings for SIP server. reply attacks or message integrity violations. account numbers and passwords.Voice Phishing (Vishing) Attack This type of attack attempts to get users to disclose personal and sensitive information. Microsoft has included the SIP client as a part of VoIP support but OEMs or operators disable the feature as a business strategy. the rogue SIP server will act as SIP proxy just to relay the traffic to the original server which can be treated as man-in-the-middle-attack. A rogue SIP client may act as a Trojan by including an extra functionality which can record the VoIP calls and send it to an attacker through any interface (e. In the third scenario. In one-tier security. the SIP settings including user name and password can be leaked easily which can be used later on for the attacker’s own purpose. This scenario is quite popular in the smartphone operating systems like Windows Mobile and Symbian. any unsigned third-party application runs in privileged mode and it can access any restricted API of the target OS. the users install unsigned applications to enable the feature and use a third party SIP client to place a VoIP call.2 Internal Attacks Unsigned third-party applications have always been a blessing for the users to tweak the target OS platform and do some useful tasks which cannot be done with bundled applications that come from OEMs or OS vendors. To overcome the security issues that are faced by Mobile VoIP architecture. Rachid [125] has discussed about SPAM over Internet Telephony (SPIT) and its protective measures in detail. That particular server may be installed by the attacker to give proper service to the user but intercept all voice traffic on the way where the user will not notice anything at all.3.1 OS with one-tier security (most of the devices in the market are using the one-tier security based WM platform) can compromise the whole platform. As a result. The rogue application may include a key logger software that records the type sequence while installing the SIP client. Since Windows Mobile OS supports SIP (Session Initiation Protocol) protocol by default. a number of solutions are available to secure it. The next section provides a general guideline for platform security. In this scenario. The trick works by spamming users using SPIT technique and luring them to call their bank or service provider to verify account information. secure mobile VoIP solutions discussed here mainly focus on SIP. In the second scenario.1. as well as eavesdropping of the media session. Moreover. Israel [81] has proposed a secure model of mobile VoIP which will counter most of the above mentioned attacks like DoS attacks. after installation. Limit applications’ capabilities Applications (privileged or normal) must not be allowed to change the security policies. These buffers can have large sizes ( > 2MB ) depending on the underlying algorithms. All the assets must be assigned highest possible required roles. This will provide some protection against the unsigned or normal applications by restricting their access to unprotected assets only.4 Suggestions Here are some of the suggestions that can help towards providing a more secure system: Security Awareness Well. Windows mobile reserves address space larger than 2MB in LMA instead of the 32 MB process slot.6. this is the most cliched suggestion for security. By strong security policy we mean that unsigned applications must not be accepted wherever it is possible. Awareness can really strengthen security in the case of smartphones by enabling the application developers to take care of the security while developing software. Privileged certificates provide applications the capability of changing the security policies. Check if policies are intact We have seen that the policies can be changed by applications like “SDA_application_unlock”. Development specific Applications like voip client or other streaming applications use jitter buffers. Special care must be taken not to reserve space for them in the Large Memory Area (LMA) (Slot 33. By default. It also helps the users to make good use of the provided security mechanisms like passwords.Slot 58) because it is unprotected and accessible to all other applications. it is useful to check frequently if the policies are intact. encryption. This behaviour can be changed by the developer by reserving the address space in small chunks ( < 2MB ). Settings like RAPI restricted mode must be used and all other features provided by Windows Mobile like password required. device lock and wipe etc must be enabled. This practice is highly recommended until one is sure that the policies cannot be changed maliciously. We had to start with it because we feel that it is really needed in the mobile world by users and application developers. Therefore. Two-Tier Access Devices that handle sensitive personal or business data must have Two-Tier Access enabled. Carving out strong security policies is of no use if we are not sure that they will remain intact. This can be dangerous because even if the applcation is not 60 . This can be done by the IT administrator or the cellular network operator. security policies etc. Carving out a strong security policy Since OEMs or corporate IT administrators can change or tune the security behaviour of smartphones therefore they must carve out strong security policy for their devices. Windows mobile applications can be privileged or normal. Major players in the field of security products have now started taking interest in introducing products for smartphones. It will be very useful to have an extra line of defence in shape of antivirus software and firewalls. Privileged applications have full access to all the assets (protected or otherwise). These vulnerabilities must be patched with the same zeal and speed as in desktops. Patches We have seen that there are vulnerabilities in the operating systems. it can have a vulnerabilty (e. At the very least. The Trusted code can have vulnerabilites like buffer overflows and the network stack can be vulnerable to attacks that are already patched in the desktop environment. The assets of Windows Mobile must be divided into more classes and new types of certificates must be introduced to grant access to those classes.g buffer overflow) that can be exploited by an attacker thus enabling the attacker to change the security policies. 61 . Third Party Security Products Security products like antivirus software or firewalls were not a good solution for smartphones in the past because of smartphone’s limited processing power and other resources but with the increase in smartphone capabilities their usage in this environment is becoming possible.malicious. This leads us to our next suggestion for Windows Mobile vendors. all the assets that are part of the security policies must be classified under a new class say Root and Root certificates must not be issued to ordinary applications. This means that such an application is capable of doing anything like changing the security policies or sending files to remote locations even though its original job maybe browsing the Internet. This is rather coarse grained. Windows Mobile automatic update feature can be helpful but it is useless if the vendors do not introduce patches for their mobile OS. but Windows Mobile does not offer such a restriction. we have focused on Windows Mobile operating system security since this is the second most popular OS in the consumer market of smartphones and we feel that its similarity to Windows desktop systems will attract the Windows traditional clientelle. It uses a hardware based MMU that protects each prosess’s slot. trojans and worms is realized and BotNets are going to be a reality in the mobile world very soon. The division of application permissions into only two levels (privileged and normal). The network stack of Windows Mobile is found vulnerable to some known attacks which have been taken care of in desktop operating systems. as we have seen. with privileged applications being capable of doing virtually anything in the system. it is not impossible to bypass the security policies. is coarse grained. Windows Mobile does offer a security infrastructure that uses security policies with code signing. Smartphone operating systems face the same threats that are faced by the desktop OSes face.Chapter 7 Conclusions In this thesis. We can say that while the vendors of mobile OSes have started to pay attention to OS security. Most attacks today are aimed at Symbian platforms due to its higher availability in the consumer market. In this thesis. but this can soon change and be Windows Mobile very soon. penetration testing has been done against the Windows Mobile 6. 62 . Normally. DoS attacks may not be as lethal in the mobile world as they are in the Desktop world but they are often combined with other attacks (e. But. The problem with the smartphone operating systems is that a known vulnerability (that is long patched in Desktop OSes) tends to exist for a longer time due to delayed patches thus they can be an easy target to exploit. an application that needs access to a single protected asset in order to carry out its job should not be allowed to access all the protected assets. Threats from vulnerabilities like buffer overflows or other logical errors in trusted code or third party applications cannot be ignored.g Blind Spoofing) which make them more dangerous. The limited smartphone resources are easy to consume thus DoS attacks are also easy to perform. The threat from malware like viruses. we have realized that most of the problems we have pointed out in this report are common to almost all mobile operating systems. Although our report focuses on Windows Mobile. these mobile OSes still have a long way to go in order to prove themselves truly worthy of the trust that is invested in them by their users today.1 operating system in order to expose the vulnerabilities on the network level. Windows Mobile supports a limited number of processes and it uses a slot based memory architecture. 63 . IDS/IPS development on the smartphones’ operating systems in an efficient way can be a good research area. • The development of an automated penetration testing tool will ease the security assessment for mobile devices. • Intrusion detection or prevention has always been a big challenge for the smartphones due to its limited processing resources and disk memory.We suggest the following future work in connection to this thesis: • Third-party security products are being introduced in the mobile world. It will be interesting do evaluate the of mobile OSes in presence of these products. Santa Barbara. ISBN: 1-4244-2136-7. Published on 27th Aug 2008. . R.. Cyril Jacob.. Mark Handley. NV 2006. Cyril Jacob.aspx .microsoft. June. SHx BSPs.. pp. presented at Defcon 14-Las Vegas.cs.com/en-us/library/ms901733.free. 2006. Perelson. “An Investigation into Access Control for Mobile Devices”.A. “Security of Smart Phones”. visited on April. Jukka Ahonen.ac. Collin R.. Windows NT Embedded. “PDA OS Security: Application Execution”. Jesse D´Aguanno. ARM BSPs.com/en-us/library/ms950386. 218.org/ .aspx . Sheikh Mahbub Habib.com/en-us/library/cc5 33015.ucl.. Tomas Olovsson.microsoft.com/en-us/library/ms902369... Windows Embedded Standard 2009. ISSA 2004. x86 BSPs. “Blackjacking-Owning the Enterprise via the Blackberry”. S.php. Collin Richard Mulliner.aspx . . px . “A Practical Analysis of the Robustness and Stability of the Network Stack in Smartphones”. Published on 13th June 2008. Helsinki University of Technology.microsoft.aspx . Board Support Package Overview.NET Micro Framework.com/en-us/library/ms 901743. MIPS BSPs.praetoriang. “An Analysis of the Robustness and Stability of the Network Stack in Symbian-based Smartphones”.aspx . Accepted in Journal of Networks (special issue).fi/Opinnot/Tik-110. ISSN 1455-9749. visited on April 2009. “Security Comparison of Mobile OSes”.REFERENCES [1] [2] Andrew Bittau. visited on April 2009.com/en-us/library/ms 376738. 11th IEEE International Conference on Computer Science & Information Technology (ICCIT 2008). visited on April 2009. Windows Embedded CE.. Arto Kettula.. Joshua Lackey.501/2000/papers/kettula.microsoft.uk/bittau-wep.fr/computers/pocket/simon.microsoft.Mulliner.. Academy Publisher.pdf. Botha. visited on April 2009.pdf . Windows Embedded for Point of Service.tml. Visited on 11th March 2009.com/en-us/library/ms902398. Windows . visited on April 2009. visited February 2009.microsoft. Tomas Olovsson. Johannesburg. Windows Embedded NavReady. visited on April 2009. South Africa.microsoft.microsoft. 64 [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] . Gallagher Estate.com/enus/library/bb848016. visited on April 2009. visited January 2008. Master’s Thesis submitted in University of California. visited on 7th July 2008. visited on April 2009.com/en-us/library/ms902328. visited on April 2009.microsoft.com/en-us/library/bb847932.. ISSN: 1796-2056. June 2004.html.tkk. “The Final Nail in WEP’s Coffin”.aspx . 2009.microsoft.asp x . visited on April 2009. Sheikh Mahbub Habib. 393-398. Publications in Telecommunications Software and Multimedia TML-C7.net/presentations/blackjack. . microsoft. [38] Network Drivers. visited May 2009 [42] ActiveSync Desktop Pass-through (DTPT).com/en-us/library/aa447511. [30] Developing a Device Driver.microsoft. visited on April 2009.org/rfc/ rfc3056.aspx . visited April 2009. [28] Internal File System Selection.microsoft.com/en-us/libra ry/aa450572.. [44] Simultaneous Voice and Data Connections. . Memory Architecture.. published on 28th Aug 2008.microsoft.aspx .com/en-us/library/ms892542. visited on April 2009.aspx .0.aspx.aspx . [26] Windows CE 5.ietf. [36] Protected Extensible Authentication Protocol. [24] Windows Embedded CE Documentation.. 65 .microsoft.microsoft.txt.microsoft.ietf.microsoft. . .. RFC 3261.com/en-us/library/bb158486.aspx .org/rfc/rfc3261.aspx .. [46] SIP: Session Initiation Protocol.com/en-us/library/aa922194. published on 28th Aug 2008.RFC 3748...[22] Windows Mobile 6.. Visited April 2009. [32] Windows CE Drivers.. RFC 3056. visited April 2009. visited April 2009. [43] CellCore.0 (MS-CHAP V2).microsoft. visited April 2009. published on 6th June 2008.org/html/rfc5214 ... 28th Aug 2008. .com/en-us/library/aa 910649.com/en-us/library/ aa450566. published on 28th Aug 2008.microsoft. published on 28th Aug 2008. published on 28th Aug 2008. . [39] Connection of IPv6 Domains via Ipv4 clouds.aspx . visited may 2009. published on 28th Aug 2008. [23] Windows CE 5. visited on April 2009. June 2002.com/ blog/?p=4 ... [40] Intra-site Automatic Tunnel Addressing Protocol (ISATAP).ietf. Published on 27th Aug 2008.com/en-us/library/bb4164 93. [33] Extensible Authentication Protocol (EAP). [45] Connection Service Providers.. Feb 2001.. published on 17th Sept 2008. visited April 2009. published on 17th Sept 2008.microsoft. .microsoft.aspx .aspx . March 2008.microsoft.1 Memory Management Changes. [34] Challenge-Handshake Authentication Protocol.com/en-us/lib rary/bb416439.microsoft.com/enus/library/aa924332.com/en-us/library/ms9237 14.com/en-us/library/ ms894042... [35] Transport Level Security.aspx .aspx .com/enus/library/aa925739.0 Kernel Overview. [37] Microsoft Challenge-Handshake Authentication Protocol 2. RFC 5214. June 2004. [31] Stream Interface Driver Architecture. published on 28th Aug 2008.microsoft.aspx .txt .. visited on April 2009. published on 17th Sept 2008. visited April 2009. [41] Layered Protocols and Provider Chains.microsoft.microsoft.org/rfc/rfc3748.com/en-us/library/aa9147 41. [29] CDFS/UDFS File Systems.aspx . Published on 16th Oct 2008. [25] Windows CE 5.com/enus/library/aa910850.aspx . [27] Windows Mobile 6. visited April 2009.aspx .com/en-us/libr ary/bb159115.txt . [47] DialPlan component, , published 28th Sept, 2008. [48] Real-time Communications (RTC) Client API, , visited May 2009. [49] VOIP Application Interface Layer (VAIL), , visited May 2009. [50] VoIP Provisioning for Windows Mobile Powered Devices, , visited May 2009. [51] Securing L2TP using IPSec, , Nov 2001. [52] Point-to-point Tunneling Protocol, , July 2001. [53] Internet Key Exchange (IKEv2) Protocol, , Dec 2005. [54] Negotiation of NAT-Traversal in the IKE, , January 2005. [55] IKEv2 Mobility and Multihoming Protocol, , June 2006. [56] Global System for Mobile Communications (GSM), technology/gsm/index.htm , visited May 2009. [57] General Packet Radio Service (GPRS), gprs/index.htm , visited May 2009. [58] Enhanced Data rates for GSM Evolution, /edge.htm , visited May 2009. [59] International Telecommunication Union, , visited May 2009. [60] IMT-2000, , visited May 2009. [61] Wideband Code Division Multiple Access, technology/tech_articles/WCDMA.shtml , visited May 2009. [62] 3G CDMA2000, , visited May 2009. [63] HSPA, , visited May 2009. [64] The 3rd Generation Partnership Project, , visited May 2009. [65] Jacob Sharony, “Introduction to Wireless MIMO-Theory and Applications”, IEEE LI, Nov 2006. [66] CDMA Technology, , visited May 2009. [67] 3G - CDMA2000 1xEV-DO Technologies, 1xEV-DO.asp , visited may 2009. [68] Bluetooth, , visited May 2009. [69] Core Specification v2.0 + EDR, Building/Specifications/ , visited May 2009. [70] Infrared Data Association, , visited May 2009. [71] IEEE Standards Association, IEEE 802.11, /802.11.html , visited May 2009. [72] Introduction to Wi-Fi (802.11), tro.php3 , visited May 2009. [73] Andrew Bittau, Mark Handley, Joshua Lackey, “The Final Nail in WEP’s Coffin”,, visited May 2009. [74] SetProcPermissions, , published 28th Aug 2008. 66 [75] Datalight Reliance Return on Investment Analysis Guide, , visited May 2009. [76] Joshua Wright, “Attacking 802.11 Networks”, LightReading Live!,1st Oct 2003. [77] Problems Created by Man-in-the-Middle Attacks, , visited April 2009. [78] Vulnerability in SMB Could Allow Remote Code Execution, , published 11the Nov 2008. [79] Exploited Systems through ActiveSync, tent.aspx?g=security&seqNum=326 , published 26th Sep 2008. [80] Alex Talevski, Elisabeth Chang, Tharam Dillon, “Secure Mobile VoIP”, pp. 2108-2113. IEEE ICCIT 2007 Proceedings, ISBN: 0-7695-3038-9. [81] Israel M. Abad Caballero, “Secure Mobile Voice over IP”, Master of Science Thesis, June 2003, KTH, Sweden. [82] GSM Cloning, , visited May 2009. [83] N.J Croft, M.S Olivier, “A Silent SMS Denial of Service (DoS) Attack”, University of Pretoria, South Africa, Oct 2007. [84] Alex Biryukov, Adi Shamir, David Wagner, “Real Time Cryptanalysis of A5/1 on a PC”, Fast Software Encryption Workshop 2000,April 2000, Newyork,USA. [85] Elad Barkan, Eli Biham, and Nathan Keller,”Instant Ciphertext-Only Cryptanalysis of GSM Encrypted Communication”, pp.600-616, Vol. 2729/2003, Lecture Notes in Computer Science, Springer, 2003. [86] Eli Biham, Orr Dunkelman, Nathan Keller, “A Related-Key Rectangle Attack on the Full KASUMI”, pp. 443-461, Vol. 3788/2005, Lecture Notes in Computer Science, Springer, 2005. [87] Logical Link Control and Adaptation Protocol (L2CAP), , visited May 2009. [88] Link Manager Protocol, , visited May 2009. [89] M. Herfurt, M. Holtmann, A. Laurie, C. Mulliner, T. Hurman, M. Rowe, K. Finisterre, J. Wright. the trifinite group. , visited April 2009. [90] Microsoft Bluetooth Stack OBEX Directory Traversal, , under review, visited May 2009. [91] Privileged APIs,, published 28th Aug 2008. [92] Certificate Management,. aspx , published 28th Aug 2008. [93] Security Policy Settings, aa455966.aspx , visited April 2009. [94] Security Roles, , visited April, 2009. [95] Certificate Mgmnt., , published 28th Aug 2008. [96] Flexispy Pro, , visited 20th May 2009. [97] Authentication Events, , visited April 2009. 67 [98] Remote Wipe,, published 19 Sept 2008. [99] Remote Wipe, E6851D23-D145-4DBF-A2CC-E0B4C6301453&displaylang=en ,published 9th Feb 2008. [100] MS Mobile Device Manager, business/solutions/enterprise/mobile-device-manager.mspx ,visited May 2009. [101] Cardtrap virus,, visited May 2009. [102] WinCE/Infojack, 6/windows-mobile-trojan-sends-unauthorized-information-and-leaves-devicevulnerable/ , visited May 2009. [103] P.Haas, iruses.pdf , visited May 2009. [104] Michael Becher, Felix C. Freiling and Boris Leider, "On the Effort to Create Smartphone Worms in Windows Mobile” , IEEE proceedings 2007. [105] Georgia Tech Information Security Center report, “Emerging Cyber Threats Report for 2009”, October 2008. [106] Nan Xu, Fan Zhang, Yisha Luo, Weijia Jia, Dong Xuan and Jin Teng, “Stealthy Video Capturer: A New Video-based Spyware in 3G Smartphones”, ACM, Wisec’09, Zurich. [107] Seth fogie, “Airscanner Vulnerability Summary: Windows Mobile Security Software Fails the Test”, Airscanner Corp., 14th Aug 2004. [108] Aubrey-Derrick Schmidt, Frank Peters, Florian Lamour, “Monitoring smartphone for anomaly Detection”, ACM , Mobilware’08, Innsbruck. [109] Markus Miettinen, Perttu Halonen, Kimmo Hätönen, “Host-Based Intrusion Detection for advanced mobile devices”, IEEE, AINA’06. [110] Michael Becher and Ralf Hund, “Kernel-Level Interception and Applications on Mobile Devices”, May 2008. [111] John Wack, Miles Tracy, Mrurgiah Souppaya, “Guideline on Network Security Testing”, NIST Special Publication 800-42, October 2003. [112] Nmap security scanner, , Accessed on Feb 2009. [113] Nessus Vulnerability Scanner, , released on Feb 2009. [114] Salvatore Sanfilippo, “Hping”, . Accessed on Feb 2009. [115] Dug Song, “Dsniff 2.3”, gsong/dsniff/ . Accessed on Feb 2009. [116] Packit (network injection and capture tool), jects/packit/ , Accessed on Feb 2009. [117] Wireshark, , Accessed on Feb 2009. [118] Syn flood,. html . Accessed on April 2009. [119] CVE ID: CVE-2006-0021 “IGMP v3 DoS Vulnerability”, , National Vulnerability Database, National Institute of Standards and Technology, Published on February 14, 2006. [120] Alexey Sintsov, “Microsoft Windows XP/2003 (IGMP v3) Denial of Service Exploit (MS06-007)”, rm.com/exploits/1599 , Published on March 21, 2006. 68 “Security Firm Lists Leading VoIP Threats”.mspx . eSecurity Planet. Campus-Wide Information Systems. Page:342 – 358.com/technet/ security/Bulletin/MS06-007.[121] Microsoft Security Bulletin MS06-007. 2008. 16th April 2009. [124] Nicole Lewis. “SPAM over Internet Telephony and how to deal with it”. [122] Wolfgang John. of Secure IT. Diploma Thesis. [123] David Needle. Tomas Olovsson: Detection of malicious traffic on back-bone links via packet header analysis. 28th Jan 2008. 69 . Volume:25. ISBN/ISSN 1065-0741. Fraunhofer Inst. Issue:5. 14th Feb 2006. “VoIP Over 3G Wireless Gets Real”.microsoft. [125] Rachid El Khayari.. VOIP-News. Packit: It is an auditing tool that allows penetration testers to monitor. Wireshark: It is a free and open source (GNU GPL) protocol analyzer tool widely used in industries and educational institutions. 70 .com LLC. Nessus Vulnerability Scanner: It is a free (Home feed only) and licensed utility by Tenable Network Security for vulnerability analysis in the network level. and inject customized IP traffic in the network. Some ARP and RARP (reverse ARP) spoofing attacks are done with this tool. Netwox (Network Toolbox): It is a free and open source (GNU GPL) collection of 222 tools used for security auditing. IPv6 attack like neighbor discovery spoofing is done using this tool. Nessus (Home feed) is used in the vulnerability scanning section of this thesis. Wireshark is used to analyze the related protocols involved in the practical analysis of network stack. manipulate. HPing is used for TCP Syn flooding and Land attack in our thesis. In our thesis. DSniff: It is a collection of tools for network auditing and penetration testing by Dug Song.APPENDIX A LIST OF PENETRATION TOOLS Nmap(Network Mapper) Scanner: It is a free and open source (GNU GPL) utility for network exploration and security auditing by Insecure. Nmap is used in the network scanning section of this thesis. HPing: It is a free and open source (GNU GPL) command-line oriented TCP/IP packet assembler/analyzer. We used arpspoof tool of DSniff package for ARP spoofing attacks. In our thesis. Figure B.code=5. igmpHeader. igmpHeader. igmpHeader.1: Partial IGMPv3 Exploit Code. When the packet is analyzed by a packet analyzer. //bug is here ipHeader.checksum=0. If we investigate the code given in Figure 1.options[1]=htons(0x<0000). igmpHeader. the bug is seen in the IP Header’s options field.ResvSQVR=0x0. It is shown as a normal IGMP V3 membership query packet. igmpHeader.0.0").0.addes=dst_addr. Same attack has already caused vulnerability in Windows Mobile 6. source or target none complained it as a malformed packet.group=inet_addr("0. //Ip options ipHeader.options[0]=htons(0x0000). //v3 Membership Query igmpHeader.1 powered smartphones.type=0x11. Network stack of target device cannot process that particular buggy packet and get frozen permanently until it has been reset by the user. where the bug was introduced in the IP header. 2006. igmpHeader. 71 . Alexey Sintsov published the exploit code on March 22.APPENDIX B IGMPV3 EXPLOIT IGMPv3 exploit is one of the well-known vulnerabilities in desktop operating systems. igmpHeader.num=htons(1).QQIC=0. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/95965957/122052
CC-MAIN-2016-50
refinedweb
29,749
51.04
In Python, an array is a data structure used to store multiple items of the same type. Arrays are useful when dealing with many values of the same data type. An array needs to explicitly import the array module for declaration. A 2D array is simply an array of arrays. The numpy.array_split() method in Python is used to split a 2D array into multiple sub-arrays of equal size. array_split() takes the following parameters: array(required): Represents the input array. indices_or_section(required): Represents the number of splits to be returned. axis(optional): The axis along which the values are appended. To split a 2D array, pass in the array and specify the number of splits you want. Now, let’s split a 2D array into three sections or indices. import numpy as np array1 = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) # splitting the array into three indexes new_array1 = np.array_split(array1, 3) print('The arrays splitted into 3 sections are:', new_array1) numpymodule. array1. numpy.array_split()method to split the given array into three sections, and then assign the result to a variable named new_array1. new_array1. RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/how-to-split-a-2d-array-in-numpy
CC-MAIN-2022-33
refinedweb
197
58.99
OK, the eventual goal of my project is a kind of word insert/delete/replace. You read a file... "The quick brown fox jumps over the lazy dog." and you can delete all instances of "fox", replace "fox" with "cat", insert "giraffe" wherever you want, etc. Word processing stuff. So I figured I'd have a big file and start reading it into a stream, parse that stream, then do whatever (replace, add, delete). Except I can't seem to get that far since it breaks with my simple little "insert" experiment program. The program below starts with a stream... abcdefghijkmnopqr The goal is to end up with this stream... abcdefgzzzhijkmnopqr What I get is this... zzzdefghijkmnopqr So I have at least two problems. - "zzz" is being written at position 0, not position 7. Note the 7 in line 33. - "zzz" overwrites the characters in the stream. It doesn't insert. Code is below. I'd like to know - What I'm doing wrong. - Is this (manipulating iostreams) a decent approach to the whole problem? #include <iostream> #include <sstream> #include <cassert> using namespace std; void PrintStream(iostream& stream) { int startPos = stream.tellg(); assert(stream.good()); cout << "Start position = " << startPos << endl; char c; while(stream.good()) { c = stream.get(); if(stream.good()) { cout << c; } } cout << endl; stream.clear(); stream.seekg(startPos); assert(stream.good()); } void Manipulate(iostream& stream) { stream.seekg(7, ios_base::beg); assert(stream.good()); stream.write("zzz", 3); assert(stream.good()); } int main() { string aString = "abcdefghijkmnopqr"; stringstream stream(aString); stream.seekg(0, ios_base::beg); assert(stream.good()); PrintStream(stream); Manipulate(stream); stream.seekg(0, ios_base::beg); assert(stream.good()); PrintStream(stream); cin.get(); return 0; } Note. All "assert" statements succeed. Actual program output is the following. Start position = 0 abcdefghijkmnopqr Start position = 0 zzzdefghijkmnopqr
https://www.daniweb.com/programming/software-development/threads/364731/stream-problems
CC-MAIN-2017-17
refinedweb
295
72.53
Plugin Debugger Debug Sublime Plugins with Winpdb graphical python debugger. Details Installs - Total 4K - Win 3K - Mac 667 - Linux 896 Readme - Source - bitbucket.org PluginDebugger This package contains some little glue code to use nice Winpdb graphical python debugger for debugging sublime plugins. Usage Here is a little python snippet of debug_example.py: import sublime_plugin import sys class DebugExampleCommand(sublime_plugin.WindowCommand): def run(self, **kwargs): sys.stderr.write("started\n") i = 4 import spdb ; spdb.start() z = 5 - spdb.start() - winpdb will be launched, if not yet launched from Plugin Debugger. Each later call of this function sets a breakpoint. If winpdb (started from Plugin Debugger) has been terminated in between, it will be restarted. - spdb.setbreak() - sets a breakpoint. You need to have to attached debug client for using this. Note If you start winpdb manually, use sublime as password for finding scripts on localhost. Install Install this Package using Package Control. Additionally to this package you have to install Winpdb from (or apt-get install winpdb on debian-like systems). Configuration The only configuration option is plugin_debugger_python, which can be set in your User Settings file Packages/User/Preferences.sublime-sttings. This specifies the full path to your python installation, where you installed Winpdb. Please note, that this must be a python 2.x (2.7 recommended). You can also debug Python3 with this. I recommend using Preferences Editor to set it ;) . Test your installation Run "Plugin Debugger: run debug_example (opens Debugger)" from command palette. Your sublime text will freeze for few seconds and then will open a winpdb window ready for debugging DebugExampleCommand. Module rpdb2 havily hooks into python interpreter, so if you really want to quit the debug session, you have to restart your sublime text. Once Winpdb has opened, you should keep it open, because it will inform you on any uncaught exception. If you close winpdb, your sublime simply freezes on an uncaught exception (because it breaks on that exception), but you are not informed on this because of missing frontend. Snippets There is a spdb snippet, which inserts: import spdb ; spdb.start(0) Bugs Please post bugs to Known Issues I tried to automatically unload rpdb2 library and undo all its hooking into python system, but failed till now. I also tried to get a nice status bar message about loading the Plugin Debugger using Package Control's thread_progress, but I did not manage yet to run a thread unattended of the debugger yet (that it is not affected by setbreak call). For now I will stop working on automatic unloading, because restarting sublime text after a debug session is fine for me (at least for now). Changes - 2014-04-12 - Add Python-3.3.3-Lib.zip, for correct display of pyhton lib debugging. - Handle now also all Packages files, even if in .sublime-package files. - 2014-01-22 - pre-import packages, which are imported by rpdb2, such that they are loaded from ST environment rather than from environment, where winpdb is installed - replace Plugin Debugger.sublime-settings by Preferences.sublime-settings for easier settings handling. - run external python from temporary directory, to prevent to have sublime text folder in modules path. Author Kay-Uwe (Kiwi) Lorenz <kiwi@franka.dyndns.org> ( Support my work on Sublime Text Plugins: Donate via Paypal
https://packagecontrol.io/packages/Plugin%20Debugger
CC-MAIN-2022-21
refinedweb
549
58.28
Important: Please read the Qt Code of Conduct - How do I hide the window title bar from QDockWidget I want to hide the default "window title" bar that's on top of the dockwidget, I couldn't find the answer anywhere. UPDATE: update to a more accurate title for future reference Hi What title bar ?? If you mean when you use it as a window so its shown with the normal border and title then you can do setWindowFlags(Qt::Window | Qt::FramelessWindowHint); @mrjj said in How do I hide the window title bar from QTabWidget: setWindowFlags(Qt::Window | Qt::FramelessWindowHint); The title bar when the tabwidget in within the main window. @lansing Hi That seems to be a QDockWidget title bar ? If Yes to that , try dockWidget->setTitleBarWidget(new QWidget(dockWidget)); @lansing Hi Do you read the docs? Its always a good idea to skim over what features a class has. Anyway, its setFloating(false); Yes I looked at that doc for over 10 minutes and I couldn't find it. There's nothing about dragging and moving the widget when the title bar was hidden. This doesn't make the dockwidget draggable. m_ui->myDockWidget->setTitleBarWidget(new QWidget(m_ui->myDockWidget)); m_ui->myDockWidget->setFloating(false); Well it uses the title bar for dragging and you remove it so im not sure why you expect that to still work ? But you can fix it with But why not just leave it alone if you want all the features it provides? Maybe you could hide it when it get docked but then you cannot undock it again. So not really sure what you really want as you cant both remove it and use it :=) If you only need it hide it from visibility, you could try to use the stylesheet to set dock widget title as the same color as the background. Customize dock widget. I'll post some pyside2 code to show what I mean..... I'm not able to reproduce the title bar as shown in this picture without knowing your code. . By default, I get no background. But here is the code import sys from PySide2.QtWidgets import QMainWindow, QWidget, QLabel, QDockWidget, QTabWidget, QApplication class MainWindow(QMainWindow): def __init__(self): super(MainWindow, self).__init__() self.t1 = QLabel("tab 1") self.t2 = QLabel("tab 2") self.tabWidget = QTabWidget() self.tabWidget.addTab(self.t1, "Tab 1") self.tabWidget.addTab(self.t2, "Tab 2") self.dockWidget = QDockWidget() self.dockWidget.setWidget(self.tabWidget) self.dockWidget.setStyleSheet("QDockWidget:title {background-color: none;}") self.setCentralWidget(self.dockWidget) if __name__ == '__main__': app = QApplication(sys.argv) mw = MainWindow() mw.show() sys.exit(app.exec_()) @tIk90wT said in How do I hide the window title bar from QTabWidget: QDockWidget:title Thanks, I also added border: 0pxin the style and the window bar is gone. However it looks weird with blank space on top and the control is uncomfortable, I missed a few time on my dragging because I missed clicking in the title areas. I tired to move the dockWidgetContents up to make it looks better. I set the geometry of the starting y axis to negative, but it's not having any effect. m_ui->dockWidgetContents ->setGeometry(0, -50, m_ui->dockWidgetContents ->width(), m_ui->dockWidgetContents ->height()); @lansing Do you use a layout inside ? that space on top kinda fits with a layout default content margins. My doctwidget layout QDockWidget ----dockWidgetContents (vertical layout) --------QTabWidget ------------QTextEdit --------QTextEdit The dockWidgetContentsis the layout of the dockWidget. Ok so on the vertical layout, set its margins to zero and see. default its 12 pixels. Im not sure if that is the case but it could explain the huge gap. setContentsMargins(0,0,0,0); The top gap does become smaller, but when docked, I still couldn't move the contents into the title bar area. @lansing Well when a widget is in a layout its not possible to move it. you could try to use and then setFixedHeight(1); and see if that changes anything. My program crashed on run QWidget * titleBar = m_ui->myrDockWidget->titleBarWidget(); titleBar->setFixedHeight(1); @lansing ah sorry my bad Docs says "Returns the custom title bar widget set on the QDockWidget, or nullptr if no custom title bar has been set." but we didnt anymore so its null. ok so not an option unless we do m_ui->myDockWidget->setTitleBarWidget(new QWidget(m_ui->myDockWidget)); But you are right. Its too hard to hit if smaller so guess we just have to leave it. I just wonder where that space comes from as the examples don't have it. @mrjj said in How do I hide the window title bar from QTabWidget: I just wonder where that space comes from as the examples don't have it. That must be the layout default margins...The OP mentioned using a vertical layout... @Bonnie Also my thought but we did try setContentsMargins(0,0,0,0); higher up. So maybe that space is really just the hidden titlebar, it just look bigger to me than the samples. @mrjj said in How do I hide the window title bar from QTabWidget: setContentsMargins(0,0,0,0); I would guess that the OP call that on the widget, not the layout... Ah...maybe not... I'm not sure what's the current situation now. I thought the gap in the later picture would be that titlebar?
https://forum.qt.io/topic/116668/how-do-i-hide-the-window-title-bar-from-qdockwidget
CC-MAIN-2020-40
refinedweb
890
56.55
I came forth to write this down because of my own frustration of having a hard time to find good sample codes to get me to where I wanted. Basically, I wanted my ASP.NET page, where I had a form for entering expense item, to do the following: UpdatePanel My objective was quite simple, but it took me more time than what I thought would need to accomplish it. And the reason was simply because this time around, I was unable to find enough references to guide me through, so I had to spend a lot of time developing a solution via trial-and-error method. Therefore, once I came up with a solution, I told myself I needed to share this with others as I have benefited a lot in the past from the community by looking at other programmers' postings. The first challenge I encountered was to retrieve all form elements that were server bound – either a web control by itself or a web control inside a user control. The ClientID of web control could be used, but I did not like to mix too many server tags <%=var%> with JavaScript, plus they might impede the JavaScript performance. Thankfully, the jQuery wildcard selector has made this easy and clean. <%=var%> Here is an example of how this is done: var recordid = $(“[id$=’txtRecordID’]”).val(); This will get me the text value in a server-side textbox control: <asp:TextBox </asp:TextBox> But what about a TextBox in a Web User Control? That was what I used the wildcard for – to get to the element that has a client ID ending with a server control ID. Normally, if I simply place a TextBox control on a web form, the client ID will exactly be the same as the ID I use in server side; in that case: TextBox $(“#txtRecordID”) will return the same element object as: $(“[id$=’txtRecordID’]”) But most of the time, we would group certain web controls on a User Control and then use the UserControl on a web form. In that case, $(“#txtRecordID”) will get you nothing because now you have a user control ID added to the client ID for the TextBox, so now, you will use the wildcard selector instead: $(“[id$=’txtRecordID’]”). To assure that the wildcard will return the unique element, you can add the User Control server ID to make it look like this: $(“[id$=’ucMyControl_txtRecordID’]”) (In some cases, you might have two or more server controls that are all named txtRecordID, sitting in separate user controls that are embedded in the same web form, and then it is absolutely necessary that you add the user control ID into the jQuery wildcard search. I picked up this syntax for searching element endings with a specific server ID from the web community somewhere, sometime ago, but I was unable to locate the source again to give a reference URL here. But I did come across a similar post as of this writing at that was using wildcards for searching elements beginning with a specific ID, and the syntax looks like this: $(“[id^=pnl]).hide() - hide all the divs that start with “pnl” in their control IDs. txtRecordID $(“[id^=pnl]).hide() div Now that I cleared the way to gain access to server side elements via the jQuery wildcard selector, I collected the data and stringed them into a JSON format. JSON is just another lightweight alternative of XML format, and here is my simplified version of the JSON data string: var json=”{‘RecordID’:’” + $(“[id$=’txtRecordID’]”).val() + “’,’ItemName’:’” + escape($(“[id$=’txtItemName’]”).val()) + “’,’CategoryID’:’” + (“[id$=’ddlCategories’]”).val() + “’}”; I packed my form data into a JSON formatted string simply because I wanted to use the JSON.Net API tool to parse it later on at the other end of the wire. I used the “escape” function in front of all text fields to encode any possible odd character such as an apostrophe that could throw the JSON parser off. escape Next, I send the data via jQuery AJAX, using the syntax as below: var to=”DataProcessor.aspx?Save=1”; var options = { type: "POST", url: to, data: json, contentType: "application/json;charset=utf-8", dataType: "json", async: false, success: function(response) { //alert("success: " + response); }, error: function(msg) { alert(msg); } }; var returnText = $.ajax(options).responseText; Here, response in the success section is the same as the responseText returned from $.ajax. response responseText $.ajax Then, I went into the DataProcessor.aspx code-behind and did the following steps to retrieve and parse the data: Request if (Request.ContentType.Contains("json") && Request.QueryString["Save"]!=null) SaveMyData(); SaveMyData InputStream System.IO.StreamReader sr = new System.IO.StreamReader(Request.InputStream); string line = ""; line = sr.ReadToEnd(); Console.WriteLine(line); At this point, the string variable line contains a JSON formatted string, like: line "{'ItemName':'Steve%27s%20Demo%20Item','CategoryID':'1','RecordID':''}" I could have created my own function to parse out this string and be done with it. But why should I re-invent the wheel while there is already something called JSON.Net out there? Surely, I found the needed API from and downloaded the library for the .NET 3.5 version. Following the instructions that came with the download, I added the Newtonsoft.Json.dll assembly to my project and imported the namespace Newtonsoft.Json.Linq to the page class; then there it was – just one line of code to do the parsing: Newtonsoft.Json.Linq JObject jo = JObject.Parse(line); Once my JSON-formatted string variable, line, was parsed, I accessed each property using the following code: Console.WriteLine((string)jo["RecordID"]); Console.WriteLine(Server.UrlDecode((string)jo["ItemName"])); Remember, I did an escape for ItemName when I packed the form data into a JSON string in DataForm.aspx? Here, I did an equivalent of “unescape” using Server.UrlDecode so the “%20” will return to space and “%27s” to “’s”, and so on. You don’t want to UrlDecode before running the data downloaded from Request.InputStream through the JObject parser as some JSON unfriendly characters like an apostrophe (“’”) will blow the parser. So, only do UrlDecode after JObject.Parse(), and do it to each individual property of string type. Once the data is parsed, I process them in the data tier and return a record ID back to the front page simply by calling Response.Write(). To simplify the demo, I did not include any database logic here; instead, I simply returned the “CategoryID” that was passed in the JSON string from the Category dropdown list on the front page to indicate if the database update was successful or failed: ItemName Server.UrlDecode UrlDecode Request.InputStream JObject JObject.Parse() Response.Write() Response.Write((string)jo["CategoryID"]); This is what got me to jQuery.ajax(options).responseText. You can also pack a long HTML string and shuffle it back to .ajax().responsText to render on the front page. jQuery.ajax(options).responseText .ajax().responsText Now, I returned to the front page and retrieved the responseText as follows: var returnText = $.ajax(options).responseText; //alert(returnText); if (returnText > 0) { $("[id$='txtRecordID']").html( returnText); $("#divMsg").html("<font color=blue>Record saved successfully.</font>"); //$("#divMsg").addClass("SuccessMsg"); //if you prefer to use class } else { $("#divMsg").html("<font color=red>Record was not saved successfully.</font>"); //$("#divMsg").addClass("ErrorMsg"); //if you prefer to use class } Putting them all together, here is how things moved: used the jQuery wildcard selector $(“[id$=’serversidecontrolid’]”) to retrieve form data that is stored in server controls such as TextBox and DropDownList, constructed them into a JSON string, and then used jQuery.Ajax() to POST JSON data to a different ASP.NET page for processing; downloaded the JSON data from the other side using a System.IO.StreamReader to read off the Request.InputStream object, then used the JSON parser API provided from JSON.Net to parse the JSON data string into a JObject. Once the data is saved, we use Response.Write to return the result to the front page. $(“[id$=’serversidecontrolid’]”) DropDownList jQuery.Ajax() System.IO.StreamReader Response.Write I was satisfied with this light weight solution that helped me get rid of the heavy foot-printed ASP.NET AJAX UpdatePanel to overcome the page postback. My goal of writing this paper, also my first ever article for CodeProject, was to share my research and connect pieces so others will not have to repeat the frustration I experienced when I first started it and could only find a piece here and a piece.
http://www.codeproject.com/Articles/43317/Saving-ASP-NET-Form-Data-with-jQuery-AJAX-and-JSON?fid=1551087&df=90&mpp=25&sort=Position&spc=None&select=3293467&tid=3293467
CC-MAIN-2014-52
refinedweb
1,415
62.17
Angular 2.0- Communication among components using Services In this article, we are going to discuss about how different components can share data using services. To understand services properly, I have developed a simple Angular 2 app in which components are sharing data with each other. About Angular 2.0 Angular 2.0 is a new framework that helps in developing Single page JavaScript applications in a modular, maintainable and testable way. It is rewritten from Angular 1, following the best practices for future. The actual framework is written in TypeScript. TypeScript is just a superset of JavaScript. Any valid js code is a valid typescript. You need not to learn new a language. TypeScript provides many features of ECMA Script 6 that are supported by many browsers like classes, interfaces, access modifiers (public, private), IntelliSense and compile time checking. And of course, Angular 2 has a vast community support and is backed by Google. Unlike Angular 1, it has 4 key components: - Components– Encapsulates the template, data and behaviour of the view. - Directives– To modify DOM elements and/or extend their behaviour. - Routers– To navigate in the application. - Services– Encapsulates the data calls to backend, business logic and anything not related to view. In Angular 2 application development, there must be one root component that will be the parent of all the child components. It is just like the tree where all the components are individual and reusable. Below is a sample component- export class AppComponent { } Yes, it is just a class that encapsulates some code. We exported the class so that other components can import it and use it accordingly. Similarly, Service is also a simple exported TypeScript class. About the App In this web app, there is a component biodata-list.component.ts that helps to list all the biodata that will be actually received from the service and the other component biodata-form.component.ts holds the form that will let users to post their data. We will also see how 2-way data binding works in Angular2 using [(ngModel)]. After submitting the form, the saved data automatically reflects in the above list which is controlled by different component. Actually the data is shared using service between these components. PREREQUISITES NodeJs version >=4 NPM version >=2 Follow the following steps to run this app: 1. npm install -g typescript 2. npm install -g typings 3. git clone 4. cd angular2-service 5. npm install 6. npm start 7. see your app in localhost:3000. Note- If your changes don’t reflect to the browser, just restart the server. Implementation Details In this app, there are 3 components- app.component.ts, biodata-list.component.ts and biodata-form.component.ts. First we created model/contract for the form data using Interface in biodata.ts. The Typescript interface helps in defining abstractions or protocols. It is used to let angular know about what type of data and methods we are going to use. In the biodata abstraction, we can easily understand about the details of properties like data type, number and names of properties. export interface biodata { name: string, phone: string, qualification: string } Since we are not using any server to get data from database, so we will create a sample mock file mock-biodata-list.ts for the biodata form data. In this file we created a constant BiodataSet of type biodata which we just created above. import {biodata} from './biodata'; export const BiodataSet: biodata[] = [ ]; Now its time to create our actual service biodata.service.ts. We will import Injectable function from angular 2 core library. When this Injectable function is used in any file as @Injectable() decorator, it tells angular that it is to be injected in different components or services. The components which get instance of the injectable service can use its methods for various purpose like data fetching/posting to server. Don’t forget to put parenthesis after the injectable decorator, otherwise it will lead to an error. In its class, all the methods are included to deal with the data fetching or posting. Mock data should also be imported here in service. import {Injectable} from 'angular2/core'; import {BiodataSet} from './mock-biodata-list'; @Injectable() export class biodataService { getList() { return Promise.resolve(BiodataSet); } saveNewData(data) { BiodataSet.push(data); } } How to use this service in component? We will use this service in biodata-list.component.ts and biodata-form.component.ts components to share the data. In one component we used to get the list of all form data from backend and to save form data in other component. The following things are required to use this service in biodata-list.component.ts: Import biodataService class from service. import {biodataService} from './biodata.service'; Pass the class in the providers metadata of component decorator. providers: [biodataService] Import the mock data list and data interface from respective files. import {BiodataSet} from './mock-biodata-list'; import {biodata} from './biodata'; Create a variable AllBiodata of type biodata that we just imported from biodata.ts. public AllBiodata: biodata[]; In its constructor method, define a private variable _biodataService of type biodataService, so that Angular2 knows what type of object is needed here. constructor(private _biodataService: biodataService) {} Then define a method in this component that will fetch the data list from the promise that is returned by the service method and set the variable AllBiodata to the data object returned by the service but this method will not get executed automatically. We can call this in our constructor but it is not good practice and it might lead to bad performance. getAllBiodata() { this._biodataService.getList().then((BiodataSet: biodata[])=>this.AllBiodata = BiodataSet); } To overcome above problem, we will Angular 2 Lifecycle hooks. During the initiation phase of the component, we used OnInit protocol from angular2 core library. We have to implement this protocol in the component class. export class BiodataListComponent implements OnInit {... After implementing this protocol, it is necessary to define ngOnInit method otherwise it will lead to error. Now in this method, we will call the getAllBiodata() method to execute it during the initiation of the component. import {OnInit} from 'angular2/core'; .. ... ngOnInit() { this.getAllBiodata(); } Similarly, we will use this service in other components where data are to be shared. import {Component} from 'angular2/core'; import {NgForm} from 'angular2/common'; import {biodataService} from './biodata.service'; import {BiodataSet} from './mock-biodata-list'; import {biodata} from './biodata'; @Component({ selector:’biodata-form’, template: ` All html code goes here ` }) export class biodataFormComponent { …. constructor(private _biodataService: biodataService) {} …. } Now go ahead and run the actual app in your browser and see the real code. I hope this post helps clear up the concepts of services in Angular 2.
https://www.tothenew.com/blog/angular-2-0-communication-among-components-using-services/
CC-MAIN-2022-40
refinedweb
1,110
50.84
Here is my code can someone tell me what is wrong? 12/15 purify error Change your function to build the list instead of removing from it, def purify(lst): new_lst = [] for item in lst: if item % 2 == 0: new_lst.append(item) return new_lst ohh i tried that way but i used to loop the items in the new_list instead in the lst thank you I have an identical code as yours zeziba but I am getting an error message saying; Oops, try again. Your function crashed on [1] as input because your function throws a “‘NoneType’ object is not iterable” error. My code is; def purify(lst): new_list = [] for a in lst: if a%2 == 0: new_list.append(a) print new_list Can anyone see whats wrong with it? You new_list = has to go before your def purify(lst): I did exactly the same thing Why is this only return [4] when input is [4,5,5,4], instead of returning [4,4] ? def purify(x): new_list=[] for y in x: if y%2==0: new_list.append(y) return new_list I think your return statement should have the same indentation as for loop. You are right…> I tried deleting that post but it appears the post turned into a zombie and popped back up. That did the trick, appending it to the list instead of removing the odd numbers. Mucho gracias zeziba. Not print It should be return in a function def purify(x): lst = if len(x) == 1: return else: for i in x: if i % 2 == 0: lst.append(i) return lst i have a same error TAT my code is: def purify(lists): odd_numbers_list = for number in lists: if number % 2 != 0: odd_numbers_list.append(number) >return odd_numbers_list print odd_numbers_list i was told " Your function fails on purify([1]). It returns [1] when it should return ." too. your odd_numbers_list= should be the first line in the for loop. So right under for number in lists: A post was split to a new topic: Help with error - your function fails on
https://discuss.codecademy.com/t/12-15-purify-error/8139
CC-MAIN-2019-43
refinedweb
342
70.53
Finding the number of sub matrices having sum divisible by K Get FREE domain for 1st year and build your brand new site Reading time: 30 minutes | Coding time: 10 minutes You are given a N X N matrix of integral values and you have to find the count of the sub matrices such that the sum of the elements of the sub matrices is divisible by a positive integer k. We use a Dynamic Programming approach to solve this in O(N^3) For example if the N X N matrix is : and the value of k=4 The different sub matrices which have sum divisible by k(4 in this case)are: a). A sub matrix with sum=0 b). A sub matrix with sum=4 c). A sum matrix with sum=8 d). A sub matrix with sum=8 e). A sub matrix with sum=-4 Input format: We will have to input the value of N for the N X N matrix and the value of k(for checking the divisible sub matrices).The matrix will also be entered by the user. Output format: The output will be a non negative value indicating the count of number of sub matrices having sum divisible by k. Method of Implementation: We will solve the problem by applying the following) ] Step 2: Now for every left and right index we will calculate the temporary sum[] array. Step 3: Next we will apply the algorithm mentioned below to count the total number of subarrays having sum divisible by k. Step 4: So likewise we will get the total number of subarrays (having sum divisible by k) for all the sum[] arrays for every possible left and right column index pair. Step 5: Adding the result for all the left and right column index pair will give us the count of the number of the rectangular submatrix such that the sum of the elements is divisible by k. Pseudocode: Consider all the left and right indexes for every column pair. Store the sum of every row within the left and right indexes in sum[]. Follow the steps to find the number subarray for sum[] which has sum of elements divisible by k. Create a count[] array where count[i] stores the count of the remainder i after dividing current cumulative sum till any index in sum[]. Linearly traverse sum[] and keep on calculating the cumulative sum and find (curr_sum % k) and increment its count in the count[] array. Now linearly traverse the count[] array and if count[i]>1 then we have to find the number of ways we can choose the indices for the subarray and it can be done in (count[i](count[i]-1))/2 ways i.e. result+=(count[i](count[i]-1))/2 . result+=(count[i]*(count[i]-1))/2 - Next we will also have to include the count of the elements whose sum=0 and for that we will do, result += count[0]. result += count[0] // for sum = 0 - Likewise we will get the result for all the left and right indices of columns and we will report the cumulative result. Implementation: #include<bits/stdc++.h> using namespace std; #define MAX 100 int NumberOfSubarrayDivSumK(int sum[], int n, int k); void countOfSubMatrices(int mat[MAX][MAX], int n, int k) { //Initialising the variables int AreaMaximum= 0; int left,right,i; int sum[n]; int result = 0; //Considering all the left and right column pair indices for(left=0; left<n; left++) { for(i=0;i<n;i++) sum[i]=0; //end of the column index for(right=left; right<n; right++) { //Calculating the cumulative sum for(i=0;i<n;i++) { sum[i]=sum[i]+ mat[i][right]; } result += NumberOfSubarrayDivSumK(sum , n , k); } } cout<<"The total number of sub matrices divisible by k are: "<<result; } int NumberOfSubarrayDivSumK(int sum[], int n, int k) { int count[k]; int result=0; int presentSum=0; int LenMax=0; int i,j; //assigning all the values to 0 for(i=0;i<k;i++) { count[i]=0; } //calculating the count of all the remainders possible for(i=0;i<n;i++) { presentSum=presentSum+sum[i]; count[((presentSum % k )+ k) % k]++; } for(i=0;i<k;i++) { //if there are more than one subarrays with same remainder if(count[i]>1) { result += (count[i]*(count[i]-1))/2; } } //including the elements which add up to 0 result += count[0]; return result; }]; } } countOfSubMatrices(mat, N, k); return 0; } Complexity Analysis: Time complexity: O(n^3) Space complexity (Auxillary Space): O(n) Explaination with an example: For eg: if for N=3 our matrix is: and the value of k=4 then first we consider all the possible pairs of columns: (0, 0) (0, 1) (0,2) (1, 1) (1, 2) (2, 2) So now will calculate the sum[] which is a temporary array for storing the sum: eg. for (0, 1) sum[]={0, -1, 4} Now we will apply the above algorithm to calculate the number of subarrays such that the sum is divisible by k(4 in this case). So the total number of sub matrices we get is 6. They are: Applications: The above code can be used to find the count of the number of the rectangular submatrix such that the sum of the elements is divisible by k. Question You are given a N X N matrix and you have to find out the number of rectangular submatrix such that the sum of all the elements in that submatrix is divisible by an integer k. Input: Enter the value for N: ......... Enter the value for k: ......... Enter your N X N matrix: ......... Output: The total number of sub matrices divisible by k are: .........
https://iq.opengenus.org/finding-number-of-sub-matrices-having-sum-divisible-by-k/
CC-MAIN-2021-43
refinedweb
956
51.52
Jun 26, 2017 04:53 AM|sevi|LINK Working with vs 2017, a very simple webform app with master page, which was created with the default.aspx page in the root of the project. The default.aspx page has had no changes, it links to various tutorial pages at microsoft.com. I created a new folder called admin, and in it create another default.aspx. The app no longer complies; it gives this error: Error BC30269 'Protected Sub Page_Load(sender As Object, e As EventArgs)' has multiple definitions with identical signatures Why? The only postings I can find about this kind of error are usually from someone moving to a new machine or copy/pasting code and not adjusting the code for the new page name. I don't recall any limitation with asp.net where one cannot have two pages with the same name, as long as they're in separate folders. Star 8620 Points Jun 27, 2017 05:45 AM|Cathy Zou|LINK Hi sevi I create a Default.aspx page in the root of my project page. Then I create a folder named “Admin”, in the folder. I create a page with some name”Default.aspx”, when I run one of it. However, There is no error. Everything woks ok. If I copy the Default.aspx page in the root of my project page, Then paste it in Admin folder. I encounter a error says that Type 'Default' already defines a member called 'Page_Load' with the same parameter types The solution for above error is While coping the files in asp.net you should rename the file names as well as class names in aspx.cs file. Best regard Cathy Jun 29, 2017 05:56 AM|zxj|LINK Hi sevi, Usually when you use Visual Studio to generate some code for you, it does them as Partial Class ... so you can create another partial class and add your own code to the same class without loosing your changes when the tool regenerates it's code. Try to rename the Partial Class to the new page's classname. Regards, zxj Jun 29, 2017 04:06 PM|sevi|LINK This is how I interpreted what you wrote: to rename the code behind class name, and change the inherits tag on the default.aspx page. It compiles now. So one has to make this kind of change if one has two pages with the same name? Did this behavior change in recent versions of asp.net? Because there are numerous tutorials out there that use default.aspx in various folders and I don't recall any mention of needing to adjust this. Thanks for helping me sort it out. All-Star 32791 Points Jun 29, 2017 07:30 PM|mgebhard|LINK You should be able to create a new folder and create a default.aspx page within the folder. I've done this many time in VS 2017. Next time you create a default page. Take a moment to look at the namespace and see if it conflicts with another default page. For example, if you create a default page in an admin folder the namespace would look similar to the following. namespace MyWebApp.admin { public partial class _default : System.Web.UI.Page { All-Star 32791 Points Jun 30, 2017 10:12 AM|mgebhard|LINK seviSo the namespace is not automatically managed by vs, so as to not create conflicts. No VS automatically creates the namespace. Whatever is happening is unique to your system. 7 replies Last post Jun 30, 2017 10:12 AM by mgebhard
https://forums.asp.net/t/2124051.aspx?Can+t+create+another+default+aspx+in+another+folder+witthout+error+multiple+definitions+with+identical+signatures+
CC-MAIN-2018-34
refinedweb
596
74.49
I'm working through a book now and I have a question regarding one of exercises (#6). So we have a hand-made Fraction class Fraction: def __init__(self, num, den): if not (isinstance(num, int) and isinstance(den, int)): raise ValueError('Got non-int argument') if den == 0: raise ValueError('Got 0 denominator') self.num = num self.den = den # some class methods def __lt__(self, other): selfnum = self.num * other.den othernum = other.num * self.den return selfnum < othernum # some class methods # trying it out x = Fraction(1, -2) y = Fraction(1, 3) x < y False if def __lt__(self, other): selfnum = self.num * other.den othernum = other.num * self.den if self.den * other.den > 0: return selfnum < othernum else: return selfnum > othernum If you assume that both denominators are positive, you can safely do the comparison (since a/b < c/d would imply ad < bc). I would just store the sign in the numerator: self.num = abs(num) * (1 if num / den > 0 else -1) self.den = abs(den) Or: self.num = num self.den = den if self.den < 0: self.num = -self.num self.den = -self.den And your __lt__ method can be: def __lt__(self, other): return self.num * other.den < other.num * self.den
https://codedump.io/share/LHsJPeRwBuV6/1/comparing-custom-fractions
CC-MAIN-2017-43
refinedweb
210
60.82
Advertisement Today we will learn about Mouse Events in OpenCV Python. We will be creating a black background and on the left mouse click print X and Y Coordinates and on the right mouse click print RGB Value of that pixel. Let’s Code Mouse Events in OpenCV! Mouse Events OpenCV Algorithm Import Packages import numpy as np NumPy for manipulation in Color matrix. import cv2 as cv Computer Vision for reading and showing images and image manipulation. from matplotlib import pyplot as plt Matplotlib for data visualization. (Optional) Read and Show Image for Mouse Events OpenCV img = cv.imread(‘./img/sea.jpg’) cv.imread() function is an OpenCV function in Python that makes it easy for the computer to read images. It takes one argument i.e the image source in the form of Absolute or Relative Path. cv.imshow(‘image’, img) The image is shown in the window named ‘image‘. Set Mouse Callback in OpenCV Next Step is to setMouseCallback() function. It takes two parameters: - The Window name on which the image is being shown - A Callback function that will process the image and will return output on some mouse events. cv.setMouseCallback(‘image’, click_event) Function for Mouse Click Event in OpenCV The Callback function takes four parameters: - The Mouse Event - X coordinate where the mouse event happens - Y coordinate where the mouse event happens - flags - params Our AIM: Left Click to get the x, y coordinates. Right, Click to get the BGR/RGB color scheme at that position. Firstly we have to declare some variables for text, text font, and text color. You Can choose it according to your choice. The setMouseCallback() function sends the mouse event with it to the callback function. If the mouse event in the click_event() function matches the Left Button Down then X and Y Coordinates are converted to string values. def click_event(event, x, y, flags, params): ''' Left Click to get the x, y coordinates. Right Click to get BGR color scheme at that position. ''' text = '' font = cv.FONT_HERSHEY_COMPLEX color = (255, 0, 0) if event == cv.EVENT_LBUTTONDOWN: print(x, ",", y) text = str(x) + "," + str(y) color = (0, 255, 0) elif event == cv.EVENT_RBUTTONDOWN: b = img[y, x, 0] g = img[y, x, 1] r = img[y, x, 2] text = str(b) + ',' + str(g) + ',' + str(r) color = (0, 0, 255) cv.putText(img, text, (x, y), font, 0.5, color, 1, cv.LINE_AA) cv.imshow('image', img) If the mouse event in the click_event() function matches the Right Button-Down then the BGR (Blue, Green, and Red) value of that pixel is recorded and converted to the string value. After the condition check is done, the text is put on the image at the X and Y coordinate where the mouse events occurred. After putting text on the image that the image is shown on the ‘image’ window and the mouse click event is shown in the form of text on the image. Mouse Events GitHub Full Source code of Mouse Events OpenCV at GitHub.
https://hackthedeveloper.com/mouse-events-opencv-python/
CC-MAIN-2021-43
refinedweb
503
72.97
Destroy a pixmap and frees associated resources. #include <screen/screen.h> int screen_destroy_pixmap(screen_pixmap_t pix) The handle of the pixmap which is to be destroyed. Function Type: Flushing Execution This function destroys the pixmap associated with the specified pixmap. Any resources and buffer created for this pixmap, whether locally or by the composition manager, will be released. The pixmap handle can no longer be used as argument in subsequent screen calls. Pixmap buffers that are not created by composition manager but are registered with screen_attach_pixmap_buffer() are not freed by this operation. The application is responsible for freeing its own external buffers. 0 if the pixmap buffer was destroyed, or -1 if an error occurred (errno is set).
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.qnxcar2.screen/topic/screen_destroy_pixmap.html
CC-MAIN-2022-33
refinedweb
117
56.96
4981/splitting-a-python-string words = text.split() The split() function transforms the string into a list of strings containing the words that make up the sentence. The split() method in Python returns a list of the words in the string/line , separated by the delimiter string. This method will return one or more new strings. All substrings are returned in the list datatype. Syntax : string.split(separator, max) separator : The is a delimiter. The string splits at this specified separator. If is not provided then any white space is a separator. max : It is a number, which tells us to split the string into maximum of provided number of times. If it is not provided then there is no limit. Return: The split() breaks the string at the separator and returns a list of strings. You can use word.find('o') as well to ...READ MORE class Solution: def firstAlphabet(self, s): self.s=s k='' k=k+s[0] for i in range(len(s)): if ...READ MORE If you are talking about the length ...READ MORE How about: >>> 'hello world'[::-1] 'dlrow olleh' This is ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE Here's a generator that yields the chunks ...READ MORE for c in "string": ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/4981/splitting-a-python-string
CC-MAIN-2021-10
refinedweb
251
79.06
Wed, 06/10/2015 - 20:46 If you are using mpich I guess the c++ wrapper may be mpiCC but you may still need a cpp or equivalent suffix for c++ source files. You should be using the standard header, Alternating Fibonacci Should I allow my child to make an alternate meal if they do not like anything served at mealtime? V6.0 is rather old though, you should consider an upgrade. YA novel involving immortality via drowning Why is this C++ code faster than my hand-written assembly for testing the Collatz conjecture? Results 1 to 10 of 10 Thread: Cannot open include file: 'iostream.h Tweet Thread Tools Show Printable Version Email this Page… Subscribe to this Thread… Display Linear Mode Switch to Hybrid Was This Post Helpful? 0 Back to top MultiQuote Quote + Reply #8 meromamu New D.I.C Head Reputation: 0 Posts: 5 Joined: 30-October 12 Re: fatal error C1083: Cannot open I am a bit new to programming, I understand the basics. Windows cannot open this file: foo.BAR Windows Service - Notes Issue - Cannot Open ID File PowerPoint 2003 cannot open PPT file Browse more C / C++ Questions on Bytes Question An answer is an answer is an answer; whether you are "sure" about it or not does not change that. You don't need a semicolon after the final curly bracket. I was just right clicking on my project and selecting properties. Fatal Error C1083: Cannot Open Include File: 'string': No Such File Or Directory To start viewing messages, select the forum that you want to visit from the selection below. Also, you do not declare keywords and function names. int a, b, c, m=c*2, y, cout, cin, endl, if, else, for, main, exit; the variable m = c * 2; Cannot Open Source File Iostream H Visual Studio 2013 Related Sites Visual Studio Visual Studio Integrate VSIP Program Microsoft .NET Microsoft Azure Connect Forums Blog Facebook LinkedIn Stack Overflow Twitter Visual Studio Events YouTube Developer Resources Code samples Documentation Downloads Use spaces instead. –Code-Apprentice Jul 29 '12 at 23:30 1 I assume your include path includes the VC include directory (under program files). It looks like as of VS 2010 and later you need to include #include "stdafx.h" in all your projects. #include "stdafx.h" #include You may have to register or Login before you can post: click the register link above to proceed. Cannot Open Source File String.h Visual Studio 2015 Select the VC++ Directories option under Configuration Properties. 3) Somehow for me, all these values had been cleared. What do I do with my leftover cash? Thanks! –8bitcartridge Jul 25 '13 at 16:30 add a comment| up vote 6 down vote The system could not find the Visual C++ compiler (CL.exe)... Not the answer you're looking for? contact us Search: Advanced Forum Search Forums Programming Web Development Computers Tutorials Snippets Dev Blogs Jobs Lounge Login Join! Cannot Open Source File Iostream Visual Studio 2015 I hope the trojans are a joke, and not something you plan to do with any programming skills you learn... Cannot Open Source File Iostream H Visual Studio 2015 more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed share|improve this answer answered Oct 8 '14 at 18:18 Ross117 750915 Worked for me on VS2013. –jhclark May 13 at 18:46 add a comment| Your Answer draft saved see here Edited by GentlemanWalrus Monday, February 11, 2013 3:14 PM Monday, February 11, 2013 2:49 PM Reply | Quote Microsoft is conducting an online survey to understand your opinion of the Msdn It's quick & easy. The customer expects the porting from BC++ to VC++6.0 and hence have no other option but to user VC++6.0 I checked the "ignore standard path" option in preprocessor settings but the Visual Studio Iostream Not Found There should not be line feed after "#include" and "using". I've tried reinstalling 2-3 times, but that didn't work. The Microsoft Jet database engine cannot open the file The Microsoft Jet database engine cannot open the file... If you choose to participate, the online survey will be presented to you when you leave the Msdn Web site.Would you like to participate? Why do cars die after removing jumper cables? Download Iostream.h Header File I am really new –jamesbond Jul 30 '12 at 23:08 I have updated my answer, to meet your request. –Secko Jul 31 '12 at 17:10 add a comment| up Is This A Good Question/Topic? 0 Back to top MultiQuote Quote + Reply Replies To: fatal error C1083: Cannot open include file: 'iostream': No su #2 jjl Engineer Reputation: 1186 share|improve this answer answered Jan 4 '11 at 16:43 Ralf 7,38221740 Great answer Ralf... more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed I found the following trick worked for me, maybe it will work in your case too: - Open a new instance of VS2010 - Create new console application with the def Intellisense Cannot Open Source File Iostream H That is entirely unhelpful. –b1nary.atr0phy Aug 28 '12 at 14:04 add a comment| 4 Answers 4 active oldest votes up vote 16 down vote accepted I've run into the same issue Any solutions? Jim Was This Post Helpful? 0 Back to top MultiQuote Quote + Reply #10 meromamu New D.I.C Head Reputation: 0 Posts: 5 Joined: 30-October 12 Re: fatal error C1083: Cannot Ewerything else seems to be OK Last edited on Mar 9, 2008 at 2:11pm UTC Mar 9, 2008 at 2:32pm UTC croconile (10) Dont put .h with the Expression evaluates numerically inside of Plot but not otherwise What would be the consequences of a world that has only one dominant species of non-oceanic animal life? Join them; it only takes a minute: Sign up fatal error C1083: Cannot open include file: 'iostream': No such file or directory up vote 4 down vote favorite 1 I've reinstalled First, do not use The usage of "le pays de..." Is adding the ‘tbl’ prefix to table names really a problem? Things you should try: Rerun the Visual Studio 2010 installer and make sure you selected to install Visual C++ tools for your platform (either x86 or amd64). I am trying to compile a C++ file, and everytime I do, the 'command prompt' window says that it cannot open the 'iostream.h' file. 'iostream.h' is just something which the book And a line does not start with a semicolon. are there and have proper values. Please see the FAQ page for debugging suggestions and I am pretty sure that this happens in this line: locR = numroc_(&matrix_size, &block, &myrow, &izero, &nprow); Top Log in to post asked 3 years ago viewed 17497 times active 11 months ago Upcoming Events 2016 Community Moderator Election ends Nov 22 Related 8fatal error C1083: Cannot open include file: 'Windows.h': and scons3017How Here's the piece of code:1 2 3 4 #include "stdafx.h" #include Jan 4 '11 at 14:15 5 @T.E.D.: No, CL.exe is the compiler in Visual Studio, the linker is named LINK.exe. –Charles Bailey Jan 4 '11 at 14:52 With Reference Sheets Code Snippets C Snippets C++ Snippets Java Snippets Visual Basic Snippets C# Snippets VB.NET Snippets ASP.NET Snippets PHP Snippets Python Snippets Ruby Snippets ColdFusion Snippets SQL Snippets Assembly Snippets Well, why would it be attached to icpc?
http://geekster.org/cannot-open/error-cannot-open-source-file-iostream.html
CC-MAIN-2017-51
refinedweb
1,300
67.49
. you have a string "str", actually in C an array of char; make a new string, call it Out, set it to empty. make an array of integers,lets call it "ws" for "word starts". Now go thru the string a character at a time, say by indexing: str[ ThisCol ] if the current character is the start of a word, note it's place into the next empty slot in ws. (The start of a word is something like: it's a letter, the first one after a space, or at column 1.) Now go thru the ws array from the end to the beginning. Take each word start, and append each character to the output string until you get to the end of the word. (You can figure out the end of the word either by looking at the character, or by looking to see if you've hit the next start column as listed in "ws".) Watch out, there are a few nasty boundary conditions which you can get around by either adding some sentinels, or by checking with if() statements. #include <stdio.h> #include <string.h> void main() { char word[80]; int length; char temp[80]; int i; strcpy(word, "here right there Hi"); length = strlen(word); for(i=0; i<length; i++) { temp[i] = word[length - 1 - i]; } temp[i] = 0; printf("%s", temp); return; } Easily upload multiple users’ photos to Office 365. Manage them with an intuitive GUI and use handy built-in cropping and resizing options. Link photos with users based on Azure AD attributes. Free tool! /* reverse the entire string */ reverse(temp_str); /* here we try to reverse characters in each word */ token = strtok(temp_str, " "); while( token != NULL ) { reverse(token); strncat(reverse_str, token, strlen(token)); token = strtok(NULL, " "); if ( token != NULL ) strncat(reverse_str, " ", 1); } .... void reverse(char s[]) { int c, i, j; for (i=0, j=strlen(s)-1; i < j;i++, j--) { c = s[i]; s[i] = s [j]; s[j] = c; } } YEAH!!! #include <stdio.h> void reverse(); int main() { reverse(); return 0; } void reverse() { char ch; if((ch=getchar())!='\n') reverse(); putchar(ch); } The logic is simple,each time when you enter a character the character(along with function) is pushed on the stack and when when the call returns to the function the character is popped And i think that is almost correct.Anyway...i just gave my idea that recursion can be used!! On run time basis my code will beat all other Algorithms.. Your mistaken assumptions, as far as I can tell, are: 1. The task was to write a program. cutie2000 never said whether the task was to write a function or a program or just a code snippet. 2. The task involved reading a string from stdin. cutie2000 never said that. The only question asked was how to reverse the words in a string. 3. The task involved writing the reversed string to stdout. cutie2000 never said that. The only question asked was how to reverse the words in a string. 4. The string to be reversed contained a newline. This is a bad assumption, and there were no contextual clues to encourage this idea. 5. Your recursive code is faster than iterative code. Do actual timing tests, then get back to us. I think you'll find that your program wastes a great deal of time setting up stack frames and calling functions, not to mention the fact that you output the string with a bunch of putchar() functions despite the fact that outputting the string was not part of the task description. But for checking the code performance..i need your code.. you haven't posted any!!! 1. Look up the word "sophomoric" in the dictionary and learn what it means. 2. Explain exactly how your code manages to reverse the order of words in its input. Hint: your code doesn't do it. >>cutie2000 never said whether the task was to write a function or a program or just a code snippet. Yeah,i have already said that is just an idea that recursion can be used.So let me see your code doing the same thing(that my code does) and then let's do a performance analysis. Why you are so afraid of putting up your code here!!!Empty words doesn't make any sense.You go around critising my code and i can't do the same for you because you haven't posted any!!!Boy!!,I have to agree you are really smart.
https://www.experts-exchange.com/questions/21124710/reverse-a-string.html
CC-MAIN-2018-13
refinedweb
753
81.83
Delorean 0.2.1 library for manipulating datetimes with ease and clarity Delorean: Time Travel Made Easy Delorean is a library for clearing up the inconvenient truths that arise dealing with datetimes in Python. Understanding that timing is a delicate enough of a problem delorean hopes to provide a cleaner less troublesome solution to shifting, manipulating, and philsophy. Pretty much make you a badass time traveller. Getting Started Here is the world without a flux capacitor at your side: from datetime import datetime from pytz import timezone EST = "US/Eastern" UTC = "UTC" d = datetime.utcnow() utc = timezone(UTC) est = timezone(EST) d = utc.localize(d) d = est.normalize(EST) return d Now lets warm up the delorean: from delorean import Delorean EST = "US/Eastern" d = Delorean(timezone=EST) return d Look at you looking all fly. This was just a test drive: check out out what else delorean can help with below. - Author: Mahdi Yusuf - License: MIT license - Package Index Owner: myusuf3 - DOAP record: Delorean-0.2.1.xml
https://pypi.python.org/pypi/Delorean/0.2.1
CC-MAIN-2017-30
refinedweb
169
54.32
Previously. No matter what language you’re writing, libraries all serve the same purpose: to provide common, reusable functionality and save you from writing lots of repetitive code. And no matter what language you’re writing, if you find yourself writing the same thing over and over, odds are you could use a library to help you out, whether it’s one that somebody’s already written or one that you write yourself and reuse. When you’re writing JavaSript, it’s very easy to end up in just that situation; there are a number of common things you’ll need to do repeatedly, like attach listeners to events or fire off an XMLHttpRequest, which will be pure misery if you have to write all the code for them each time you use them. For example, consider what’s needed to attach an event listener: thiswill vary from browser to browser). Writing that sort of thing just once is tedious. Writing it over and over again is a nightmare. So the natural solution should be some sort of reusable implementation which normalizes all the browser quirks and gives you a single interface to work with. And that’s precisely what libraries are for. Of course, it’s possible to do a lot of things in JavaScript without XMLHttpRequest or event listeners or a number of other things that most of the good libraries provide, and if you’re in that situation then you probably have no need for one. But most modern applications of JavaScript are heavily driven by events, AJAX, animations and the like, so sooner or later most people will need some sort of JavaScript library. This is a trickier question; if you know enough about JavaScript, the DOM, XMLHttpRequest and all the browser quirks, then you can probably write a nice little quite of common functionality for yourself. So why rely on someone else’s possibly-buggy code? I can think of a couple reasons: So unless you have unique needs which rule out using any of the available JavaScript libraries, it’s probably best to use one of them instead of rolling your own. It’s really tough to answer this in an objective fashion, because it’s not entirely an objective question; once you get beyond the basics, every programmer has slightly different needs and slightly different preferences, so there isn’t a “One True Library” out there which will be right for everyone. As far as objectivity goes, there are a few things every good library has to have in order to be usable: XMLHttpRequest, including the ability to specify callbacks to fire when the request finishes. Any library which doesn’t provide all four of these doesn’t really register on my radar; without this functionality I don’t think you can do modern JavaScript programming. Subjectively, there are several other things I look for; missing one of them isn’t a deal-breaker by itself, but I’ll always gravitate toward libraries which meet most or all of these criteria: getElementByIdfor no good reason; if I want it aliased to a shorter name, I’ll do that myself. Libraries which take it on themselves to do that for me are just adding unneeded overhead and polluting the namespaces I have to work with. Ajax. Again, these things are highly subjective; most of them are rooted in what I personally think constitutes good coding practice, and there’s room for lots of opinions on that topic. If you find yourself disagreeing with me on one or more of these points, just ignore them and move on; you’re probably not going to change my mind about them and I’m probably not going to change yours.. If you know of a library which does get the documentation right (and which isn’t produced by a company whose name rhymes with “wahoo”), let me know about it. If you’re a JavaScript library developer and you want some free publicity, bring your documentation up to snuff and point me at it. I’d really love to have to rescind this complaint at some point. And that’s about all I’ve got on evaluating and choosing JavaScript libraries. I do have a favorite out of the current crop, and it shouldn’t be too hard for regular readers to figure out what it is, but I won’t push it on anybody; pick the tool that’s best at helping you do your job, use it and don’t fret about what someone else thinks. Coming up sometime in the next week: some thoughts on how to close the JavaScript knowledge gap. Stay tuned. Comments for this entry are closed. If you'd like to share your thoughts on this entry with me, please contact me directly. I think you can safely take out the “JavaScript” bit here :) myself included, of course. Great article again James. Thanks. Even though I’m just starting with javascript and am not too familiar with the different libraries I think I can agree with everything you wrote. It’s probably good you didn’t mention any specific names (of the different libraries), to prevent some kind of flame war in the comment section here between opponents of the libraries. But still, I’d like to hear about your favorites and less favorites some day :) I disagree with your point about not extending built-in objects. Doing so for Object is, of course, a Bad Thing. But if you can come up with a good reason to avoid extending Date, Number, Array, etc I would love to hear it. I’ve seen problems introduced by extending more specific objects. Also, I disagree about a library requiring some animation support. My own feeling is that there should be a core Javascript library that handles normalizing and extending the core language functionality, DOM functionality, Event, etc. Then built on that framework should be more UI-specific things for drag and drop, animations, etc. Documentation is certainly the sore spot. I would be interested in hearing anyone’s feedback about the documentation I provide for my own libraries :) For example, the documentation at and examples at are both way above what most library authors provide. Anyway, good articles and I enjoy reading the comments as well! Another reader here who was hoping you’d name names. At least maybe a chart ticking off how you feel the various common libraries stack up to your checkpoints. I want names too. I know of several libraries-Prototype, YUI, Fork, Scriptaculous-but I don’t know them intimately and would like to know which one you like. Sorry, folks, naming of names will not be happening here, at least not today. To clarify, I’m not asking you to tell me which library to use (I’d probably ignore you anyway), but I would be interested in seeing how you think they stack up based on your criteria. I’m betting there’d be quite a few things highlighted I wasn’t aware of. That, and I’m curious if the good might outweigh reservations I’ve had about this one or that one. Conversely, if I might become more aware of issues with ones I like. I’ve tended to go with my gut and which one “feels right”, but I’m quite aware that my gut can be (and often is) an idiot. Was viewing your source to see if you are using any javascript libraries on this page and then I realised that you include everything you use three times. Might want to look into that ;) Yeah, I just fixed that. Looks like when I started using the syntax highlighter it accidentally ended up inside the forloop that spits out feed links for the entry categories, so it was outputting once for each category the entry was in. One of the primary complaints I have about my favoured IDE.. documentation is there, sure enough. Examples on how to use, difference between one method and another of a similar name, required includes, stuff like that which is fairly important - all these are simply not there. This IDE is not new, it’s fairly robust in the marketplace and been active for over 10 years now; ya’ll want to know the funniest thing? When commenting on the documentation first question their Quality guys ask? “Whats the problem with documentation?”…. Another JS library is jQuery (). Lot of buzz actually around this library, but don’t ask me some questions, i never use it :) Good article, as always. Nowadays, the main problem isn’t whether you should use a JavaScript library but which one? That is a big question I hope James addresses in the rest of this series. For those wanting a quick overview of the libraries out there, I put together a list: wimp! so you’ll just let self-serving commenters post names and links instead?
http://www.b-list.org/weblog/2007/jan/22/choosing-javascript-library/
crawl-002
refinedweb
1,494
66.78
0 I have to write a program that takes a two dimensional array of dice and sees the combinations the dice makes within 360000 rolls. I have my code to work, but what I'm completely stumped with is printing it in a matrix. I know you have to embedd 2 for loops, but I don't know how else to go about it. what I need to do is something like is Die 1 1 2 3 4 5 6 1 D 2 I 3 E 4 Results 2 5 6 My code just prints the combinations going down as so result result result etc... Here is my code: #include <iostream> #include <iomanip> #include <cstdlib> #include <ctime> using namespace std; int main() { const int die1 = 6; const int die2 = 6; int dice[die1][die2]; int count = 0; while (count <= 360000) { dice[rand() % 6][rand() % 6]++; count++; } for (int i = 0; i < die1; i++) { for (int j = 0; j < die2; j++) { cout << dice[i][j] << endl; } } } Any help is truly appreciated. Mind you that I am in a beginner's class so no complicated programming...thank you!
https://www.daniweb.com/programming/software-development/threads/457283/two-dimensional-dice-array
CC-MAIN-2018-43
refinedweb
187
66.91
LE BIG Info Advertising Report event. More... #include <hci_api.h> LE BIG Info Advertising Report event. Definition at line 820 of file hci_api.h. Number of new payloads in each BIS event. Definition at line 827 of file hci_api.h. Encryption enabled. Definition at line 835 of file hci_api.h. Framing mode. Definition at line 834 of file hci_api.h. Event header. Definition at line 822 of file hci_api.h. Number of times a payload is transmitted in a BIS event. Definition at line 829 of file hci_api.h. ISO interval. Definition at line 826 of file hci_api.h. Maximum size of the PDU. Definition at line 830 of file hci_api.h. Maximum size of the SDU. Definition at line 832 of file hci_api.h. Number of Sub-Events in each BIS event in the BIG. Definition at line 825 of file hci_api.h. Number of BIS. Definition at line 824 of file hci_api.h. Transmit PHY. Definition at line 833 of file hci_api.h. Offset used for pre-transmissions. Definition at line 828 of file hci_api.h. SDU interval. Definition at line 831 of file hci_api.h. Sync handle identifying the periodic advertising train. Definition at line 823 of file hci.
https://os.mbed.com/docs/mbed-os/v6.15/mbed-os-api-doxy/struct_hci_le_big_info_adv_rpt_evt__t.html
CC-MAIN-2021-49
refinedweb
202
72.42
Symfony UX 2.0 & Stimulus 3 Support Symfony UX is an initiative and set of libraries centered around the Stimulus JavaScript library. And today, I'm pleased to announce several new releases: - Version 2.0 of all symfony/ux libraries - Version 3.0 of @symfony/stimulus-bridge - Version 2.0 of @symfony/stimulus-testing What does this all mean? Let's find out! Stimulus 3 Support Stimulus 3.0 - a new major version - was released in September. It includes a few nice new features - like a "debug" mode and "values defaults" - but no major changes and no backwards compatibility breaks. So why the new major version? Because the library was renamed from stimulus to @hotwired/stimulus. Yup, the name of the library changed... but not much else. However, the name change required Symfony's UX libraries to need a new major version. Symfony UX Changes There are 4 big changes with the new Symfony UX releases: 1) Support changed from stimulus to @hotwired/stimulus The biggest change with the new major releases listed above is that support for stimulus was dropped and replaced with @hotwired/stimulus (i.e. version 3 of the library). This difference won't be noticeable in your applications, except that you'll need to adjust the import { Controller } from 'stimulus' lines in your code (see about upgrading below). 2) Support for IE11 was dropped Version 3 of Stimulus dropped support for IE11. We did the same thing in our Symfony UX libraries and incorporated a brand new build system. The result is smaller final JavaScript sizes. If you need to continue supporting IE 11, use Stimulus 2 and the previous version of the UX libraries. 3) data- Attributes Changed to the Values API Many of the UX packages allowed you to configure things by adding data- attributes to an element. Those have been replaced by using the "Values API" from Stimulus, which is a bit nicer anyways. For example, if you use symfony/ux-lazy-image, then previously the code looked like this: 1 2 3 4 5 6 {# Code for the old, 1.x version #} <img src="{{ asset('image/small.png') }}" {{ stimulus_controller('symfony/ux-lazy-image/lazy-image') }} This code would now need to be updated to this: 1 2 3 4 5 6 7 {# Code for the new, 2.x version #} <img src="{{ asset('image/small.png') }}" {{ stimulus_controller('symfony/ux-lazy-image/lazy-image', { src: asset('image/large.png') }) }} /> See the README - or CHANGELOG (e.g. Lazy Image CHANGELOG) - of each library for a full set of changes. In addition to the above items, symfony/ux-chartjs was updated to use chart.js version 3, and various new events were added to UX controllers to make them more configurable. How do I Upgrade? To upgrade, you'll need to update a number of packages at the same time and make an adjustment to each Stimulus controller in your project: - Remove stimulusfrom your package.jsonfile and replace it with "@hotwired/stimulus": "^3.0". Also change your @symfony/webpack-encoreversion to ^1.7and @symfony/stimulus-bridgeto ^3.0. After making these changes, run yarn install. Update all of your controllers to replace any imports for stimuluswith imports from @hotwired/stimulus: 1 2 3 -import { Controller } from 'stimulus'; +import { Controller } from '@hotwired/stimulus'; - In composer.json, update any symfony/ux-*packages that you have installed to version ^2.0. Run composer up "symfony/ux-*". Once that finishes, run yarn install --force. And... that's it! Congratulations on upgrading to Stimulus 3. Happy UX'ing! As with any Open-Source project, contributing code or documentation is the most common way to help, but we also have a wide range of sponsoring opportunities. Symfony UX 2.0 & Stimulus 3 Support symfony.com/blog/symfony-ux-2-0-and-stimulus-3-supportTweet this __CERTIFICATION_MESSAGE__ Become a certified developer! Exams are online and available in all countries.Register Now stimulus-use (stable version) is still incompatible with stimulus 3. And also they claim > ⚠️ Stimulus 3 has several breaking changes. Symfonycasts still promote stimulus-use (and it's actually pretty handy lib) although this tutorial is non-reproducible (unless you switch to beta, which is probably ok for tutorial or pet project, but no further) After 2 months it's still "hanging" and I'm not comfortable with using beta versions in prod (branch and PR are there, but failing tests in CI are scary) Altogether that conveys an image and "feeling" of immature/unstable ecosystem at best * symfony-ux and encore pushes to stimulus v3 (see symfony/webpack-encore-bundle recipe) * symfony/ux components use stimulus v3 * symfonycasts promotes to use of stimulus-use (great library all around) but they are incompatible. For new projects that creates friction in sense that "old stimulus mindset and skills" are of limited use. For old projects that's basically a blocker for updating symfony/webpack-encore-bundle recipe (and moving to stimulus v3), but symfony/ux-* dependes on stimulus 3 now in symfony 6 Michael Brauner said on Dec 9, 2021 at 19:41 #1
https://symfony.com/blog/symfony-ux-2-0-and-stimulus-3-support
CC-MAIN-2022-05
refinedweb
841
57.57
Abstract: What do you do when an object cannot be properly constructed? In this newsletter, we look at a few options that are used and discuss what would be best. Based on the experiences of writing the Sun Certified Programmer for Java 5 exam. Welcome to the 120th edition of The Java(tm) Specialists' Newsletter, this time sent from a cold but beautiful village of Hinxton in England. This week I am presenting two Design Patterns Courses at the European Bioinformatics Institute close to Cambridge in the UK. We are having an absolute blast, with stimulating discussions around design and Java. Learning.JavaSpecialists.EU: Please visit our new self-study course catalog to see how you can upskill your Java knowledge. Last month I wrote the latest Java Programmer Certification examination, version 5.0. An improvement over 1.1, even though some of the questions were still obscure. Hardly any parrot-style memorization was needed, but you did need an excellent understanding of Java 5. Some tips for the Sun Certified Programmer for the Java 2 Platform, Standard Edition 5.0: I may not reveal any questions, or give hints of what may come up. You can get that information from Sun's website. Now onto the topic of the newsletter, throwing exceptions from constructors. What is the best way to deal with objects that cannot be properly instantiated? Here are some suggestions, which I will explore in more detail: Let's deal with each suggestion. This, believe it or not, is the most common approach in practice. Input parameters are not adequately checked to ensure that they are within specification. As a result, the code fails later, rather than immediately. You want to know about errors as soon as possible. The sooner you see the problem, the easier it is to understand what caused it. With this approach, completely innocent code experiences spurious values or runtime exceptions. So, whilst this is the most undesirable of all approaches, it is the most common. I suggest you look in your code to see where it is possible to construct objects that actually should never see the light of day. This tells the client that is constructing the object that something bad may happen when you try to make it. Examples are java.net.Socket and java.io.FileInputStream. In the first case, you might try to open a socket to an invalid address, in the second, the file might not be found. Whilst this approach is not bad, it should only be used for situations that are beyond the control of the client using your objects. It is debatable whether FileNotFoundException should be thrown by the constructor of FileInputStream. Ideally the client should first verify that the file exists before making an instance of it. However, since the operating system is beyond the control of Java, you cannot guarantee atomicity. This is in my opinion usually the best approach. It expresses to the user accurately what the problem is - he presented an incorrect argument to the constructor. To be fair to the user, document in your JavaDoc that you will throw the IllegalArgumentException, together with the conditions under which it will be thrown. public class Person { private final String name; private final int age; private static final int MAXIMUM_AGE = 150; /** * Person constructor representing a natural * person. Name may not be null. Age must be * non-negative and less than MAXIMUM_AGE. * * @throws IllegalArgumentException if name is * null or if age is out of range. */ public Person(String name, int age) { this.name = name; this.age = age; if (this.age < 0 || this.age > MAXIMUM_AGE) { throw new IllegalArgumentException( "age out of range: " + this.age + " expected range 0 <= age < " + MAXIMUM_AGE); } if (this.name == null) { throw new IllegalArgumentException( "name is null"); } } } NullPointerException was the first bug that I had to fix in some "legacy" code, back in 1997. I usually equate "NullPointerException" with bug. Not a bug in the client code, but rather in the library that I am using. Examples of code that throws NullPointerException deliberately includes the Hashtable's put() method. The old Hashtable was written to not handle null values. This was corrected in the java.util.HashMap. import java.util.*; public class HashTableTest { public static void main(String[] args) { System.out.println("HashMap test"); HashMap hm = new HashMap(); hm.put(null, "hello"); System.out.println("Hashtable test"); Hashtable hs = new Hashtable(); hs.put(null, "hello"); } } If you decide to go this route, make sure to document this fact to the user. Yes, no one reads comments, but at least you will be covered. You can point to the javadoc and say: "Didn't you read my comment?" This is probably the worst way to signal to your user that you could not construct the object due to the parameters he sent you. An AssertionError should only be thrown in places where it cannot happen. If you see this Error, you know that something completely strange has occurred. For example, you have managed to divide an int by zero. Another bad idea is putting Java 1.4 assertions into your code. You can switch them on and off at runtime, but the problem I have with that approach is that you may end up with a bad object, or you may not. To summarise, in my opinion, the most correct approach is to be strict in your constructors and throw IllegalArgumentException or IllegalStateException if you encounter something you do not like. Right, off to bed to catch up on some beauty sleep. Tomorrow we continue with the Adapter pattern and will discuss whether the MouseAdapter is an Object Adapter or a Class Adapter. It was quite fun today showing my class how to build dynamic proxies....
https://www.javaspecialists.eu/archive/Issue120.html
CC-MAIN-2018-13
refinedweb
953
66.64
It’s been a couple of years that we released the first samples showing how to take advantage of ACS from Windows Phone 7 applications; the iOS samples, released in the Summer, and the Windows8 Metro sample app last Fall demonstrated that the pattern applies to just any type of rich clients. Although we explored the general idea at length, and provided many building blocks which enable you to put it in practice without knowing all that much about the gory details, apparently we never really provided a detailed description of what is going on. The general developer populace (rightfully) won’t care about the mechanics, but I am receiving a lot of details questions from some of the identirati, who might end up carrying this approach even further, to everybody’s advantage. I am currently crammed in a small seat in cattle class, inclusive of crying babies and high-pitch barking dogs (yes, plural), on a Seattle-Paris flight that is bringing me to a 1-week vacation at home. It’s hardly the best situation to type a lengthy post, but if I don’t do it I know this will but me for the entire week and I don’t want it to distract me from the mega family reunion and the 20th anniversary party of my alma mater so my dear readers, wear confortable clothes and gather around, because in the next few hours (for the writer, at least) we are going to shove our hands deep into the guts of this pattern and shine our bright spotlight pretty much in every fold. Thank you Ping Identity for the noise-cancelling headset you got me during last Cloud Identity Summit, without which this post would not have been possible Rich clients. How do they handle authentication? Let’s indulge our example bias and pick some notable case from the traditional world. Take an email client, like Outlook: in the most general case, Outlook won’t play games and will simply ask you for your credentials. The same goes for SQL Management Studio, Live Writer, FileZilla, Messenger. That’s because your mail server, SQL engine, blog engine, FTP server and Live ID all have programmatic endpoints that accept raw credentials. A slightly less visible case is the use of Office apps within the context of your domain, in which you are silently authenticated and authorized: however I assure that you are not using your midichlorians to submit that file share to your will. It’s simply that you are operating in one environment fully handled by your network software, where there is an obvious, implicit authority (the KDC) which is automatically consulted whenever you do anything that requires some privilege. There is an unbroken chain of authentication which starts when you sign in the workstation and unfolds all the way to your latest request, and it is all supported by programmatic authentication endpoints. All this has worked pretty well for few decades, and still enjoys very wide application: however it does not cover all the authentication needs of today’s scenarios. Here there’s a short list of situations in which the approach falls short: Ready for some good news? All of the above can be solved, or at least mitigated, by a bit of inventiveness. Rich clients render their UI with their own code, rather than feeding it to a browser: but if the browser works so well for the authentication scenarios depicted above, why not opening one just when needed for driving he authentication phase? The idea is sound, but as usual the devil is in the details. With some exceptions, the protocols used for browser based authentication aim at delivering a token (or equivalent) to one application on a server; if we want this impromptu browser trick to work, we need to find a way of “hijacking” the token toward the rich client app before it gets sent to the server. The good news keep coming. ACS offers a way for you to use it from within a browser driven by a rich client, and faithfully deliver tokens to the calling client app while still maintaining the usual advantages it offers: outsourcing of trust relationships, support for many identity provider types, claims transformation rules, lightweight token formats suitable for REST calls, and so on. If all you care is knowing that such method exists, and you are fine with using the samples we provide without tweaking them, you should stop reading now. If you decide to push farther, here there’s what to expect. The way in which you use ACS from a browser in a rich client is based on a relatively minor tweak on how we handle WS-Federation and home realm discovery. Since I can’t assume that you are all familiar with the innards of WS-Federation (which you would, if you would have taken advantage of yesterday’s 50% off promotion from O’Reilly ) I am going to give you a refresher of the relevant details. Done then, I’ll explain how the method works in term of the delta in respect to traditional ws-fed. The figure below depicts what append during a classic browser-based authentication flow taking advantage of ACS. In order to keep things manageable I omitted the initial phase, in which an unauthenticated request to the web app results in a redirect to the home realm discovery page: I start the flow directly from there. Furthermore, instead of showing a proper HTML HRD page I use a JSON feed, assuming that the browser would somehow render it for the user so that its corresponding entries will be clickable. Hopefully things will get clearer once I get in the details. Let’s say that you want to help the user to authenticate with a given web application, protected by ACS. ACS knows which identity providers the application is willing to accept users from, and knows how to integrate with those. How do you take advantage of that knowledge? You have two easy ways: The second alternative is actually pretty clever! Let’s take a deeper look. How do you obtain the JSON feed for a given RP? You just GET the following: The first part is the resource itself, the IP feed; it follows the customary rule for constructing ACS endpoints, the namespace identifier (bold) followed by the ACS URL structure. The green part specifies that we want to integrate ACS and our app using WS-Federation. The last highlighted section identifies which specific RP (among the ones described in the target namespace) we want to deal with. What do we get back? The following: [ { "Name": "Windows Live™ ID", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", "EmailAddressSuffixes": [] }, { "Name": "Yahoo!", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", "EmailAddressSuffixes": [] }, { "Name": "Google", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", "EmailAddressSuffixes": [] }, { "Name": "Facebook", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", "EmailAddressSuffixes": [] } ] Well, that’s clearly meant to be consumed by machines: however JSON is clear enough for us to take this guy apart and understand what’s there. First of all, the structure: every IP gets a name (useful for presentation purposes), a login URL (more about that later), a rarely populated logout URL, the URL of one image (again useful for presentation purposes) and a list of email suffixes (longer conversation, however: useful if you know the email of your user and you want to use it to automatically pair him/her to the corresponding IP, instead of showing all the list). The login URL is the most interesting property. Let’s take the first one: This is the URL used to sign in using Live. The yellow part, together with various other hints (wreply, wctx, etc) suggests that the integration with Live is also based on WS-Federation. This is what is often referred to as a deep link: it contains all the info required to go to the IP and then get back to the ACS address that will take care of processing the incoming token and issue a new token for the application. The part highlighted green shows such endpoint. Note the wsfederation entry in the wctx context parameter, it will come in useful later. You don’t need to grok al the details here: suffice to say that if the user clicks on this link he’ll be transported in a flow where he will authenticate with live id, will be bounced back to ACS and will eventually receive the token needed to authenticate with the application. Al with a simple click on a link. Want to try another one? let’s take a look at the URL for Yahoo: Now that’s a much longer one! I am sure that many of you will recognize the OpenID/attribute exchange syntax, which happens to be the redirect-based protocol that ACS uses to integrate with Yahoo. The value of the return_to parameter hints at how ACS processes the flow: once again, notice the wsfederation string; and once again, all it takes to authenticate is a simple click on the link. Google also integrates with ACS via OpenID, hence we can skip it. How about Facebook, though? Yet another integration protocol: this time it’s OAuth2. The flow leverages one Facebook app I created for the occasion, as it’s standard procedure with ACS. Another link type, same behavior: for the user, but also for the web app developer, it’s just a matter of following a link and eventually an ACS-issued token comes back. Alrighty, let’s say that the user clicks on one of the login URL links. In the diagram that’s step 2. In this case I am showing a generic IP1. As you know by now, the IP can use whatever protocol ACS and the IP agreed upon. In the diagram I am using WS-Federation, which is what you’d see if IP1 would be an ADFS2 or Live ID. WS-Federation uses an interesting way of returning a token upon successful authentication. It basically sends back a form, containing the token and various ancillary parameters; it also sends a Javascript fragment that will auto-post that form to the requesting RP. That’s what happens in step 2 (IP to ACS) and 3 (ACS to the web application). Let’s take a closer look to the response returned by ACS: 1: <![CDATA[ 2: <html><head><title>Working...</title></head><body> 3: <form method="POST" name="hiddenform" action=""> 4: <input type="hidden" name="wa" value="wsignin1.0" /> 5: <input type="hidden" name="wresult" value="&lt;t:RequestSecurityTokenResponse 6: Context=&quot;rm=0&amp;amp; 7: id=passive&amp;amp;ru=%2fdefault.aspx%3f&quot; 8: xmlns:t=&quot;;><t:Lifetime> 9: <wsu:Created xmlns: 10: 2012-03-28T19:19:56.488Z</wsu:Created> 11: <wsu:Expires xmlns:2012-03-28T19:29:56.488Z</wsu:Expires> 12: </t:Lifetime> 13: <wsp:AppliesTo xmlns:<EndpointReference xmlns=""><Address></Address></EndpointReference> 14: </wsp:AppliesTo> 15: <t:RequestedSecurityToken> 16: <Assertion ID="_906f33bd-11ca-4d32-837b-71f8a3a1569c" 17: IssueInstant="2012-03-28T19:19:56.504Z" 18: 19: <Issuer></Issuer> 20: <ds:Signature [...] 21: lt;/Assertion></t:RequestedSecurityToken> 22: [....] 23: lt;/t:RequestSecurityTokenResponse>" /> 24: <input type="hidden" name="wctx" value="rm=0&id=passive&ru=%2fdefault.aspx%3f" /> 25: <noscript><p>Script is disabled. Click Submit to continue.</p><input type="submit" value="Submit" /> 26: </noscript> 27: </form> 28: <script language="javascript"> 29: window.setTimeout('document.forms[0].submit()', 0); 30: </script></body></html> 29: window.setTimeout('document.forms[0].submit()', 0); 31: ]]> Now THAT’s definitely not meant to be read by humans. I did warn you at the beginning of the post, didn’t I Come on, it’s not that bad: let me be your Virgil here. As I said above, in WS-Federation tokens are returned to the target RP by sending back a form with autopost: and that’s exactly what we have here. Lines 3-27 contain a form, which features a number of input fields used to do things such as signaling to the RP the nature of the operation (line 4: we are signing in) and transmitting the actual bits of the requested token (line 5 onward). Lines 28-30 contain the by-now-famous autoposter script. And that’s it! The browser will receive the above, sheepishly (as in passively) execute the script and POST the content as appropriate. The RP will likely have interceptors like WIF which will recognize the POST for what it is (a signin message), find the token, mangle it as appropriate and authenticate (or not) the user. All as described in countless introductions to claims-based identity. Congratulations, you now know much more about how ACS implements HRD and how WS-Fed works than you’ll ever actually need. Unless, of course, you want to understand in depth how you can pervert that flow into something that can help with rich clients. …not from a Jedi Let’s get back to the rich client problem. We already said that we can pop out a browser from our rich client when we need to – that is to say when we have to authenticate the user - and close it once we are done. The flow we have just examined seems almost what we need, both for what concerns the HRD (more about that later) and the authentication flow. The only thing that does not work here is the last step. Whereas in the ws-fed flow the ultimate recipient of the token issuance process is the entity that requires it for authentication purposes, that is to say the web site, in the rich client case it is the client itself that should obtain the token and store it for later use (securing calls to a web service). It is a bit if in the WS-Federation case the token would stop at the browser, instead of being posted to the web site. Here, let me steal my own thunder: we make that happen by providing in ACS an endpoint which is almost WS-Federation, but in fact provides a mechanism for getting the token where/when we need it in a rich client flow. We call it javascriptnotify. Take a look at the diagram below. That looks pretty similar to the other one, with some important differences: Here I did in step 1 the same HRD feed/page simplification I did above; I’ll get back to it at the end of the post. The URl we use for getting the feed, though, has an important difference: The URI of the resource is the same, and the realm of the target service is obviously different; the interesting bit here is the protocol parameter, which now says “javascriptnotify” instead of “wsfederation”. Let’s see if that yields to differences in the actual feed: { "Name": "Windows Live™ ID", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", }, { "Name": "Yahoo!", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", }, { "Name": "Google", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", }, { "Name": "Facebook", "LoginUrl": "", "LogoutUrl": "", "ImageUrl": "", } ] The battery is starting to suffer, so I have to accelerate a bit. I am not extracting the Login URLs of the various IPs, but I highlighted the places where ws-federation has been substituted by javascriptnotify (facebook follows a different approach, more about is some other time: but you can see that the two redirect_uri are different). The integration between the IP and ACS – step 2 - goes as usual, modulo a different value in some context parameter; I didn’t show it in details earlier, I won’t show it now. What is different, though, is that coming back from the IP there’s something in the URL which tells ACS that the token should not be issued via WS-Federation, but using Javascriptnotify. That will influence how ACS sends back a token, that is to say the return portion of leg 3. Earlier we got a form containing the token, and an autoposter script; let’s see what we get now. 1: <html xmlns=""> 2: <head> 3: <title>Loading </title> 4: <script type="text/javascript">: </script>: 16: </head> 17: <body> 18: </body> 19: </html> I hope you’ll find in you to forgive the atrocious formatting I am using, hopefully that does not get in the way of making my point. As you can see above, the response from ACS is completely different; it’s a script which substantially passes a string to whomever in the container implements a handler for the notify event. And the string contains.. surprise surprise, the token we requested. If our rich client provided a handler for the notify event, it will now receive the token bits for storage and future use. Mission accomplished: the browser control can now be closed and the rich native experience can resume, ready to securely invoke services with the newfound token. The notifi-ed string also contains some parameter about the token itself (audience, validity interval, type, etc). Those parameters can come in handy to know what the token is good for, without the need for the client to actually parse and understand the token format (which can be stored and used as amorphous blob, nicely decoupling the client from future updates in the format or changes in policy). A bit more on token formats. ACS allows you to define which token format should be used for which RP, regardless of the protocol. When obtaining tokens for REST services, you normally want to get tokens in a lightweight format (as you’ll likely need to use them in places – like the HTTP headers - where long tokens risk being clipped). In this example, in fact, I decided to use SWT tokens for my service urn: testservice. However ACS would have allowed me to send back a SAML token just as well. One more point in favor of keeping the client format-agnostic, given that the service might change policy at any moment. That’s it for the flow! I might be biased, but IMHO it’s not that hard; in any case, I am glad that the vast majority of developers will never have to know things at this level of detail. If you are interested in taking a look at code that handles the notify, I’d suggest getting the ACS2+WP7 lab and taking a look at Labs\ACS2andWP7\Source\Assets\SL.Phone.Federation\Controls\AccessControlServiceSignIn.xaml.cs, and specifically to SignInWebBrowserControl_ScriptNotify.Now that you know what it’s supposed to do, I am sure you’ll find it straightforward. Before closing and getting some sleep, here there’s a short digression on home realm discovery and rich clients. At the beginning of the WS-Federation refresher I mentioned that web site developers have the option of relying on the precooked HRD page, provided by ACS for development purposes, or take the matter in their own hands and use the HRD JSON feed to acquire the IPs coordinates programmatically and integrate tem in their web site’s experience. A rich client developer has even more options, which can be summarized in the following; Well folks, I have a confession to make. Right now I am still writing from a plane, and there are still babies crying around, but this time it’s the Paris-Seattle: the vacation is over and I am coming back. I didn’t finish this post on my way in, the magic noise cancelling headset fought bravely but in the end the babies-dogs combined attack could not be contained. Believe it or not, I actually managed to enjoy my vacation without thinking about this: however as soon as I got on the return plane I ALT-TABbed my way to Live Writer (left open for the whole week) and finalized. Good, because I caught few bugs that eluded by stressed self of one week ago but were super-evident (“risplenda come un croco in polveroso prato” – from memory! No internet on transatlantic flights) to the rested self of time present. I am not sure how generally useful this post is going to be. Once again, to be sure; this is NOT for the general-purpose developer. However I do know that this will provide some answers to very specific questions I got; and If you got this far, however, something tells me that “general-purpose” is not the right label for you As usual: if you have questions, write away!
http://blogs.msdn.com/b/vbertocci/archive/2012/04/04/authenticating-users-from-passive-ips-in-rich-client-apps-via-acs.aspx
CC-MAIN-2015-35
refinedweb
3,384
57.3
Hi, > > > as a result, FFmpeg does not currently build on systems that don't > > > define uint_fast64_t. > > > > Just out of curiosity, what system are you using that doesn't define > > this type? > > OpenBSD You should file a bug, because ISO C _requires_ uint_fast64_t (surprised me, too). How about this? Regards, Wolfram. --- ffmpeg/libavcodec/common.h Mon May 23 09:53:35 2005 +++ ffmpeg-wg/libavcodec/common.h Tue Jun 7 12:17:51 2005 @@ -125,13 +125,13 @@ #endif #ifdef EMULATE_FAST_INT -/* note that we don't emulate 64bit ints */ typedef signed char int_fast8_t; typedef signed int int_fast16_t; typedef signed int int_fast32_t; typedef unsigned char uint_fast8_t; typedef unsigned int uint_fast16_t; typedef unsigned int uint_fast32_t; +typedef uint64_t uint_fast64_t; #endif #ifndef INT_BIT
http://ffmpeg.org/pipermail/ffmpeg-devel/2005-June/002410.html
CC-MAIN-2016-30
refinedweb
118
54.22
C++ Tutorial - Functors(Function Objects) I - 2017 Note: some of the code may be using C++ 11 features. What is a functor? Let's start with a very simple usage of functor: #include <iostream> #include <string> class A { public: void operator()(std::string str) { std::cout << "functor A " << str << std::endl; } }; int main() { A aObj; aObj("Hello"); } We have a class A which has an operator() defined. In main(), we create an instance and passing in a string argument to that object. Note that we're using an instance as if it's a function. This is the core idea of functors. A functors' claim: "Anything behaves like a function is a function. In this case, A behaves like a function. So, A is a function even though it's a class." We counters with a question: Why do we need you, functor? We can just use a regular function. We do not need you. Functor explains the benefits of using it:"We're not simple plain functions. We're smart. We feature more than operator(). We can have states. For example, the class A can have its member for the state. Also, we can have types as well. We can differentiate function by their signature. We have more than just the signature." A functor is a parameterized function. #include <iostream> #include <string> class A { public: A(int i) : id(i) {} void operator()(std::string str) { std::cout << "functor A " << str << std::endl; } private: int id; }; int main() { A(2014)("Hello"); } Now, the class A is taking two parameters. Why do we want like that. Can we just use a regular function that takes the two parameters? Let's look at the following example: #include <iostream> #include <vector> #include <algorithm> using namespace std; void add10(int n) { cout << n+10 << " "; } int main() { std::vector<int> v = { 1, 2, 3, 4, 5 }; for_each(v.begin(), v.end(), add10); // {11 12 13 14 15} } In main(), the function add10 will be invoked for each element of the vector v, and prints out one by one after adding 10. But note that we hard coded 10. We can use a global variable for that. But we do not want to use global variable. We may use template like this: template<int number> void addNumber(int i) { std::cout << i + number << std::endl; } int main() { std::vector<int> v = { 1, 2, 3, 4, 5 }; for_each(v.begin(), v.end(), addNumber<10>); // 11 12 13 14 15 } But there is a caveat: we cannot easily change the value to addNumber because a template should be resolved at compile time and thus it requires const. Therefore, we cannot do this in the main(): int main() { std::vector<int> v = { 1, 2, 3, 4, 5 }; int val = 10; for_each(v.begin(), v.end(), addNumber<val>); // Not OK } As you guessed it, we need to use functor as shown in the code below: #include <iostream> #include <vector> #include <algorithm> class A { public: A(int k) : val(k) {} void operator()(int i) { std::cout << i + val << std::endl; } private: int val; }; int main() { std::vector<int> v = { 1, 2, 3, 4, 5 }; int val = 10; for_each(v.begin(), v.end(), A(val)); // 11 12 13 14 15 } Note that we set the A::val via constructor. If we want another value for the addition, we may use another instance with a different value for the constructor. In this way, we can change the value to add using the state of the function object. We get this kind of flexibility by using functors. This is the fundamental advantage of functors - they can easily preserve a state between calls. In other words, they can support multiple independent states, one for each functor instance while functions only support a single state. Another good thing is that we do not have to write all functors. STL provides quite a few functors for us. - Arithmetic binary functors plus, minus, multiplies, divides, modulus - Relational binary functors equal_to, not_equal_to, greater, greater_equal, less, less_equal - Logical binary functors logical_and, logical_or - Built-in functors of unary type (available from <functional>) negate, logical_not Simple usage of them could be: int n = std::multiplies<int>()(4, 5); // n = 4*5 if (std::not_equal_to<int>()(n, 10)) std::cout << n << std::endl; // 20 In the code below, transform() requires a functor that takes one parameter, however, the multiplies() takes two. That's why we need a bind(). The bind() take its first parameter from the set and the second parameter 10 10. #include <iostream> #include <vector> #include <set> #include <algorithm> #include <iterator> // std::back_inserter #include <functional> // std:: bind int main() { std::set<int> s = { 1, 2, 3, 4, 5 }; std::vector<int> v; // multiply set's elements by 100 and store it into v std::transform(s.begin(), s.end(), // source std::back_inserter(v), // destination std::bind(std::multiplies<int>(), std::placeholders::_1, 100) // C++11 ); } The transform() takes elements from the set as it's first parameter (std::placeholders::_1, and multiplies the second parameter (100). Then, it puts each of them into the vector via std::back_inserter(). One of the issues in our earlier example can be resolved by using the bind() function: #include <iostream> #include <vector> #include <algorithm> #include <functional> // std:: bind void addNumber(int i, int number) { std::cout << i + number << std::endl; } int main() { std::vector v = { 1, 2, 3, 4, 5 }; for_each(v.begin(), v.end(), std::bind(addNumber, std::placeholders::_1, 10)); // 11 12 13 14 15 } - Functors (Function Objects) I - Introduction - Functors (Function Objects) II - Converting function to functor - Functors (Function Objects) - General
http://www.bogotobogo.com/cplusplus/functor_function_object_stl_intro.php
CC-MAIN-2017-34
refinedweb
929
62.48
Three distinct points are plotted at random on a Cartesian plane, for which , , such that a triangle is formed. Consider the following two triangles: A(-340,495), B(-153,-910), C(835,-947) X(-175,41), Y(-421,-714), Z(574,-645) It can be verified that triangle ABC contains the origin ‘O’, whereas triangle XYZ does not. Using triangles.txt, a 27K text file that contains the coordinates of one thousand random triangles, find the number of triangles for which the interior contains the origin ‘O’. The first step to solving this problem is to divide a triangle, say, ABC, into 3 parts: AOB, AOC, BOC. The second step is to see if the sum of the areas of these three triangles is equal to the area of ABC. If it is, we can say that the origin ‘O’ lies inside ABC and vice versa. The area of the triangles can be found by the following formula. The program below is an implementation of the algorithm in Python. It considers every triangle present in triangle.txt, calculates all the required areas, checks if the containment condition holds, counts the number of triangles for which it does, and finally outputs the number. def calculate_area (A, B, C): return 0.5 * abs((A[0] - C[0]) * (B[1] - A[1]) - (A[0] - B[0]) * (C[1] - A[1])) def contains_O (AOB, BOC, AOC, ABC): return ABC == (AOB + BOC + AOC) def main (): file = open('triangle.txt', 'r').readlines() origin = [0, 0] contained = 0 for line in file: line = line.strip('\n').split(',') line = [int(i) for i in line] area_ABC = calculate_area(line[0:2], line[2:4], line[4:6]) area_AOB = calculate_area(line[0:2], origin, line[2:4]) area_AOC = calculate_area(line[0:2], origin, line[4:6]) area_BOC = calculate_area(line[2:4], origin, line[4:6]) if contains_O(area_AOB, area_BOC, area_AOC, area_ABC): contained = contained + 1 print(contained) main() RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/how-to-solve-the-project-euler-triangle-containment-problem
CC-MAIN-2022-33
refinedweb
323
60.55
As you might have read elsewhere, I’m leaving the Bazel team and Google in about a week. My plan for these last few weeks was to hand things off as cleanly as possible… but I was also nerd-sniped by a bug that came my way a fortnight ago. Fixing it has been my self-inflicted punishment for leaving, and oh my, it has been painful. Very painful. Let me tell you the story of this final boss. The problem A few weeks ago, an engineer in the Abseil team approached me because they were trying to do an optimization to their C++ library ( absl). Their problem was that all of the tests at Google passed… except for the tests that verify Blaze (not Bazel) on macOS, which crashed in an obscure way. I didn’t have a lot of time to look into the problem myself; but, thankfully, trybka@—one of our C++ experts—did and noticed what was wrong by paying close attention to a failing stack trace: C [libvfs.dylib+0x1beb24] absl::DeadlockCheck(absl::Mutex*)+0x114 C [libvfs.dylib+0x1b9b22] absl::DebugOnlyDeadlockCheck(absl::Mutex*)+0x32 C [libvfs.dylib+0x1b9a4c] absl::Mutex::Lock()+0x1c C [libvfs.dylib+0x12e67f] base::internal::VLogSiteManager::UpdateGlobalV(int)+0x1f C [libvfs.dylib+0x12f4bf] $_0::operator()() const+0x2f C [libvfs.dylib+0x12f489] $_0::__invoke()+0x9 **** JUMPING INTO ABSL IN A DIFFERENT DYLIB **** C [network.dylib+0xc285ba] absl::flags_internal::FlagImpl::InvokeCallback() const+0x6a C [network.dylib+0xc28dcd] absl::flags_internal::FlagImpl::SetCallback(void (*)())+0xad C [network.dylib+0x54ea74] absl::flags_internal::FlagRegistrar<int, true>::OnUpdate(void (*)()) &&+0x24 **** JUMPING BACK TO ABSL IN THE ORIGINAL DYLIB **** C [libvfs.dylib+0x136cd0] __cxx_global_var_init+0x30 C [libvfs.dylib+0x136d79] _GLOBAL__sub_I_vlog_is_on.cc+0x9 In the stack trace above, you can see how control flow jumps back and forth between two different shared libraries ( dylibs in this post, as I’m focusing on macOS), and how the calls that cross the boundaries are all within Abseil. Now, you might think: that should be fine. If the two shared libraries we have above, libvfs.dylib and libnetwork.dylib, call into another common one, Abseil, the dynamic linker will not load the common library twice. And you’d be right… except… Bazel and Google love building static binaries. There is no libabsl.dylib in there: the absl symbols are duplicated across the two shared libraries. Visually, here is what we had in the executable: And thereby lies the problem: Blaze on Mac was pulling in three separate dylibs, two of which bundled Abseil and conflicted at runtime. This, in turn, was blocking the submission of the optimization to Abseil I mentioned, which was supposed to bring significant improvements Google-wide. Time was of essence to fix this issue. (Except it wasn’t because the author of the change found a different way to write their code and avoid the issue, but the issue had already gotten my interest.) The basics behind JNI Before getting into how to resolve this issue, let’s pause for a second to understand what’s involved in getting JNI code to work with Bazel. JNI is nothing magical: when you declare a Java function as native, the JVM will transform calls to that function as calls to a C function whose name is derived from the class that contains the native method plus the method’s name. For example, if we have a Java function definition like this: package com.google.devtools.build.lib.unix; public class NativePosixFiles { public static native String readlink(String path) throws IOException; } The JVM will look for a matching C symbol with the following definition. Note how the symbol name encodes the Java package name, the class name, and the method name in a straightforward way (unlike C++’s name mangling): extern "C" JNIEXPORT jstring JNICALL Java_com_google_devtools_build_lib_unix_NativePosixFiles_readlink( JNIEnv *env, jclass clazz, jstring path) { // Native implementation for the NativePosixFiles#readlink(String) method. } The way the native function is invoked is irrelevant, but it probably happens via dlsym(3). The important thing, however, is that the JVM must have access to these symbols in one way or another, and that the lookup must happen at run time. In standard Java, the way you do this is by calling System.loadLibrary() or System.load() on the native libraries. The former call searches for a library, using operating system-specific naming schemes, in the directories specified by the java.library.path property. The latter call simply loads the given file, although, surprisingly, the file’s base name must also follow the operating system’s conventions. But Bazel offers another mechanism in the Java rules. By using a custom Java launcher, you can handle native libraries in a different way, and that’s what we do inside Google. We have a launcher that bundles all native libraries of the Java target into a single object, then links that object into the JRE binary, and then attaches the JRE to the JAR to form a “self-executable” JAR. This final executable does not need to do any System.loadLibrary() calls to manually pull in JNI dependencies: they are all already in the JVM’s address space because they were put there at binary link time. Very nice… except we only do that for Blaze on Linux, which means neither Blaze on Mac nor Bazel—on any platform—benefit from this. And this is where things get nasty: our internal-only “common case” is magical, but everything else has grown workarounds to deal with JNI… and they all have problems. Initial assessment: SNAFU Here are the many ways we had in Bazel to load the JNI code before any of my changes took place (that is, at or before commit e36e9da), and the many little details that got in the way of cleaning things up: The Google-internal Java launcher for Blaze on Linux. Blaze on Linux needn’t worry about JNI at all as its native code is transparently loaded. The UnixJniLoaderclass, which was explicitly called in Bazel from non-Windows platforms and in the Blaze on Mac case. The WindowsJniLoaderclass, which was explicitly called in Bazel from the Windows platform. The JniLoaderabstract class, which was a more recent attempt at making JNI code loading more transparent but was only used in a bunch of new classes. Some arbitrary System.loadLibrarycalls throughout the code in places where it was known that JNI was unconditionally required. I’m not sure if any of these were in Bazel itself, but the Google-internal modules that required JNI indeed did this—and they did so, especially, to load the non-public libnetwork.dyliband libvfs.dylib. Checks for an io.bazel.enableJniJVM property to determine whether to load the JNI code or not. This property was added to aid the few use cases that could run without JNI (e.g. the bootstrapping process), although it was not correctly recognized throughout the codebase. In particular, Windows unconditionally needed JNI, whereas Unix platforms could work without it. Weird hacks to transform .solibraries into .dylibs for macOS’s consumption. A futile attempt at simplifying JNI usage throughout the codebase by adding the target-os-unix-native-liband the target-os-native-libtargets, which in retrospect made things even more confusing. Knowledge in the top-level Bazel binary packaging rules of the JNI libraries to bundle them into the final self-extracting executable, with extra logic to tell the JVM where they are via the java.library.pathproperty. I’m sure I’m forgetting some interesting details, but you can see from this list that there are a ton of “corner cases” to consider in order to clean things up. I mean… things were really messed up. There is one thing that did work reasonably well in this design though, and that was the fact that unit tests didn’t have to worry about whether their dependencies needed JNI or not. This wasn’t bullet-proof though: some tests and build rules did have to know about JNI and had to either disable it with io.bazel.enableJni=0 or put the shared libraries in the right place. A very fragile scenario, indeed. First cleanup attempt Armed with this knowledge, which was more or less present in my mind before I started but which wasn’t complete until I finished, I started trying to tackle the problem. IMPORTANT: The key observation to resolve this problem is that, at build time, there is zero coupling between building the Java code that defines native methods and building the native C code that supports them. This is bad because neither you nor the compiler can’t know if things will work until you run them. This, however, is good because it gives us the freedom to build the native code separately from the Java code, and allows us to glue them together at run time. My first thought was: let’s remove all dependencies from the Java rules to the native libraries. Then, let’s add a single cc_library target that generates a single native library by pulling all individual bits as deps. And, finally, let’s add a trivial System.loadLibrary statement to Bazel’s and Blaze’s main function to load their own single artifact. I prototyped this solution and was excited by the huge amount of garbage I could remove. The final solution was elegant, very easy to understand, and both Blaze and Bazel both worked… except… tests failed left and right. You see: tests don’t invoke Bazel’s main entry point so they didn’t get a chance to load the needed JNI library in this design. I could have littered the test build graph with explicit dependencies on the JNI library and explicit calls to System.loadLibrary, but that would have been awful. I was trying to simplify things, not make them worse! So I had to scrap this idea and go back to the drawing board. A better approach The two constraints driving my search for a solution were simple: Ensure we load a single shared library at run time to avoid name clashes. I had to loosen this restriction a little bit because our build ends up pulling another shared library that’s out of our control: Netty’s kqueueand epollnative bindings. Upon careful examination, I saw that this library is trivial and doesn’t have any dependencies, so we can ignore it for our purposes. But this highlights that the solution I came up with is not generic enough to work for arbitrary projects. Ensure that any Java target that depends on another target that needs JNI can do so without having to care, at all, that there is a JNI dependency anywhere. In my opinion, this was an unnegotiable property of the solution. If you have a piece of code depending on something else, you should give zero effs on how that happens: the dependency should just work for you. If the dependency wants to pull in native code, so be it, but you ought not to know. Simply put, the design I envisioned is depicted in the following diagram: Simple, huh? We want a single shared library that bundles all individual native libraries at build time to remove duplicate symbols. We want to load that single library at runtime. And we want the contents of this library to support Google-internal extensions when building Blaze inside of Google, and to remain Google-agnostic when building Bazel outside. Now, the question was: how do we get here, given all the many corner cases we have to deal with? Steps towards the solution The general thought process was: let’s try to isolate all JNI loading logic into a single Java package. Once we have that, we can try to make all other Java code depend on it and not have to worry about native dependencies. And to make that possible, we have to rely on extracting the JNI code from a Java resource instead of relying on the caller correctly preparing the java.library.path property on entry. More specifically, here are the steps I took: Bring clarity into the situation by forcing all JNI load requests to happen via a common code path. The end goal for this was to isolate all JNI code in a single place, making it easier to see how to change it. See commit 50adefb. Simplify the way the Windows code paths loaded JNI. This wasn’t a necessity as part of this project, but was raised as a suggestion during a code review… and I like simplicity, so I did it. See commit 9ce9088. Now that we had all callers funneling their JNI requests via the JniLoader, there was no reason to keep the separate UnixJniLoaderand WindowsJniLoaderclasses, so they could be merged. This highlighted some important differences between the two: namely, that Windows relies on the runfiles library clutch to load the JNI code, and that this was only done for testing purposes. Ugly. This had to go, but not yet. See commit 7128580. With the WindowsJniLoadergone, there was no reason to keep the separate windows/jnisubpackage in the source tree… so I merged it into its parent. Again, unnecessary simplification as part of this project, but it was easy enough, so why not? See commit d69744d. The key piece of this puzzle was replacing the way we actually look for and find the JNI library. Instead of relying on external actors such as the Bazel client, the bootstrap script, or unit tests to put the JNI library in java.library.path, we should be able to put the library inside a JAR resource and extract it at runtime. And it was possible. With this, all madness was gone and, finally, once and for all, the JniLoadercould work by trivially pulling it in as a dependency. See commit a2a2f9d. Modify the Google-internal fork of src/main/native:unix_jnito link in the previous networkand vfstargets so that the libunix.dylibbecomes the only JNI library we have to load at run time. This change was trivial at this point: just a matter of changing BUILDdependencies and replacing ad-hoc System.loadLibrary()calls with calls to our improved JniLoader.loadJni(). I cannot link to this change because it has no correspondence in Bazel though, given that Bazel hasn’t yet suffered from this issue (but it could!). I like how this sequence highlights the importance of decoupling refactoring and cleanup work from the actual fixes. All of the steps in this list except for the last one were essentially no-ops and only laid the ground work to make the last step simple. And simple it was. Doing the work this way made it much easier for my reviewer to see what was going on and to assess that each step worked correctly. (Mind you, I actually broke the build somewhere in between two steps… and it was “easy” enough to fix it because the delta between changes was small.) And with that, the original bug is now fixed! Parting words Executing this cleanup has been very frustrating as I briefly mentioned in the opening. Part of the reason is because of the many corner cases that had to be dealt with and how these were not clear upfront. But the other and major part of this came from the fact that Blaze and Bazel have divergent BUILD files in some areas, and ensuring they are all consistent with each other is painful. Throw in long BazelCI turnaround times and the need to get things to work on three different platforms… and you are in for pain. Anyhow. The bug is now fixed. But is what I outlined above the final solution? No, not really. What I’ve done is still an ad-hoc solution to Bazel’s own build process and use of JNI. Anyone writing and building Java code with Bazel also needs to interact with JNI at some point, and those developers are subject to the same issues I’ve described here. I don’t think we have a great answer for them yet. In other words: help welcome. So long, and thanks for all the fish!
https://jmmv.dev/2020/10/bazel-jni.html
CC-MAIN-2022-21
refinedweb
2,694
62.27
The Environmental Impact of PHP Compared To C++ On Facebook 752 Kensai7 writes "Recently, Facebook provided us with some information on their server park. They use about 30,000 servers, and not surprisingly, most of them are running tons of CO2 per year. Of course, it is a bit unfair to isolate Facebook here. Their servers are only a tiny fraction of computers deployed world-wide that are interpreting PHP code." Assumes PHP Dev Effort = C++ Dev Effort (Score:5, Funny) What about all the cycles compiling and debugging C++ code? Or all the trees torn down for C++ books? Or the environmental impact of C++ developers? I mean, have you ever had to share a cube with one of them? Pheewww. Re:Assumes PHP Dev Effort = C++ Dev Effort (Score:5, Insightful) I know your being funny but you've got a good point. Developing and maintaining C++ code is not like developing and maintaining PHP script. Which of course is why we have PHP to begin with. It's designed for the web and ease of implementation. Sure C++ would be faster running but not necessarily more efficient in terms of dollars. Re:Assumes PHP Dev Effort = C++ Dev Effort (Score:4, Funny) Have no fear, turning devs into disposable resources will ensure bright future to efficiency being judged only in hardware terms. Incompetent developers require more servers (Score:4, Interesting) It's a phenomenon we have also noted. Sure C++ would be faster running but not necessarily more efficient in terms of dollars. I think you'll find that the servers come out of the operational budget, not the development one. So the costs of running 10x more servers don't factor into development effort. The costs should of course be charged back to the dev teams. It isn't just one server (Score:4, Insightful) Running a server is cheap. Paying a developer is not. Civilisation is largely about the multiplication of human effort through the consumption of energy and automation. So, we multiply this developer's effort by a couple of thousand when running one machine and then do the same on another several hundred machines beyond. Each costs several thousand dollars to purchase and several thousand more every year in electricity, in cooling, networking, management and maintenance. So, the effects of developer incompetence are also multiplied several thousand times often across hundreds or thousands of systems. Millions if we're really lucky. So it isn't just one server, it's just one extra datacenter. It often pays to hire better people. running a server for a day - $1 You think you get a real server for that? You get a tiny division of a server for that kind of money. 2) why doesn't these big server farms start looking at migrating code from PHP to C or C++ when the PHP+web design is solid? The network effect. They migrate to Java instead. Speed to delivery is nearly always primary importance. Indicating speculative projects and disposable code. Re: (Score:3, Insightful) If you take APC or similar compile caches into account, I think you'll find that the gap is remarkedly smaller than you'd expect. It'll never close entirely, but given that I've seen 20x speedups on some pages, the benefit is huge. Plus, C++ is an environment-hostile choice (Score:4, Funny) Imagine if every website was implemented as an ASIC. Then we could talk about efficient datacenters. Maybe, if you're relly strapped for cash, you could implement each website in an FPGA. But that should only be a stopgap measure until you can afford a proper implementation. Re:c++ is 'write-only' code (Score:5, Insightful) Sure, when the code is written by someone who really knows how to use C++. Ever read bad PHP code? Bad Java code? I have seen programmers do things like this: int int1, int2, int3, int4, int6, int7; No, that is neither a joke nor an exaggeration, and the missing number is deliberate. This is a declaration I saw on a recent project. This kind of poor coding is language agnostic, and it is entirely irrelevant whether someone is using C++, PHP, or even a language like Haskell (bad Haskell code is worse than that worst C++ code I have ever seen -- if you use a functional language, get it right!). On the other hand, I have seen some maintainable C++ code, with appropriate and useful comments, well thought out classes and class relationships, and expert use of the STL. I once worked on a project with C++ code that dated back to the early 90s, and had been continuous updated to support new features and needs, to make use of the STL (yes, this can be written into old code without causing a disaster), and so support systems that did not even exist when the code was originally written. Don't blame the language, blame programmers who never learned about good programming practices. Blame computer science programs that give people degrees they do not deserve. Blame an industry that will hire anyone who can write a hello world program and then assume that they are capable of writing a maintainable system with millions of lines of code. The best programming language in the world will not solve the problem of poor programmers and poor coding practices. Re: (Score:3, Insightful) Ever read bad PHP code? My hobby is refactoring PHP code. Note I say hobby, and not job. After cutting my teeth with C, I moved on to web development with Perl. I was really annoyed at all the quirks in that language, namely, bizarre subroutines instead of functions, and clever regular expressions everywhere. Perl was just a pain, and I still don't like it! So, I decided to give PHP a spin, and I liked it because it was closer to the C code I used to write. It didn't take long for me to realize there was something seriously wron Re:c++ is 'write-only' code (Score:4, Insightful) Maybe you should learn the language first. It seems there are an awful lot of people who love to comment on the complexity and performance of C++, who never bothered to really learn the language. Yet this doesn't stop them from pretending the be experts on it. Re:c++ is 'write-only' code (Score:5, Insightful) What C++ has always lacked, and PHP, Java and others do not, is a bundle of standard libraries that let you do things like process XML, talk to databases, and make templating EASY. That's it. php does the same things C++ does, but go one beyond and add a rich library and of course, the ability to skip the "compile" step in the write -> compile -> test I agree with you, but there's one small thing I don't get. Faced with this piece of information, someone thought the logical thing to do was to, er, write an entirely new language? Re:c++ is 'write-only' code (Score:4, Funny) Re:c++ is 'write-only' code (Score:5, Interesting) It's more like you decide you want a whole new room dedicated to watching movies, but in order to add that to your current house you'd have to spend tens of thousands of dollars and get approval from city hall and your homeowner's association. Just for a fairly small addition. So instead you decide to go build a new house the way you like it, from the ground up, and while you're at it you add ethernet outlets into the planning because you always wanted that in your old house but you would have had to take down the drywall in order to get them where you wanted. Re: (Score:3, Insightful) Strings are a perfect example. The C++ standard defines a string type that is decent enough and fixe Re: (Score:3, Insightful) Re: (Score:3, Funny) "Faced with this piece of information, someone thought the logical thing to do was to, er, write an entirely new language?" by my understanding, the whole new language slant is because of the nightmare of c++ code out there to reuse, with unintended consequences. php is very web centric and java the last attempt at a 'universal' coding setup. python is an example of new language and how more complicated new language implementation is. Are you suggesting that they wrote PHP to avoid code reuse, that there hasn't been an attempt at a cross-platform language since Java, and that Python is complicated, all in the same paragraph? Ridiculous (Score:3, Insightful) Re:Ridiculous (Score:5, Funny) What about the impact of whole classes of C++ bugs that don't exist in C++ I've spent many a sleepless night worrying about C++ bugs that don't exist in C++. I'm glad I'm not alone. Re: (Score:3, Insightful) Developers that are diligent enough to make only 1 memory-related bug/year can certainly spell variable names correctly. If you have statically typed language, you rely on types. If you have dynamic, you rely on unit tests. Both are probably equally slow :) Languages not for everyone (Score:4, Insightful) Re:Languages not for everyone (Score:4, Funny) A PHP programmer who turns out good PHP code The Easter Bunny, Santa Claus, a PHP programmer who turns out good PHP code, and Steve Balmer are in the four corners of a room. In the center of the room is a chair. Who throws the chair first? Steve Balmer, because the other three don't fucking exist! Re:Languages not for everyone (Score:5, Insightful) > a PHP programmer who turns out good PHP code Yeah, that'd be me. Hi! We do exist, and there are plenty of us. Granted, we tend to be outnumbered 100:1 by the PHP programmers who produce complete crap. The same is probably true of nearly any language. Re:Languages not for everyone (Score:5, Funny) A self proclaimed good PHP programmer... yeah there are about a 100 of those to every 1 that doesn't do that. Re: (Score:3, Funny) A PHP programmer who turns out good PHP code Ontological argument: A good PHP programmer is better than a PHP programmer that doesn't exist. Therefore a good PHP programmer must exist. Re: (Score:3, Interesting) Unfortunately, the C++ programmer who writes bad C++ code is more common than the C++ programmer who writes good C++, and the bad C++ is probably harder to rework than bad php. I once rewrote a bit of software that some MIT grads did. Theirs was 20K lines of C++, used 110 MB ram (constantly newing and deleting), used dozens of threads (constantly spawning and harvesting), and drove the system to its knees (90% system, 10% user load). My 2K (yes, one-tenth) lines of straight C used 5 threads (preallocated) Re: (Score:3, Insightful) 10:1... Really? (Score:5, Insightful) Re: (Score:3, Informative) Re: (Score:3, Informative) In terms of total page delivery latency for a typical I/O bound application, sure. In terms of actual cpu usage, 10x overhead for any dynamically typed language is to be expected. If the application servers are CPU bound, that means a lot more servers. In addition, dynamic languages do not compile or JIT well, compared to statically typed languages, which severely limits the overhead reduction achievable. First post from TFA nails it (Score:2, Informative) The REAL solution (Score:5, Funny) Just serve up plain text files. Anything else is pure decadence! where did he get this factor? (Score:5, Insightful) Re: (Score:3, Informative) Re:where did he get this factor? (Score:4, Insightful) I could copy and paste each paragraph of your post followed by some comment to the effect I have no idea how the paragraph is relevant, but I will spare the readers and just pick a couple general points of confusion. You seem to confuse static (plain html, something that does not enter into the conversation _at_ _all_) with server generated (precisely the sort of thing PHP is used for). E.g., in fact, for static content, it can be highly efficient. . . . the majority were serving static content that was pregenerated and refreshed every so often Again, no one is talking about static content, including GP who was talking about forms, and server generated pages. You also seem to be confused about the general load on the server (factoring in thing like total MB served or something) as opposed to precisely the CPU load (again this is what we are talking about: no one cares about the fact that more complex web sites need CSS and JS files served up). E.g., The AJAX code itself must be sent, at the very least, as well as the various UI elements and CSS that are necessary with AJAX -- all of which is still being served like a static page. Re: Server side overhead with AJAX applications (Score:3, Insightful) Ergo, it is going to reduce the processing necessary on the server to do any given job Any given job, yes. But if there are a lot more "jobs" (i.e. more requests that require server side processing), the efficiency of the language used on the server side tends to become more critical, not less, especially if the per request overhead is significant, something that happens to be one of Facebook's primary complaints about PHP. No. (Score:5, Insightful) Simply put: no. The reason why they have so many servers is because Facebook contains so much data. The servers are there for a reason, and the reason is CACHING. The overhead of PHP is very small for a platform that is all about sharing data and the bulk of processor time surely goes towards fetching that data in the first place. What, do you seriously think that when you hit your home page on Facebook, there are database queries issued for that? Lulz. Besides, I'm almost sure that FB uses something like Zend Accelerator, which increases code execution speed a lot. Anyway, just no. Re: (Score:3, Informative) Very true: they are are big contributor to projects like memcache. Please (Score:2, Informative) I don't care about your environmentalism. Re:Please (Score:5, Informative) Why stop there? (Score:5, Insightful) Re: (Score:3, Insightful) Seriously, if you take into account templates and inlining, there is a good chance that moderately good C++ code will run faster than moderately good assembly, on x86-64 of course - simply because assembler coder would not have the patience to take all opportunities of inlining. You might think so, but it's not as simple as all that. You also have to take into account the CPU's caching behavior; large numbers of inlines can (i.e., I've seen it happen) make the size of the working set too large to fit in L1 (or even L2) cache. That in turn means that you're taking a substantial performance hit. What's better, the size of those caches is dependent on exactly what sort of processor you're working with, so compilers don't take them into account. Inlining is a trade-off, as it increases Just think how much greener they could be... (Score:2, Funny) ...were they to rewrite it all in assembly language! Interpreted Languages... (Score:3, Insightful) For something that is deployed to tens of thousands of machines.. Is there some reason why these languages couldn't be compiled and optimized? Code is just the programmer's will expressed as text that the machine can somehow interpret, right? If there is so much PHP out there, why wouldn't/couldn't there be an efficient compiler (by which I mean something that produces executables and not just "executables that are really just an interpreter tacked onto a script") The dearth of such compilers on the market suggests to me that the gains wouldn't be as great as claimed for the majority of applications where interpreted languages are used. Re:Interpreted Languages... (Score:4, Informative) For example, consider the following. Say bad things about PHP all you want (it deserves it) but one of the things you don't generally see with PHP code is a buffer overflow, where you try to copy a bunch of strings and concatenate them together and you run out of room and don't notice it and you go clobbering memory. That's because the string manipulation code goes through a bunch of checks when you're appending strings. You can't just skip these checks and hope that everything will work the same. You may know that such and such a code-path isn't going to need all the bounds checking because you're, say, idunno, assembling fixed-length ZIP+4 codes or something, but the scripting language can't be informed of that fact using any extant mechanism (nor is it clear how you could integrate such a mechanism with the powerful abstraction that lets you not worry about the rest of your strings to begin with). Moreover, as has already been pointed out, a lot of the computational price of rendering a web page is database queries and memory-cached-object queries which employ compiled code already. The string-manipulation overhead isn't all that significant compared to the abstraction that it buys you. It's probably a better idea to track down logic issues, where your code does stupid useless computations that it doesn't need that make it slow, or could do certain computations in advance to make it faster, or such. I think there's a lot more potential for interesting machine optimization of code for things coming from the functional paradigm, where you can mathematically show the equivalence of certain portions of code with its optimized replacement, and that this paradigm will be making a resurgence in some places during the upcoming era of 128-core processors. This might be interesting. Re:Interpreted Languages... (Score:5, Informative) Actually, Facebook uses APC [facebook.com] to compile and optimize the code in the shared memory so it doesn't have to be compiled over and over again. There are other libraries for caching PHP functions on many different levels as well, and they're open source, for the most part. Some real bright minds from Facebook and other large PHP applications have contributed to them. Bottom line: PHP is quite powerful and efficient when built and extended properly. Umm... no. (Score:4, Insightful) Does the author seriously believe that Facebook isn't running some sort of PHP compiling/caching service, like APC or something similar? It would be ridiculous for them NOT to be running something like that, which eliminates much of the advantage C++ would enjoy through being pre-compiled. While there still may be a reduction if Facebook were magically changed to precompiled C++ code, the reduction would be fairly minimal. In addition to that, you'd need to factor in the debugging and coding/compiling times, which would exceed the PHP times by an order of magnitude at least. Re:Umm... no. (Score:5, Interesting) The author is pulling numbers out of his ass and has no clue about what uses most time (waiting for database results mostly), about PHP accelerators and about caching systems like memcached. He's comparing performance of php script running on a raw PHP installation versus running a C++ version of the same script, doing calculations that almost never apply to real world scenarios. I don't see how any company would use C++ to develop their whole systems except maybe for some CGI scripts. Not even Google does it, afaik they use Perl and Python a lot. Anyway, the number of servers has no direct correlation to the programming language. Out of those thousands of systems, lots of them are read only database servers in a cluster, lots are only serving static files (thumbnails, images used in CSS files on people's pages and so on), some servers are used solely for memcached instances and content used very rarely, some are load balancers.... Basically, the author has no clue. I always found Livejournal's presentation about scaling very insightful, especially as it's a pretty big site, just like Facebook and other big time sites. The second link gives a lot of details about how they fine tune mysql and other parts of the system, which just goes to show how the apparent speed improvement of C++ versus PHP can overall be actually insignificant. [google.com] [danga.com] Re: (Score:3, Informative) the author...has no clue about what uses most time (waiting for database results mostly) Like many here, you are confusing page delivery latency with total processor overhead. If you need more than one processor for page processing, how many you need has little or nothing to do with how much latency there is elsewhere in the system. Even if PHP is running 10 times slower... (Score:2) I'm assuming the claim about 10 times is true, which I don't really think so... But they could have done something - like precompile the PHP, just like JIT of Java, to make it better or on par with compiled C program. There are PHP accelerators like Zend Accelerator for that. Re: (Score:3, Insightful) A trolling weak argument (Score:5, Insightful) What a troll. Any point or argument based on assumptions is very weak. Here there are two: "..Let's assume this to be ..." and "...assuming a conservative ratio of 10...". Don't make stuff up. -Foredecker Assuming... (Score:5, Insightful) "assuming a conservative ratio of 10 for the efficiency of C++ versus PHP code" ARRRRRGGGGHHHHHHHHHHHHH Why? On what evidence? I mean, I hate PHP as much as the next guy, but last time I wrote a web application platform in C++, I got to the end, analysed the result and went "Great, I've made the fast bit even faster. Now, about that database engine..." Re: (Score:3, Informative) Latency is a different question than efficiency. If your page generation efficiency is bad, on a small setup the difference may be imperceptible. On a large installation, i.e. one with a large number of servers dedicated to page generation, the efficiency of those servers makes a big difference. Holding latency constant, in a large installation less efficient page generation means more servers. In a small installation, not so much. Hells about to freeze over ... (Score:5, Interesting) .. because I didn't ever think I'd be defending PHP. However, it is a much better choice for a web application than C or C++ - and I say that as someone who codes C, C++ and Java for a living. There are no decent web frameworks for C++, memory management is still an issue despite the STL, and the complexity of the language means both staff costs and development time are inflated. Peer review is harder, as the language is fundamentally more difficult to master than PHP. Compared to Java, the development tools are poorer, and things like unit testing a more complicated despite the availability of things like Cppunit. There's no "standard" libraries for things like database access, and no literature that I am aware of that describes how you would go about designing a framework for C++. You'd most likely end up porting something like Spring to C++, and the even if you published your code on the web, I doubt much of a community would build up around it. If you want a less contentious argument, and one which can be backed up with hard evidence, then argue PHP that should be replaced with Java. A well written Java web application, using a lightweight framework such as Spring or PicoContainer, should outperform ad-hoc C++ code. He's not wrong.... But... (Score:3, Interesting) Seriously, years ago I started working on a c++ version of j2ee (not just servlets, the whole kit) and i mean providing similar functions not identical methods of execution obviously. It wasnt terribly hard actually. But it all falls apart really quickly cause of several reasons: 1) platform architecture - the dependence here, even between different versions of the same distribution was a pain and essentially spelt the end of my work. So I was stuck with "do i make web apps c++ soruce, or shared library binaries?" to which there is only one real answer for portability - source. 2) its a systems langauge - dear god that makes it painful for so many reasons. There are caveats to both those, but the reality is that php exists because it fulfils a need and it does it quite well. To compare the two (c++ and php) is a little ridiculous and ultimately this article just reeks of "please everyone advertise my c++ web tool kit for me!". Sure, facebook (and trillions of others) MIGHT move to c++ web tool kit, but find me a dev that knows how to code an app it, now find me 2, now find me 200 cause thats how many i'd need to write and maintain faceboot apps in c++. Even taking the OP's assumtion c++ is 10 times more efficient at what php does and that you could actually code facebook in it as actually acurate and that php vs c++ is a one-to-one relationship for things like code maintenance, your still stuck with "how many API's am i going to have to re-write and how many php api's do i use that dont even exist in c++". Its ludicrous to assume that you could drop-in replace php with witty without ending up coding tonnes of c++ code just to do things that PHP already provided. Not to mention the zillions of little extensions that revolve around php to accelerate its web-abilities (memcached for example). The number of things that can be used along side php for web-related things and the number of api's in-built to php just mean witty is never even going to be viable as an alternative. Lets also not forget there are millions of people round the globe using php for web stuff - which ultimately leads to php being a good web language (i.e. security problems being found, optimizations, etc etc). Of course, wouldn't facebook be using something like zend to compile php pages? I mean seriously, if the 25000 servers are running php and not running zend the waste here just in cost of servers would be unbelievable - shear idiocy on facebooks part (if it were true, and i'd very much doubt it) and I imagine zend would have almost given it away for free just so facebook could say "we got a x% improvement using the zend compiler". So, I wonder how many people are now learning about witty for the first time (which seems like the only real reason for the article to begin with). Better advertising than adwords! Author needs a clue about metrics (Score:5, Informative) Yes, PHP is a heck of a lot slower on proccessor-bound tasks than C++. In a pure benchmarking contest, no doubt C++ will win. But what about when both languages have to query a database (be it mysql/postgress/oracle, etc)? In this case, both are blocked on the speed of the database. a 15 ms query takes 15 ms no matter what language is asking. Facebook is not calculating pi to 10 gazillion digits, and it is not checking factors for the Great Internet Mersenne Prime Search. It is serving up pages containing tons of customized data. This is not proessor-bound... it is I/O bound both on the ins and outs of the database and the ins and outs of the http request. It is also processor bound on the page render, but the goal of this many machines is to cache to the point where page renders are eliminated. Once a page is rendered, it can be cached until the data inside of it changes. For something like facebook, I bet a page is rendered once for every ~10 times it is viewed by someone. Caching is done in ram, and large ram caches take a lot of machines. So lets look at those 30,000 machines not by their language, but by their role. We can argue the percentages to death, but lets assume 1/3rd are database, 1/3rd are cache, and 1/3rd are actually running a web server, assembling pages, or otherwise dealing with the end users directly (BTW, I think 1/3rd is way high for that.) So 1/3rd of the machines are dealing with page composition and serving pages. If they serve a page ~10 times for every render request, then abtou 1/10th of the page requests actually cause a render... the rest are being served from cache. Those page renders are I/O bound, as in the example above - waiting on the database (and other caches, like memcached), so even if they are taking a lot of wait cycles, they are not using processor power on the box. The actual page composition (which might be 20% of the processing that box is doing), would be a lot faster in C++... So 10,000 servers, the virtual equivalent of 2000 are generating pages using php, and could be replaced by 200 boxes using stuff generated in C++. So the choice of using php is adding ~1800 machines to the architecture. or ~6% of the total 30,000. Given that a php developer is probably 10x more productive than a developer in C++, is the time to market with new features worth that to them? I bet it is. Re: (Score:3, Insightful) You cache pages on your server so that instead of going to the database to fetch info, the info is already there. Until you have a good reason to believe the info has changed. Say, the user updated something or someone posted a message. Then you go back and get new data and cache it again. You also cache page components. Parts of the page that are on a different update schedule than other parts of the page may be cached separately or not at all (like ads). This is stupid (Score:3, Insightful) Companies use PHP to develop and run web app functionality because it saves them huge amounts of time and money over rolling out the same thing if you were to write it all in C++. Realize what the cost structure of a company like Facebook is - the amount they pay their engineers, marketing personnel, and so on is significantly more than their amortized server expenses and server operating expenses (including energy costs, etc.). Furthermore, the 10x speedup assumption seems ridiculous - how much time is spent on their server in compute-intensive PHP loops where huge gains would be made from switching to C++? And how much of the "code" is really database queries of various sorts? Furthermore, you can generally isolate small areas like that in your codebase and rewrite them as modules in C or C++ to be invoked from PHP land - and if they could easily cut their server expenses even in half (let alone by 90%) by having a few engineers spend a few weeks rewriting some components, don't you imagine they've probably set about doing that already? Re-casting a discussion in terms of greenhouse gas emissions or energy use doesn't change any of this - saving energy generally means saving money, unless it takes more expensive resources (such as 100s of humans, who have to spend hundreds of months re-writing code in C++, while they, their families, and dependents emit tons upon tons of greenhouse gases, use electricity, buy groceries, and so-on and so-forth). The cheapest solution certainly isn't always the most environmentally friendly solution (such as when negative externalities are involved - lower labor and pollution standards in China, for example, that make a less "green" product manufactured there less costly in the US), but a vastly more expensive solution that no company in its right mind would implement isn't necessarily greener just because it might save some electricity and a few servers once it was implemented. Time for Congress to legislate language efficiency (Score:4, Funny) This is brilliant! I think it's clear now the direction we must go. Overuse of energy-guzzling languages like PHP have put us on an unsustainable trajectory fueling out of control global warming. Congress must act to regulate the use of these energy-guzzling languages. No longer will programmers and corporations be permitted to turn out inefficient code with impunity. PHP, Perl, Ruby, Bash, your days are numbered! Just wait until we can get UN involved. Python, you and your CO2 spewing simplicity are next! Wasted Energy (Score:3, Insightful) Isn't this "study" a waste of energy? I am a C/C++ programmer by trade; I'm not fond of PHP. Yet this "C++ saves energy over PHP" argument smells like more selfish politics to me. And selfish politics is what is bringing doom down on humanity's head -- the use of PHP vs. C++ is a sideline, a distraction, and only truly valuable for people who have a philosophical axe to grind. You want to save a lot of energy? Shut down all the computers running MMOs. And stop wasting cycles looking for alien signals in cosmic radio waves. And get rid of banal YouTube videos... and... the list is endless. The science behind Global Warming is being used to further political and social agendas that have little or nothing to do with adapting our species from a potential environment change. In the end, selfish politics will kill us all. We will become a footnote in history is we do not discover enlightened self-interest. Green Languages?? (Score:3, Insightful) Ok, this has gone WAY too far .. we all need to just take a step back.. PHP vs. C++ (Score:4, Insightful). This logic is crap (Score:4, Interesting) It would take a really serious amount of in-depth analysis of the server application to even approach knowing what the efficiency impact of using a compiled language vs an interpreter would be on any specific stack. Or even stacks in general. Plus we don't even know what it really means to be "using PHP". What is PHP doing? Is it processing templates, doing just some post or pre processing with some kind of XML pipeline in the middle, how is the PHP deployed, etc? It is simply ridiculous to make any assertions and claim accuracy for them. I'm no PHP fan boy by a LONG shot, but I know from hard experience that often a higher level tool which is optimized for a particular job can get the job done quite a lot MORE efficiently than a lower level one that isn't. Coding C++,savings would be due to the 2015 launch (Score:4, Insightful) Re:php is bad for the environment (Score:5, Insightful) Seriously, is somebody taking seriously the 1 to 10 ratio of the story? I mean, maybe raw execution of pure code is going 10 times slower in PHP than C++ (ouch, I didn't know that) but even then, it's far from representing the same ratio when talking about a number of servers. You have to take into account all other parameters (disk access, network, IO, etc... Those aren't 10 times as slow in PHP one would guess). I would be astonished if this ratio is close to be the truth. Does anyone have any insight/information on this? Re:php is bad for the environment (Score:5, Informative) Re:php is bad for the environment (Score:4, Interesting) Some optimized assembler would make a difference (ducks). But network latencies, number of sustainable TCPs per session, db latency, weird table lookups (even arp drags a server down when you have 20K+ connects) are all at issue. Add in various dirty caches, file locks/unlocks and other OS machinations, and life can be tough for any app written in anything. Then there are the backup servers, the availability servers, the DNS servers, the coffee servers, it just gets bogged down. A 10:1 efficiency claim is probably just language fanboy-ing..... or a consulting job looking for a spot marked X. Certainly it's nice to be green... but using better optimization tricks (like GCD) for multi-cores is bound to help.... tickless kernels..... SSDs..... C++ wouldn't be my first pick. Re:php is bad for the environment (Score:5, Funny) "even arp drags a server down when you have 20K+ connects" Are you perhaps a server admin in my company? I swear this is the best excuse for poor performance I've ever heard. Re:php is bad for the environment (Score:5, Insightful) It probably is a valid excuse if you have 20,000 client machines connecting locally via ethernet from a B class subnet such that the arp tables on the server keep overflowing. Of course if you, as a system administrator ever let such an environment be setup you probably are really good at excuses anyway. Re: (Score:3, Interesting). Testing showed that the optimum So the bindings make a difference? (Score:3, Insightful) Why is it that a decent PHP (or Python, or Ruby) MySQL binding couldn't do the exact same thing? Re: (Score:3). Are you trying to imply that PHP establishes an entirely new connection to the database for every query? If so, you basically lose all credibility you might otherwise have. Figures off by a factor of 10 to 100 (Score:3, Informative) My own experience doing server development in c was that it's a minimum of 30:1 (and in in some cases, much greater). Plus the speed differential is huge, and also in favour of c. There's a big difference between a couple of hundred requests a second and 6,000 - 10,000. Then again, the php code had to be served through apache, while the c code was served directly by a custom server sitting on a separate socket, so there's no telling how much of the overhead was from apache. Even the absolute worst-case Re: (Score:3, Insightful) What kind of work were those 10K req/sec on your own custom server doing? Was it a standard db-backed web app, or something more specialized and computationally intensive? Not that I doubt the difference you saw - but I'm still skeptical of the 10:1 factor as applied to Facebook servers, which seem relatively standard webapp cycle (request -> datastore lookup -> html), *just from the programming language*. Admittedly, I don't do PHP, so the language could be as bad and impossible to scale as you claim. Re:Figures off by a factor of 10 to 100 (Score:4, Informative) Those were actual benchmarks run at peak load for 5-minute periods. sustained rate of over 600,000 queries in 5 minutes, or 2,000 per second (around 2,200 iirc), on absolutely craptastic hardware, against an 8 gig mysql table. Benchmark was by running ab (apache benchmark) against a custom forking server instead of apache, tested with between 100 and 400 simultaneous requests. Threads were never "reaped", always reused, so it was important that there were no memory leaks, but never having to spawn another thread after initial startup also contributed to the :-) Re: (Score:3, :-) You're forgetting all the php optimizers, script and chunks of code caching in bytecode and ram caching of scripts. These make a major difference, but are probably just used on larger websites (like on facebook) Re: (Score:3, Insightful) Yes, it is harsh, but anyone who has not programmed in c and assembler, and then spouts off nonsense about how php can't possibly be 10x slower, doesn't have the programmer mind-set. That mindset includes understanding the runtime environment - which means knowing the limitations of your tool - in this case php. That means you'll not "have" to do something in php because "when all you know is php, everything looks like it needs a script" rather than a different tool. Case in point - generating test data Re: (Score:3, Insightful) While you're churning away your super optimized C code which runs faster than god knows what and finally debugging the library to handle your super cusotmized tcp/ip replacement, I'll have already rolled out the application you wanted to do, but in some "non-programming/scripting" language like PHP, Ruby, Python, or hell... even Java. There's a purpose for every language out there and frankly, writing some form of code to have a computer perform specific tasks is called programming. So please contain your e Re: (Score:3, Insightful) Which would be very relevant if Facebook was doing heavy number-crunching. The only numbers on the site are comment and friend counts, which isn't especially taxing work (especially since it's all de-normalized). The majority of FB is database activity and transforming that into HTML and JSON. If you want to place blame for inefficiency, MySQL would probably be your best bet. Re: (Score:3, Funny) Seriously, is somebody taking seriously the 1 to 10 ratio of the story? Only 1 to10 ?!? I would have thought 1 to 100. Re:php is bad for the environment (Score:4, Informative) From my personal experience: Data-heavy applications run at a complete crawl in PHP. 10 times slower, is, in my opinion, a vast understatement. Then again, that’s not the point of PHP. The point is, that in PHP, provided you already know how to program, also get things done more than 10 times faster, than in C++. Because there is a simple function with defaults and automatisms for literally everything. Only if those defaults and automatisms are other than what you expect, you will get into big trouble. And because the PHP interpreter is truly a horrible piece of shit (I was able to run totally illegal constructs, with plain text right in the middle of the code, and it ran, doing nothing of what I expected it to do.), that happens quite a lot. It’s one reason that drove me to the extreme strictness of Haskell, where you have to get it right upfront, so it doesn’t bite you in the ass later. Re:php is bad for the environment (Score:5, Funny) Re:php is bad for the environment (Score:5, Insightful) "development" also has one. Not to mention clients. 20K servers is nothing compared to the millions of clients drawing higher power due to running looping flash commercials. Re: (Score:3, Interesting) You mean kind of like Road Send [roadsend.com] Re: (Score:3, Informative) Re:Sounds like cheap C-- drugs ! (Score:4, Insightful) while true it ignores things like your comparing a simple search box, with millions of users who post multi megabyte files to their personal space for everyone to see. try it some day save a facebook user's page locally and see just how much data is coming down that pipe, on top of the scripts that are running. Your comparing googles front door with facebooks entire company. Google probably has that many servers running web crawlers, and twice over again to store that massive database they use. Re:people use PHP? (Score:4, Informative) Re: (Score:3, Insightful) It was built from day one to integrate with Apache, it's not a nasty bolt-on hack like mod_perl. It's in-process so there's no startup overhead like with CGI So mod_php is not a nasty bolt-on hack? Re:people use PHP? (Score:5, Insightful) mod_php has never integrated into Apache nearly as deep as mod_perl did. That is, lower level Apache APIs are not exposed to PHP. Using mod_php is an acceptable replacement for CGIs, but mod_perl does a lot more than that. That means taking over the entire server life cycle handlers to the point where, in Apache2, you can implement (say) a Gopher server if you want. mod_perl is not a hack. PHP, as a language and an API, very much is. Re: (Score:3, Funny) I came here for an argument! Re:people use PHP? (Score:5, Insightful) Your post is really annoying. Did you mean to be so obnoxious? And +5, Insightful. Come on, php isn't popular with slashdotters but whatever one calls reverse fanboyism it isn't cool either. No, features that make web development "dead simple" are those that actually do something to make web development simpler... Absolutely. And PHP does it. That's why it's so popular. There may be even more that can be done but if no popular language is doing it already that argument is kind of pointless. You contradict yourself. No he doesn't. You might not like scripting / dynamic languages but taking the best (or a good stab at taking the best) of scripting, C and perl can actually make some things more straight-forward. Need a regular expression? Used to function calls rather can syntactical regex? Need perl regex? preg_match. Patently false. PHP has no dependency on Apache now, it originally used CGI, and continues to support CGI, FastCGI, and operation as a module in web servers other than Apache (such as IIS). The CGI startup overhead problem has many solutions, such as FastCGI, AJP, proxying, etc. Patently missing the point. PHP and Apache go together so well it created the LAMP mindshare space. But "not in-process" does not imply the use of CGI, and it does not imply the use of any system with long loading times. Furthermore, "in-process" is potentially insecure and can be less reliable - as all code runs in the same process. Who cares? His point is startup cost which is generally higher for forks vs modules and you're just plain going to get more scalability compared to the traditional perl cgi forking method. Hence mod_perl. Give me a break. You can dislike anything you want but why do you even bother when you don't have all the facts. +5, Insightful. Dear me... Re:people use PHP? (Score:5, Insightful) I use it because I can code up relatively fast, relatively secure dynamic websites in a very short amount of time. I can install it on a webserver in seconds and it integrates beautifully with Apache and MySQL. Maybe there is a better solution out there, but PHP has always done what I need it to and I've never had a problem with it. It's never given me a reason to look elsewhere. What I don't understand is all of the PHP-haters out there. Really, who cares if it is "the script kiddie's substitute for cgi-perl"? Isn't the proper measure of a tool if it does what you need it to and not who else uses it? Re:people use PHP? (Score:5, Insightful) Re:people use PHP? (Score:4, Interesting) Actually, both parent and GP are right. PHP is wonderful for web development, but has more than a few annoying quirks with regard to consistency. On the flipside, it has hands-down some of the best documentation on the planet, which makes the quirks tolerable, and is a big part of the reason why the language is so popular (especially with new programmers) I'm seriously hoping that a new PHP release finally clears up all of the inconsistencies in the main namespace once and for all. It'll be painful at first, but a very-good-thing in the long term. Updating old scripts could even be a semi-automated process, given that the necessary changes are extremely superficial. Re: (Score:3, Informative) I remember when it was the script kiddie's substitute for cgi-perl. What does it offer from a theoretical and engineering PoV, apart from a Visual Basic learning curve? Market penetration. From managerial perspective, you can hire PHP developers a dime a dozen, and replace them very quickly if needed. From developer perspective, you can grab any of those "PHP in 10 nanoseconds for complete idiots" books, an Apache+PHP+MySQL bundle installer for Windows, and learn it in a few days to the level sufficient to be hired. Of course, the typical quality of a PHP solution is what you'd expect from such approach, but when did it ever stop anyone? If you mean technological advantages, Re: (Score:3, Insightful) And everything exuding heat is perfectly natural, no problems there. The deaths and environmental changes from heat exchange in rivers near power plants don't happen, nope, uh uh. Water's perfectly natural you need it to live, no way to drown in it, nope, uh uh. Re:F1 car in normal street. (Score:5, Funny) You can go to work in a F1 car, or your normal car. I wish. My F1 always gets stuck in the gutter at the end of the driveway. But the same argument could apply (Score:4, Insightful) Yes. I know the difference. C is an elegant if simple language, which is hard to program properly. C++ is an abomination that attempted to take the elegant, simple nature of C by bolting on spare body parts from dead object-oriented corpses, resulting in a language that is neither simple nor elegant, which is even harder to program properly. See, I know the difference. But if the point is to gain efficiency, why would you stop at C++? It's not a magical perfect balance of performance with elegance. C would give better performance than C++. Sure, there's the non-OO tradeoff (though you could quite easily gain the benefits of OO, though not as elegantly as C++), and then you don't have to deal with fucking templates (which are really nice to program, but a bitch to clean up when someone else has fucked them up for you). The premise of the article is stupid, and shows a pure lack of understanding of PHP, web service architecture and implementation, and a not-inconsiderable dose of C++ fanboi-ism. Re:so where is the example of a company doing this (Score:4, Interesting) I have done projects like this, and received massive speedups and performance increases. The issue is that you need to understand the real reasons why rewriting a program in C and/or assembly gives a massive performance increase. Inevitably, the reason why the C program is so much faster, is that a programmer has went through and rethought the application. The programmer eliminated string copies, string manipulations, data communication overheads, and data manipulation/translation overheads by rethinking the programs design. For example, imagine a very simple application designed to take a digital input, and display a red/green indicator to a user depending on the input state. Count every time a major string overhead, data communication overhead, or data translation overhead occurs in each of the proposed solutions. Web Solution 1. Input digital input via PLC (Data Overhead #1) 2. Upload data from input via PLC communications protocol to PC (Data Overhead #2) 3. Make data available to other programs, for example RSSQL makes real-time I/O appear as SQL database queries (Data Overhead #3) 4. Use PHP or ASP to generate a web page based on a SQL query for the real-time input (Data Overhead #4) 5. Use a web browser to query the relevant web page. (Data Overhead #5) Web Solution performance: it might be able to update the display screen every 1/5 second. Embedded C Solution 1. Input a data point using real-time I/O 2. Paint a computers display screen accordingly. (Data Overhead #1) C Solution Performance: 1/60 second, limited by the refresh rate of the monitor. Assembly / Microcontroller Solution 1. Input the data point, with INP , AX 2. Output the data point to a Red/Green LED, with OUT AX, Note: the assembly implementation doesn't have any string manipulation, so it doesn't have any significant data overhead. Assembly Execution Time: Less than 1 micro-second. The crucial concept from the above example is that the programmer reduced overhead and execution time, by simplifying program operation. The problem was solved in 3 different ways, and the fastest solution wiped out all the communication/string/data management overhead. If you want to make a computer program very fast, it is necessary to reduce data communication, string manipulation, and complex data structure overhead. Which languages do this and why: .NET encourage carefree string use and data structure use. The have automatic garbage collection. As such, minimal penalties exist for the programmer to use strings. Level 1 - Simplest: Assembly is the best at wiping out string overhead, because engineers willingly migrate complex functionality to hardware before implementing it in assembly. In this case, the display screen was eliminated in favour of a direct output to an LED. Level 2 - Low-Level: C is remarkably quick at string manipulation programs, because programmers minimize the amount of string manipulation. String manipulation in C sucks, and is difficult to get correct. As such, programmers attempt to minimize it, or use optimized tools like lex/flex or yacc/bison that automate the difficult problems. Level 3 - Garbage Collected: Java and Level 4 - Scripted: PHP, Perl, Python are higher level languages focused on easy programming for high-level tasks. They pretty much assume the programmer doesn't care about the overhead of processing strings or complex data structures. Instead, they make it easy for the programmer to program the complex data structures. An application like FaceBook has to have some complex data structures to do its job. In that case, a migration from PHP to C will likely not produce great benefits, because the C program still has to do all the same work the PHP program does. The old rule was that interpreters were very slow. With modern techniques, just about any language can be sufficiently compiled to Re: (Score:3, Funny) Alright wise guy. Explain twitter.
http://developers.slashdot.org/story/09/12/20/1433257/the-environmental-impact-of-php-compared-to-c-on-facebook
CC-MAIN-2015-11
refinedweb
9,084
60.95
We have several file servers and around 400 users divided between them. Lately we've been having reports of disconnects or people losing their mapped drive to their file server, documents taking a long time to open or save. By the time they report it and I remote to the server, all is fine, and they agree that it's running fine again - the whole thing is very intermittent. One thing I have noticed is that when problems are being reported on one of the servers, there are usually 70 - 80 user sessions open. When no one is reporting problems I will look and see that there are only maybe 40 sessions. The other errors I see (not in the error logs, but when I log on) is a failure to load my profile because of "insufficient resources'. During the lower user session times, I do not get this error and my profile loads fine. The machines have 4 GB, plenty of storage space available. The one with the most problems is physical, but one that is virtual has similar problems. Another physical one built around the same time has no problems. Could we have two many user sessions going on at the same time for these servers to handle? --Sandy 4 Replies Jan 27, 2012 at 1:17 UTC I would start looking for clues in the App and System logs. I don't think it's the OS itself, have had much higher user counts than that with no problem. Would look at hardware as well as Anti-Virus. Probably start with AV as when I've had problems in the past, it's either been AV on the server, or users configured to scan their network drives killing the poor unsuspecting server's throughput. Jan 27, 2012 at 2:09 UTC Take a look at this to see if it helps. http:/ http:/ Feb 16, 2012 at 2:16 UTC Echoing GoofyGeek's post, 4GB seems like not even close to enough memory for a file server hosting for that many concurrent sessions. I'd take a look at his links and then see if that is the bottleneck for you. Feb 16, 2012 at 2:28 UTC You could use DFS namespace for the shares with replicated folders so the user demands balance between the replicated folders!
https://community.spiceworks.com/topic/194228-how-many-user-sessions-can-server-2003-standard-support
CC-MAIN-2016-50
refinedweb
391
76.76
ZK Using Grinder 3.0 Gerard Cristofol, Project Manager, Indra Sistemas S.A. July 25, 2007 ZK Framework 2.4.1 - Grinder 3.0 beta Abstract This document presents a way to use Grinder Framework to load test ZK Applications. Also proposes an IdGenerator (New Features of ZK 2.4.1 ) implementation that keeps Grinder scripts simple The Issue Before 2.4.1 was released, the dynamic id assignment gave automated test tools a hard time. Mostly cause stress tools record navigation and events and don’t expect component ids to change after that. The Solution We’ll solve the problem (thanks to 2.4.1 new feature) developing an IdGenerator, witch generates suitable ids for the test tool. Such ids are easy to handle within Grinder and stay the same between sessions. That being solved, we will make desktop ids travel in the response so Grinder Worker threads can avail it. The ZK Side Build a ZUL Page First we’ll need a ZK WebApp where the extra components are to be added. I’ve used NetBeans (First ZK application with NetBeans ) to build an application that’s not particularly more complex than a helloworld. In this particular case: <?xml version="1.0" encoding="UTF-8"?> <?page title="ZK::Hello Grinder!"?> <window title="My First Testeable Window" border="normal" width="200px"> Hello, (Grinder) World! <textbox id="caption" value="x"/> <button label="add x"> <attribute name="onClick">{ caption.value = caption.value + "x"; System.out.println("I've been pressed! from " +caption.desktop.id); }</attribute> </button> <textbox id="thecaption2"/> <intbox id="theinteger"/> </window> The only remarkable chunk of code is the onClick event for the button. That’s the event we’ll try to invoke from Grinder. The other components are there just for you to see how component ids are built IdGenerator Next we will need a implementation of IdGenerator, here is my suggestion public class GrinderIdGenerator extends DHtmlLayoutServlet implements org.zkoss.zk.ui.sys.IdGenerator { private static AtomicInteger _desktop = new AtomicInteger(); private static ThreadLocal _page = new ThreadLocal(); private static ThreadLocal _response = new ThreadLocal(); @SuppressWarnings("unchecked") protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { _response.set(response);//soon to be used... super.doGet(request, response); } @SuppressWarnings("unchecked") public String nextComponentUuid(Desktop desktop, Component component) { //NOTE: Limited to one page per desktop Page page = (Page) desktop.getPages().iterator().next(); HttpServletRequest hsr = (HttpServletRequest) page.getDesktop().getExecution().getNativeRequest(); String pageURI = hsr.getRequestURI(); AtomicInteger ai = (AtomicInteger)_page.get(); String compid = pageURI +"_"+ component.getClass().getName() +"_"+ ai.getAndIncrement(); System.out.println("component id: " + compid); return compid; } @SuppressWarnings("unchecked") public String nextPageUuid(Page page) { HttpServletRequest hsr = (HttpServletRequest) page.getDesktop().getExecution().getNativeRequest(); _page.set(new AtomicInteger());//not really needed since is threadlocal return hsr.getRequestURI(); } @SuppressWarnings("unchecked") public String nextDesktopId(Desktop desktop) { String dtid = "Desktop_"+_desktop.getAndIncrement(); System.out.println("desktop id: " + dtid); HttpServletResponse response = (HttpServletResponse)_response.get(); response.addHeader("Desktop", dtid);// ...and here it is! return dtid; } } Couple of things I want to mention here: - This is a simple implementation that uses the native request uri to name the page after. This implementation only considers components as being held in the first page, the generated ids will look something like /WebApplication1/index.zul_org.zkoss.zul.Button_3 not specially pretty but easily identifiable. - Notice that I extend DHtmlLayoutServlet in this very class. That’s because I keep the response in a ThreadLocal variable, and use it later in nextDesktopId as soon as the DesktopId is generated. A cleaner implementation this might be achieved if process method on DHtmlLayoutServlet was public. Configuration files ZK has to be aware of our plans. Two files have to be changed: web.xml <servlet> <description>ZK loader for ZUML pages</description> <servlet-name>zkLoader</servlet-name> <servlet-class>es.indra.demo.GrinderIdGenerator</servlet-class> <init-param> <param-name>update-uri</param-name> <param-value>/zkau</param-value> </init-param> </servlet> And zk.xml <system-config> <id-generator-class>es.indra.demo.GrinderIdGenerator</id-generator-class> </system-config> That’s it. No more work on the ZK side The Grinder Side Installation - Download Grinder 3 beta - Create the grinder.properties file. this file controls the console and agent processes. A simple configuration will do, set grinder.processes, grinder.threads and grinder.runs equal to 1. - More detailed info availavle at Record a Test Grinder records the browser activity using TCPProxy. TCPProxy is a proxy process that you place between your browser and a server. Here is detailed information and even a picture The browser has to be configured to use TCPProxy (normally on port 8001). I’m pretty sure (would bet on it) you are going to have to repeat this configuration more than once, so I suggest you to use SwitchProxyTool or equivalent extension before it drives you insane. Once the proxy is running and browser setup, recording a test can’t be more simple, just bring the application and click our only button like there’s no tomorrow. Big time fun! Fix the grinder script The generated script is almost ready, just a couple of tweaks have to be made. Edit grinder.py locate the procedure where GET is done and add python code to retrieve the Desktop header: # A method for each recorded page. def page1(self): """GET index.zul (request 101).""" result = request101.GET('/WebApplication1/index.zul') self.token_desktop = result.getHeader("Desktop") return result Now replace every occurrence of the string Desktop_n for the freshly created field. You should have one for each button click you made. def page5(self): """POST zkau (request 501).""" result = request501.POST('/WebApplication1/zkau', ( NVPair('dtid', self.token_desktop), NVPair('cmd.0', 'onClick'), NVPair('uuid.0', '/WebApplication1/index.zul_org.zkoss.zul.Button_3'), NVPair('data.0', '26'), NVPair('data.0', '4'), NVPair('data.0', ''), ), ( NVPair('Content-Type', 'application/x-www-form-urlencoded'), )) return result The script is now ready to go. Bring it on! Time to start the Grinder, like always detailed info is easy to reach on the Web - Start the Console - Go to script tab and set grinder.py as the script to run . - Start an Agent (you could start many of course) - Go to the Console again and click “Send files to worker processes” - Press play to start the worker process. Now, in the Graphs tab, there’s different colours stuff turning on and off which is always a good sign. Furthermore, in case you don’t trust colors, check the Tomcat log to see what’s happening: Grinder is doing the browsing. You can add more agents/threads and se how multiple threads interact with your app. Future Many improvements could be made to the generator to handle multiple desktops and use a persistent id generator that keep assigning the same id to a component no matter what changes happen in the zul file. I didn’t go deeper in that because: 1) components are not complete by the time the id is requested for them and 2) I’m not sure adding components to a zul page doesn’t imply new functionalities with mean record a new Grinder script anyway. I shall be delighted to hear your suggestions on zk forums. Gerard Cristofol is a Project Manager at Indra Sistemas, Spain. He has got a degree in Computer Science, a MPM, RSA Certification, and extensive experience in Java related technologies: web applications, enterprise integration. He is now focusing on ESB service infrastructures.
https://www.zkoss.org/wiki/Small_Talks/2007/July/ZK_Using_Grinder_3.0
CC-MAIN-2022-27
refinedweb
1,224
50.63
ticket summary component version priority severity milestone owner status created _changetime _description _reporter 8025 """During interactive linking, GHCi couldn't find the following symbol"" typechecker error with TemplateHaskell involved" Compiler (Type checker) 7.6.3 normal infoneeded 2013-06-30T19:43:53Z 2014-11-22T03:40:07Z }}}" mojojojo 8916 """error: thread-local storage not supported for this target"" when building cross-compiling GHC on OSX" Compiler 7.6.3 normal new 2014-03-21T19:02:33Z 2014-03-21T19:02:33Z "Target is `x86_64-pc-linux`. I'm using the toolchain from, and configured `--with-target=x86_64-pc-linux`. {{{ ===--- building phase 0 /Applications/Xcode.app/Contents/Developer/usr/bin/make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds make[1]: Nothing to be done for `phase_0_builds'. ===--- building phase 1 /Applications/Xcode.app/Contents/Developer/usr/bin/make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds utils/unlit/ghc.mk:18: utils/unlit/dist/build/.depend.c_asm: No such file or directory utils/hp2ps/ghc.mk:24: utils/hp2ps/dist/build/.depend.c_asm: No such file or directory includes/ghc.mk:146: includes/dist-derivedconstants/build/.depend.c_asm: No such file or directory includes/ghc.mk:184: includes/dist-ghcconstants/build/.depend.c_asm: No such file or directory utils/genapply/ghc.mk:26: utils/genapply/dist/build/.depend.haskell: No such file or directory utils/genapply/ghc.mk/binary/ghc.mk:3: libraries/binary/dist-boot/build/.depend-v.haskell: No such file or directory libraries/binary/ghc.mk:3: libraries/binary compiler/ghc.mk:450: compiler/stage1/build/.depend-v.haskell: No such file or directory HC [stage 0] includes/dist-ghcconstants/build/mkDerivedConstants.o In file included from rts/Capability.h:25, from includes/mkDerivedConstants.c:27:0: rts/Task.h:238:0: error: thread-local storage not supported for this target make[1]: *** [includes/dist-ghcconstants/build/mkDerivedConstants.o] Error 1 make: *** [all] Error 2 }}}" joelteon 8573 """evacuate: strange closure type 0"" when creating large array" Runtime System 7.6.3 normal rwbarton new 2013-11-29T14:43:36Z 2014-11-18T04:17:51Z "Consider the following code: {{{ module Main where import Data.Array xs :: [Int] xs = [0 .. 64988] crash :: Int -> IO () crash m = array (0, m) [ (x, x) | x <- xs ] `seq` return () strangeClosureType = do print (sum xs) crash (maxBound - 1) segFault1 = crash (maxBound - 1) segFault2 = do print (sum xs) crash (maxBound - 2) }}} If I compile the program using `ghc --make Main.hs -O -main-is strangeClosureType`, then I get the following error message: {{{ Main: internal error: evacuate: strange closure type 0 (GHC version 7.6.3 for i386_unknown_linux) Please report this as a GHC bug: Aborted (core dumped) }}} If I don't use `-O`, or if I let `segFault1` or `segFault2` be `main`, then I get the following error message instead: {{{ Segmentation fault (core dumped) }}} If the number `30000` is replaced by some other number, then the strange closure error may be replaced by a segfault, or even no error at all. Perhaps this is another instance of bug #7762; I have only tested using GHC 7.6.3. I am using base 4.6.0.1 and array 0.4.0.1." nad 9775 """Failed to remove"" errors during Windows build from hsc2hs" Build System 7.8.3 low new 2014-11-05T22:45:29Z 2014-11-06T13:06:11Z "The Windows build over here spews a bunch of errors during the build like the ones below. All of them seem to be related to hsc2hs. The errors do not break the build for some reason, so the problem is not critical, but we should look into what's going on here. Note that the errors are indeterministic and the set of files affected varies between builds. Examples: {{{ ghctrace.2/3.log:Failed to remove file libraries/base/dist-install/build/System/CPUTime_hsc_make.exe; error= DeleteFile ""libraries/base/dist-install/build/System/CPUTime_hsc_make.exe"": permission denied (Access is denied.) ghctrace.2/4.log:Failed to remove file libraries/hpc/dist-boot/build/Trace/Hpc/Reflect_hsc_make.exe; error= DeleteFile ""libraries/hpc/dist-boot/build/Trace/Hpc/Reflect_hsc_make.exe"": permission denied (Access is denied.) ghctrace.2/4.log:Failed to remove file compiler/stage2/build/Fingerprint_hsc_make.exe; error= DeleteFile ""compiler/stage2/build/Fingerprint_hsc_make.exe"": permission denied (Access is denied.) ghctrace.3/1.log:Failed to remove file libraries/hpc/dist-boot/build/Trace/Hpc/Reflect_hsc_make.exe; error= DeleteFile ""libraries/hpc/dist-boot/build/Trace/Hpc/Reflect_hsc_make.exe"": permission denied (Access is denied.) ghctrace.3/4.log:Failed to remove file libraries/old-time/dist-install/build/System/Time_hsc_make.exe; error= DeleteFile ""libraries/old-time/dist-install/build/System/Time_hsc_make.exe"": permission denied (Access is denied.) ghctrace.3/5.log:Failed to remove file libraries/Win32/dist-install/build/Graphics/Win32/GDI/Pen_hsc_make.exe; error= DeleteFile ""libraries/Win32/dist-install/build/Graphics/Win32/GDI/Pen_hsc_make.exe"": permission denied (Access is denied.) }}}" gintas 9253 """internal error: evacuate(static): strange closure type 0"" when running in QEMU" Runtime System 7.8.2 normal simonmar new 2014-06-30T15:59:50Z 2014-07-01T17:31:40Z "We're trying to use NixOS to automatically run our acceptance tests in a VM on commit, in order to guarantee system stability. However, we're periodically seeing our web server component, compiled with GHC 7.8.2 is failing to start up. When it fails, the error message is always: {{{ internal error: evacuate(static): strange closure type 0 (GHC version 7.8.2 for x86_64_unknown_linux) Please report this as a GHC bug: }}} Happy to do more debugging on this, but I'll need to be pointed in the right direction. " ocharles 7836 """Invalid object in processHeapClosureForDead"" when profiling with -hb" Profiling 7.4.2 normal 7.12.1 infoneeded 2013-04-14T13:26:40Z 2014-12-23T13:34:10Z "Running the attached program, compiled with ""-threaded -Wall -auto-all -caf-all -fforce-recomp -fprof-auto-top -fprof-auto-calls"" - with the following flags: +RTS -hc -hbdrag,void -RTS The output is as follows: {{{ leaks: internal error: Invalid object in processHeapClosureForDead(): 60 (GHC version 7.4.2 for i386_apple_darwin) Please report this as a GHC bug: Abort trap: 6 }}} " hyperthunk 1851 """make install-strip"" should work" Build System 7.10.1-rc1 normal normal 7.12.1 new 2007-11-08T01:40:57Z 2015-01-26T15:34:12Z "With the bindists (not sure about a normal build tree) install-strip doesn't work: {{{ $ make install-strip make: *** No rule to make target `install-strip'. Stop. $ }}} It is defined in mk/install.mk, so it presumably is meant to. The blurb after running configure should mention it, too. The target is described in the GNU coding standards: " igloo 10244 """memory barriers unimplemented on this architecture"" on ARM pre-ARMv7" Runtime System 7.10.1 normal simonmar new 2015-04-05T16:31:03Z 2015-04-05T16:31:03Z "Phab:D33 broke the build on ARM pre-ARMv7, because there is no definition of `store_load_barrier` or `load_load_barrier` for those platforms. (Granted, the old fall-back behavior of doing nothing was almost certainly incorrect.) I don't know whether we need CPU-level barriers here, or whether they are available. At a minimum, we need a compiler-level barrier. If we need a CPU-level barrier and it isn't provided by the instruction set, then I guess we should disable SMP for these platforms in `mk/config.mk.in`." rwbarton 9677 """operation not permitted"" when running tests on Windows" Compiler 7.9 normal new 2014-10-11T19:20:27Z 2014-10-11T19:20:27Z "When I run ghc tests on windows using ""make test"", I see instances of the following: =====> T5442d(normal) 86 of 4093 [0, 1, 0] cd ./cabal && $MAKE -s --no-print-directory T5442d T5442d.run.stdout 2>T5442d.run.stderr [Errno 1] Operation not permitted: './cabal/package.conf.T5442d.global' [Errno 90] Directory not empty: './cabal/package.conf.T5442d.global' [Errno 1] Operation not permitted: './cabal/package.conf.T5442d.user' [Errno 90] Directory not empty: './cabal/package.conf.T5442d.user' [Errno 1] Operation not permitted: './cabal/package.conf.T5442d.extra' Not sure if these are legitimately harmful, but something is going wrong. I am running ghc head on latest msys2. I also tried running the process as administrator to rule out permission issues due to handling symlinks." gintas 9210 """overlapping instances"" through FunctionalDependencies" Compiler (Type checker) 7.8.2 normal new 2014-06-16T12:52:56Z 2014-06-19T07:56:20Z "This program prints `(""1"",2)`, but if you reverse the order of the two instances, it prints `(1,""2"")`. {{{ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE FunctionalDependencies #-} -- extracted from import Control.Applicative import Data.Functor.Identity modify :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> (s -> t) modify l f s = runIdentity (l (Identity . f) s) class Foo s t a b | a b s -> t where foo :: Applicative f => (a -> f b) -> s -> f t instance Foo (x, a) (y, a) x y where foo f (a,b) = (\fa -> (fa,b)) <$> f a instance Foo (a, x) (a, y) x y where foo f (a,b) = (\fb -> (a,fb)) <$> f b main = print $ modify foo (show :: Int -> String) (1 :: Int, 2 :: Int) }}} Note that the two instances involved `Foo (Int, Int) (String, Int) Int String` and `Foo (Int, Int) (Int, String) Int String` are not actually overlapping. But, they have the same `a`, `b`, and `s` fields and it seems that this makes GHC think that either one is equally valid, thanks to the fundep." rwbarton 9070 """Simplifier ticks exhausted""" Compiler 7.6.3 normal new 2014-05-03T14:29:04Z 2014-05-05T13:55:04Z "I know nothing more than what GHC said (I'm on debian, installed from the repositories). {{{ (GHC version 7.6.3 for x86_64-unknown-linux): Simplifier ticks exhausted When trying UnfoldingDone base:GHC.Base.$fMonadIO_$c>>={v rJ2} [gid] To increase the limit, use -fsimpl-tick-factor=N (default 100) If you need to do this, let GHC HQ know, and what factor you needed To see detailed counts use -ddump-simpl-stats Total ticks: 31764 }}} The code triggering the error is the code commented out in lines 52-54: {{{ 42 redis_family_raw <- getEnv redisFamilyVar 43 redis_addr <- getEnv redisAddressVar 44 redis_port <- getEnv redisPortVar 45 46 (family,redis_family) <- case (parseFamily family_raw,parseFamily redis_family_raw) of 47 (Just fam, Just rfam) -> return (fam,rfam) 48 x -> fail $ ""Unknown address family: "" ++ show x 49 50 sockaddr <- getSockAddr family broker_address broker_port_raw 51 52 --redis_conn <- case redis_family of 53 -- WAFamilyInet -> R.connect $ defaultConnectInfo { connectHost = redis_addr, connectPort = Service redis_port } 54 -- WAFamilyUnix -> R.connect $ defaultConnectInfo { connectPort = UnixSocket redis_addr } }}} However, this bug occurs only when compiling with cabal. " dermesser 9348 """Symbol not found"" when using a shared library" Runtime System 7.8.3 normal simonmar new 2014-07-22T20:20:01Z 2014-07-24T01:14:07Z :" alex.davis 314 #line pragmas not respected inside nested comments Compiler (Parser) 6.4 high normal 7.12.1 new 2005-02-28T18:09:41Z 2015-01-19T14:54:16Z "{{{ If one tries to compile a .hs file, with the -cpp option and the file contains C-style comments (/* comment */), then the linenumbers GHC reports are wrong. Minimal file exhibiting the error: --- {- /* GHC has implemented ""The Applicative Monad Proposal"", meaning the Applicative typeclass is now a superclass of Monad. This is a breaking change and your programs will need to be updated. Please see the GHC 7.10 Migration Guide on the GHC wiki. }}} with an empty `href`. This ticket is so we don't lose track of this bug for 7.10.2." thoughtpolice 10261 Don't build runghc if we don't have GhcWithInterpreter Build System 7.10.1 low new 2015-04-07T20:14:53Z 2015-04-08T18:53:04Z On configurations with GhcWithInterpreter=NO, we should not build runghc (since it will never work), and especially not install runghc/runhaskell scripts/symlinks when the system might already have another working runhaskell (e.g., Hugs). rwbarton 8887 Double double assignment in optimized Cmm on SPARC Compiler (CodeGen) 7.9 normal new 2014-03-13T15:58:44Z 2014-11-07T12:43:36Z "Hello, while reading ffi003 asm/opt-cmm for fixing this on SPARC I've noticed this code, this is optimized Cmm dump: {{{ 112 c1or: 113 _s1nw::F64 = F64[_s1nv::P32 + 3]; 114 _c1oj::I32 = sin; 115 _c1ok::F64 = _s1nw::F64; }}} this assignment to _s1nw::F64 looks useless as we may assign directly to _c1ok::F64, may we not? Both optimized and non-optimized Cmms attached." kgardas 3081 Double output after Ctrl+C on Windows Runtime System 6.10.1 normal normal ⊥ new 2009-03-09T09:52:21Z 2011-03-16T11:00:33Z "{{{ C:\Temp>type Test.hs import Control.Exception import System.Cmd main = system ""sleep 1m"" `finally` putStrLn ""goodbye"" C:\Temp>ghc --version The Glorious Glasgow Haskell Compilation System, version 6.10.1 C:\Temp>ghc --make Test.hs [1 of 1] Compiling Main ( Test.hs, Test.o ) Linking Test.exe ... C:\Temp>test.exe ^Cgoodbye goodbye C:\Temp> }}} The {{{^C}}} is the consoles way of saying that Ctrl+C was pressed - i.e. I ran the program and hit Ctrl+C while the sleep was still ongoing. I can replicate this issue from the DOS prompt and from the Cygwin prompt. It does not occur from GHCi." NeilMitchell 3903 "DPH bad sliceP causes RTS panic ""allocGroup: requested zero blocks""" Data Parallel Haskell 6.13 low 7.12.1 benl new 2010-03-01T08:47:36Z 2014-12-23T13:33:45Z "{{{ $ ghci -XPArr ... Prelude> :m GHC.PArr Prelude GHC.PArr> sliceP 10 10 [::] ] go_r2RF = \ (ds_a1YK :: [Int]) -> case ds_a1YK of _ [Occ=Dead] { [] -> GHC.Types.True `cast` (Sym Data.Monoid.NTCo:All[0] :: Bool ~R# Data.Monoid.All); : y_a1YP ys_a1YQ -> case y_a1YP of _ [Occ=Dead] { GHC.Types.I# x_a1Tk -> case x_a1Tk of _ [Occ=Dead] { __DEFAULT -> go_r2RF ys_a1YQ; 0 -> GHC.Types.False `cast` (Sym Data.Monoid.NTCo:All[0] :: Bool ~R# Data.Monoid.All) } } } end Rec } lvl4_r2RG :: Int -> Data.Monoid.All [GblId, Arity=1, Str=DmdType] lvl4_r2RG = \ (x_aqY [OS=ProbOneShot] :: Int) -> case x_aqY of _ [Occ=Dead] { GHC.Types.I# y_a1Uc -> case GHC.Prim.tagToEnum# @ Bool (GHC.Prim.<=# 4 y_a1Uc) of _ [Occ=Dead] { False -> GHC.Types.True `cast` (Sym Data.Monoid.NTCo:All[0] :: Bool ~R# Data.Monoid.All); True -> $sgo1_r2RE (GHC.Prim.remInt# y_a1Uc 2) (letrec { go1_a1S5 [Occ=LoopBreaker] :: [Int] -> [Int] [LclId, Arity=1, Str=DmdType ] go1_a1S5 = \ (ds_a1S6 :: [Int]) -> case ds_a1S6 of _ [Occ=Dead] { [] -> GHC.Types.[] @ Int; : y1_X1T4 ys_X1T6 -> case y1_X1T4 of _ [Occ=Dead] { GHC.Types.I# x1_X1VM -> case GHC.Prim.tagToEnum# @ Bool (GHC.Prim.<=# (GHC.Prim.*# x1_X1VM x1_X1VM) y_a1Uc) of _ [Occ=Dead] { False -> GHC.Types.[] @ Int; True -> GHC.Types.: @ Int (case x1_X1VM of wild5_a1TE { __DEFAULT -> case GHC.Prim.remInt# y_a1Uc wild5_a1TE of wild6_a1TJ { __DEFAULT -> GHC.Types.I# wild6_a1TJ }; (-1) -> GHC.Real.$fIntegralInt1; 0 -> GHC.Real.divZeroError @ Int }) (go1_a1S5 ys_X1T6) } } }; } in go1_a1S5 Main.main3) } } }}} foldr, however, fuse just fine: {{{ primes = 2:3:filter isPrime [5,7..] :: [Int] isPrime x = foldr (&&) True . map (/= 0) . map (x `rem`) . takeWhile ((<= x) . (^2)) $ primes main = print . length . takeWhile (<= 2^24) $ primes }}} {{{ 365,770,752 bytes allocated in the heap 48,197,488 bytes copied during GC 13,031,232 bytes maximum residency (7 sample(s)) 1,570,524 bytes maximum slop 28 MB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 694 colls, 0 par 0.016s 0.029s 0.0000s 0.0005s Gen 1 7 colls, 0 par 0.031s 0.032s 0.0046s 0.0146s INIT time 0.000s ( 0.000s elapsed) MUT time 3.438s ( 3.439s elapsed) GC time 0.047s ( 0.062s elapsed) EXIT time 0.000s ( 0.003s elapsed) Total time 3.484s ( 3.504s elapsed) %GC time 1.3% (1.8% elapsed) Alloc rate 106,406,036 bytes per MUT second Productivity 98.7% of total user, 98.1% of total elapsed }}} {{{ lvl4_r2qr :: Int -> Bool [GblId, Arity=1, Str=DmdType] lvl4_r2qr = \ (x_aqW [OS=ProbOneShot] :: Int) -> case x_aqW of _ [Occ=Dead] { GHC.Types.I# y_a1tq -> case GHC.Prim.tagToEnum# @ Bool (GHC.Prim.<=# 4 y_a1tq) of _ [Occ=Dead] { False -> GHC.Types.True; True -> case GHC.Prim.remInt# y_a1tq 2 of _ [Occ=Dead] { __DEFAULT -> letrec { go_a1ud [Occ=LoopBreaker] :: [Int] -> Bool [LclId, Arity=1, Str=DmdType ] go_a1ud = \ (ds_a1ue :: [Int]) -> case ds_a1ue of _ [Occ=Dead] { [] -> GHC.Types.True; : y1_X1vf ys_X1vh -> case y1_X1vf of _ [Occ=Dead] { GHC.Types.I# x1_X1x9 -> case GHC.Prim.tagToEnum# @ Bool (GHC.Prim.<=# (GHC.Prim.*# x1_X1x9 x1_X1x9) y_a1tq) of _ [Occ=Dead] { False -> GHC.Types.True; True -> case x1_X1x9 of wild6_X1x3 { __DEFAULT -> case GHC.Prim.remInt# y_a1tq wild6_X1x3 of _ [Occ=Dead] { __DEFAULT -> go_a1ud ys_X1vh; 0 -> GHC.Types.False }; (-1) -> GHC.Types.False; 0 -> case GHC.Real.divZeroError of wild7_00 { } } } } }; } in go_a1ud Main.main3; 0 -> GHC.Types.False } } } }}} And List.all from ghc 7.8 base library does fuse, so this is regression. Windows 8.1 x64, ghc --info: {{{ [(""Project name"",""The Glorious Glasgow Haskell Compilation System"") ,(""GCC extra via C opts"","" -fwrapv"") ,(""C compiler command"",""$topdir/../mingw/bin/gcc.exe"") ,(""C compiler flags"","" -U__i686 -march=i686 -fno-stack-protector"") ,(""C compiler link flags"","""") ,(""Haskell CPP command"",""$topdir/../mingw/bin/gcc.exe"") ,(""Haskell CPP flags"",""-E -undef -traditional "") ,(""ld command"",""$topdir/../mingw/bin/ld.exe"") ,(""ld flags"","""") ,(""ld supports compact unwind"",""YES"") ,(""ld supports build-id"",""NO"") ,(""ld supports filelist"",""NO"") ,(""ld is GNU ld"",""YES"") ,(""ar command"",""$topdir/../mingw/bin/ar.exe"") ,(""ar flags"",""q"") ,(""ar supports at file"",""YES"") ,(""touch command"",""$topdir/touchy.exe"") ,(""dllwrap command"",""$topdir/../mingw/bin/dllwrap.exe"") ,(""windres command"",""$topdir/../mingw/bin/windres.exe"") ,(""libtool command"","""") ,(""perl command"",""$topdir/../perl/perl.exe"") ,(""target os"",""OSMinGW32"") ,(""target arch"",""ArchX86"") ,(""target word size"",""4"") ,(""target has GNU nonexec stack"",""False"") ,(""target has .ident directive"",""True"") ,(""target has subsections via symbols"",""False"") ,(""Unregisterised"",""NO"") ,(""LLVM llc command"",""llc"") ,(""LLVM opt command"",""opt"") ,(""Project version"",""7.9.20141129"") ,(""Project Git commit id"",""447f592697fef04d1e19a2045ec707cfcd1eb59f"") ,(""Booter version"",""7.8.3"") ,(""Stage"",""2"") ,(""Build platform"",""i386-unknown-mingw32"") ,(""Host platform"",""i386-unknown-mingw32"") ,(""Target platform"",""i386-unknown-mingw32"") ,(""Have interpreter"",""YES"") ,(""Object splitting supported"",""YES"") ,(""Have native code generator"",""YES"") ,(""Support SMP"",""YES"") ,(""Tables next to code"",""YES"") ,(""RTS ways"",""l debug thr thr_debug thr_l thr_p "") ,(""Support dynamic-too"",""NO"") ,(""Support parallel --make"",""YES"") ,(""Support reexported-modules"",""YES"") ,(""Support thinning and renaming package flags"",""YES"") ,(""Uses package keys"",""YES"") ,(""Dynamic by default"",""NO"") ,(""GHC Dynamic"",""NO"") ,(""Leading underscore"",""YES"") ,(""Debug on"",""False"") ,(""LibDir"",""D:\\msys32\\usr\\local\\lib"") ,(""Global Package DB"",""D:\\msys32\\usr\\local\\lib\\package.conf.d"") ] }}} " klapaucius 9854 Literal overflow check is too aggressive Compiler 7.8.3 normal new 2014-12-02T15:48:24Z 2014-12-02T16:06:40Z "The literal overflow check is too aggressive. Sometimes you want to give a literal as a hexadecimal value that does fit inside e.g. an `Int`, like so: {{{ Prelude> 0xdc36d1615b7400a4 :: Int
https://ghc.haskell.org/trac/ghc/report/1?format=tab&sort=summary&asc=1&USER=anonymous
CC-MAIN-2015-18
refinedweb
3,140
51.75
I downloaded Code::Blocks from here: I'm learning c programming. When I run the following program, I get error: iostream: No such file or directory error: syntax error before "namespace" warning: type defaults to `int' in declaration of `std' warning: data definition has no type or storage class In function `main': error: `cout' undeclared (first use in this function) error: (Each undeclared identifier is reported only once error: for each function it appears in.) error: `cin' undeclared (first use in this function) I'm running the following program: #include <iostream> using namespace std; int main() { int x; x = 0; do { // "Hello, world!" is printed at least one time // even though the condition is false cout<<"Hello, world!\n"; } while ( x != 0 ); cin.get(); } I tried Dev-C++, I get the same error. How to fix this? Best Solution Is this in a file like "program.c" or "program.cpp"? If it's a .c file, then your compiler may be interpreting it as C, and not C++. This could easily cause such an error. It's possible to "force" the compiler to treat either such extension as the other, but by default, .c files are for C, and .cpp files are compiled as C++. It's either this, or somehow your default "include" directories for the standard library are not set up right, but I don't know how you'd fix that, as that'd be compiler/environment dependent.
https://itecnote.com/tecnote/c-codeblocks-dev-c-error-iostream-no-such-file-or-directory/
CC-MAIN-2022-40
refinedweb
241
65.62
13 December 2019 2 comments Python, Web development, Django, Docker, JavaScript Heroku is great but it's sometimes painful when your app isn't just in one single language. What I have is a project where the backend is Python (Django) and the frontend is JavaScript (Preact). The folder structure looks like this: / - README.md - manage.py - requirements.txt - my_django_app/ - settings.py - asgi.py - api/ - urls.py - views.py - frontend/ - package.json - yarn.lock - preact.config.js - build/ ... - src/ ... A bunch of things omitted for brevity but people familiar with Django and preact-cli/create-create-app should be familiar. The point is that the root is a Python app and the front-end is exclusively inside a sub folder. When you do local development, you start two servers: ./manage.py runserver- starts cd frontend && yarn start- starts The latter is what you open in your browser. That preact app will do things like: const response = await fetch('/api/search'); and, in preact.config.js I have this: export default (config, env, helpers) => { if (config.devServer) { config.devServer.proxy = [ { path: "/api/**", target: "" } ]; } }; ...which is hopefully self-explanatory. So, calls like GET actually goes to. That's when doing development. The interesting thing is going into production. Before we get into Heroku, let's first "merge" the two systems into one and the trick used is Whitenoise. Basically, Django's web server will be responsibly not only for things like /api/search but also static assets such as / --> frontend/build/index.html and /bundle.17ae4.js --> frontend/build/bundle.17ae4.js. This is basically all you need in settings.py to make that happen: MIDDLEWARE = [ "django.middleware.security.SecurityMiddleware", "whitenoise.middleware.WhiteNoiseMiddleware", ... ] WHITENOISE_INDEX_FILE = True STATIC_URL = "/" STATIC_ROOT = BASE_DIR / "frontend" / "build" However, this isn't quite enough because the preact app uses preact-router which uses pushState() and other code-splitting magic so you might have a URL, that users see, like this: and there's nothing about that in any of the Django urls.py files. Nor is there any file called frontend/build/that/thing/special/index.html or something like that. So for URLs like that, we have to take a gamble on the Django side and basically hope that the preact-router config knows how to deal with it. So, to make that happen with Whitenoise we need to write a custom middleware that looks like this: from whitenoise.middleware import WhiteNoiseMiddleware class CustomWhiteNoiseMiddleware(WhiteNoiseMiddleware): def process_request(self, request): if self.autorefresh: static_file = self.find_file(request.path_info) else: static_file = self.files.get(request.path_info) # These two lines is the magic. # Basically, the URL didn't lead to a file (e.g. `/manifest.json`) # it's either a API path or it's a custom browser path that only # makes sense within preact-router. If that's the case, we just don't # know but we'll give the client-side preact-router code the benefit # of the doubt and let it through. if not static_file and not request.path_info.startswith("/api"): static_file = self.files.get("/") if static_file is not None: return self.serve(static_file, request) And in settings.py this change: MIDDLEWARE = [ "django.middleware.security.SecurityMiddleware", - "whitenoise.middleware.WhiteNoiseMiddleware", + "my_django_app.middleware.CustomWhiteNoiseMiddleware", ... ] Now, all traffic goes through Django. Regular Django view functions, static assets, and everything else fall back to frontend/build/index.html. Heroku tries to make everything so simple for you. You basically, create the app (via the cli or the Heroku web app) and when you're ready you just do git push heroku master. However that won't be enough because there's more to this than Python. Unfortunately, I didn't take notes of my hair-pulling excruciating journey of trying to add buildpacks and hacks and Procfile s and custom buildpacks. Nothing seemed to work. Perhaps the answer was somewhere in this issue: "Support running an app from a subdirectory" but I just couldn't figure it out. I still find buildpacks confusing when it's beyond Hello World. Also, I didn't want to run Node as a service, I just wanted it as part of the "build process". Finally I get a chance to try "Deploying with Docker" in Heroku which is a relatively new feature. And the only thing that scared me was that now I need to write a heroku.yml file which was confusing because all I had was a Dockerfile. We'll get back to that in a minute! So here's how I made a Dockerfile that mixes Python and Node: FROM node:12 as frontend COPY . /app WORKDIR /app RUN cd frontend && yarn install && yarn build FROM python:3.8-slim WORKDIR /app RUN groupadd --gid 10001 app && useradd -g app --uid 10001 --shell /usr/sbin/nologin app RUN chown app:app /tmp RUN apt-get update && \ apt-get upgrade -y && \ apt-get install -y --no-install-recommends \ gcc apt-transport-https python-dev # Gotta try moving this to poetry instead! COPY ./requirements.txt /app/requirements.txt RUN pip install --upgrade --no-cache-dir -r requirements.txt COPY . /app COPY --from=frontend /app/frontend/build /app/frontend/build USER app ENV PORT=8000 EXPOSE $PORT CMD uvicorn gitbusy.asgi:application --host 0.0.0.0 --port $PORT If you're not familiar with it, the critical trick is on the first line where it builds some Node with as frontend. That gives me a thing I can then copy from into the Python image with COPY --from=frontend /app/frontend/build /app/frontend/build. Now, at the very end, it starts a uvicorn server with all the static .js, index.html, and favicon.ico etc. available to uvicorn which ultimately runs whitenoise. To run and build: docker build . -t my_app docker run -t -i --rm --env-file .env -p 8000:8000 my_app Now, opening is a production grade app that mixes Python (runtime) and JavaScript (static). Heroku says to create a heroku.yml file and that makes sense but what didn't make sense is why I would add cmd line in there when it's already in the Dockerfile. The solution is simple: omit it. Here's what my final heroku.yml file looks like: build: docker: web: Dockerfile Check in the heroku.yml file and git push heroku master and voila, it works! To see a complete demo of all of this check out and Follow @peterbe on Twitter Deploying your Python + Javascript app to Heroku is easy today. 1. You just have to add official python and nodejs buildpacks to your config. 2. Put requirements.txt and package.json to the root of your github repo. For example see I just don't like putting everything into the root folder. Especially since it prevents adding more other folders later. Sooner or later you might have 3 different package.json files.
https://www.peterbe.com/plog/python-preact-app-deployed-on-heroku
CC-MAIN-2020-40
refinedweb
1,138
67.55
This is the first article of a multi-part series. It serves the purpose of introducing you to cryptography. I cannot promise an easy ride through this journey, but it will be a comprehensible and amazing one. As much as we’ll get into this, we’ll realize how complex cryptography can be. But that shouldn’t scare you away. I start by assuming that the reader has no experience with cryptography but has a bit of coding experience. If that doesn’t describe you, it’s not a problem per se. You can skip the code parts and still understand the rest of the series. So let’s move on — we have no time to waste! What Will I Learn Reading This Series? By reading this multi-part article series, you should become familiar with cryptography. We will get into a brief explanation of the whole process before moving on to real-life examples. Then we will venture into the coding of several quick-and-easy encryption and decryption algorithms in ANSI C. Therefore, you will be able to see its results with your own eyes. We learn the best by doing it! As a closure to the series I will try to challenge you to study cryptography. Hopefully, I might spark and bring to life some of those hidden desires. You will find a few of the most influential and best literature on cryptography as further reading and reference at the epilogue of this series. You will also get to know the latest algorithms that are currently used in various areas. Of course, their complexity is unimaginable compared to the ones we will code in C. {mospagebreak title=The Study of Cryptography} Nowadays we can say that cryptography is part of computer science due to the explicit formulas, algebra and/or calculation schemes. As a science, cryptography plays the role of securing communication between two parties. A large part of it is encryption and decryption but in the past few years its range has been dramatically expanded. In a nutshell, if it is de/encryption then the information gets "encrypted" and then "decrypted" by knowing a secret "key" or password. Like I said, cryptography’s utilization has been expanded and it isn’t uncommon at all to find authentication, digital signature verification and such. Basically this way the receiver is "verified" to determine if s/he is indeed eligible to view the secured information. This is another usage of cryptography that plays a huge role in our era of computing. In any form of communication the information is sent by a "sender" and received by a "receiver." To secure the information it can be encrypted by a specific mathematic algorithm via a specified "key." As soon as the information arrives at the receiver, if s/he knows the required key then the process can be reversed. Let’s introduce two terms: "plaintext" means un-encrypted data, while "ciphertext" means encrypted data. Almost all of the legacy (and simple) encryption algorithms work on a user-specified key. That means that technically anybody who knows the key is able to decrypt and regenerate the secured information into its original form — plaintext. If the "attacker" doesn’t knows the exact key then the whole decryption process requires a lot of effort but mostly time and resources. More about this later on. Real-World Encryption Algorithm Examples In this section we will venture into the coding of encryption algorithms. All of these examples involve working with files. So you might want to create a test text document. It is up to you. Keep in mind that the encryption algorithms described in this article are solely for educational purposes. You should not rely on them for top-priority files and/or data. However, the author guarantees that the algorithms presented in this article will not do any harm to your computer. They do not contain any virus, trojan, worm, spyware, malware, or anything that’s malicious. {mospagebreak title=First Example: Byte Adder Algorithm} In a nutshell, here is the encryption process: we will open the file and linearly add N number of bytes to each of its bytes. The decryption process is reversed: we will subtract N number of bytes from each byte. Throughout this example the number N stands for the key. It might be a user-specified "whole number" or can be generated from a specific string (we are going to discuss this variation in the second example later on). To make this example even simpler we will request an ‘int’ for N. The algorithm is quite easy. Check it out: while (s.get(origbyte)) { byte=(int)origbyte+digits; d.put(byte); } In the above algorithm string "s" stands for source and "d" for destination. Digits is a numeric integer; it represents the numbers of bytes that are going to be added. The algorithm is the same for decryption — the only difference is that we change the sign of the "digits." If it was, say, 50 for encryption, then it is going to be -50 for decryption. Easy. Now let’s run our code to encrypt the following text file with N=50: This is a text file for testing purposes. Hope you enjoy! Developer Shed. Test, test, test. sdfsdfsdf. 123456789 The encrypted outcome looks like this: †š›¥R›¥R"R¦—ª¦R˜›ž—R˜¡¤R¦—¥¦› ™R¢§¤¢¡¥—¥`Rz¡¢—R«¡§R— œ¡«S?<?<v—¨— ž¡¢—¤R…š—–`R†—¥¦^R¦—¥¦^R¦—¥¦`R¥–˜¥–˜¥–˜`Rcdefghijk Here is the full code: #include <fstream.h> //or <fstream> and ‘using namespace std;’ #include <stdio.h> #include <string.h> char srcfile[30], dstfile[30]; int digits; void main() { printf("Source file: "); scanf("%s",srcfile); printf("nDestination file: "); scanf("%s",dstfile); printf("nN= "); scanf("%d",&digits); printf("nPress 1.> if Encryption or 2.> if Decryption: "); int opt; scanf("%d",&opt); if (opt==2) digits*=-1; // or digits=-digits; ifstream s(srcfile, ios::binary); ofstream d(dstfile, ios::binary); if (!s || !d) printf("nERROR: Couldn’t open one of the files! "); char origbyte, byte; while (s.get(origbyte)) { byte=(int)origbyte+digits; d.put(byte); } s.close(); d.close(); printf("nAll done. "); } If you want to download the compiled executable and the above source instead of copying, pasting and compiling it yourself, just click on the attached download button. {mospagebreak title=Second Example: Variation of the Byte-Adder} The main idea of this algorithm is the same as that of the aforementioned Byte-Adder. However, instead of adding/subtracting a user-specified numeric value we will generate, via a quick mathematical formula, an integer from the user-specified string. Therefore, we will work again with two text files and a character string that stands for the password. Here’s the algorithm: (in this case— ‘-‘ for encryption or ‘+’ for decryption) for (int i=0;i<len;i++) total += pw[i]; c -= pw[len-1]*(total / len); Variables: "len" stands for the length of the password; "pw" represents the given password string; and "c" will be the newly generated byte as we pass through the content of the file. For the exact same input text file check out the generated output: Í ÍÍ!%!ÍÍÍ! !Í" ÛÍõÍ&"Í&η·ñ#Í ÛÍ !ÙÍ! !ÙÍ! !ÛÍ ÛÍÞßàáâãäåæ The password was: "longpassword123" in the above example. Now, let’s see the complete program: #include <fstream.h> //or <fstream> and ‘using namespace std;’ #include <stdio.h> #include <string.h> char srcfile[30], dstfile[30], pw[30]; void main() { printf("Source file: "); scanf("%s",srcfile); printf("nDestination file: "); scanf("%s",dstfile); printf("nPassword: "); scanf("%s",pw); printf("nPress 1.> if Encryption or 2.> if Decryption: "); int opt; char c; scanf("%d",&opt); int total=0; int len=strlen(pw); for (int i=0;i<len;i++) total += pw[i]; ifstream s(srcfile); ofstream d(dstfile); if (!s || !d) printf("nERROR: Couldn’t open one of the files!"); while (s.get(c)) { if (opt==1) d << c – pw[len-1]*(total / len); else d << c + pw[len-1]*(total / len); } s.close(); d.close(); printf("nAll done."); } You can download its compiled version in executable format and also the source by clicking on the archive button below. It is archived in RAR. It’s in working condition. I have purposefully reduced the complexity of these algorithms to make them "obviously simple," easy and comprehensible for a beginner too. My intention with this introductory article was to familiarize you, the reader, with encryption algorithms rather than writing about top-notch algorithms. I am aware that there are zillions of other variations of the aforementioned algorithms and different ways to code it — more efficiently, more secure, etc. — but for now, our top priority was to understand what cryptography is all about. Conclusions I’ve shown you two possible de/encryptions. You should be able to use this mindset and work out a few variations of your own, too. Just use your imagination. It’s that simple. Play with encryption. That’s the beauty of it — you can actually see your results! You encrypt it… you get gibberish. You decrypt it… you get the original file. It works! Jokes aside, unfortunately, I am running out of space right now. However, be sure to stick around for the second part of this series. There I will show you one of the most popular encryption algorithms. It’s the XOR. Yes, we will XOR. In the next part I will also explain three unique ways to break an XOR-based encryption. None of those are secret but very few users actually grab a book to do the research to find out how. Obviously, use them just for educational purposes. And to whet your appetite… there will be a third part to this series too! In that article I will talk about security and reliability. I will show you some of the top-notch encryption algorithms and security solutions that currently dominate the cryptographic landscape. I will also spark some of your hidden desires toward this and motivate you to read some of the best literature that’s available on this topic. Expect to see an awesome compilation of further reading and references. Until then — keep coding, learning, reading and researching. See you at part two!
http://www.devshed.com/c/a/Security/An-Introduction-to-Cryptography/
CC-MAIN-2016-22
refinedweb
1,701
58.08
As Karl Edler wrote: > The reason why I wanted them unified was so that I could write one > piece of UART code that would work with all avr micro-controllers. Well, there are (unfortunately) more stumbling blocks along that road. That starts with older devices naming their registers just UDR, UCSRA etc. while newer AVRs always number them (UDR0, UCSR0A) even if there is only one USART available. So in the end, you won't get away without an abstraction layer, and that abstraction layer could then include something like: #ifndef USART0_RX_vect # ifdef USART_RXC_vect /* ATmega16/32 etc. */ # define USART0_RX_vect USART_RXC_vect # elif defined(USART_RX_vect) /* ATmega165/169 etc. */ # define USART0_RX_vect USART_RX_vect # else # error "you lose" # endif #endif etc. pp. However, this is beyond the scope of what avr-libc wants to be. -- cheers, J"org .-.-. --... ...-- -.. . DL8DTL NIC: JW11-RIPE Never trust an operating system you don't have sources for. ;-)
http://lists.gnu.org/archive/html/avr-libc-dev/2010-05/msg00014.html
CC-MAIN-2013-20
refinedweb
148
72.76
The current system of registering in-VM taskdefs has some problems. Currently, any JAR in ant/nblib/*.jar is checked for META-INF/taskdefs.properties (or typedefs.properties) which produce defs loaded in a class loader based on o.n.c.m.SystemClassLoader + Ant. 1. JARs from disabled modules are loaded. (Issue #36702) 2. The class loader for an in-VM taskdef JAR includes too much. (Related to issue #35756) 3. The format is ad-hoc and does not match Ant 1.6's "antlib" convention. 4. There is no support for using XML namespaces (in Ant 1.6) to scope names. 5. Scanning for JARs does not work well with InstalledFileLocator. (Details in issue #36701) Proposed change: 1. Each module may install exactly one such in-VM def jar. It must be named ant/nblib/a-b-c.jar if the module's code name base is a.b.c, but this need not be located in the same installation dir as the Ant module itself. 2. The task/type definition(s) must reside in ant/nblib/a-b-c.jar!/a/b/c/antlib.xml. 3. When running under Ant 1.5.x, antlib.xml is parsed only for simple <taskdef name="..." classname="..."/> and <typedef name="..." classname="..."/>. Any other elements or attributes are ignored. (TBD: produce an error or warning instead?) The parsing is done manually by the Ant module. 4. When running under Ant 1.6.x, full antlib syntax is permitted. The parsing is done by Ant's proper methods. 5. The class loader for ant/nblib/bridge.jar should be just Ant (i.e. the "main class loader" used to load Ant itself) + netbeans/modules/ant.jar (i.e. Ant module's loader), in that order. (Could rename bridge.jar to org-apache-tools-ant-module.jar, TBD.) 6. The class loader for ant/nblib/a-b-c.jar should consist of just Ant + a.b.c's module loader, in that order. 7. When running under Ant 1.5.x, or under 1.6.x with a project whose root <project> element is in the default namespace, the namespace URI used for definitions is the default namespace. This should provide compatibility for old non-namespace-aware build scripts. Note that the project for which this check is done is only the top-level project selected by the user to run; any subprojects are assumed to be similar. 8. When running under Ant 1.6.x with namespaces enabled (as determined by <project> having the namespace antlib:org.apache.tools.ant, which should also be supported by the MIME resolver for Ant scripts), the new definitions should be placed in the expected namespace antlib:a.b.c, which the script can refer to via namespace prefixes etc. as usual. Besides providing support for scoping, this means that an in-VM lib JAR which also provides fallback impls that can work *without* NetBeans classes has a chance of working without modification using command-line Ant 1.6 - just place it in the classpath. The change is incompatible for modules installing in-VM tasks; they will need to rename their def JAR and change the def format to use antlib.xml. However (a) only three modules are currently known to be using this system, none of them in the standard NB distro (apisupport/ant, debuggerjpda/ant, and ant/browsetask); (b) the current in-VM system was anyway introduced (as an incompatible change) in the trunk after NB 3.5, to support issue #20211, so it has never been in a stable release. Branched: cvs rtag -b -r BLD200312231900 ant_nblib_38306 ant debuggerjpda apisupport *** Issue 36778 has been marked as a duplicate of this issue. *** Might be necessary to modify #7 and #8 so that under Ant 1.5.x, the defs are added to the project with no namespace qualification, but under Ant 1.6.x they are added *both* with and without namespace qualification. Reasons for the change: 1. There does not appear to be any way to ask a Project, once parsed, whether it was using antlib:org.apache.tools.ant on <project> or not. You could do this check separately using AntProjectCookie, though that would be a bit of a performance hit in case the project would not otherwise have been parsed. 2. It is probably pretty common to have a wrapper script which calls the real implementation. The real implementation might be namespace-qualified; who knows if the wrapper script is or not. 3. Not really intuitive that making a usually no-op change (adding the namespace qualification to <project>) would affect these taskdefs. Branching also core and openide to support issue #38330 - same tag name and base tag. Seems to be working OK. Code completion does not grok the namespaced tags yet, but that is a separate issue (much lower priority). Specifically these things work: 1. Running w/ no namespaces under Ant 1.5. 2. Running w/ no namespaces under Ant 1.6. 3. Running w/ namespaces under Ant 1.6. 4. Error signaled under Ant 1.6 w/ namespaces if the task name is wrong, or the namespace is wrong, or the namespace prefix is nonexistent. 5. Warning signaled under Ant 1.6 w/ namespaces by Ant module when attempting to load custom defs with an unknown element in antlib.xml. 6. Error signaled under Ant 1.6 w/ namespaces by Ant itself when antlib.xml contains an unknown element. 7. Under all 3 scenarios, if the module defining the task is disabled, the build fails with a proper message from Ant that the task was not defined. 8. From commandline, running namespaced build script with Ant 1.6 with CLASSPATH containing openide.jar and org-netbeans-modules-ant-browsetask.jar, the definition succeeds with the implicit antlib and NbBrowse gets as far as a NPE (URLDisplayer.default == null), i.e. as far as can be expected given that it cannot run successfully outside the NB JVM. Performance is still not great for Ant 1.6 - cca. 100msec per build to load custom defs from three modules. Will add caching to try to improve this. Under Ant 1.5, overhead is still negligible (1msec). Added caching of class loaders for ant/nblib/*.jar (easy), and removed SPL comments from antlib.xml's to speed parsing. Seems to have reduced overhead (under Ant 1.6) somewhat, to perhaps 40msec per build for three provider modules (times are quite variable however). OptimizeIt says that most of that overhead is parsing antlib.xml, and that most of the parsing time is actually spent getting parser objects; this should be unnecessary, so I filed Created attachment 12677 [details] Current patch (lightly edited to exclude irrelevant temporary changes, and also including a patch to autoupdate; #38330 patch not included) Submitting for API review. Incompatible, but only within the 3.6 dev cycle, and only for modules using this part of the Ant module API - as far as I know, just three, all experimental, all maintained by myself - and the changes required to support the new registration mechanism are trivial (packaging only, not code). I have added an arch desc for the Ant module in the trunk, for your reference. Should be visible in next daily Javadoc build. The trunk arch desc reflects changes made in this issue in a couple of places. Provisionally marking this section of the API as devel stability; should be supported, but there is little data to tell if the API is adequate for all users, since I seem to be the only one. Planned for commit to trunk during the day (US time) on Wednesday. committed 1.64 ant/manifest.mf committed 1.7 ant/api/doc/changes/apichanges.xml committed 1.4 ant/api/doc/org/apache/tools/ant/module/api/package.html committed 1.7 ant/browsetask/build.xml committed 1.4 ant/browsetask/manifest.mf removed 1.2 ant/browsetask/src/META-INF/taskdefs.properties committed 1.2 ant/browsetask/src/org/netbeans/modules/ant/browsetask/antlib.xml committed 1.7 ant/src-bridge/org/apache/tools/ant/module/bridge/impl/BridgeImpl.java committed 1.2 ant/src-bridge/org/apache/tools/ant/module/bridge/impl/NbAntlib.java committed 1.19 ant/src/org/apache/tools/ant/module/api/IntrospectedInfo.java committed 1.8 ant/src/org/apache/tools/ant/module/bridge/AntBridge.java committed 1.4 ant/src/org/apache/tools/ant/module/bridge/AuxClassLoader.java committed 1.3 ant/src/org/apache/tools/ant/module/bridge/BridgeInterface.java committed 1.3 ant/src/org/apache/tools/ant/module/bridge/DummyBridgeImpl.java committed 1.4 ant/src/org/apache/tools/ant/module/resources/mime-resolver.xml committed 1.19 apisupport/ant/build.xml committed 1.13 apisupport/ant/manifest.mf removed 1.2 apisupport/ant/src/META-INF/taskdefs.properties committed 1.2 apisupport/ant/src/org/netbeans/modules/apisupport/ant/antlib.xml committed 1.26 autoupdate/src/org/netbeans/modules/autoupdate/SignVerifier.java committed 1.6 debuggerjpda/ant/build.xml committed 1.2 debuggerjpda/ant/manifest.mf removed 1.2 debuggerjpda/ant/src/META-INF/taskdefs.properties committed 1.2 debuggerjpda/ant/src/org/netbeans/modules/debugger/jpda/ant/antlib.xml committed 1.35 performance/threaddemo/build.xml
https://netbeans.org/bugzilla/show_bug.cgi?id=38306
CC-MAIN-2016-44
refinedweb
1,541
53.27
Closed Bug 343998 Opened 15 years ago Closed 15 years ago JS1 .7 landing broke windows ce . Categories (Core :: JavaScript Engine, defect) Tracking () People (Reporter: dougt, Assigned: dougt) Details (Keywords: fixed1.8.1) Attachments (1 file, 2 obsolete files) js3250.lib(jsmath.obj) : error LNK2019: unresolved external symbol copysign referenced in function math_atan2 copysign isn't around. using the same workaround for the bustage in VC6 and VC7 seams to work fine. patch coming up. This patch uses the existing |js_copysign| function impl. in js/. Alternatively, I could implement copysign in the windows ce shunt layer. We could simply drop off "&& !defined WINCE" from the #elif, but I think it is more clear being verbose. Attachment #228579 - Flags: review? OS: Windows XP → Windows CE Hardware: PC → PocketPC Comment on attachment 228579 [details] [diff] [review] Patch v.1 What defines js_copysign for WINCE? Isn't there a libm or libc copysign or _copysign or whatever that we can use? /be on the devices i have tested against _copysign is available. I verified that _copysign for WINCE doesn't have the same problems outlined in bug 329383. Also, i have verified that with the latest sdk, it is perfectly fine to use the builtin tan and exp functions. Attachment #228579 - Attachment is obsolete: true Attachment #228700 - Flags: review?(brendan) Attachment #228579 - Flags: review?(brendan) Comment on attachment 228700 [details] [diff] [review] patch v.2 >Index: jslibmath.h >=================================================================== >RCS file: /cvsroot/mozilla/js/src/jslibmath.h,v >retrieving revision 3.17.6.1 >diff -u -1 -0 -r3.17.6.1 jslibmath.h >--- jslibmath.h 7 Jul 2006 02:12:02 -0000 3.17.6.1 >+++ jslibmath.h 10 Jul 2006 18:52:01 -0000 >@@ -70,21 +70,23 @@ > > #define fd_acos acos > #define fd_asin asin > #define fd_atan atan > #define fd_atan2 atan2 > #define fd_ceil ceil > > /* The right copysign function is not always named the same thing. */ > #if __GNUC__ >= 4 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4) > #define fd_copysign __builtin_copysign >-#elif defined _WIN32 && !defined WINCE >+#elif defined WINCE >+#define fd_copysign _copysign >+#elif defined _WIN32 > #if _MSC_VER < 1400 > /* Try to work around apparent _copysign bustage in VC6 and VC7. */ > #define fd_copysign js_copysign > extern double js_copysign(double, double); > #else > #define fd_copysign _copysign > #endif > #else > #define fd_copysign copysign > #endif This is all within #if !JS_USE_FDLIBM_MATH (just noting). It looks ok, just wondering why you can't share the #define fd_copysign _copysign code now, by changing from #elif defined _WIN32 && !defined WINCE to just testing for "Windows (32 or CE)": #elif defined _WIN32 IOW, fall into the common code, including the _MSC_VER < 1400 test. (Is _WIN32 the right macro?) >@@ -106,21 +108,21 @@ > * Use math routines in fdlibm. > */ > > #undef __P > #ifdef __STDC__ > #define __P(p) p > #else > #define __P(p) () > #endif > >-#if (defined _WIN32 && !defined WINCE) || defined SUNOS4 >+#if defined _WIN32 || defined SUNOS4 This is in the big #else for that #if !JS_USE_FDLIBM_MATH -- the major comment introducing the guts of this #else says "Use math routines in fdlibm". Did you test compiling with JS_USE_FDLIBM_MATH defined too? > #define fd_acos acos > #define fd_asin asin > #define fd_atan atan > #define fd_cos cos > #define fd_sin sin > #define fd_tan tan > #define fd_exp exp > #define fd_log log > #define fd_sqrt sqrt Reason I ask is because the next three significant lines are: extern double fd_atan2 __P((double, double)); extern double fd_copysign __P((double, double)); extern double fd_pow __P((double, double)); which means JS_USE_FDLIBM_MATH defined as 1 implies most transcendentals are *not* from fdlibm, but atan2, pow, and the underlying copysign are. That's not quite right if we can use _copysign from the OS/SDK. The hard cases are atan2 and pow, where signed zeroes and other bugs exist. It'd be good to test these too. /be I did not want to depend on the _MSC_VER macro here. For the MS compiler with the ppc/sp sdk, this isn't a problem at all (the macro is defined). I am worried about using the Intel WINCE arm compiler (which probably does not use this macro). To get the build off the floor (tbox has been burning for a few days), lets just ignore this change for now: -#if (defined _WIN32 && !defined WINCE) || defined SUNOS4 +#if defined _WIN32 || defined SUNOS4 If in agreement, i will open up a new bug to test and follow up. Yeah, that sounds good -- it means JS_USE_FDLIBM_MATH is defined for Windows builds, or at least for WINCE builds -- can you confirm? My point was to test both forks of the #if !JS_USE_FDLIBM_MATH river, and to try to unify with existing WIN32 ifdefs. If you can do that in this bug, I'm good with not filing a followup. /be this simply defines fd_copysign to the right thing on windows ce as discussed above. Attachment #228700 - Attachment is obsolete: true Attachment #228720 - Flags: review?(brendan) Attachment #228700 - Flags: review?(brendan) pretty sure that |JS_USE_FDLIBM_MATH| is never defined on windows (32 or ce) Comment on attachment 228720 [details] [diff] [review] fixes wince build bustage only. r=me, thanks. /be Attachment #228720 - Flags: review?(brendan) → review+ Comment on attachment 228720 [details] [diff] [review] fixes wince build bustage only. a=schrep since b1 is out the door. Trivial WinCE fix Attachment #228720 - Flags: approval1.8.1? → approval1.8.1+ patch landed on 1.8 branch (not sure what keyword to add for something post ff2a1). Checking in jslibmath.h; /cvsroot/mozilla/js/src/jslibmath.h,v <-- jslibmath.h new revision: 3.17.6.2; previous revision: 3.17.6.1 done This patch should also land on trunk. brendan, I can't check this into mozilla/js. Can you do the honor or point me to someone with the right stuff. I checked this in for Doug: mozilla/js/src/jslibmath.h 3.22 Status: NEW → RESOLVED Closed: 15 years ago Resolution: --- → FIXED Flags: in-testsuite-
https://bugzilla.mozilla.org/show_bug.cgi?id=343998
CC-MAIN-2021-10
refinedweb
957
67.25
In football analysis and video games, radar charts have been popularised in a number of places, from the FIFA series, to Ted Knutson’s innovative ways of displaying player data. Radar charts are an engaging way to show data that typically piques more attention than a bar chart although you can often use both of these to show the same data. This article runs through the creation of basic radar charts in Matplotlib, plotting the FIFA Ultimate Team data of a couple of players, before creating a function to streamline the process. To start, let’s get our libraries and data pulled together. import pandas as pd from math import pi import matplotlib.pyplot as plt %matplotlib inline #Create a data frame from Messi and Ronaldo's 6 Ultimate Team data points from FIFA 18 Messi = {'Pace':89,'Shooting':90,'Passing':86,'Dribbling':95,'Defending':26,'Physical':61} Ronaldo = {'Pace':90,'Shooting':93,'Passing':82,'Dribbling':90,'Defending':33,'Physical':80} data = pd.DataFrame([Messi,Ronaldo], index = ["Messi","Ronaldo"]) data Plotting data in a radar has lots of similarities to plotting along a straight line (like a bar chart). We still need to provide data on where our line goes, we need to label our axes and so on. However, as it is a circle, we will also need to provide the angle at which the lines run. This is much easier than it sounds with Python. Firstly, let’s do the easy bits and take a list of Attributes for our labels, along with a basic count of how many there are. Attributes =list(data) AttNo = len(Attributes) We then take a list of the values that we want to plot, then copy the first value to the end. When we plot the data, this will be the line that the radat follows – take a look below: values = data.iloc[1].tolist() values += values [:1] values [33, 90, 90, 82, 80, 93, 33] So these are the point that we will draw on our radar, but we will need to find the angles between each point for our line to follow. The formula below finds these angles and assigns them to ‘angles’. Then, just as above, we copy the first value to the end of our array to complete the line. angles = [n / float(AttNo) * 2 * pi for n in range(AttNo)] angles += angles [:1] Now that we have our values to plot, and the angles between them, drawing the radar is pretty simple. Follow along with the comments below, but note the ‘polar=true’ in our subplot – this changes our chart from a more-traditional x and y axes chart, to a the circular radar chart that we are looking for. ax = plt.subplot(111, polar=True) #Add the attribute labels to our axes plt.xticks(angles[:-1],Attributes) #Plot the line around the outside of the filled area, using the angles and values calculated before ax.plot(angles,values) #Fill in the area plotted in the last line ax.fill(angles, values, 'teal', alpha=0.1) #Give the plot a title and show it ax.set_title("Ronaldo") plt.show() Comparing two sets of data in a radar chart One additional benefit of the radar chart is the ability to compare two observations (or players, in this case), quite easily. The example below repeats the above process for finding angles for Messi’s data points, then plots them both together. #Find the values and angles for Messi - from the table at the top of the page values2 = data.iloc[0].tolist() values2 += values2 [:1] angles2 = [n / float(AttNo) * 2 * pi for n in range(AttNo)] angles2 += angles2 [:1] ,"Messi",color="red") plt.figtext(0.2,0.85,"v") plt.figtext(0.2,0.8,"Ronaldo",color="teal") plt.show() Creating a function to plot individual players This is a lot of code if we want to create multiple charts. We can easily turn these charts into a function, which will do all the heavy lifting for us – all we will have to do is provide it with a player name and data that we want to plot: def createRadar(player, data): Attributes = ["Defending","Dribbling","Pace","Passing","Physical","Shooting"] data += data [:1] angles = [n / 6 * 2 * pi for n in range(6)] angles += angles [:1] ax = plt.subplot(111, polar=True) plt.xticks(angles[:-1],Attributes) ax.plot(angles,data) ax.fill(angles, data, 'blue', alpha=0.1) ax.set_title(player) plt.show() createRadar("Dybala",[24,91,86,81,67,85]) And how about we do the same thing to compare two players? def createRadar2(player, data, player2, data2): Attributes = ["Defending","Dribbling","Pace","Passing","Physical","Shooting"] data += data [:1] data2 += data2 [:1] angles = [n / 6 * 2 * pi for n in range(6)] angles += angles [:1] angles2 = [n / 6 * 2 * pi for n in range(6)] angles2 += angles2 [:1] ax = plt.subplot(111, polar=True) ,player,color="teal") plt.figtext(0.2,0.85,"v") plt.figtext(0.2,0.8,player2,color="red") plt.show() createRadar2("Henderson", [76,76,62,82,81,70],"Wilshere", [62,82,71,80,72,69]) Summary Radar charts are an interesting way to display data and allow us to compare two observations quite nicely. In this article, we have used them to compare fictional FIFA players, but analysts have used this format very innovatively to display actual performance data in an engaging format. Take a look at Statsbomb‘s use of radar charts with real data, or learn more about visualisation in Python here.
https://fcpython.com/visualisation/radar-charts-matplotlib
CC-MAIN-2022-33
refinedweb
920
58.82
Here are the steps im supposed to take for where im stuck at: create a for loop that runs through your array list of values. For each iteration of the loop, you will be drawing a Rectangle to represent the current value’s bar. In each loop, you should do the following: 1. Calculate the height of a single bar (barHeight) – Remember to use the max value you previously found. 2. Calculate the width of a single bar (barWidth) (This can be done outside the loop as well.) 3. Create a random color, and set that color – remember that a color can be created and saved into a Color variable using the Color constructor, which takes 3 int values 0-255 representing red, green, and blue. 4. Draw the Rectangle for the current bar – Start at an (x,y) of (xleft,height-barHeight), and remember to use g2.fill(). 5. Calculate a new xleft for the next iteration of the loop, by adding the width of a single bar to the previous value. Here is my code import java.awt.Graphics; import java.util.ArrayList; import java.util.Collections; import java.util.Random; public class BarChart { private int width = 0; private int height = 0; private ArrayList<Double> barChartValues; private Random color; public void BarSize(double newHeight, double newWidth) { newHeight = height; newWidth = width; ArrayList<Double> barChartValues = new ArrayList<Double>(); Random color = new Random(); } public void add(double newValue) { barChartValues.add(newValue); } public void draw(Graphics g2) { double max = Collections.max(barChartValues); int xleft = 0; for (int i = 0; i < barChartValues.size(); i++) { } } } This post has been edited by g00se: 17 November 2012 - 08:56 AM Reason for edit:: Fixed code tags
http://www.dreamincode.net/forums/topic/300517-having-trouble-on-a-bar-graph-project/
CC-MAIN-2016-40
refinedweb
280
55.34
I am trying to put the filename that I am reading from to the file using this code: The file is named ABC and this is what I want to write to the file: "file2". I am trying to type in << myfile << but nothing is written to the file. I want it to recognice what is written in the paratheses after the declaration of myfile. Is this possible in any way ? Code: #include "stdafx.h" #include <iostream> #include <fstream> #include <sstream> #include <string> using namespace std; int main () { std::string Date; int Time = 0; char Comma; ofstream Test; Test.open ("file2.txt"); ifstream myfile ("ABC.txt"); getline(myfile, Date, ','); myfile >> Time; // 2111 myfile >> Comma; if (Time == 2100) { Test << myfile << Time <<"\n"; } return 0; }
http://cboard.cprogramming.com/cplusplus-programming/97610-write-txt-filename-into-file-printable-thread.html
CC-MAIN-2014-15
refinedweb
123
82.04
Structure used for the attributes of a time specification. More... import "saeaccess.idl"; Structure used for the attributes of a time specification. A time specification defines the time(s) to apply specified actions to a service. The time specification can identify the year, month, day, hour, and minute, as well as the time zone for the action to be taken. The syntax for the attributes in a time specification must be in a specified syntax. For information about the syntax to use for entering attributes, see the SDX Objects Guide. Day of the month. Day of the week. Time zone only or TimeZone and effectivePeriod (space separated). Hour. Minute. Month of the year. Four digits that indicate the year.
https://www.juniper.net/documentation/software/management/src/src411x/SDK/doc/idl/sae/html/structsae_1_1TimeSpec.html
CC-MAIN-2021-25
refinedweb
119
69.18
On 03 Sep 2001 14:00:14 +0200, Alexandre Courbot <alexandrecourbot at linuxgames.com> wrote: >Hello everybody, > >In the project I'm working on, (written in C++) we use Python as a >scripting language, to control the behavior of game elements >(characters, etc...) from scripts. Here is a simplified representation >or what we used to do: > >class character >{ > PyObject * locals; > > PyCodeObject * script; > > // Parse 'file' with PyParser_SimpleParseFile and create > // the 'script' code object with PyNode_Compile > void set_script (string file); > > // Function called once per game cycle, so the character > // can do it's business > void update () > { > // data::globals is the game's globals namespace > // which contains all wrapped methods and types > PyEval_EvalCode (script, data::globals, locals); > } > > > // Control methods > void go_north (); > void go_south (); > void go_east (); > void go_west (); > > void speak (string t); >}; > >locals is built at constructor time, and is supposed to contain the >character's locales variables. You can assume it just contains a >'myself' member, which is a reference to the instance of the character >so it can be accessed from Python. That way, into the Python script set >with set_script () I can do something like myself.go_north () to control >the character from Python. So far, it works like a charm. > >The problem is, that sometimes the script could have it's own variables. >For example, the following script would make the character utter some >random speech: > >-------------------------------------------------------------------- >speech = ["Hello!", "What time is it?", "The knights who say 'ni!'"] > >myself.speak (speech[randint (0, 2)]) >-------------------------------------------------------------------- > >That would make the character randomly say one of these 3 sentences each >time it's updated. > >The problem is, that the speech list is created each time the script is >run. That's a waste of CPU time, and we'd like to avoid this. So we >tried a different approach: each script has 3 functions: init () which >is called when the script is set, cleanup () which is called when the >script is deleted (constructor/destructor behavior, actually) and run () >which is the script itself (that is, what is run by update). Ideally, >our script would then be built this way: > >--------------------------------------------------------------------- >def init (): > speech = ["Hello!", "What time is it?", \ > "The knights who say 'ni!'"] > >def cleanup (): > del speech > >def run (): > myself.speak (speech[randint (0, 2)]) >--------------------------------------------------------------------- > >The PyCodeObject is built differently: First we import the script as a >module (PyImport_ImportModule), then we get the run () function by doing >a 'PyObject * function = PyObject_GetAttrString (module, "run")' and, >finally, we get the function code object by 'script = (PyCodeObject *) >PyObject_GetAttrString (function, "func_code");'. Idem for init () and >cleanup (). Then we run init by 'PyEval_Evalcode (initcode, >data::globals, locals) and we expect to have speech created into the >locals PyObject * (at least, that's what was done with the old code, >even though it wasn't necessary). And, when update () is called, the run >() function is called... raising a NameError because it cannot find >neither 'myself' nor 'speech'! And if I print the locals () inside the >function, it displays an enpty dictionary, even though the locals are >correctly passed from C++! That's where we are totally stuck. I've >browsed the archive of this list, and found that several people had >problems with locals () inside function calls, that's probably where the >problem lies. > >What we would like to achieve is to keep the init () cleanup () run () >structure (or anything equivalent) for our scripts and being able to >have persistant variables between scripts calls through the 'locals' >PyObject * so Python can communicate with C++ and vice versa, and mainly >so the script can have persistant variables. Speed is also a very >important issue for us, as the compiled scripts are run very often (70 >times per second!). We'd be very thankfull to anyone who could help us >solving this issue. Any comment or suggestion are welcome. > >Thanks for your time, > >Alex. > > > I tried to use Python for a script Game too. And I don't have this problem (but I have other ones ...) In my application, my Global vars in the Python modules are preserved at each trame... (I thought you mean global to the python module when you said local vars...) In order to call the functions I used the PyObject_CallFunction( LogicalEntryPoint, NULL ); instruction. LogicalEntryPoint being the entry point of the function object code. I found it using, at load / reload time : module = PyImport_Import(modName); To get the module. mdict = PyModule_GetDict( Module ); To get the dictionnary of the module. And : LogicalEntryPoint = PyDict_GetItemString(mdict, "LogicalTrame"); to get the entry point. I also check it is really callable... And it works... At the next step ( if I ever found some time for this ), I will try to use StackLess python in order to stay within a script, with all locals vars preserved... Hope it helps, Emmanuel
https://mail.python.org/pipermail/python-list/2001-September/074984.html
CC-MAIN-2018-05
refinedweb
784
71.34
WiringPi supports a number of extensions. These extensions allow your programs to use additional hardware in a seamless fashion to using the Pi’s on-board GPIO. For example – you may have an MCP23017 I2C GPIO expander chip (e.g. on the Quick 2 Wire board) to extend wiringPi with this chip, you just need to know its I2C bus address (usually 0x20), then integrate it in your program as follows: #include <wiringPi.h> #include <mcp23017.h> and call: mcp23017Setup (120, 0x20) ; And from then, you can use the standard pinMode(), digitalWrite() and digitalRead() functions as before, but use 120 as the base of the device. If you had 16 LEDs connected to it and wanted to turn them all on: for (i = 0 ; i < 16 ; ++i) { pinMode (120 + i, OUTPUT) ; digitalWrite (120 + i, HIGH) ; } and so on. similar setups exist for a number of GPIO expander chips and more are being added all the time. If you have a specific requirement then do get in-touch, or consider writing your own! Pins 0-63 are reserved for the Raspberry Pi’s own on-board GPIO, but there is almost no limit to the pins and range you may use for your own hardware (subject to the limits of an signed 32-bit integer)
http://wiringpi.com/extensions/
CC-MAIN-2018-17
refinedweb
214
67.69
From: "Richard W.M. Jones" <rjones redhat com> Best to read the comment. --- resize/Makefile.am | 2 +- resize/resize.ml | 33 ++++++++++++++++++++++++++++++++- 2 files changed, 33 insertions(+), 2 deletions(-) diff --git a/resize/Makefile.am b/resize/Makefile.am index 70ace37..1234c96 100644 --- a/resize/Makefile.am +++ b/resize/Makefile.am @@ -56,7 +56,7 @@ bin_SCRIPTS = virt-resize # -I $(top_builddir)/src/.libs is a hack which forces corresponding -L # option to be passed to gcc, so we don't try linking against an # installed copy of libguestfs. -OCAMLPACKAGES = -package str -I $(top_builddir)/src/.libs -I ../ocaml +OCAMLPACKAGES = -package str,unix -I $(top_builddir)/src/.libs -I ../ocaml if HAVE_OCAML_PKG_GETTEXT OCAMLPACKAGES += -package gettext-stub endif diff --git a/resize/resize.ml b/resize/resize.ml index cd4c9d6..3118d74 100644 --- a/resize/resize.ml +++ b/resize/resize.ml @@ -1010,6 +1010,36 @@ let () = (* Copy over the data. *) let () = + (* Obviously when you've got a function called 'hack_for_...' you + * know it cannot be good. + * + * This works around a bug in qemu's qcow2 block driver. If there + * are I/Os in flight and you send SIGTERM to qemu, then qemu + * segfaults. This particularly happens when the output file is + * growing rapidly (because of the partition copies below) and we + * close the handle (which kills qemu). + * + * The ugly workaround is to monitor the disk file and wait until it + * stops growing before closing the handle. + * + * + * + *) + let hack_for_rhbz836710 g ?format outfile = + match format with + | None | Some "qcow2" -> (* only for qcow2 or auto *) + let get_size () = (Unix.LargeFile.stat outfile).Unix.LargeFile.st_size in + let rec loop size = + g#sync (); + g#sleep 1; + let size' = get_size () in + if size <> size' then + loop size' + in + loop (get_size ()) + | _ -> () + in + List.iter ( fun p -> match p.p_operation with @@ -1033,7 +1063,8 @@ let () = (match p.p_type with | ContentUnknown | ContentPV _ | ContentFS _ -> - g#copy_device_to_device ~size:copysize source target + g#copy_device_to_device ~size:copysize source target; + hack_for_rhbz836710 g ?format:output_format outfile | ContentExtendedPartition -> (* You can't just copy an extended partition by name, eg. -- 1.7.10.4
https://www.redhat.com/archives/libguestfs/2012-July/msg00015.html
CC-MAIN-2015-18
refinedweb
334
61.53
The Monty Hall Problem is the probability you will win if you choose to change doors? What is the probability you win if you choose to remain with your decision. Simplistically, one might guess your probability of winning went from 1/3, to 1/2. This however, is incorrect. If you stay, your probability of winning is still 1/3. If you change, your chances of winning is 2/3...How does this counter-intuitive result play out? One can tabulate all the possibilities, and the contestants decision: LookingLookingDoor1 Door 2 Door 3 Switch Stay Prize Nothing Nothing Prize Nothing Nothing Prize Nothing Nothing Prize Nothing Nothing Prize Nothing Prize or changing...and the simulation backs everything up - average for 3 doors: stay: 33%, change 66%. import java.util.Random; /** * Simulates the MontyHall problem. 3 doors, 2 with goats and 1 with car. You choose a door, * Monty hall opens one of the other two to reveal a goat. How often will you be correct if you * stay? How often if you switch? * @author copeg * */ public class MontyHallSimulation implements Runnable{ /*Random number generator */ private static final Random RANDOM = new Random(); /*Number of rounds to simulate*/ private int rounds; /*Number of doors total*/ private int doors; /*Rate for staying*/ private double stayRate = 0; /*Rate for changing*/ private double changeRate = 0; /** * Constructs a MontyHallSimulation with 3 doors, to iterate 1000 times. */ public MontyHallSimulation(){ this(1000); } /** * Constructs a MontyHallSimulation with the number of rounds to simulate, and * 3 doors. * @param rounds The number of rounds to simulate. */ public MontyHallSimulation(int rounds){ this(rounds, 3); } /** * Constructs a MontyHallSimulation with the number of rounds and doors to use in * the simulation. * @param rounds The number of rounds to simulate * @param doors The number of doors to use in the simulation. * @throws IllegalArgumentException if doors is less than 3. */ public MontyHallSimulation(int rounds, int doors){ if ( doors < 3 ){ throw new IllegalArgumentException("Cannot simulate the problem with less than 3 doors."); } this.rounds = rounds; this.doors = doors; } /** * Implementation of the Runnable interface. Simulates the Monty Hall Problem. * This loops doors number of times, determining whether staying or changing * results in a correct answer. */ public void run(){ int stayCount = 0; int changeCount = 0; for ( int i = 0; i < rounds; i++ ){ int choose = RANDOM.nextInt(doors);//choose a door at random int solution = RANDOM.nextInt(doors);//find a random place where the car will be. if ( solution != choose ){//Car is in the other door - if you change you win changeCount++; }else{//If you stay you win. stayCount++; } } stayRate = stayCount/(double)rounds; changeRate = changeCount/(double)rounds; } /** * Retrieves the rate one will be correct if one stays. This method returns * zero unless run has been called. * @return */ public double getStayRate(){ return stayRate; } /** * Retrieves the rate one will be correct if one changes. This method returns * zero unless run has been called. * @return */ public double getChangeRate(){ return changeRate; } /** * Application entry point. * @param args */ public static void main(String[] args){ MontyHallSimulation sim = new MontyHallSimulation(1000, 1000); sim.run(); System.out.println("Choose to stay: percent correct - " + sim.getStayRate()); System.out.println("Choose to change: percent corect - " + sim.getChangeRate()); } } How about the same problem with 1000 doors? You are virtually guaranteed to win if you change your decision (what are the chances you chose the right door in the first place?). A fun statistical problem to investigate and simulate...happy coding! *The chart does not translate well in this format, however the link below contains a better, more readable version to inspect. Links: Wikipedia: The Monty Hall Problem Description of the Monty Hall Problem
http://www.javaprogrammingforums.com/blogs/copeg/13-monty-hall-problem.html
CC-MAIN-2018-22
refinedweb
590
58.18
How to set region of interest ROI?? Hello expert ! I'm a newbie.. Can anyone of you help me how to set region of interest(ROI) of an opening image?? I'm still learning opencv by myself from internet wthout any guidance from my lecturer..so stress trying to learn something new without any help..But Im not give up !!! Actually I want to set a ROI on a wounded skin. Let say I open an image, then I want to set ROI in that image.. My ROI is in wounded area. Here is what Im trying. But there is no ROI..it just open the image.Can anyone of you help me please?? #include <opencv\cv.h> #include <opencv\highgui.h> using namespace cv; int main(int argc, char** argv) { int key = 0; IplImage* img = cvLoadImage( "C:\\Users\\acer\\Documents\\Visual Studio 2012\\Projects\\o1.jpg" ); cvNamedWindow( "Example1", CV_WINDOW_NORMAL ); cvShowImage("Example1", img); cvWaitKey(0); cvReleaseImage( &img ); cvDestroyWindow( "Example1" ); /* sets the Region of Interest*/ cvSetImageROI(img, cvRect(150, 50, 150, 250)); return 0; } here example of the wounded image that i want to set ROI
https://answers.opencv.org/question/25096/how-to-set-region-of-interest-roi/
CC-MAIN-2020-24
refinedweb
185
69.68
Introduction to OpenCV Threshold OpenCV is a technique which helps in processing images. The threshold in OpenCV helps in assigning pixel values. These pixel values are allocated to threshold values. These values are then compared with the threshold values and the image is segmented by setting this value to maximum if pixel value is greater than threshold and if less then it is set to 0. To segment the image this technique is most commonly used and helps in getting better results. Python also provides a function for setting this threshold value which we will use soon as we go ahead. Syntax: Syntax for using this function is as below: cv.threshold(src, thresholdValue, maxValue, threshold type) Parameters: - src: This will be the source image which should be grayscale. Grayscale images are black and white images. - thresholdValue: This will be the value of threshold which will be above the pixel value and below the pixel value. The threshold values will keep changing according to pixels. - maxValue: This will be the maximum value which can be assigned to a pixel. - threshold type: The threshold type is the technique or type which will be applied on the image. We can also add the destination just like we have added the source. The image will be stored at this location once it is processed. How Threshold Function Works in OpenCV? In order to create binary images, the images must be segmented. This segmentation is done by using OpenCV threshold. This thresholding is simple thresholding and adaptive thresholding. The pixel value which is used there has to be a corresponding threshold value which should be the same. The logic will remain same that if pixel value is smaller than threshold then it will be set to 0 and if it is greater than it will be set to maximum value. It can use thresholding techniques like THRESH_BINARY, THRESH_BINARY_INV, THRESH_TRUNC, THRESH_TOZERO, THRESH_TOZERO_INV. These techniques help in creating greyscale images. These techniques works as below: - cv.THRESH_BINARY: The intensity of pixels which is greater than threshold will be set to 255 else it will be black which is 0. - cv.THRESH_BINARY_INV: In this case the intensity of pixels will be the inverse of THRESH_BINARY. That is 0 when pixel value is less than threshold else it will be white. - cv.THRESH_TRUNC: When pixel intensity becomes greater than threshold value it will be truncated to threshold. After this the pixel values should be set to the value which will be same as threshold value and other values will be the same. - cv.THRESH_TOZERO: All pixels having values less than threshold, the pixel intensity for these is set to zero. - cv.THRESH_TOZERO_INV: This will work in the opposite way of above function. Example of OpenCV Threshold Let us have a look at an example and see the working of OpenCV and how it works with different thresholds. The original image which we will process is as below: Let us process this image and turn it into greyscale using OpenCV techniques. Code: # Python program example to understand threshold and its techniques # importing necessary libraries import cv2 import numpy as np # specify the path where the image is and read it using imread img1 = cv2.imread('eduCBA.JPG') # cv2.cvtColor is used with the image to change it to grayscale image imagenew = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) # use different thresholding techniques. #pixels which will have value greater than specified pixel values will be set to 255 ret, thrsh1 = cv2.threshold(imagenew, 150, 255, cv2.THRESH_BINARY) ret, thrsh2 = cv2.threshold(imagenew, 200, 255, cv2.THRESH_BINARY_INV) ret, thrsh3 = cv2.threshold(imagenew, 125, 255, cv2.THRESH_TRUNC) ret, thrsh4 = cv2.threshold(imagenew, 180, 255, cv2.THRESH_TOZERO) ret, thrsh5 = cv2.threshold(imagenew, 160, 0, cv2.THRESH_TOZERO_INV) # display the different images cv2.imshow('Image after applying Binary Threshold', thrsh1) cv2.imshow('Image after applying Binary Threshold Inverted', thrsh2) cv2.imshow('Image after applying Truncated Threshold', thrsh3) cv2.imshow('Image after applying Set to 0', thrsh4) cv2.imshow('Image after applying Set to 0 Inverted', thrsh5) # De-allocate any associated memory usage if cv2.waitKey(0) & 0xff == 27: cv2.destroyAllWindows() titles = ['Original Image','BINARY','BINARY_INV','TRUNC','TOZERO','TOZERO_INV'] images = [imagenew, thrsh1, thrsh2, thrsh3, thrsh4, thrsh5] for i in range(6): plt.subplot(2,3,i+1),plt.imshow(images[i],'gray') plt.title(titles[i]) plt.xticks([]),plt.yticks([]) plt.show() The above code first imports the necessary libraries which are needed to process the image. If you get an error for OpenCV library you can install it by using below command: Code: pip install opencv-python Once this is installed you will get below message: Output: The code then goes to the path where the image is stored. It then reads this image by using the imread function. The cv2.cvtcolor is a function present in cv2 library. It will convert this image to gray. Now we will use the above specified thresholding techniques and then observe the changes which happen to the image once the pixels are rounded off to a particular threshold. We have applied all 5 threshold techniques. We have taken different pixel values of the image before applying the techniques to ‘imagenew’. The last technique sets the threshold to 0. We call these functions by just using the cv2 function followed by the function name which gives us the needed result. Once we apply these, we can print the output by using the imshow function which is also present in the cv2 library. These outputs will be displayed in different windows as well as in the form of grid as we have used the plot library. We clear the memory after we have performed the necessary actions. Output of original image will be now as you can observe below: The above image as you can observe has a greyscale and has been changed as per different pixel and threshold values. Conclusion OpenCV is a deep learning algorithm which helps in processing images before they can be used for classifications using different models and algorithms. The threshold function helps in creating a grayscale image so that this image can have only two outcomes which will be black or white. This helps in processing better and applying the models in an efficient way. Recommended Articles This is a guide to OpenCV Threshold. Here we discuss the introduction, how threshold function works in OpenCV? and example respectively. You may also have a look at the following articles to learn more –
https://www.educba.com/opencv-threshold/?source=leftnav
CC-MAIN-2022-05
refinedweb
1,075
66.33
Subject: Re: [OMPI users] trying to use personal copy of 1.7.4 From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden]) Date: 2014-04-24 14:07:12 On Mar 13, 2014, at 3:15 PM, Ross Boylan <ross_at_[hidden]> wrote: >. If you care for the reason why, it's because many (most? all?) of OMPI's plugins depend on symbols in the main MPI library. Hence, if those symbols can't be found in the process' namespace when OMPI tries to dlopen one of its plugins, that dlopen will fail due to the symbols it needs not being able to be resolved. It seems weird because libmpi.so *is* in the process (obviously), but it just can't be found by the plugin because libmpi.so may well be in a private namespace -- and therefore its symbols are hidden from the plugin that is being dlopened. Weird, but true. I honestly don't know what happens if you have a library opened in a private namespace in a process and then you dlopen it again in a public namespace in the same process. Do you actually get a second copy of libmpi (with a second copy of all of its global symbols), or is the linker smart enough to realize you already have it loaded and effectively move it into the public namespace? I'm not sure. -- Jeff Squyres jsquyres_at_[hidden] For corporate legal information go to:
http://www.open-mpi.org/community/lists/users/2014/04/24252.php
CC-MAIN-2014-52
refinedweb
236
79.8
React Hooks Hooks are the new feature introduced in the React 16.8 version. It allows you your knowledge of React concepts. Motivation Behind Hooks Hooks solves wide range of problems. Hooks also helps to minimize the complexity of components. One of the most common features any application provides is the states. State is a plain JavaScript object used by React to represent an information about the component's current situation. It's managed in the component (just like any variable declared in a function). Before Hooks, when we wanted to maintain states, we need to implement class components with state. Maintaining class components with states is always cumbersome process as it requires to write more code, need to remember syntax etc. There is one sentence used in React Documentation Classes confuse both people and machines We can find when we use class components with states the logic is very much tightly coupled with classes and it is very difficult to reuse the class components, it also not easy to organize the code when we use classes. Classes along with states and other class members requires “this”. So, the understanding of this keyword is very important in case of classes. Actually, this keyword is confusing in JavaScript, we have to understand how this works in JavaScript, which is very different from how it works in most languages. There is one disadvantage we have to remember to bind the event handlers. No Breaking Changes Before we continue more on Hooks, please note that- Hooks are Completely opt-in- We can write hooks in few components without rewriting existing code. 100% backwards-compatible. Hooks don’t contain any breaking changes. Why we should opt- in React Hooks Before Hooks Functional components only having props. It renders the values from props. Let’s consider below scenario where we need to print name of the user- Functional component without using Hooks export function PrintName(props) { return ( <div> <h1> Name: {props.name} </h1> </div> ); } Class Component with Lifecycle method- componentDidMount import React from 'react'; class PrintName extends React.Component { state = { names: [], }; async componentDidMount() { try { const res = await fetch("/api/data"); this.setState({ names: await res.json() }) } catch (e) { console.error(e); } } render() { return <h1> Hello {this.state.names[0]} </h1>; } } Let’s refactor the PrintName component from class to functional component using useEffect Hook import React, { useEffect, useState } from 'react'; function PrintName() { const [names, setNames] = useState([]); useEffect(() => { fetch("/api/data").then( res => setNames(res.data) ) }, []); return ( <> <div>Hello {names[0]}</div> </> ) } From above example we can see when we use functional components without hooks, they are not able to do much, they just print the values received in props. When we used the class component, we can see they are more flexible and we can bring dynamic nature by implementing life cycle methods e.g. componentDidMount. But we also observed it requires more code to accommodate the dynamic nature into class components. Also, as per our discussion in above section we need to write cumbersome code to maintain state in class components. The changes we made to bring state management and dynamic nature we introduced complexity, also these changes made class component more rigid (we cannot reuse or separate the logic). Finally, when we refactor the same example with Hook- useEffect in functional component we can easily achieved same output with very minimal code changes, without any new complexity. When to use a Hooks If you write a functional component, and then you want to add some state to it, previously you do this by converting it into a class. But now you can do it by using a Hook inside the existing function component. Rules of Hooks Hooks are similar to JavaScript functions, but you need to follow these two rules when using them. Hooks rule ensures that all the state-full logic in a component is visible in its source code. These rules are: 1. Only call Hooks at the top level Do not call Hooks inside loops, conditions, or nested functions. Hooks should always be used at the top level of the React functions. This rule ensures that Hooks are called in the same order each time a component render. 2. Only call Hooks from React functions You cannot call Hooks from regular JavaScript functions. Instead, you can call Hooks from React function components. Hooks can also be called from custom Hooks. Pre-requisites for React Hooks Node version 6 or above NPM version 5.2 or above Hooks State Hook state is the new way of declaring a state in React app. Hook uses useState functional component for setting and retrieving state. Hooks Effect The Effect Hook allows us to perform side effects (an action) in the function components. It does not use components lifecycle methods which are available in class components. In other words, Effects Hooks are equivalent to componentDidMount(), componentDidUpdate(), and componentWillUnmount() lifecycle methods. Side effects have common features which the most web applications need to perform, such as: Updating the DOM, Fetching and consuming data from a server API, Setting up a subscription, etc. In React component, there are two types of side effects: Effects Without Cleanup Effects With Cleanup Custom Hooks A custom Hook is a JavaScript function. The name of custom Hook starts with "use" which can call other Hooks. A custom Hook is just like a regular function, and the word "use" in the beginning tells that this function follows the rules of Hooks. Building custom Hooks allows you to extract component logic into reusable functions. References Discussion (4) Thank you for providing all the details. This is really helpful. Very useful and explained in detail! thanks! This is very useful. Thanks for sharing! Very detailed, Thanks for sharing!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/shardulpathak/react-hooks-4c0o
CC-MAIN-2021-43
refinedweb
954
64.3
\input texinfo @c -*- Texinfo -*- @c %**start of header @setfilename make.info @include version.texi @set EDITION 0.70 @set RCSID $Id: make.texi,v 1.45 2006/04/01 06:36:40 psmith Exp $ @settitle GNU @code{make} @setchapternewpage odd @c Combine the variable and function indices: @syncodeindex vr fn @c Combine the program and concept indices: @syncodeindex pg cp @c FSF publishers: format makebook.texi instead of using this file directly. @c ISBN provided by Lisa M. Opus Goldstein <opus@gnu.org>, 5 May 2004 @set ISBN 1-882114-83-5 @c %**end of header @copying This file documents the GNU @code{make} utility, which determines automatically which pieces of a large program need to be recompiled, and issues the commands to recompile them. This is Edition @value{EDITION}, last updated @value{UPDATED}, of @cite{The GNU Make Manual}, for GNU @code{make} version @value{VERSION}. Copyright @copyright{} 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005, 2006 finalout @c ISPELL CHECK: done, 10 June 1993 --roland @c ISPELL CHECK: done, 2000-06-25 --Martin Buchholz @dircategory GNU Packages @direntry * Make: (make). Remake files automatically. @end direntry @iftex @shorttitlepage GNU Make @end iftex @titlepage @title GNU Make @subtitle A Program for Directing Recompilation @subtitle GNU @code{make} Version @value{VERSION} @subtitle @value{UPDATED-MONTH} @author Richard M. Stallman, Roland McGrath, Paul D. Smith @page @vskip 0pt plus 1filll @insertcopying @sp 2 Published by the Free Software Foundation @* 51 Franklin St. -- Fifth Floor @* Boston, MA 02110-1301 USA @* ISBN @value{ISBN} @* @sp 2 Cover art by Etienne Suvasa. @end titlepage @summarycontents @contents @ifnottex @node Top, Overview, (dir), (dir) @top GNU @code{make} @insertcopying @end ifnottex @menu * Overview:: Overview of @code{make}. * Introduction:: An introduction to @code{make}. * Makefiles:: Makefiles tell @code @code{make} on the command line. * Implicit Rules:: Use implicit rules to treat many files alike, based on their file names. * Archives:: How @code{make} can update library archives. * Features:: Features GNU @code{make} has over other @code{make}s. * Missing:: What GNU @code{make} lacks from other @code{make}s. * Makefile Conventions:: Conventions for writing makefiles for GNU programs. * Quick Reference:: A quick reference for experienced users. * Error Messages:: A list of common errors generated by @code{make}. * Complex Makefile:: A real example of a straightforward, but nontrivial, makefile. * GNU Free Documentation License:: License for copying this manual * Concept Index:: Index of Concepts * Name Index:: Index of Functions, Variables, & Directives @detailmenu --- The Detailed Node Listing --- Overview of @code{make} * Preparing:: Preparing and Running Make * Reading:: On Reading this Text * Bugs:: Problems and Bugs An Introduction to Makefiles *. Command Syntax * Splitting Lines:: Breaking long command lines for readability. * Variables in Commands:: Using @code{make} variables in commands. Command Execution * Choosing the Shell:: How @code{make} chooses the shell used to run commands. Recursive Use of @code{make} * @code @code. @end detailmenu @end menu @node Overview, Introduction, Top, Top @comment node-name, next, previous, up @chapter Overview of @code{make} The @code{make} utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them. This manual describes GNU @code{make}, which was implemented by Richard Stallman and Roland McGrath. Development since Version 3.76 has been handled by Paul D. Smith. GNU @code{make} conforms to section 6.2 of @cite{IEEE Standard 1003.2-1992} (POSIX.2). @cindex POSIX @cindex IEEE Standard 1003.2 @cindex standards conformance Our examples show C programs, since they are most common, but you can use @code{make} with any programming language whose compiler can be run with a shell command. Indeed, @code{make} is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change. @menu * Preparing:: Preparing and Running Make * Reading:: On Reading this Text * Bugs:: Problems and Bugs @end menu @node Preparing, Reading, Overview, Overview @ifnottex @heading Preparing and Running Make @end ifnottex To prepare to use @code{make}, you must write a file called the @dfn{makefile} that describes the relationships among files in your program and provides commands for updating each file. In a program, typically, the executable file is updated from object files, which are in turn made by compiling source files.@refill Once a suitable makefile exists, each time you change some source files, this simple shell command: @example make @end example @noindent suffices to perform all necessary recompilations. The @code{make} program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the data base. You can provide command line arguments to @code{make} to control which files should be recompiled, or how. @xref{Running, ,How to Run @code{make}}. @node Reading, Bugs, Preparing, Overview @section How to Read This Manual If you are new to @code{make}, or are looking for a general introduction, read the first few sections of each chapter, skipping the later sections. In each chapter, the first few sections contain introductory or general information and the later sections contain specialized or technical information. @ifnottex The exception is the second chapter, @ref{Introduction, ,An Introduction to Makefiles}, all of which is introductory. @end ifnottex @iftex The exception is @ref{Introduction, ,An Introduction to Makefiles}, all of which is introductory. @end iftex If you are familiar with other @code{make} programs, see @ref{Features, ,Features of GNU @code{make}}, which lists the enhancements GNU @code{make} has, and @ref{Missing, ,Incompatibilities and Missing Features}, which explains the few things GNU @code{make} lacks that others have. For a quick summary, see @ref{Options Summary}, @ref{Quick Reference}, and @ref{Special Targets}. @node Bugs, , Reading, Overview @section Problems and Bugs @cindex reporting bugs @cindex bugs, reporting @cindex problems and bugs, reporting If you have problems with GNU @code : @example bug-make@@gnu.org @end example @noindent or use our Web-based project management tool, at: @example @end example @noindent In addition to the information above, please be careful to include the version number of @code{make} you are using. You can get this information with the command @samp{make --version}. Be sure also to include the type of machine and operating system you are using. One way to obtain this information is by looking at the final lines of output from the command @samp{make --help}. @node Introduction, Makefiles, Overview, Top @comment node-name, next, previous, up @chapter An Introduction to Makefiles You need a file called a @dfn{makefile} to tell @code{make} what to do. Most often, the makefile tells @code{make} how to compile and link a program. @cindex makefile In this chapter, we will discuss a simple makefile that describes how to compile and link a text editor which consists of eight C source files and three header files. The makefile can also tell @code{make} how to run miscellaneous commands when explicitly asked (for example, to remove certain files as a clean-up operation). To see a more complex example of a makefile, see @ref{Complex Makefile}. When @code. @cindex recompilation @cindex editor @menu * @end menu @node Rule Introduction, Simple Makefile, Introduction, Introduction @comment node-name, next, previous, up @section What a Rule Looks Like @cindex rule, introduction to @cindex makefile rule parts @cindex parts of makefile rule A simple makefile consists of ``rules'' with the following shape: @cindex targets, introduction to @cindex prerequisites, introduction to @cindex commands, introduction to @example @group @var{target} @dots{} : @var{prerequisites} @dots{} @var{command} @dots{} @dots{} @end group @end example A @dfn{target} is usually the name of a file that is generated by a program; examples of targets are executable or object files. A target can also be the name of an action to carry out, such as @samp{clean} (@pxref{Phony Targets}). A @dfn{prerequisite} is a file that is used as input to create the target. A target often depends on several files. @cindex tabs in rules A @dfn{command} is an action that @code{make} carries out. A rule may have more than one command, each on its own line. @strong @samp{clean} does not have prerequisites. A @dfn{rule}, then, explains how and when to remake certain files which are the targets of the particular rule. @code{make} carries out the commands on the prerequisites to create or update the target. A rule can also explain how and when to carry out an action. @xref{Rules, , Writing Rules}. A makefile may contain other text besides rules, but a simple makefile need only contain rules. Rules may look somewhat more complicated than shown in this template, but all fit the pattern more or less. @node Simple Makefile, How Make Works, Rule Introduction, Introduction @section A Simple Makefile @cindex simple makefile @cindex makefile, simple Here is a straightforward makefile that describes the way an executable file called @code{edit} depends on eight object files which, in turn, depend on eight C source and three header files. In this example, all the C files include @file{defs.h}, but only those defining editing commands include @file{command.h}, and only low level files that change the editor buffer include @file{buffer.h}. @example @group @end group @end example @noindent We split each long line into two lines using backslash-newline; this is like using one long line, but is easier to read. @cindex continuation lines @cindex @code{\} (backslash), for continuation lines @cindex backslash (@code{\}), for continuation lines @cindex quoting newline, in makefile @cindex newline, quoting, in makefile To use this makefile to create the executable file called @file{edit}, type: @example make @end example To use this makefile to delete the executable file and all the object files from the directory, type: @example make clean @end example In the example makefile, the targets include the executable file @samp{edit}, and the object files @samp{main.o} and @samp{kbd.o}. The prerequisites are files such as @samp{main.c} and @samp{defs.h}. In fact, each @samp{.o} file is both a target and a prerequisite. Commands include @w{@samp{cc -c main.c}} and @w{@samp{cc -c kbd.c}}. When a target is a file, it needs to be recompiled or relinked if any of its prerequisites change. In addition, any prerequisites that are themselves automatically generated should be updated first. In this example, @file{edit} depends on each of the eight object files; the object file @file{main.o} depends on the source file @file{main.c} and on the header file @code{make} does not know anything about how the commands work. It is up to you to supply commands that will update the target file properly. All @code{make} does is execute the commands in the rule you have specified when the target file needs to be updated.) @cindex shell command The target @samp{clean} is not a file, but merely the name of an action. Since you normally do not want to carry out the actions in this rule, @samp{clean} is not a prerequisite of any other rule. Consequently, @code @dfn{phony targets}. @xref{Phony Targets}, for information about this kind of target. @xref{Errors, , Errors in Commands}, to see how to cause @code{make} to ignore errors from @code{rm} or any other command. @cindex @code{clean} target @cindex @code{rm} (shell command) @node How Make Works, Variables Simplify, Simple Makefile, Introduction @comment node-name, next, previous, up @section How @code{make} Processes a Makefile @cindex processing a makefile @cindex makefile, how @code{make} processes By default, @code{make} starts with the first target (not targets whose names start with @samp{.}). This is called the @dfn{default goal}. (@dfn{Goals} are the targets that @code{make} strives ultimately to update. You can override this behavior using the command line (@pxref{Goals, , Arguments to Specify the Goals}) or with the @code{.DEFAULT_GOAL} special variable (@pxref{Special Variables, , Other Special Variables}). @cindex default goal @cindex goal, default @cindex goal In the simple example of the previous section, the default goal is to update the executable program @file{edit}; therefore, we put that rule first. Thus, when you give the command: @example make @end example @noindent @code{make} reads the makefile in the current directory and begins by processing the first rule. In the example, this rule is for relinking @file{edit}; but before @code{make} can fully process this rule, it must process the rules for the files that @file{edit} depends on, which in this case are the object files. Each of these files is processed according to its own rule. These rules say to update each @samp{ @code{make} to do so (with a command such as @w{@code{make clean}}). Before recompiling an object file, @code{make} considers updating its prerequisites, the source file and header files. This makefile does not specify anything to be done for them---the @samp{.c} and @samp{.h} files are not the targets of any rules---so @code{make} does nothing for these files. But @code{make} would update automatically generated C programs, such as those made by Bison or Yacc, by their own rules at this time. After recompiling whichever object files need it, @code{make} decides whether to relink @file{edit}. This must be done if the file @file{edit} does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than @file{edit}, so @file{edit} is relinked. @cindex relinking Thus, if we change the file @file{insert.c} and run @code{make}, @code{make} will compile that file to update @file{insert.o}, and then link @file{edit}. If we change the file @file{command.h} and run @code{make}, @code{make} will recompile the object files @file{kbd.o}, @file{command.o} and @file{files.o} and then link the file @file{edit}. @node Variables Simplify, make Deduces, How Make Works, Introduction @section Variables Make Makefiles Simpler @cindex variables @cindex simplifying with variables In our example, we had to list all the object files twice in the rule for @file{edit} (repeated here): @example @group edit : main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o cc -o edit main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o @end group @end example @cindex @code{objects} Such duplication is error-prone; if a new object file is added to the system, we might add it to one list and forget the other. We can eliminate the risk and simplify the makefile by using a variable. @dfn{Variables} allow a text string to be defined once and substituted in multiple places later (@pxref{Using Variables, ,How to Use Variables}). @cindex @code{OBJECTS} @cindex @code{objs} @cindex @code{OBJS} @cindex @code{obj} @cindex @code{OBJ} It is standard practice for every makefile to have a variable named @code{objects}, @code{OBJECTS}, @code{objs}, @code{OBJS}, @code{obj}, or @code{OBJ} which is a list of all object file names. We would define such a variable @code{objects} with a line like this in the makefile:@refill @example @group objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o @end group @end example @noindent Then, each place we want to put a list of the object file names, we can substitute the variable's value by writing @samp{$(objects)} (@pxref{Using Variables, ,How to Use Variables}). Here is how the complete simple makefile looks when you use a variable for the object files: @example @group) @end group @end example @node make Deduces, Combine By Prerequisite, Variables Simplify, Introduction @section Letting @code{make} Deduce the Commands @cindex deducing commands (implicit rules) @cindex implicit rule, introduction to @cindex rule, implicit, introduction to It is not necessary to spell out the commands for compiling the individual C source files, because @code{make} can figure them out: it has an @dfn{implicit rule} for updating a @samp{.o} file from a correspondingly named @samp{.c} file using a @samp{cc -c} command. For example, it will use the command @samp{cc -c main.c -o main.o} to compile @file{main.c} into @file{main.o}. We can therefore omit the commands from the rules for the object files. @xref{Implicit Rules, ,Using Implicit Rules}.@refill When a @samp{.c} file is used automatically in this way, it is also automatically added to the list of prerequisites. We can therefore omit the @samp{.c} files from the prerequisites, provided we omit the commands. Here is the entire example, with both of these changes, and a variable @code{objects} as suggested above: @example @group) @end group @end example @noindent This is how we would write the makefile in actual practice. (The complications associated with @samp{clean} are described elsewhere. See @ref{Phony Targets}, and @ref{Errors, ,Errors in Commands}.) Because implicit rules are so convenient, they are important. You will see them used frequently.@refill @node Combine By Prerequisite, Cleanup, make Deduces, Introduction @section Another Style of Makefile @cindex combining rules by prerequisite When the objects of a makefile are created only by implicit rules, an alternative style of makefile is possible. In this style of makefile, you group entries by their prerequisites instead of by their targets. Here is what one looks like: @example @group objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o edit : $(objects) cc -o edit $(objects) $(objects) : defs.h kbd.o command.o files.o : command.h display.o insert.o search.o files.o : buffer.h @end group @end example @noindent Here @file{defs.h} is given as a prerequisite of all the object files; @file{command.h} and @file{buffer.h} are prerequisites of the specific object files listed for them. Whether this is better is a matter of taste: it is more compact, but some people dislike it because they find it clearer to put all the information about each target in one place. @node Cleanup, , Combine By Prerequisite, Introduction @section Rules for Cleaning the Directory @cindex cleaning up @cindex removing, to clean up Compiling a program is not the only thing you might want to write rules for. Makefiles commonly tell how to do a few other things besides compiling a program: for example, how to delete all the object files and executables so that the directory is @samp{clean}. @cindex @code{clean} target Here is how we could write a @code{make} rule for cleaning our example editor: @example @group clean: rm edit $(objects) @end group @end example In practice, we might want to write the rule in a somewhat more complicated manner to handle unanticipated situations. We would do this: @example @group .PHONY : clean clean : -rm edit $(objects) @end group @end example @noindent This prevents @code{make} from getting confused by an actual file called @file{clean} and causes it to continue in spite of errors from @code{rm}. (See @ref{Phony Targets}, and @ref{Errors, ,Errors in Commands}.) @noindent A rule such as this should not be placed at the beginning of the makefile, because we do not want it to run by default! Thus, in the example makefile, we want the rule for @code{edit}, which recompiles the editor, to remain the default goal. Since @code{clean} is not a prerequisite of @code{edit}, this rule will not run at all if we give the command @samp{make} with no arguments. In order to make the rule run, we have to type @samp{make clean}. @xref{Running, ,How to Run @code{make}}. @node Makefiles, Rules, Introduction, Top @chapter Writing Makefiles @cindex makefile, how to write The information that tells @code{make} how to recompile a system comes from reading a data base called the @dfn. @end menu @node Makefile Contents, Makefile Names, Makefiles, Makefiles @section What Makefiles Contain Makefiles contain five kinds of things: @dfn{explicit rules}, @dfn{implicit rules}, @dfn{variable definitions}, @dfn{directives}, and @dfn{comments}. Rules, variables, and directives are described at length in later chapters.@refill @itemize @bullet @cindex rule, explicit, definition of @cindex explicit rule, definition of @item An @dfn{explicit rule} says when and how to remake one or more files, called the rule's @dfn{targets}. It lists the other files that the targets depend on, called the @dfn{prerequisites} of the target, and may also give commands to use to create or update the targets. @xref{Rules, ,Writing Rules}. @cindex rule, implicit, definition of @cindex implicit rule, definition of @item An @dfn{implicit rule} says when and how to remake a class of files based on their names. It describes how a target may depend on a file with a name similar to the target and gives commands to create or update such a target. @xref{Implicit Rules, ,Using Implicit Rules}. @cindex variable definition @item A @dfn{variable definition} is a line that specifies a text string value for a variable that can be substituted into the text later. The simple makefile example shows a variable definition for @code{objects} as a list of all object files (@pxref{Variables Simplify, , Variables Make Makefiles Simpler}). @cindex directive @item A @dfn{directive} is a command for @code{make} to do something special while reading the makefile. These include: @itemize @bullet @item Reading another makefile (@pxref{Include, ,Including Other Makefiles}). @item Deciding (based on the values of variables) whether to use or ignore a part of the makefile (@pxref{Conditionals, ,Conditional Parts of Makefiles}). @item Defining a variable from a verbatim string containing multiple lines (@pxref{Defining, ,Defining Variables Verbatim}). @end itemize @cindex comments, in makefile @cindex @code{#} (comments), in makefile @item @samp{#} in a line of a makefile starts a @dfn @code{#}, escape it with a backslash (e.g., @code{\#}). @code{define} directive, comments are not ignored during the definition of the variable, but rather kept intact in the value of the variable. When the variable is expanded they will either be treated as @code{make} comments or as command script text, depending on the context in which the variable is evaluated. @end itemize @node Makefile Names, Include, Makefile Contents, Makefiles @section What Name to Give Your Makefile @cindex makefile name @cindex name of makefile @cindex default makefile name @cindex file name of makefile @c following paragraph rewritten to avoid overfull hbox By default, when @code{make} looks for the makefile, it tries the following names, in order: @file{GNUmakefile}, @file{makefile} and @file{Makefile}.@refill @findex Makefile @findex GNUmakefile @findex makefile @cindex @code{README} Normally you should call your makefile either @file{makefile} or @file{Makefile}. (We recommend @file{Makefile} because it appears prominently near the beginning of a directory listing, right near other important files such as @file{README}.) The first name checked, @file{GNUmakefile}, is not recommended for most makefiles. You should use this name if you have a makefile that is specific to GNU @code{make}, and will not be understood by other versions of @code{make}. Other @code{make} programs look for @file{makefile} and @file{Makefile}, but not @file{GNUmakefile}. If @code{make} finds none of these names, it does not use any makefile. Then you must specify a goal with a command argument, and @code{make} will attempt to figure out how to remake it using only its built-in implicit rules. @xref{Implicit Rules, ,Using Implicit Rules}. @cindex @code{-f} @cindex @code{--file} @cindex @code{--makefile} If you want to use a nonstandard name for your makefile, you can specify the makefile name with the @samp{-f} or @samp{--file} option. The arguments @w{@samp{-f @var{name}}} or @w{@samp{--file=@var{name}}} tell @code{make} to read the file @var{name} as the makefile. If you use more than one @samp{-f} or @samp{--file} option, you can specify several makefiles. All the makefiles are effectively concatenated in the order specified. The default makefile names @file{GNUmakefile}, @file{makefile} and @file{Makefile} are not checked automatically if you specify @samp{-f} or @samp{--file}.@refill @cindex specifying makefile name @cindex makefile name, how to specify @cindex name of makefile, how to specify @cindex file name of makefile, how to specify @node Include, MAKEFILES Variable, Makefile Names, Makefiles @section Including Other Makefiles @cindex including other makefiles @cindex makefile, including @findex include The @code{include} directive tells @code{make} to suspend reading the current makefile and read one or more other makefiles before continuing. The directive is a line in the makefile that looks like this: @example include @var{filenames}@dots{} @end example @noindent @var{filenames} can contain shell file name patterns. If @var{filenames} is empty, nothing is included and no error is printed. @cindex shell file name pattern (in @code{include}) @cindex shell wildcards (in @code{include}) @cindex wildcard, in @code{include} Extra spaces are allowed and ignored at the beginning of the line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command line.) Whitespace is required between @code{include} and the file names, and between file names; extra whitespace is ignored there and at the end of the directive. A comment starting with @samp{#} is allowed at the end of the line. If the file names contain any variable or function references, they are expanded. @xref{Using Variables, ,How to Use Variables}. For example, if you have three @file{.mk} files, @file{a.mk}, @file{b.mk}, and @file{c.mk}, and @code{$(bar)} expands to @code{bish bash}, then the following expression @example include foo *.mk $(bar) @end example is equivalent to @example include foo a.mk b.mk c.mk bish bash @end example When @code{make} processes an @code{include} directive, it suspends reading of the containing makefile and reads from each listed file in turn. When that is finished, @code{make} resumes reading the makefile in which the directive appears. One occasion for using @code{include} directives is when several programs, handled by individual makefiles in various directories, need to use a common set of variable definitions (@pxref{Setting, ,Setting Variables}) or pattern rules (@pxref{Pattern Rules, @code{make}. @xref{Automatic Prerequisites}. @cindex prerequisites, automatic generation @cindex automatic generation of prerequisites @cindex generating prerequisites automatically @cindex @code{-I} @cindex @code{--include-dir} @cindex included makefiles, default directories @cindex default directories for included makefiles @findex /usr/gnu/include @findex /usr/local/include @findex /usr/include If the specified name does not start with a slash, and the file is not found in the current directory, several other directories are searched. First, any directories you have specified with the @samp{-I} or @samp{--include-dir} option are searched (@pxref{Options Summary, ,Summary of Options}). Then the following directories (if they exist) are searched, in this order: @file{@var{prefix}/include} (normally @file{/usr/local/include} @footnote{GNU Make compiled for MS-DOS and MS-Windows behaves as if @var{prefix} has been defined to be the root of the DJGPP tree hierarchy.}) @file{/usr/gnu/include}, @file{/usr/local/include}, @file{/usr/include}. If an included makefile cannot be found in any of these directories, a warning message is generated, but it is not an immediately fatal error; processing of the makefile containing the @code{include} continues. Once it has finished reading makefiles, @code{make} will try to remake any that are out of date or don't exist. @xref{Remaking Makefiles, ,How Makefiles Are Remade}. Only after it has tried to find a way to remake a makefile and failed, will @code{make} diagnose the missing makefile as a fatal error. If you want @code{make} to simply ignore a makefile which does not exist and cannot be remade, with no error message, use the @w{@code{-include}} directive instead of @code{include}, like this: @example -include @var{filenames}@dots{} @end example This acts like @code{include} in every way except that there is no error (not even a warning) if any of the @var{filenames} do not exist. For compatibility with some other @code{make} implementations, @code{sinclude} is another name for @w{@code{-include}}. @node MAKEFILES Variable, MAKEFILE_LIST Variable, Include, Makefiles @section The Variable @code{MAKEFILES} @cindex makefile, and @code{MAKEFILES} variable @cindex including (@code{MAKEFILES} variable) @vindex MAKEFILES If the environment variable @code{MAKEFILES} is defined, @code{make} considers its value as a list of names (separated by whitespace) of additional makefiles to be read before the others. This works much like the @code{include} directive: various directories are searched for those files (@pxref{Include, ,Including Other Makefiles}). In addition, the default goal is never taken from one of these makefiles and it is not an error if the files listed in @code{MAKEFILES} are not found.@refill @cindex recursion, and @code{MAKEFILES} variable The main use of @code{MAKEFILES} is in communication between recursive invocations of @code{make} (@pxref{Recursion, ,Recursive Use of @code{make}}). It usually is not desirable to set the environment variable before a top-level invocation of @code{make}, because it is usually better not to mess with a makefile from outside. However, if you are running @code{make} without a specific makefile, a makefile in @code{MAKEFILES} can do useful things to help the built-in implicit rules work better, such as defining search paths (@pxref{Directory Search}). Some users are tempted to set @code{MAKEFILES} in the environment automatically on login, and program makefiles to expect this to be done. This is a very bad idea, because such makefiles will fail to work if run by anyone else. It is much better to write explicit @code{include} directives in the makefiles. @xref{Include, , Including Other Makefiles}. @node MAKEFILE_LIST Variable, Special Variables, MAKEFILES Variable, Makefiles @comment node-name, next, previous, up @section The Variable @code{MAKEFILE_LIST} @cindex makefiles, and @code{MAKEFILE_LIST} variable @cindex including (@code{MAKEFILE_LIST} variable) @vindex MAKEFILE_LIST As @code{make} reads various makefiles, including any obtained from the @code{MAKEFILES} variable, the command line, the default files, or from @code{include} directives, their names will be automatically appended to the @code{MAKEFILE_LIST} variable. They are added right before @code{make} begins to parse them. This means that if the first thing a makefile does is examine the last word in this variable, it will be the name of the current makefile. Once the current makefile has used @code{include}, however, the last word will be the just-included makefile. If a makefile named @code{Makefile} has this content: @example @group name1 := $(lastword $(MAKEFILE_LIST)) include inc.mk name2 := $(lastword $(MAKEFILE_LIST)) all: @@echo name1 = $(name1) @@echo name2 = $(name2) @end group @end example @noindent then you would expect to see this output: @example @group name1 = Makefile name2 = inc.mk @end group @end example @xref{Text Functions}, for more information on the @code{word} and @code{words} functions used above. @xref{Flavors, The Two Flavors of Variables}, for more information on simply-expanded (@code{:=}) variable definitions. @node Special Variables, Remaking Makefiles, MAKEFILE_LIST Variable, Makefiles @comment node-name, next, previous, up @section Other Special Variables @cindex makefiles, and special variables @cindex special variables GNU @code{make} also supports other special variables. Unless otherwise documented here, these values lose their special properties if they are set by a makefile or on the command line. @table @code @vindex .DEFAULT_GOAL @r{(define default goal)} @item .DEFAULT_GOAL Sets the default goal to be used if no targets were specified on the command line (@pxref{Goals, , Arguments to Specify the Goals}). The @code{.DEFAULT_GOAL} variable allows you to discover the current default goal, restart the default goal selection algorithm by clearing its value, or to explicitly set the default goal. The following example illustrates these cases: @example @group # Query the default goal. ifeq ($(.DEFAULT_GOAL),) $(warning no default goal is set) endif .PHONY: foo foo: ; @@echo $@@ $(warning default goal is $(.DEFAULT_GOAL)) # Reset the default goal. .DEFAULT_GOAL := .PHONY: bar bar: ; @@echo $@@ $(warning default goal is $(.DEFAULT_GOAL)) # Set our own. .DEFAULT_GOAL := foo @end group @end example This makefile prints: @example @group no default goal is set default goal is foo default goal is bar foo @end group @end example Note that assigning more than one target name to @code{.DEFAULT_GOAL} is illegal and will result in an error. @vindex MAKE_RESTARTS @r{(number of times @code{make} has restarted)} @item MAKE_RESTARTS This variable is set only if this instance of @code{make} has restarted (@pxref{Remaking Makefiles, , How Makefiles Are Remade}): it will contain the number of times this instance has restarted. Note this is not the same as recursion (counted by the @code{MAKELEVEL} variable). You should not set, modify, or export this variable. @vindex .VARIABLES @r{(list of variables)} @item .VARIABLES Expands to a list of the @emph{names} of all global variables defined so far. This includes variables which have empty values, as well as built-in variables (@pxref{Implicit Variables, , Variables Used by Implicit Rules}), but does not include any variables which are only defined in a target-specific context. Note that any value you assign to this variable will be ignored; it will always return its special value. @c @vindex .TARGETS @r{(list of targets)} @c @item .TARGETS @c The second special variable is @code{.TARGETS}. When expanded, the @c value consists of a list of all targets defined in all makefiles read @c up until that point. Note it's not enough for a file to be simply @c mentioned in the makefile to be listed in this variable, even if it @c would match an implicit rule and become an ``implicit target''. The @c file must appear as a target, on the left-hand side of a ``:'', to be @c considered a target for the purposes of this variable. @vindex .FEATURES @r{(list of supported features)} @item .FEATURES Expands to a list of special features supported by this version of @code{make}. Possible values include: @table @samp @item archives Supports @code{ar} (archive) files using special filename syntax. @xref{Archives, ,Using @code{make} to Update Archive Files}. @item check-symlink Supports the @code{-L} (@code{--check-symlink-times}) flag. @xref{Options Summary, ,Summary of Options}. @item else-if Supports ``else if'' non-nested conditionals. @xref{Conditional Syntax, ,Syntax of Conditionals}. @item jobserver Supports ``job server'' enhanced parallel builds. @xref{Parallel, ,Parallel Execution}. @item second-expansion Supports secondary expansion of prerequisite lists. @item order-only Supports order-only prerequisites. @xref{Prerequisite Types, ,Types of Prerequisites}. @item target-specific Supports target-specific and pattern-specific variable assignments. @xref{Target-specific, ,Target-specific Variable Values}. @end table @vindex .INCLUDE_DIRS @r{(list of include directories)} @item .INCLUDE_DIRS Expands to a list of directories that @code{make} searches for included makefiles (@pxref{Include, , Including Other Makefiles}). @end table @node Remaking Makefiles, Overriding Makefiles, Special Variables, Makefiles @section How Makefiles Are Remade @cindex updating makefiles @cindex remaking makefiles @cindex makefile, remaking of Sometimes makefiles can be remade from other files, such as RCS or SCCS files. If a makefile can be remade from other files, you probably want @code{make} to get an up-to-date version of the makefile to read in. To this end, after reading in all makefiles, @code{make} will consider each as a goal target and attempt to update it. If a makefile has a rule which says how to update it (found either in that very makefile or in another one) or if an implicit rule applies to it (@pxref{Implicit Rules, ,Using Implicit Rules}), it will be updated if necessary. After all makefiles have been checked, if any have actually been changed, @code{make} starts with a clean slate and reads all the makefiles over again. (It will also attempt to update each of them over again, but normally this will not change them again, since they are already up to date.)@refill If you know that one or more of your makefiles cannot be remade and you want to keep @code{make} from performing an implicit rule search on them, perhaps for efficiency reasons, you can use any normal method of preventing implicit rule lookup to do so. For example, you can write an explicit rule with the makefile as the target, and an empty command string (@pxref{Empty Commands, ,Using Empty Commands}). If the makefiles specify a double-colon rule to remake a file with commands but no prerequisites, that file will always be remade (@pxref{Double-Colon}). In the case of makefiles, a makefile that has a double-colon rule with commands but no prerequisites will be remade every time @code{make} is run, and then again after @code{make} starts over and reads the makefiles in again. This would cause an infinite loop: @code{make} would constantly remake the makefile, and never do anything else. So, to avoid this, @code{make} will @strong{not} attempt to remake makefiles which are specified as targets of a double-colon rule with commands but no prerequisites.@refill If you do not specify any makefiles to be read with @samp{-f} or @samp{--file} options, @code{make} will try the default makefile names; @pxref{Makefile Names, ,What Name to Give Your Makefile}. Unlike makefiles explicitly requested with @samp{-f} or @samp{--file} options, @code{make} is not certain that these makefiles should exist. However, if a default makefile does not exist but can be created by running @code{make} rules, you probably want the rules to be run so that the makefile can be used. Therefore, if none of the default makefiles exists, @code{make} will try to make each of them in the same order in which they are searched for (@pxref{Makefile Names, ,What Name to Give Your Makefile}) until it succeeds in making one, or it runs out of names to try. Note that it is not an error if @code{make} cannot find or make any makefile; a makefile is not always necessary.@refill When you use the @samp{-t} or @samp{--touch} option (@pxref{Instead of Execution, ,Instead of Executing the Commands}), you would not want to use an out-of-date makefile to decide which targets to touch. So the @samp{-t} option has no effect on updating makefiles; they are really updated even if @samp{-t} is specified. Likewise, @samp{-q} (or @samp{--question}) and @samp{-n} (or @samp{--just-print}) do not prevent updating of makefiles, because an out-of-date makefile would result in the wrong output for other targets. Thus, @samp{make -f mfile -n foo} will update @file{mfile}, read it in, and then print the commands to update @file{foo} and its prerequisites without running them. The commands printed for @file{foo} will be those specified in the updated contents of @file @samp{-t} and so on do apply to them. Thus, @samp{make -f mfile -n mfile foo} would read the makefile @file{mfile}, print the commands needed to update it without actually running them, and then print the commands needed to update @file{foo} without running them. The commands for @file{foo} will be those specified by the existing contents of @file{mfile}. @node Overriding Makefiles, Reading Makefiles, Remaking Makefiles, Makefiles @section Overriding Part of Another Makefile @cindex overriding makefiles @cindex makefile, overriding Sometimes it is useful to have a makefile that is mostly just like another makefile. You can often use the @samp{include} directive to include one in the other, and add more targets or variable definitions. However, if the two makefiles give different commands for the same target, @code{make} will not let you just do this. But there is another way. @cindex match-anything rule, used to override In the containing makefile (the one that wants to include the other), you can use a match-anything pattern rule to say that to remake any target that cannot be made from the information in the containing makefile, @code{make} should look in another makefile. @xref{Pattern Rules}, for more information on pattern rules. For example, if you have a makefile called @file{Makefile} that says how to make the target @samp{foo} (and other targets), you can write a makefile called @file{GNUmakefile} that contains: @example foo: frobnicate > foo %: force @@$(MAKE) -f Makefile $@@ force: ; @end example If you say @samp{make foo}, @code{make} will find @file{GNUmakefile}, read it, and see that to make @file{foo}, it needs to run the command @samp{frobnicate > foo}. If you say @samp{make bar}, @code{make} will find no way to make @file{bar} in @file{GNUmakefile}, so it will use the commands from the pattern rule: @samp{make -f Makefile bar}. If @file{Makefile} provides a rule for updating @file{bar}, @code{make} will apply the rule. And likewise for any other target that @file{GNUmakefile} does not say how to make. The way this works is that the pattern rule has a pattern of just @samp{%}, so it matches any target whatever. The rule specifies a prerequisite @file{force}, to guarantee that the commands will be run even if the target file already exists. We give @file{force} target empty commands to prevent @code{make} from searching for an implicit rule to build it---otherwise it would apply the same match-anything rule to @file{force} itself and create a prerequisite loop! @node Reading Makefiles, Secondary Expansion, Overriding Makefiles, Makefiles @section How @code{make} Reads a Makefile @cindex reading makefiles @cindex makefile, parsing GNU @code, @code @dfn{immediate} if it happens during the first phase: in this case @code{make} will expand any variables or functions in that section of a construct as the makefile is parsed. We say that expansion is @dfn. @subheading Variable Assignment @cindex +=, expansion @cindex =, expansion @cindex ?=, expansion @cindex +=, expansion @cindex define, expansion Variable definitions are parsed as follows: @example @var{immediate} = @var{deferred} @var{immediate} ?= @var{deferred} @var{immediate} := @var{immediate} @var{immediate} += @var{deferred} or @var{immediate} define @var{immediate} @var{deferred} endef @end example For the append operator, @samp{+=}, the right-hand side is considered immediate if the variable was previously set as a simple variable (@samp{:=}), and deferred otherwise. @subheading Conditional Statements @cindex ifdef, expansion @cindex ifeq, expansion @cindex ifndef, expansion @cindex ifneq, expansion All instances of conditional syntax are parsed immediately, in their entirety; this includes the @code{ifdef}, @code{ifeq}, @code{ifndef}, and @code{ifneq} forms. Of course this means that automatic variables cannot be used in conditional statements, as automatic variables are not set until the command script for that rule is invoked. If you need to use automatic variables in a conditional you @emph{must} use shell conditional syntax, in your command script proper, for these tests, not @code{make} conditionals. @subheading Rule Definition @cindex target, expansion @cindex prerequisite, expansion @cindex implicit rule, expansion @cindex pattern rule, expansion @cindex explicit rule, expansion A rule is always expanded the same way, regardless of the form: @example @var{immediate} : @var{immediate} ; @var{deferred} @var{deferred} @end example That is, the target and prerequisite sections are expanded immediately, and the commands used to construct the target are always deferred. This general rule is true for explicit rules, pattern rules, suffix rules, static pattern rules, and simple prerequisite definitions. @node Secondary Expansion, , Reading Makefiles, Makefiles @section Secondary Expansion @cindex secondary expansion @cindex expansion, secondary @findex .SECONDEXPANSION In the previous section we learned that GNU @code{make} works in two distinct phases: a read-in phase and a target-update phase (@pxref{Reading Makefiles, , How @code{make} Reads a Makefile}). GNU make also has the ability to enable a @emph{second expansion} of the prerequisites (only) for some or all targets defined in the makefile. In order for this second expansion to occur, the special target @emph{second time}. In most circumstances this secondary expansion will have no effect, since all variable and function references will have been expanded during the initial parsing of the makefiles. In order to take advantage of the secondary expansion phase of the parser, then, it's necessary to @emph{escape} the variable or function reference in the makefile. In this case the first expansion merely un-escapes the reference but doesn't expand it, and expansion is left to the secondary expansion phase. For example, consider this makefile: @example .SECONDEXPANSION: ONEVAR = onefile TWOVAR = twofile myfile: $(ONEVAR) $$(TWOVAR) @end example After the first expansion phase the prerequisites list of the @file{myfile} target will be @code{onefile} and @code{$(TWOVAR)}; the first (unescaped) variable reference to @var{ONEVAR} is expanded, while the second (escaped) variable reference is simply unescaped, without being recognized as a variable reference. Now during the secondary expansion the first word is expanded again but since it contains no variable or function references it remains the static value @file{onefile}, while the second word is now a normal reference to the variable @var{TWOVAR}, which is expanded to the value @file{twofile}. The final result is that there are two prerequisites, @file{onefile} and @file{twofile}. Obviously, this is not a very interesting case since the same result could more easily have been achieved simply by having both variables appear, unescaped, in the prerequisites list. One difference becomes apparent if the variables are reset; consider this example: @example .SECONDEXPANSION: AVAR = top onefile: $(AVAR) twofile: $$(AVAR) AVAR = bottom @end example Here the prerequisite of @file{onefile} will be expanded immediately, and resolve to the value @file{top}, while the prerequisite of @file{twofile} will not be full expanded until the secondary expansion and yield a value of @file{bottom}. This is marginally more exciting, but the true power of this feature only becomes apparent when you discover that secondary expansions always take place within the scope of the automatic variables for that target. This means that you can use variables such as @code{$@@}, @code{$*}, etc. during the second expansion and they will have their expected values, just as in the command script. All you have to do is defer the expansion by escaping the @code{$}. Also, secondary expansion occurs for both explicit and implicit (pattern) rules. Knowing this, the possible uses for this feature increase dramatically. For example: @example .SECONDEXPANSION: main_OBJS := main.o try.o test.o lib_OBJS := lib.o api.o main lib: $$($$@@_OBJS) @end example Here, after the initial expansion the prerequisites of both the @file{main} and @file{lib} targets will be @code{$($@@_OBJS)}. During the secondary expansion, the @code{$@@} variable is set to the name of the target and so the expansion for the @file{main} target will yield @code{$(main_OBJS)}, or @code{main.o try.o test.o}, while the secondary expansion for the @file{lib} target will yield @code{$(lib_OBJS)}, or @code{lib.o api.o}. You can also mix functions here, as long as they are properly escaped: @example main_SRCS := main.c try.c test.c lib_SRCS := lib.c api.c .SECONDEXPANSION: main lib: $$(patsubst %.c,%.o,$$($$@@_SRCS)) @end example This version allows users to specify source files rather than object files, but gives the same resulting prerequisites list as the previous example. Evaluation of automatic variables during the secondary expansion phase, especially of the target name variable @code{$$@@}, behaves similarly to evaluation within command scripts. However, there are some subtle differences and ``corner cases'' which come into play for the different types of rule definitions that @code{make} understands. The subtleties of using the different automatic variables are described below. @subheading Secondary Expansion of Explicit Rules @cindex secondary expansion and explicit rules @cindex explicit rules, secondary expansion of During the secondary expansion of explicit rules, @code{$$@@} and @code{$$%} evaluate, respectively, to the file name of the target and, when the target is an archive member, the target member name. The @code{$$<} variable evaluates to the first prerequisite in the first rule for this target. @code{$$^} and @code{$$+} evaluate to the list of all prerequisites of rules @emph{that have already appeared} for the same target (@code{$$+} with repetitions and @code{$$^} without). The following example will help illustrate these behaviors: @example .SECONDEXPANSION: foo: foo.1 bar.1 $$< $$^ $$+ # line #1 foo: foo.2 bar.2 $$< $$^ $$+ # line #2 foo: foo.3 bar.3 $$< $$^ $$+ # line #3 @end example In the first prerequisite list, all three variables (@code{$$<}, @code{$$^}, and @code{$$+}) expand to the empty string. In the second, they will have values @code{foo.1}, @code{foo.1 bar.1}, and @code{foo.1 bar.1} respectively. In the third they will have values @code{foo.1}, @code{foo.1 bar.1 foo.2 bar.2}, and @code{foo.1 bar.1 foo.2 bar.2} respectively. Rules undergo secondary expansion in makefile order, except that the rule with the command script is always evaluated last. The variables @code{$$?} and @code{$$*} are not available and expand to the empty string. @subheading Secondary Expansion of Static Pattern Rules @cindex secondary expansion and static pattern rules @cindex static pattern rules, secondary expansion of Rules for secondary expansion of static pattern rules are identical to those for explicit rules, above, with one exception: for static pattern rules the @code{$$*} variable is set to the pattern stem. As with explicit rules, @code{$$?} is not available and expands to the empty string. @subheading Secondary Expansion of Implicit Rules @cindex secondary expansion and implicit rules @cindex implicit rules, secondary expansion of As @code{make} searches for an implicit rule, it substitutes the stem and then performs secondary expansion for every rule with a matching target pattern. The value of the automatic variables is derived in the same fashion as for static pattern rules. As an example: @example .SECONDEXPANSION: foo: bar foo foz: fo%: bo% %oo: $$< $$^ $$+ $$* @end example When the implicit rule is tried for target @file{foo}, @code{$$<} expands to @file{bar}, @code{$$^} expands to @file{bar boo}, @code{$$+} also expands to @file{bar boo}, and @code{$$*} expands to @file{f}. Note that the directory prefix (D), as described in @ref{Implicit Rule Search, ,Implicit Rule Search Algorithm}, is appended (after expansion) to all the patterns in the prerequisites list. As an example: @example .SECONDEXPANSION: /tmp/foo.o: %.o: $$(addsuffix /%.c,foo bar) foo.h @end example The prerequisite list after the secondary expansion and directory prefix reconstruction will be @file{/tmp/foo/foo.c /tmp/var/bar/foo.c foo.h}. If you are not interested in this reconstruction, you can use @code{$$*} instead of @code{%} in the prerequisites list. @node Rules, Commands, Makefiles, Top @chapter Writing Rules @cindex writing rules @cindex rule, how to write @cindex target @cindex prerequisite A @dfn{rule} appears in the makefile and says when and how to remake certain files, called the rule's @dfn{targets} (most often only one per rule). It lists the other files that are the @dfn{prerequisites} of the target, and @dfn{commands} to use to create or update the target. @cindex default goal @cindex goal, default The order of rules is not significant, except for determining the @dfn{default goal}: the target for @code, @samp{/}, as well; and, a target that defines a pattern rule has no effect on the default goal. (@xref{Pattern Rules, ,Defining and Redefining Pattern Rules}.) Therefore, we usually write the makefile so that the first rule is the one for compiling the entire program or all the programs described by the makefile (often with a target called @samp{all}). @xref{Goals, ,Arguments to Specify the. @end menu @ifnottex @node Rule Example, Rule Syntax, Rules, Rules @section Rule Example Here is an example of a rule: @example foo.o : foo.c defs.h # module for twiddling the frobs cc -c -g foo.c @end example Its target is @file{foo.o} and its prerequisites are @file{foo.c} and @file{defs.h}. It has one command, which is @samp{cc -c -g foo.c}. The command line starts with a tab to identify it as a command. This rule says two things: @itemize @bullet @item How to decide whether @file{foo.o} is out of date: it is out of date if it does not exist, or if either @file{foo.c} or @file{defs.h} is more recent than it. @item How to update the file @file{foo.o}: by running @code{cc} as stated. The command does not explicitly mention @file{defs.h}, but we presume that @file{foo.c} includes it, and that that is why @file{defs.h} was added to the prerequisites. @end itemize @end ifnottex @node Rule Syntax, Prerequisite Types, Rule Example, Rules @section Rule Syntax @cindex rule syntax @cindex syntax of rules In general, a rule looks like this: @example @var{targets} : @var{prerequisites} @var{command} @dots{} @end example @noindent or like this: @example @var{targets} : @var{prerequisites} ; @var{command} @var{command} @dots{} @end example @cindex targets @cindex rule targets The @var{targets} are file names, separated by spaces. Wildcard characters may be used (@pxref{Wildcards, ,Using Wildcard Characters in File Names}) and a name of the form @file{@var{a}(@var{m})} represents member @var{m} in archive file @var{a} (@pxref{Archive Members, ,Archive Members as Targets}). Usually there is only one target per rule, but occasionally there is a reason to have more (@pxref{Multiple Targets, , Multiple Targets in a Rule}).@refill @cindex commands @cindex tab character (in commands) The @var. @xref{Commands, ,Writing the Commands in Rules}. @cindex dollar sign (@code{$}), in rules @cindex @code{$}, in rules @cindex rules, and @code{$} Because dollar signs are used to start @code{make} variable references, if you really want a dollar sign in a target or prerequisite you must write two of them, @samp{$$} (@pxref{Using Variables, ,How to Use Variables}). If you have enabled secondary expansion (@pxref{Secondary Expansion}) and you want a literal dollar sign in the prerequisites lise, you must actually write @emph{four} dollar signs (@samp{$$$$}). You may split a long line by inserting a backslash followed by a newline, but this is not required, as @code{make} places no limit on the length of a line in a makefile. A rule tells @code{make} two things: when the targets are out of date, and how to update them when necessary. @cindex prerequisites @cindex rule prerequisites The criterion for being out of date is specified in terms of the @var{prerequisites}, which consist of file names separated by spaces. (Wildcards and archive members (@pxref @var{commands}. These are lines to be executed by the shell (normally @samp{sh}), but with some extra features (@pxref{Commands, ,Writing the Commands in Rules}). @node Prerequisite Types, Wildcards, Rule Syntax, Rules @comment node-name, next, previous, up @section Types of Prerequisites @cindex prerequisite types @cindex types of prerequisites @cindex prerequisites, normal @cindex normal prerequisites @cindex prerequisites, order-only @cindex order-only prerequisites There are actually two different types of prerequisites understood by GNU @code{make}: normal prerequisites such as described in the previous section, and @dfn @emph{without} forcing the target to be updated if one of those rules is executed. In that case, you want to define @dfn{order-only} prerequisites. Order-only prerequisites can be specified by placing a pipe symbol (@code{|}) in the prerequisites list: any prerequisites to the left of the pipe symbol are normal; any prerequisites to the right are order-only: @example @var{targets} : @var{normal-prerequisites} | @var{order-only-prerequisites} @end example). @node Wildcards, Directory Search, Prerequisite Types, Rules @section Using Wildcard Characters in File Names @cindex wildcard @cindex file name with wildcards @cindex globbing (wildcards) @cindex @code{*} (wildcard character) @cindex @code{?} (wildcard character) @cindex @code{[@dots{}]} (wildcard characters) A single file name can specify many files using @dfn{wildcard characters}. The wildcard characters in @code{make} are @samp{*}, @samp{?} and @samp{[@dots{}]}, the same as in the Bourne shell. For example, @file{*.c} specifies a list of all the files (in the working directory) whose names end in @samp{.c}.@refill @cindex @code{~} (tilde) @cindex tilde (@code{~}) @cindex home directory The character @samp{~} at the beginning of a file name also has special significance. If alone, or followed by a slash, it represents your home directory. For example @file{~/bin} expands to @file{/home/you/bin}. If the @samp{~} is followed by a word, the string represents the home directory of the user named by that word. For example @file{~john/bin} expands to @file{/home/john/bin}. On systems which don't have a home directory for each user (such as MS-DOS or MS-Windows), this functionality can be simulated by setting the environment variable @var{HOME}.@refill Wildcard expansion is performed by @code{make} automatically in targets and in prerequisites. In commands the shell is responsible for wildcard expansion. In other contexts, wildcard expansion happens only if you request it explicitly with the @code{wildcard} function. The special significance of a wildcard character can be turned off by preceding it with a backslash. Thus, @file{foo\*bar} would refer to a specific file whose name consists of @samp{foo}, an asterisk, and @samp{bar}.@refill @menu * Wildcard Examples:: Several examples * Wildcard Pitfall:: Problems to avoid. * Wildcard Function:: How to cause wildcard expansion where it does not normally take place. @end menu @node Wildcard Examples, Wildcard Pitfall, Wildcards, Wildcards @subsection Wildcard Examples Wildcards can be used in the commands of a rule, where they are expanded by the shell. For example, here is a rule to delete all the object files: @example @group clean: rm -f *.o @end group @end example @cindex @code{rm} (shell command) Wildcards are also useful in the prerequisites of a rule. With the following rule in the makefile, @samp{make print} will print all the @samp{.c} files that have changed since the last time you printed them: @example print: *.c lpr -p $? touch print @end example @cindex @code{print} target @cindex @code{lpr} (shell command) @cindex @code{touch} (shell command) @noindent This rule uses @file{print} as an empty target file; see @ref{Empty Targets, ,Empty Target Files to Record Events}. (The automatic variable @samp{$?} is used to print only those files that have changed; see @ref{Automatic Variables}.)@refill Wildcard expansion does not happen when you define a variable. Thus, if you write this: @example objects = *.o @end example @noindent then the value of the variable @code{objects} is the actual string @samp{*.o}. However, if you use the value of @code{objects} in a target, prerequisite or command, wildcard expansion will take place at that time. To set @code{objects} to the expansion, instead use: @example objects := $(wildcard *.o) @end example @noindent @xref{Wildcard Function}. @node Wildcard Pitfall, Wildcard Function, Wildcard Examples, Wildcards @subsection Pitfalls of Using Wildcards @cindex wildcard pitfalls @cindex pitfalls of wildcards @cindex mistakes with wildcards @cindex errors with wildcards @cindex problems with wildcards Now here is an example of a naive way of using wildcard expansion, that does not do what you would intend. Suppose you would like to say that the executable file @file{foo} is made from all the object files in the directory, and you write this: @example objects = *.o foo : $(objects) cc -o foo $(CFLAGS) $(objects) @end example @noindent The value of @code{objects} is the actual string @samp{*.o}. Wildcard expansion happens in the rule for @file{foo}, so that each @emph{existing} @samp{.o} file becomes a prerequisite of @file{foo} and will be recompiled if necessary. But what if you delete all the @samp{.o} files? When a wildcard matches no files, it is left as it is, so then @file{foo} will depend on the oddly-named file @file{*.o}. Since no such file is likely to exist, @code{make} will give you an error saying it cannot figure out how to make @file{*.o}. This is not what you want! Actually it is possible to obtain the desired result with wildcard expansion, but you need more sophisticated techniques, including the @code{wildcard} function and string substitution. @ifnottex @xref{Wildcard Function, ,The Function @code{wildcard}}. @end ifnottex @iftex These are described in the following section. @end iftex @cindex wildcards and MS-DOS/MS-Windows backslashes @cindex backslashes in pathnames and wildcard expansion Microsoft operating systems (MS-DOS and MS-Windows) use backslashes to separate directories in pathnames, like so: @example c:\foo\bar\baz.c @end example This is equivalent to the Unix-style @file{c:/foo/bar/baz.c} (the @file{c:} part is the so-called drive letter). When @code{make} runs on these systems, it supports backslashes as well as the Unix-style forward slashes in pathnames. However, this support does @emph{not} include the wildcard expansion, where backslash is a quote character. Therefore, you @emph{must} use Unix-style slashes in these cases. @node Wildcard Function, , Wildcard Pitfall, Wildcards @subsection The Function @code{wildcard} @findex wildcard Wildcard expansion happens automatically in rules. But wildcard expansion does not normally take place when a variable is set, or inside the arguments of a function. If you want to do wildcard expansion in such places, you need to use the @code{wildcard} function, like this: @example $(wildcard @var{pattern}@dots{}) }). One use of the @code{wildcard} function is to get a list of all the C source files in a directory, like this: @example $(wildcard *.c) @end example We can change the list of C source files into a list of object files by replacing the @samp{.c} suffix with @samp{.o} in the result, like this: @example $(patsubst %.c,%.o,$(wildcard *.c)) @end example @noindent (Here we have used another function, @code{patsubst}. @xref{Text Functions, ,Functions for String Substitution and Analysis}.)@refill Thus, a makefile to compile all C source files in the directory and then link them together could be written as follows: @example objects := $(patsubst %.c,%.o,$(wildcard *.c)) foo : $(objects) cc -o foo $(objects) @end example @noindent (This takes advantage of the implicit rule for compiling C programs, so there is no need to write explicit rules for compiling the files. @xref{Flavors, ,The Two Flavors of Variables}, for an explanation of @samp{:=}, which is a variant of @samp{=}.) @node Directory Search, Phony Targets, Wildcards, Rules @section Searching Directories for Prerequisites @vindex VPATH @findex vpath @cindex vpath @cindex search path for prerequisites (@code{VPATH}) @cindex directory search (@code{VPATH}) For large systems, it is often desirable to put sources in a separate directory from the binaries. The @dfn{directory search} features of @code. @end menu @node General Search, Selective Search, Directory Search, Directory Search @subsection @code{VPATH}: Search Path for All Prerequisites @vindex VPATH The value of the @code{make} variable @code{VPATH} specifies a list of directories that @code{make} should search. Most often, the directories are expected to contain prerequisite files that are not in the current directory; however, @code{make} uses @code{VPATH} as a search list for both prerequisites and targets of rules. Thus, if a file that is listed as a target or prerequisite does not exist in the current directory, @code{make} searches the directories listed in @code{VPATH} for a file with that name. If a file is found in one of them, that file may become the prerequisite (see below). Rules may then specify the names of files in the prerequisite list as if they all existed in the current directory. @xref{Commands/Search, ,Writing Shell Commands with Directory Search}. In the @code{VPATH} variable, directory names are separated by colons or blanks. The order in which directories are listed is the order followed by @code{make} in its search. (On MS-DOS and MS-Windows, semi-colons are used as separators of directory names in @code{VPATH}, since the colon can be used in the pathname itself, after the drive letter.) For example, @example VPATH = src:../headers @end example @noindent specifies a path containing two directories, @file{src} and @file{../headers}, which @code{make} searches in that order. With this value of @code{VPATH}, the following rule, @example foo.o : foo.c @end example @noindent is interpreted as if it were written like this: @example foo.o : src/foo.c @end example @noindent assuming the file @file{foo.c} does not exist in the current directory but is found in the directory @file{src}. @node Selective Search, Search Algorithm, General Search, Directory Search @subsection The @code{vpath} Directive @findex vpath Similar to the @code{VPATH} variable, but more selective, is the @code @code{vpath} directive: @table @code @item vpath @var{pattern} @var{directories} Specify the search path @var{directories} for file names that match @var{pattern}. The search path, @var{directories}, is a list of directories to be searched, separated by colons (semi-colons on MS-DOS and MS-Windows) or blanks, just like the search path used in the @code{VPATH} variable. @item vpath @var{pattern} Clear out the search path associated with @var{pattern}. @c Extra blank line makes sure this gets two lines. @item vpath Clear all search paths previously specified with @code{vpath} directives. @end table A @code{vpath} pattern is a string containing a @samp{%} character. The string must match the file name of a prerequisite that is being searched for, the @samp{%} character matching any sequence of zero or more characters (as in pattern rules; @pxref{Pattern Rules, ,Defining and Redefining Pattern Rules}). For example, @code{%.h} matches files that end in @code{.h}. (If there is no @samp{%}, the pattern must match the prerequisite exactly, which is not useful very often.) @cindex @code{%}, quoting in @code{vpath} @cindex @code{%}, quoting with @code{\} (backslash) @cindex @code{\} (backslash), to quote @code{%} @cindex backslash (@code{\}), to quote @code{%} @cindex quoting @code{%}, in @code{vpath} @samp{%} characters in a @code{vpath} directive's pattern. Backslashes that are not in danger of quoting @samp{%} characters go unmolested.@refill When a prerequisite fails to exist in the current directory, if the @var{pattern} in a @code{vpath} directive matches the name of the prerequisite file, then the @var{directories} in that directive are searched just like (and before) the directories in the @code{VPATH} variable. For example, @example vpath %.h ../headers @end example @noindent tells @code{make} to look for any prerequisite whose name ends in @file{.h} in the directory @file{../headers} if the file is not found in the current directory. If several @code{vpath} patterns match the prerequisite file's name, then @code{make} processes each matching @code{vpath} directive one by one, searching all the directories mentioned in each directive. @code{make} handles multiple @code{vpath} directives in the order in which they appear in the makefile; multiple directives with the same pattern are independent of each other. @need 750 Thus, @example @group vpath %.c foo vpath % blish vpath %.c bar @end group @end example @noindent will look for a file ending in @samp{.c} in @file{foo}, then @file{blish}, then @file{bar}, while @example @group vpath %.c foo:bar vpath % blish @end group @end example @noindent will look for a file ending in @samp{.c} in @file{foo}, then @file{bar}, then @file{blish}. @node Search Algorithm, Commands/Search, Selective Search, Directory Search @subsection How Directory Searches are Performed @cindex algorithm for directory search @cindex directory search algorithm When a prerequisite is found through directory search, regardless of type (general or selective), the pathname located may not be the one that @code{make} actually provides you in the prerequisite list. Sometimes the path discovered through directory search is thrown away. The algorithm @code{make} uses to decide whether to keep or abandon a path found via directory search is as follows: @enumerate @item If a target file does not exist at the path specified in the makefile, directory search is performed. @item If the directory search is successful, that path is kept and this file is tentatively stored as the target. @item All prerequisites of this target are examined using this same method. @item After processing the prerequisites, the target may or may not need to be rebuilt: @enumerate a @item If the target does @emph{not} need to be rebuilt, the path to the file found during directory search is used for any prerequisite lists which contain this target. In short, if @code{make} doesn't need to rebuild the target then you use the path found via directory search. @item If the target @emph{does} need to be rebuilt (is out-of-date), the pathname found during directory search is @emph{thrown away}, and the target is rebuilt using the file name specified in the makefile. In short, if @code{make} must rebuild, then the target is rebuilt locally, not in the directory found via directory search. @end enumerate @end enumerate This algorithm may seem complex, but in practice it is quite often exactly what you want. @cindex traditional directory search (GPATH) @cindex directory search, traditional (GPATH) Other versions of @code. @vindex GPATH If, in fact, this is the behavior you want for some or all of your directories, you can use the @code{GPATH} variable to indicate this to @code{make}. @code{GPATH} has the same syntax and format as @code{VPATH} (that is, a space- or colon-delimited list of pathnames). If an out-of-date target is found by directory search in a directory that also appears in @code{GPATH}, then that pathname is not thrown away. The target is rebuilt using the expanded path. @node Commands/Search, Implicit/Search, Search Algorithm, Directory Search @subsection Writing Shell Commands with Directory Search @cindex shell command, and directory search @cindex directory search (@code{VPATH}), and shell commands When a prerequisite is found in another directory through directory search, this cannot change the commands of the rule; they will execute as written. Therefore, you must write the commands with care so that they will look for the prerequisite in the directory where @code{make} finds it. This is done with the @dfn{automatic variables} such as @samp{$^} (@pxref{Automatic Variables}). For instance, the value of @samp{$^} is a list of all the prerequisites of the rule, including the names of the directories in which they were found, and the value of @samp{$@@} is the target. Thus:@refill @example foo.o : foo.c cc -c $(CFLAGS) $^ -o $@@ @end example @noindent (The variable @code{CFLAGS} exists so you can specify flags for C compilation by implicit rules; we use it here for consistency so it will affect all C compilations uniformly; @pxref{Implicit Variables, ,Variables Used by Implicit Rules}.) Often the prerequisites include header files as well, which you do not want to mention in the commands. The automatic variable @samp{$<} is just the first prerequisite: @example VPATH = src:../headers foo.o : foo.c defs.h hack.h cc -c $(CFLAGS) $< -o $@@ @end example @node Implicit/Search, Libraries/Search, Commands/Search, Directory Search @subsection Directory Search and Implicit Rules @cindex @code{VPATH}, and implicit rules @cindex directory search (@code{VPATH}), and implicit rules @cindex search path for prerequisites (@code{VPATH}), and implicit rules @cindex implicit rule, and directory search @cindex implicit rule, and @code{VPATH} @cindex rule, implicit, and directory search @cindex rule, implicit, and @code{VPATH} The search through the directories specified in @code{VPATH} or with @code{vpath} also happens during consideration of implicit rules (@pxref{Implicit Rules, ,Using Implicit Rules}). For example, when a file @file{foo.o} has no explicit rule, @code{make} considers implicit rules, such as the built-in rule to compile @file{foo.c} if that file exists. If such a file is lacking in the current directory, the appropriate directories are searched for it. If @file. @node Libraries/Search, , Implicit/Search, Directory Search @subsection Directory Search for Link Libraries @cindex link libraries, and directory search @cindex libraries for linking, directory search @cindex directory search (@code{VPATH}), and link libraries @cindex @code{VPATH}, and link libraries @cindex search path for prerequisites (@code{VPATH}), and link libraries @cindex @code{-l} (library search) @cindex link libraries, patterns matching @cindex @code{.LIBPATTERNS}, and link libraries @vindex .LIBPATTERNS Directory search applies in a special way to libraries used with the linker. This special feature comes into play when you write a prerequisite whose name is of the form @samp{-l@var{name}}. (You can tell something strange is going on here because the prerequisite is normally the name of a file, and the @emph{file name} of a library generally looks like @file{lib@var{name}.a}, not like @samp{-l@var{name}}.)@refill When a prerequisite's name has the form @samp{-l@var{name}}, @code{make} handles it specially by searching for the file @file{lib@var{name}.so} in the current directory, in directories specified by matching @code{vpath} search paths and the @code{VPATH} search path, and then in the directories @file{/lib}, @file{/usr/lib}, and @file{@var{prefix}/lib} (normally @file{/usr/local/lib}, but MS-DOS/MS-Windows versions of @code{make} behave as if @var{prefix} is defined to be the root of the DJGPP installation tree). If that file is not found, then the file @file{lib@var{name}.a} is searched for, in the same directories as above. For example, if there is a @file{/usr/lib/libcurses.a} library on your system (and no @file{/usr/lib/libcurses.so} file), then @example @group foo : foo.c -lcurses cc $^ -o $@@ @end group @end example @noindent would cause the command @samp{cc foo.c /usr/lib/libcurses.a -o foo} to be executed when @file{foo} is older than @file{foo.c} or than @file{/usr/lib/libcurses.a}.@refill Although the default set of files to be searched for is @file{lib@var{name}.so} and @file{lib@var{name}.a}, this is customizable via the @code{.LIBPATTERNS} variable. Each word in the value of this variable is a pattern string. When a prerequisite like @samp{-l@var{name}} is seen, @code{make} will replace the percent in each pattern in the list with @var{name} and perform the above directory searches using that library filename. If no library is found, the next word in the list will be used. The default value for @code{.LIBPATTERNS} is @samp{lib%.so lib%.a}, which provides the default behavior described above. You can turn off link library expansion completely by setting this variable to an empty value. @node Phony Targets, Force Targets, Directory Search, Rules @section Phony Targets @cindex phony targets @cindex targets, phony @cindex targets without a file: @example @group clean: rm *.o temp @end group @end example @noindent Because the @code{rm} command does not create a file named @file{clean}, probably no such file will ever exist. Therefore, the @code{rm} command will be executed every time you say @samp{make clean}. @cindex @code{rm} (shell command) @findex .PHONY The phony target will cease to work if anything ever does create a file named @file{clean} in this directory. Since it has no prerequisites, the file @file{clean} would inevitably be considered up to date, and its commands would not be executed. To avoid this problem, you can explicitly declare the target to be phony, using the special target @code{.PHONY} (@pxref{Special Targets, ,Special Built-in Target Names}) as follows: @example .PHONY : clean @end example @noindent Once this is done, @samp{make clean} will run the commands regardless of whether there is a file named @file{clean}. Since it knows that phony targets do not name actual files that could be remade from other files, @code{make} skips the implicit rule search for phony targets (@pxref{Implicit Rules}). This is why declaring a target phony is good for performance, even if you are not worried about the actual file existing. Thus, you first write the line that states that @code{clean} is a phony target, then you write the rule, like this: @example @group .PHONY: clean clean: rm *.o temp @end group @end example Another example of the usefulness of phony targets is in conjunction with recursive invocations of @code{make} (for more information, see @ref{Recursion, ,Recursive Use of @code{make}}). In this case the makefile will often contain a variable which lists a number of subdirectories to be built. One way to handle this is with one rule whose command is a shell loop over the subdirectories, like this: @example @group SUBDIRS = foo bar baz subdirs: for dir in $(SUBDIRS); do \ $(MAKE) -C $$dir; \ done @end group @end @code{make} is invoked with the @code{-k} option, which is unfortunate. Second, and perhaps more importantly, you cannot take advantage of @code{make}'s ability to build targets in parallel (@pxref{Parallel, ,Parallel Execution}), since there is only one rule. By declaring the subdirectories as phony targets (you must do this as the subdirectory obviously always exists; otherwise it won't be built) you can remove these problems: @example @group SUBDIRS = foo bar baz .PHONY: subdirs $(SUBDIRS) subdirs: $(SUBDIRS) $(SUBDIRS): $(MAKE) -C $@@ foo: baz @end group @end example Here we've also declared that the @file{foo} subdirectory cannot be built until after the @file{baz} subdirectory is complete; this kind of relationship declaration is particularly important when attempting parallel builds. A phony target should not be a prerequisite of a real target file; if it is, its commands are run every time @code{make} goes to update that file. As long as a phony target is never a prerequisite of a real target, the phony target commands will be executed only when the phony target is a specified goal (@pxref{Goals, ,Arguments to Specify the Goals}). Phony targets can have prerequisites. When one directory contains multiple programs, it is most convenient to describe all of the programs in one makefile @file{./Makefile}. Since the target remade by default will be the first one in the makefile, it is common to make this a phony target named @samp{all} and give it, as prerequisites, all the individual programs. For example: @end example @noindent Now you can say just @samp{make} to remake all three programs, or specify as arguments the ones to remake (as in @samp{make prog1 prog3}). Phoniness is not inherited: the prerequisites of a phony target are not themselves phony, unless explicitly declared to be so. When one phony target is a prerequisite of another, it serves as a subroutine of the other. For example, here @samp{make cleanall} will delete the object files, the difference files, and the file @file{program}: @example .PHONY: cleanall cleanobj cleandiff cleanall : cleanobj cleandiff rm program cleanobj : rm *.o cleandiff : rm *.diff @end example @node Force Targets, Empty Targets, Phony Targets, Rules @section Rules without Commands or Prerequisites @cindex force targets @cindex targets, force @cindex @code{FORCE} @cindex rule, no commands or prerequisites If a rule has no prerequisites or commands, and the target of the rule is a nonexistent file, then @code{make} imagines this target to have been updated whenever its rule is run. This implies that all targets depending on this one will always have their commands run. An example will illustrate this: @example @group clean: FORCE rm $(objects) FORCE: @end group @end example Here the target @samp{FORCE} satisfies the special conditions, so the target @file{clean} that depends on it is forced to run its commands. There is nothing special about the name @samp{FORCE}, but that is one name commonly used this way. As you can see, using @samp{FORCE} this way has the same results as using @samp{.PHONY: clean}. Using @samp{.PHONY} is more explicit and more efficient. However, other versions of @code{make} do not support @samp{.PHONY}; thus @samp{FORCE} appears in many makefiles. @xref{Phony Targets}. @node Empty Targets, Special Targets, Force Targets, Rules @section Empty Target Files to Record Events @cindex empty targets @cindex targets, empty @cindex recording events with empty targets The @dfn @code: @example print: foo.c bar.c lpr -p $? touch print @end example @cindex @code{print} target @cindex @code{lpr} (shell command) @cindex @code{touch} (shell command) @noindent With this rule, @samp{make print} will execute the @code{lpr} command if either source file has changed since the last @samp{make print}. The automatic variable @samp{$?} is used to print only those files that have changed (@pxref{Automatic Variables}). @node Special Targets, Multiple Targets, Empty Targets, Rules @section Special Built-in Target Names @cindex special targets @cindex built-in special targets @cindex targets, built-in special Certain names have special meanings if they appear as targets. @table @code @findex .PHONY @item .PHONY The prerequisites of the special target @code{.PHONY} are considered to be phony targets. When it is time to consider such a target, @code{make} will run its commands unconditionally, regardless of whether a file with that name exists or what its last-modification time is. @xref{Phony Targets, ,Phony Targets}. @findex .SUFFIXES @item .SUFFIXES The prerequisites of the special target @code{.SUFFIXES} are the list of suffixes to be used in checking for suffix rules. @xref{Suffix Rules, , Old-Fashioned Suffix Rules}. @findex .DEFAULT @item .DEFAULT The commands specified for @code{.DEFAULT} are used for any target for which no rules are found (either explicit rules or implicit rules). @xref{Last Resort}. If @code{.DEFAULT} commands are specified, every file mentioned as a prerequisite, but not as a target in a rule, will have these commands executed on its behalf. @xref{Implicit Rule Search, ,Implicit Rule Search Algorithm}. @findex .PRECIOUS @item .PRECIOUS @cindex precious targets @cindex preserving with @code{.PRECIOUS} The targets which @code{.PRECIOUS} depends on are given the following special treatment: if @code{make} is killed or interrupted during the execution of their commands, the target is not deleted. @xref{Interrupts, ,Interrupting or Killing @code{make}}. Also, if the target is an intermediate file, it will not be deleted after it is no longer needed, as is normally done. @xref{Chained Rules, ,Chains of Implicit Rules}. In this latter respect it overlaps with the @code{.SECONDARY} special target. You can also list the target pattern of an implicit rule (such as @samp{%.o}) as a prerequisite file of the special target @code{.PRECIOUS} to preserve intermediate files created by rules whose target patterns match that file's name. @findex .INTERMEDIATE @item .INTERMEDIATE @cindex intermediate targets, explicit The targets which @code{.INTERMEDIATE} depends on are treated as intermediate files. @xref{Chained Rules, ,Chains of Implicit Rules}. @code{.INTERMEDIATE} with no prerequisites has no effect. @findex .SECONDARY @item .SECONDARY @cindex secondary targets @cindex preserving with @code{.SECONDARY} The targets which @code{.SECONDARY} depends on are treated as intermediate files, except that they are never automatically deleted. @xref{Chained Rules, ,Chains of Implicit Rules}. @code{.SECONDARY} with no prerequisites causes all targets to be treated as secondary (i.e., no target is removed because it is considered intermediate). @findex .SECONDEXPANSION @item .SECONDEXPANSION If @code{.SECONDEXPANSION} is mentioned as a target anywhere in the makefile, then all prerequisite lists defined @emph{after} it appears will be expanded a second time after all makefiles have been read in. @xref{Secondary Expansion, ,Secondary Expansion}. The prerequisites of the special target @code{.SUFFIXES} are the list of suffixes to be used in checking for suffix rules. @xref{Suffix Rules, , Old-Fashioned Suffix Rules}. @findex .DELETE_ON_ERROR @item .DELETE_ON_ERROR @cindex removing targets on failure If @code{.DELETE_ON_ERROR} is mentioned as a target anywhere in the makefile, then @code{make} will delete the target of a rule if it has changed and its commands exit with a nonzero exit status, just as it does when it receives a signal. @xref{Errors, ,Errors in Commands}. @findex .IGNORE @item .IGNORE If you specify prerequisites for @code{.IGNORE}, then @code{make} will ignore errors in execution of the commands run for those particular files. The commands for @code{.IGNORE} are not meaningful. If mentioned as a target with no prerequisites, @code{.IGNORE} says to ignore errors in execution of commands for all files. This usage of @samp{.IGNORE} is supported only for historical compatibility. Since this affects every command in the makefile, it is not very useful; we recommend you use the more selective ways to ignore errors in specific commands. @xref{Errors, ,Errors in Commands}. @findex .LOW_RESOLUTION_TIME @item .LOW_RESOLUTION_TIME If you specify prerequisites for @code{.LOW_RESOLUTION_TIME}, @command{make} assumes that these files are created by commands that generate low resolution time stamps. The commands for @code{.LOW_RESOLUTION_TIME} are not meaningful. The high resolution file time stamps of many modern hosts lessen the chance of @command{make} incorrectly concluding that a file is up to date. Unfortunately, these hosts provide no way to set a high resolution file time stamp, so commands like @samp{cp -p} that explicitly set a file's time stamp must discard its subsecond part. If a file is created by such a command, you should list it as a prerequisite of @code{.LOW_RESOLUTION_TIME} so that @command{make} does not mistakenly conclude that the file is out of date. For example: @example @group .LOW_RESOLUTION_TIME: dst dst: src cp -p src dst @end group @end example Since @samp{cp -p} discards the subsecond part of @file{src}'s time stamp, @file{dst} is typically slightly older than @file{src} even when it is up to date. The @code{.LOW_RESOLUTION_TIME} line causes @command{make} to consider @file{dst} to be up to date if its time stamp is at the start of the same second that @file{src}'s time stamp is in. Due to a limitation of the archive format, archive member time stamps are always low resolution. You need not list archive members as prerequisites of @code{.LOW_RESOLUTION_TIME}, as @command{make} does this automatically. @findex .SILENT @item .SILENT If you specify prerequisites for @code{.SILENT}, then @code{make} will not print the commands to remake those particular files before executing them. The commands for @code{.SILENT} are not meaningful. If mentioned as a target with no prerequisites, @code{.SILENT} says not to print any commands before executing them. This usage of @samp{.SILENT} is supported only for historical compatibility. We recommend you use the more selective ways to silence specific commands. @xref{Echoing, ,Command Echoing}. If you want to silence all commands for a particular run of @code{make}, use the @samp{-s} or @w{@samp{--silent}} option (@pxref{Options Summary}). @findex .EXPORT_ALL_VARIABLES @item .EXPORT_ALL_VARIABLES Simply by being mentioned as a target, this tells @code{make} to export all variables to child processes by default. @xref{Variables/Recursion, ,Communicating Variables to a Sub-@code{make}}. @findex .NOTPARALLEL @item .NOTPARALLEL @cindex parallel execution, overriding If @code{.NOTPARALLEL} is mentioned as a target, then this invocation of @code{make} will be run serially, even if the @samp{-j} option is given. Any recursively invoked @code{make} command will still be run in parallel (unless its makefile contains this target). Any prerequisites on this target are ignored. @end table Any defined implicit rule suffix also counts as a special target if it appears as a target, and so does the concatenation of two suffixes, such as @samp{ @samp{.}, so these special target names also begin with @samp{.}. @xref{Suffix Rules, ,Old-Fashioned Suffix Rules}. @node Multiple Targets, Multiple Rules, Special Targets, Rules @section Multiple Targets in a Rule @cindex multiple targets @cindex several targets in a rule @cindex targets, multiple @cindex rule, with multiple targets A rule with multiple targets is equivalent to writing many rules, each with one target, and all identical aside from that. The same commands apply to all the targets, but their effects may vary because you can substitute the actual target name into the command using @samp{$@@}. The rule contributes the same prerequisites to all the targets also. This is useful in two cases. @itemize @bullet @item You want just prerequisites, no commands. For example: @example kbd.o command.o files.o: command.h @end example @noindent gives an additional prerequisite to each of the three object files mentioned. @item Similar commands work for all the targets. The commands do not need to be absolutely identical, since the automatic variable @samp{$@@} can be used to substitute the particular target to be remade into the commands (@pxref{Automatic Variables}). For example: @example @group bigoutput littleoutput : text.g generate text.g -$(subst output,,$@@) > $@@ @end group @end example @findex subst @noindent is equivalent to @example bigoutput : text.g generate text.g -big > bigoutput littleoutput : text.g generate text.g -little > littleoutput @end example @noindent Here we assume the hypothetical program @code{generate} makes two types of output, one if given @samp{-big} and one if given @samp{-little}. @xref{Text Functions, ,Functions for String Substitution and Analysis}, for an explanation of the @code{subst} function. @end itemize Suppose you would like to vary the prerequisites according to the target, much as the variable @samp{$@@} allows you to vary the commands. You cannot do this with multiple targets in an ordinary rule, but you can do it with a @dfn{static pattern rule}. @xref{Static Pattern, ,Static Pattern Rules}. @node Multiple Rules, Static Pattern, Multiple Targets, Rules @section Multiple Rules for One Target @cindex multiple rules for one target @cindex several rules for one target @cindex rule, multiple for one target @cindex target, multiple rules for one, @code{make} uses the last set given and prints an error message. (As a special case, if the file's name begins with a dot, no error message is printed. This odd behavior is only for compatibility with other implementations of @code{make}... you should avoid using it). Occasionally it is useful to have the same target invoke multiple commands which are defined in different parts of your makefile; you can use @dfn{double-colon rules} (@pxref{Double-Colon}) for this. An extra rule with just prerequisites can be used to give a few extra prerequisites to many files at once. For example, makefiles often have a variable, such as @code{objects}, containing a list of all the compiler output files in the system being made. An easy way to say that all of them must be recompiled if @file{config.h} changes is to write the following: @example objects = foo.o bar.o foo.o : defs.h bar.o : defs.h test.h $(objects) : config.h @end @code{make} (@pxref{Overriding, ,Overriding Variables}). For example, @example @group extradeps= $(objects) : $(extradeps) @end group @end example @noindent means that the command @samp{make extradeps=foo.h} will consider @file{foo.h} as a prerequisite of each object file, but plain @samp{make} will not. If none of the explicit rules for a target has commands, then @code{make} searches for an applicable implicit rule to find some commands @pxref{Implicit Rules, ,Using Implicit Rules}). @node Static Pattern, Double-Colon, Multiple Rules, Rules @section Static Pattern Rules @cindex static pattern rule @cindex rule, static pattern @cindex pattern rules, static (not implicit) @cindex varying prerequisites @cindex prerequisites, varying (static pattern) @dfn{Static pattern rules} are rules which specify multiple targets and construct the prerequisite names for each target based on the target name. They are more general than ordinary rules with multiple targets because the targets do not have to have identical prerequisites. Their prerequisites must be @emph{analogous}, but not necessarily @emph{identical}. @menu * Static Usage:: The syntax of static pattern rules. * Static versus Implicit:: When are they better than implicit rules? @end menu @node Static Usage, Static versus Implicit, Static Pattern, Static Pattern @subsection Syntax of Static Pattern Rules @cindex static pattern rule, syntax of @cindex pattern rules, static, syntax of Here is the syntax of a static pattern rule: @example @var{targets} @dots{}: @var{target-pattern}: @var{prereq-patterns} @dots{} @var{commands} @dots{} @end example @noindent The @var{targets} list specifies the targets that the rule applies to. The targets can contain wildcard characters, just like the targets of ordinary rules (@pxref{Wildcards, ,Using Wildcard Characters in File Names}). @cindex target pattern, static (not implicit) @cindex stem The @var{target-pattern} and @var{prereq-patterns} say how to compute the prerequisites of each target. Each target is matched against the @var{target-pattern} to extract a part of the target name, called the @dfn{stem}. This stem is substituted into each of the @var{prereq-patterns} to make the prerequisite names (one from each @var{prereq-pattern}). Each pattern normally contains the character @samp{%} just once. When the @var{target-pattern} matches a target, the @samp{%} can match any part of the target name; this part is called the @dfn{stem}. The rest of the pattern must match exactly. For example, the target @file{foo.o} matches the pattern @samp{%.o}, with @samp{foo} as the stem. The targets @file{foo.c} and @file{foo.out} do not match that pattern.@refill @cindex prerequisite pattern, static (not implicit) The prerequisite names for each target are made by substituting the stem for the @samp{%} in each prerequisite pattern. For example, if one prerequisite pattern is @file{%.c}, then substitution of the stem @samp{foo} gives the prerequisite name @file{foo.c}. It is legitimate to write a prerequisite pattern that does not contain @samp{%}; then this prerequisite is the same for all targets. @cindex @code{%}, quoting in static pattern @cindex @code{%}, quoting with @code{\} (backslash) @cindex @code{\} (backslash), to quote @code{%} @cindex backslash (@code{\}), to quote @code{%} @cindex quoting @code{%}, in static pattern @samp{%} characters in pattern rules Here is an example, which compiles each of @file{foo.o} and @file{bar.o} from the corresponding @file{.c} file: @example @group objects = foo.o bar.o all: $(objects) $(objects): %.o: %.c $(CC) -c $(CFLAGS) $< -o $@@ @end group @end example @noindent Here @samp{$<} is the automatic variable that holds the name of the prerequisite and @samp{$@@} is the automatic variable that holds the name of the target; see @ref{Automatic Variables}. Each target specified must match the target pattern; a warning is issued for each target that does not. If you have a list of files, only some of which will match the pattern, you can use the @code{filter} function to remove nonmatching file names (@pxref{Text Functions, ,Functions for String Substitution and Analysis}): @example files = foo.elc bar.o lose.o $(filter %.o,$(files)): %.o: %.c $(CC) -c $(CFLAGS) $< -o $@@ $(filter %.elc,$(files)): %.elc: %.el emacs -f batch-byte-compile $< @end example @noindent In this example the result of @samp{$(filter %.o,$(files))} is @file{bar.o lose.o}, and the first static pattern rule causes each of these object files to be updated by compiling the corresponding C source file. The result of @w{@samp{$(filter %.elc,$(files))}} is @file{foo.elc}, so that file is made from @file{foo.el}.@refill Another example shows how to use @code{$*} in static pattern rules: @vindex $*@r{, and static pattern} @example @group bigoutput littleoutput : %output : text.g generate text.g -$* > $@@ @end group @end example @noindent When the @code{generate} command is run, @code{$*} will expand to the stem, either @samp{big} or @samp{little}. @node Static versus Implicit, , Static Usage, Static Pattern @subsection Static Pattern Rules versus Implicit Rules @cindex rule, static pattern versus implicit @cindex static pattern rule, versus implicit A static pattern rule has much in common with an implicit rule defined as a pattern rule (@pxref{Pattern Rules, ,Defining and Redefining Pattern Rules}). Both have a pattern for the target and patterns for constructing the names of prerequisites. The difference is in how @code{make} decides @emph{when} the rule applies. An implicit rule @emph{can} apply to any target that matches its pattern, but it @emph: @itemize @bullet @item You may wish to override the usual implicit rule for a few files whose names cannot be categorized syntactically but can be given in an explicit list. @item If you cannot be sure of the precise contents of the directories you are using, you may not be sure which other irrelevant files might lead @code{make} to use the wrong implicit rule. The choice might depend on the order in which the implicit rule search is done. With static pattern rules, there is no uncertainty: each rule applies to precisely the targets specified. @end itemize @node Double-Colon, Automatic Prerequisites, Static Pattern, Rules @section Double-Colon Rules @cindex double-colon rules @cindex rule, double-colon (@code{::}) @cindex multiple rules for one target (@code{::}) @cindex @code{::} rules (double-colon) @dfn{Double-colon} rules are rules written with @samp{::} instead of @samp{:}. @xref{Implicit Rules, ,Using Implicit Rules}. @node Automatic Prerequisites, , Double-Colon, Rules @section Generating Prerequisites Automatically @cindex prerequisites, automatic generation @cindex automatic generation of prerequisites @cindex generating prerequisites automatically In the makefile for a program, many of the rules you need to write often say only that some object file depends on some header file. For example, if @file{main.c} uses @file{defs.h} via an @code{#include}, you would write: @example main.o: defs.h @end example @noindent You need this rule so that @code{make} knows that it must remake @file{main.o} whenever @file{defs.h} changes. You can see that for a large program you would have to write dozens of such rules in your makefile. And, you must always be very careful to update the makefile every time you add or remove an @code{#include}. @cindex @code{#include} @cindex @code{-M} (to compiler) To avoid this hassle, most modern C compilers can write these rules for you, by looking at the @code{#include} lines in the source files. Usually this is done with the @samp{-M} option to the compiler. For example, the command: @example cc -M main.c @end example @noindent generates the output: @example main.o : main.c defs.h @end example @noindent Thus you no longer have to write all those rules yourself. The compiler will do it for you. Note that such a prerequisite constitutes mentioning @file{main.o} in a makefile, so it can never be considered an intermediate file by implicit rule search. This means that @code{make} won't ever remove the file after using it; @pxref{Chained Rules, ,Chains of Implicit Rules}. @cindex @code{make depend} With old @code{make} programs, it was traditional practice to use this compiler feature to generate prerequisites on demand with a command like @samp{make depend}. That command would create a file @file{depend} containing all the automatically-generated prerequisites; then the makefile could use @code{include} to read them in (@pxref{Include}). In GNU @code{make}, the feature of remaking makefiles makes this practice obsolete---you need never tell @code{make} explicitly to regenerate the prerequisites, because it always regenerates any makefile that is out of date. @xref{Remaking Makefiles}. The practice we recommend for automatic prerequisite generation is to have one makefile corresponding to each source file. For each source file @file{@var{name}.c} there is a makefile @file{@var{name}.d} which lists what files the object file @file{@var{name}.o} depends on. That way only the source files that have changed need to be rescanned to produce the new prerequisites. Here is the pattern rule to generate a file of prerequisites (i.e., a makefile) called @file{@var{name}.d} from a C source file called @file{@var{name}.c}: @smallexample @group %.d: %.c @@set -e; rm -f $@@; \ $(CC) -M $(CPPFLAGS) $< > $@@.$$$$; \ sed 's,\($*\)\.o[ :]*,\1.o $@@ : ,g' < $@@.$$$$ > $@@; \ rm -f $@@.$$$$ @end group @end smallexample @noindent @xref{Pattern Rules}, for information on defining pattern rules. The @samp{-e} flag to the shell causes it to exit immediately if the @code{$(CC)} command (or any other command) fails (exits with a nonzero status). @cindex @code{-e} (shell flag) @cindex @code{-MM} (to GNU compiler) With the GNU C compiler, you may wish to use the @samp{-MM} flag instead of @samp{-M}. This omits prerequisites on system header files. @xref{Preprocessor Options, , Options Controlling the Preprocessor, gcc.info, Using GNU CC}, for details. @cindex @code{sed} (shell command) The purpose of the @code{sed} command is to translate (for example): @example main.o : main.c defs.h @end example @noindent into: @example main.o main.d : main.c defs.h @end example @noindent @cindex @code{.d} This makes each @samp{.d} file depend on all the source and header files that the corresponding @samp{.o} file depends on. @code{make} then knows it must regenerate the prerequisites whenever any of the source or header files changes. Once you've defined the rule to remake the @samp{.d} files, you then use the @code{include} directive to read them all in. @xref{Include}. For example: @example @group sources = foo.c bar.c include $(sources:.c=.d) @end group @end example @noindent (This example uses a substitution variable reference to translate the list of source files @samp{foo.c bar.c} into a list of prerequisite makefiles, @samp{foo.d bar.d}. @xref{Substitution Refs}, for full information on substitution references.) Since the @samp{.d} files are makefiles like any others, @code{make} will remake them as necessary with no further work from you. @xref{Remaking Makefiles}. Note that the @samp{.d} files contain target definitions; you should be sure to place the @code{include} directive @emph{after} the first, default goal in your makefiles or run the risk of having a random object file become the default goal. @xref{How Make Works}. @node Commands, Using Variables, Rules, Top @chapter Writing the Commands in Rules @cindex commands, how to write @cindex rule commands @cindex writing rule commands @file{/bin/sh} unless the makefile specifies otherwise. @xref{Execution, ,Command Execution}. @menu *. @end menu @node Command Syntax, Echoing, Commands, Commands @section Command Syntax @cindex command syntax @cindex syntax of commands Makefiles have the unusual property that there are really two distinct syntaxes in one file. Most of the makefile uses @code{make} syntax (@pxref{Makefiles, ,Writing Makefiles}). However, commands are meant to be interpreted by the shell and so they are written using shell syntax. The @code. @emph: @itemize @bullet @item A blank line that begins with a tab is not blank: it's an empty command (@pxref{Empty Commands}). @cindex comments, in commands @cindex commands, comments in @cindex @code{#} (comments), in commands @item A comment in a command line is not a @code{make} comment; it will be passed to the shell as-is. Whether the shell treats it as a comment or not depends on your shell. @item A variable definition in a ``rule context'' which is indented by a tab as the first character on the line, will be considered a command line, not a @code{make} variable definition, and passed to the shell. @item A conditional expression (@code{ifdef}, @code{ifeq}, etc. @pxref{Conditional Syntax, ,Syntax of Conditionals}) in a ``rule context'' which is indented by a tab as the first character on the line, will be considered a command line and be passed to the shell. @end itemize @menu * Splitting Lines:: Breaking long command lines for readability. * Variables in Commands:: Using @code{make} variables in commands. @end menu @node Splitting Lines, Variables in Commands, Command Syntax, Command Syntax @subsection Splitting Command Lines @cindex commands, splitting @cindex splitting commands @cindex commands, backslash (@code{\}) in @cindex commands, quoting newlines in @cindex backslash (@code{\}), in commands @cindex @code{\} (backslash), in commands @cindex quoting newline, in commands @cindex newline, quoting, in commands One of the few ways in which @code @emph: @example @group all : @@echo no\ space @@echo no\ space @@echo one \ space @@echo one\ space @end group @end example @noindent consists of four separate shell commands where the output is: @example @group nospace nospace one space one space @end group @end example As a more complex example, this makefile: @example @group all : ; @@echo 'hello \ world' ; echo "hello \ world" @end group @end example @noindent will run one shell with a command script of: @example @group echo 'hello \ world' ; echo "hello \ world" @end group @end example @noindent which, according to shell quoting rules, will yield the following output: @example @group hello \ world hello world @end group @end example @noindent Notice how the backslash/newline pair was removed inside the string quoted with double quotes (@code{"..."}), but not from the string quoted with single quotes (@code{'...'}). This is the way the default shell (@file{ @code{make} variable then use the variable in the command. In this situation the newline quoting rules for makefiles will be used, and the backslash-newline will be removed. If we rewrite our example above using this method: @example @group HELLO = 'hello \ world' all : ; @@echo $(HELLO) @end group @end example @noindent we will get output like this: @example @group hello world @end group @end example If you like, you can also use target-specific variables (@pxref{Target-specific, ,Target-specific Variable Values}) to obtain a tighter correspondence between the variable and the command that uses it. @node Variables in Commands, , Splitting Lines, Command Syntax @subsection Using Variables in Commands @cindex variable references in commands @cindex commands, using variables in The other way in which @code{make} processes commands is by expanding any variable references in them (@pxref{Reference,Basics of Variable References}). (@samp{$$}). For shells like the default shell, that use dollar signs to introduce variables, it's important to keep clear in your mind whether the variable you want to reference is a @code{make} variable (use a single dollar sign) or a shell variable (use two dollar signs). For example: @example @group LIST = one two three all: for i in $(LIST); do \ echo $$i; \ done @end group @end example @noindent results in the following command being passed to the shell: @example @group for i in one two three; do \ echo $i; \ done @end group @end example @noindent which generates the expected result: @example @group one two three @end group @end example @node Echoing, Execution, Command Syntax, Commands @section Command Echoing @cindex echoing of commands @cindex silent operation @cindex @code{@@} (in commands) @cindex commands, echoing @cindex printing of commands Normally @code{make} prints each command line before it is executed. We call this @dfn{echoing} because it gives the appearance that you are typing the commands yourself. When a line starts with @samp{@@}, the echoing of that line is suppressed. The @samp{@@} is discarded before the command is passed to the shell. Typically you would use this for a command whose only effect is to print something, such as an @code{echo} command to indicate progress through the makefile: @example @@echo About to make distribution files @end example @cindex @code{-n} @cindex @code{--just-print} @cindex @code{--dry-run} @cindex @code{--recon} When @code{make} is given the flag @samp{-n} or @samp{--just-print} it only echoes commands, it won't execute them. @xref{Options Summary, ,Summary of Options}. In this case and only this case, even the commands starting with @samp{@@} are printed. This flag is useful for finding out which commands @code{make} thinks are necessary without actually doing them. @cindex @code{-s} @cindex @code{--silent} @cindex @code{--quiet} @findex .SILENT The @samp{-s} or @samp{--silent} flag to @code{make} prevents all echoing, as if all commands started with @samp{@@}. A rule in the makefile for the special target @code{.SILENT} without prerequisites has the same effect (@pxref{Special Targets, ,Special Built-in Target Names}). @code{.SILENT} is essentially obsolete since @samp{@@} is more flexible.@refill @node Execution, Parallel, Echoing, Commands @section Command Execution @cindex commands, execution @cindex execution, of commands @cindex shell command, execution @vindex @code{SHELL} @r{(command execution)} When it is time to execute commands to update a target, they are executed by invoking a new subshell for each command line. (In practice, @code{make} may take shortcuts that do not affect the results.) @cindex @code{cd} (shell command) @cindex shell variables, setting in commands @cindex commands setting shell variables @strong{Please note:} this implies that setting shell variables and invoking shell commands such as @code{cd} that set a context local to each process will not affect the following command lines.@footnote{On MS-DOS, the value of current working directory is @strong{global}, so changing it @emph{will} affect the following command lines on those systems.} If you want to use @code{cd} to affect the next statement, put both statements in a single command line. Then @code{make} will invoke one shell to run the entire line, and the shell will execute the statements in sequence. For example: @example foo : bar/lose cd $(@@D) && gobble $(@@F) > ../$@@ @end example @noindent Here we use the shell AND operator (@code{&&}) so that if the @code{cd} command fails, the script will fail without trying to invoke the @code{gobble} command in the wrong directory, which could cause problems (in this case it would certainly cause @file{../foo} to be truncated, at least). @menu * Choosing the Shell:: How @code{make} chooses the shell used to run commands. @end menu @node Choosing the Shell, , Execution, Execution @subsection Choosing the Shell @cindex shell, choosing the @cindex @code{SHELL}, value of @vindex SHELL The program used as the shell is taken from the variable @code{SHELL}. If this variable is not set in your makefile, the program @file{/bin/sh} is used as the shell. @cindex environment, @code{SHELL} in Unlike most variables, the variable @code{SHELL} is never set from the environment. This is because the @code{SHELL} environment variable is used to specify your personal choice of shell program for interactive use. It would be very bad for personal choices like this to affect the functioning of makefiles. @xref{Environment, ,Variables from the Environment}. Furthermore, when you do set @code{SHELL} in your makefile that value is @emph{not} exported in the environment to commands that @code{make} invokes. Instead, the value inherited from the user's environment, if any, is exported. You can override this behavior by explicitly exporting @code{SHELL} (@pxref{Variables/Recursion, ,Communicating Variables to a Sub-@code{make}}), forcing it to be passed in the environment to commands. @vindex @code{MAKESHELL} @r{(MS-DOS alternative to @code{SHELL})} However, on MS-DOS and MS-Windows the value of @code{SHELL} in the environment @strong{is} used, since on those systems most users do not set this variable, and therefore it is most likely set specifically to be used by @code{make}. On MS-DOS, if the setting of @code{SHELL} is not suitable for @code{make}, you can set the variable @code{MAKESHELL} to the shell that @code{make} should use; if set it will be used as the shell instead of the value of @code{SHELL}. @subsubheading Choosing a Shell in DOS and Windows @cindex shell, in DOS and Windows @cindex DOS, choosing a shell in @cindex Windows, choosing a shell in Choosing a shell in MS-DOS and MS-Windows is much more complex than on other systems. @vindex COMSPEC On MS-DOS, if @code{SHELL} is not set, the value of the variable @code{COMSPEC} (which is always set) is used instead. @cindex @code{SHELL}, MS-DOS specifics The processing of lines that set the variable @code{SHELL} in Makefiles is different on MS-DOS. The stock shell, @file{command.com}, is ridiculously limited in its functionality and many users of @code{make} tend to install a replacement shell. Therefore, on MS-DOS, @code{make} examines the value of @code{SHELL}, and changes its behavior based on whether it points to a Unix-style or DOS-style shell. This allows reasonable functionality even if @code{SHELL} points to @file{command.com}. If @code{SHELL} points to a Unix-style shell, @code{make} on MS-DOS additionally checks whether that shell can indeed be found; if not, it ignores the line that sets @code{SHELL}. In MS-DOS, GNU @code{make} searches for the shell in the following places: @enumerate @item In the precise place pointed to by the value of @code{SHELL}. For example, if the makefile specifies @samp{SHELL = /bin/sh}, @code{make} will look in the directory @file{/bin} on the current drive. @item In the current directory. @item In each of the directories in the @code{PATH} variable, in order. @end enumerate In every directory it examines, @code{make} will first look for the specific file (@file{sh} in the example above). If this is not found, it will also look in that directory for that file with one of the known extensions which identify executable files. For example @file{.exe}, @file{.com}, @file{.bat}, @file{.btm}, @file{.sh}, and some others. If any of these attempts is successful, the value of @code{SHELL} will be set to the full pathname of the shell as found. However, if none of these is found, the value of @code{SHELL} will not be changed, and thus the line that sets it will be effectively ignored. This is so @code{make} will only support features specific to a Unix-style shell if such a shell is actually installed on the system where @code{make} runs. Note that this extended search for the shell is limited to the cases where @code @samp{SHELL = /bin/sh} (as many Unix makefiles do), will work on MS-DOS unaltered if you have e.g.@: @file{sh.exe} installed in some directory along your @code{PATH}. @node Parallel, Errors, Execution, Commands @section Parallel Execution @cindex commands, execution in parallel @cindex parallel execution @cindex execution, in parallel @cindex job slots @cindex @code{-j} @cindex @code{--jobs} GNU @code{make} knows how to execute several commands at once. Normally, @code{make} will execute only one command at a time, waiting for it to finish before executing the next. However, the @samp{-j} or @samp{--jobs} option tells @code{make} to execute many commands simultaneously.@refill On MS-DOS, the @samp{-j} option has no effect, since that system doesn't support multi-processing. If the @samp{-j} option is followed by an integer, this is the number of commands to execute at once; this is called the number of @dfn{job slots}. If there is nothing looking like an integer after the @samp{, @code{make} will invalidate the standard input streams of all but one running command. This means that attempting to read from standard input will usually be a fatal error (a @samp{Broken pipe} signal) for most child processes if there are several. @cindex broken pipe @cindex standard input It is unpredictable which command will have a valid standard input stream (which will come from the terminal, or wherever you redirect the standard input of @code{make}). The first command run will always get it first, and the first command started after that one finishes will get it next, and so on. We will change how this aspect of @code @code{make} invocations raises issues. For more information on this, see @ref{Options/Recursion, ,Communicating Options to a Sub-@code{make}}. If a command fails (is killed by a signal or exits with a nonzero status), and errors are not ignored for that command (@pxref{Errors, ,Errors in Commands}), the remaining command lines to remake the same target will not be run. If a command fails and the @samp{-k} or @samp{--keep-going} option was not given (@pxref{Options Summary, ,Summary of Options}), @code{make} aborts execution. If make terminates for any reason (including a signal) with child processes running, it waits for them to finish before actually exiting.@refill @cindex load average @cindex limiting jobs based on load @cindex jobs, limiting based on load @cindex @code{-l} (load average) @cindex @code{--max-load} @cindex @code{--load-average} When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the @samp{-l} option to tell @code{make} to limit the number of jobs to run at once, based on the load average. The @samp{-l} or @samp{--max-load} option is followed by a floating-point number. For example, @example -l 2.5 @end example @noindent will not let @code{make} start more than one job if the load average is above 2.5. The @samp{-l} option with no following number removes the load limit, if one was given with a previous @samp{-l} option.@refill More precisely, when @code{make} goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with @samp{-l}, @code{make} waits until the load average goes below that limit, or until all the other jobs finish. By default, there is no load limit. @node Errors, Interrupts, Parallel, Commands @section Errors in Commands @cindex errors (in commands) @cindex commands, errors in @cindex exit status (errors) After each shell command returns, @code{make} looks at its exit status. If the command completed successfully, the next command line is executed in a new shell; after the last command line is finished, the rule is finished. If there is an error (the exit status is nonzero), @code{make} gives up on the current rule, and perhaps on all rules. Sometimes the failure of a certain command does not indicate a problem. For example, you may use the @code{mkdir} command to ensure that a directory exists. If the directory already exists, @code{mkdir} will report an error, but you probably want @code{make} to continue regardless. @cindex @code{-} (in commands) To ignore errors in a command line, write a @samp{-} at the beginning of the line's text (after the initial tab). The @samp{-} is discarded before the command is passed to the shell for execution. For example, @example @group clean: -rm -f *.o @end group @end example @cindex @code{rm} (shell command) @noindent This causes @code{rm} to continue even if it is unable to remove a file. @cindex @code{-i} @cindex @code{--ignore-errors} @findex .IGNORE When you run @code{make} with the @samp{-i} or @samp{--ignore-errors} flag, errors are ignored in all commands of all rules. A rule in the makefile for the special target @code{.IGNORE} has the same effect, if there are no prerequisites. These ways of ignoring errors are obsolete because @samp{-} is more flexible. When errors are to be ignored, because of either a @samp{-} or the @samp{-i} flag, @code{make} treats an error return just like success, except that it prints out a message that tells you the status code the command exited with, and says that the error has been ignored. When an error happens that @code{make} has not been told to ignore, it implies that the current target cannot be correctly remade, and neither can any other that depends on it either directly or indirectly. No further commands will be executed for these targets, since their preconditions have not been achieved. @cindex @code{-k} @cindex @code{--keep-going} Normally @code{make} gives up immediately in this circumstance, returning a nonzero status. However, if the @samp{-k} or @samp{--keep-going} flag is specified, @code{make} continues. @xref{Options Summary, ,Summary of Options}. The usual behavior assumes that your purpose is to get the specified targets up to date; once @code{make} learns that this is impossible, it might as well report the failure immediately. The @samp{-k} option says that the real purpose is to test as many of the changes made in the program as possible, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs' @code{compile} command passes the @samp{-k} flag by default. @cindex Emacs (@code{M-x compile}) @findex .DELETE_ON_ERROR @cindex deletion of target files @cindex removal of target files @cindex target, deleting on error Usually when a command fails, if it has changed the target file at all, the file is corrupted and cannot be used---or at least it is not completely updated. Yet the file's time stamp says that it is now up to date, so the next time @code{make} runs, it will not try to update that file. The situation is just the same as when the command is killed by a signal; @pxref{Interrupts}. So generally the right thing to do is to delete the target file if the command fails after beginning to change the file. @code{make} will do this if @code{.DELETE_ON_ERROR} appears as a target. This is almost always what you want @code{make} to do, but it is not historical practice; so for compatibility, you must explicitly request it. @node Interrupts, Recursion, Errors, Commands @section Interrupting or Killing @code{make} @cindex interrupt @cindex signal @cindex deletion of target files @cindex removal of target files @cindex target, deleting on interrupt @cindex killing (interruption) If @code{make} gets a fatal signal while a command is executing, it may delete the target file that the command was supposed to update. This is done if the target file's last-modification time has changed since @code{make} first checked it. The purpose of deleting the target is to make sure that it is remade from scratch when @code{make} is next run. Why is this? Suppose you type @kbd{Ctrl-c} while a compiler is running, and it has begun to write an object file @file{foo.o}. The @kbd{Ctrl-c} kills the compiler, resulting in an incomplete file whose last-modification time is newer than the source file @file{foo.c}. But @code{make} also receives the @kbd{Ctrl-c} signal and deletes this incomplete file. If @code{make} did not do this, the next invocation of @code{make} would think that @file{foo.o} did not require updating---resulting in a strange error message from the linker when it tries to link an object file half of which is missing. @findex .PRECIOUS You can prevent the deletion of a target file in this way by making the special target @code{.PRECIOUS} depend on it. Before remaking a target, @code{make} checks to see whether it appears on the prerequisites of @code{. @node Recursion, Sequences, Interrupts, Commands @section Recursive Use of @code{make} @cindex recursion @cindex subdirectories, recursion for Recursive use of @code{make} means using @code{make} as a command in a makefile. This technique is useful when you want separate makefiles for various subsystems that compose a larger system. For example, suppose you have a subdirectory @file{subdir} which has its own makefile, and you would like the containing directory's makefile to run @code{make} on the subdirectory. You can do it by writing this: @example subsystem: cd subdir && $(MAKE) @end example @noindent or, equivalently, this (@pxref{Options Summary, ,Summary of Options}): @example subsystem: $(MAKE) -C subdir @end example @cindex @code{-C} @cindex @code{--directory} You can write recursive @code{make} commands just by copying this example, but there are many things to know about how they work and why, and about how the sub-@code{make} relates to the top-level @code{make}. You may also find it useful to declare targets that invoke recursive @code{make} commands as @samp{.PHONY} (for more discussion on when this is useful, see @ref{Phony Targets}). @vindex @code{CURDIR} For your convenience, when GNU @code{make} starts (after it has processed any @code{-C} options) it sets the variable @code{CURDIR} to the pathname of the current working directory. This value is never touched by @code{make} again: in particular note that if you include files from other directories the value of @code{CURDIR} does not change. The value has the same precedence it would have if it were set in the makefile (by default, an environment variable @code{CURDIR} will not override this value). Note that setting this variable has no impact on the operation of @code{make} (it does not cause @code{make} to change its working directory, for example). @menu *. @end menu @node MAKE Variable, Variables/Recursion, Recursion, Recursion @subsection How the @code{MAKE} Variable Works @vindex MAKE @cindex recursion, and @code{MAKE} variable Recursive @code{make} commands should always use the variable @code{MAKE}, not the explicit command name @samp{make}, as shown here: @example @group subsystem: cd subdir && $(MAKE) @end group @end example The value of this variable is the file name with which @code{make} was invoked. If this file name was @file{/bin/make}, then the command executed is @samp{cd subdir && /bin/make}. If you use a special version of @code{make} to run the top-level makefile, the same special version will be executed for recursive invocations. @cindex @code{cd} (shell command) @cindex +, and commands As a special feature, using the variable @code{MAKE} in the commands of a rule alters the effects of the @samp{-t} (@samp{--touch}), @samp{-n} (@samp{--just-print}), or @samp{-q} (@w{@samp{--question}}) option. Using the @code{MAKE} variable has the same effect as using a @samp{+} character at the beginning of the command line. @xref{Instead of Execution, ,Instead of Executing the Commands}. This special feature is only enabled if the @code{MAKE} variable appears directly in the command script: it does not apply if the @code{MAKE} variable is referenced through expansion of another variable. In the latter case you must use the @samp{+} token to get these special effects.@refill Consider the command @samp{make -t} in the above example. (The @samp{-t} option marks targets as up to date without actually running any commands; see @ref{Instead of Execution}.) Following the usual definition of @samp{-t}, a @samp{make -t} command in the example would create a file named @file{subsystem} and do nothing else. What you really want it to do is run @samp{@w{cd subdir &&} @w{make -t}}; but that would require executing the command, and @samp{-t} says not to execute commands.@refill @cindex @code{-t}, and recursion @cindex recursion, and @code{-t} @cindex @code{--touch}, and recursion The special feature makes this do what you want: whenever a command line of a rule contains the variable @code{MAKE}, the flags @samp{-t}, @samp{-n} and @samp{-q} do not apply to that line. Command lines containing @code{MAKE} are executed normally despite the presence of a flag that causes most commands not to be run. The usual @code{MAKEFLAGS} mechanism passes the flags to the sub-@code{make} (@pxref{Options/Recursion, ,Communicating Options to a Sub-@code{make}}), so your request to touch the files, or print the commands, is propagated to the subsystem.@refill @node Variables/Recursion, Options/Recursion, MAKE Variable, Recursion @subsection Communicating Variables to a Sub-@code{make} @cindex sub-@code{make} @cindex environment, and recursion @cindex exporting variables @cindex variables, environment @cindex variables, exporting @cindex recursion, and environment @cindex recursion, and variables Variable values of the top-level @code{make} can be passed to the sub-@code{make} through the environment by explicit request. These variables are defined in the sub-@code{make} as defaults, but do not override what is specified in the makefile used by the sub-@code{make} makefile unless you use the @samp{-e} switch (@pxref{Options Summary, ,Summary of Options}).@refill To pass down, or @dfn{export}, a variable, @code{make} adds the variable and its value to the environment for running each command. The sub-@code{make}, in turn, uses the environment to initialize its table of variable values. @xref{Environment, ,Variables from the Environment}. Except by explicit request, @code{make} exports a variable only if it is either defined in the environment initially or set on the command line, and if its name consists only of letters, numbers, and underscores. Some shells cannot cope with environment variable names consisting of characters other than letters, numbers, and underscores. @cindex SHELL, exported value The value of the @code{make} variable @code{SHELL} is not exported. Instead, the value of the @code{SHELL} variable from the invoking environment is passed to the sub-@code{make}. You can force @code{make} to export its value for @code{SHELL} by using the @code{export} directive, described below. @xref{Choosing the Shell}. The special variable @code{MAKEFLAGS} is always exported (unless you unexport it). @code{MAKEFILES} is exported if you set it to anything. @code{make} automatically passes down variable values that were defined on the command line, by putting them in the @code{MAKEFLAGS} variable. @iftex See the next section. @end iftex @ifnottex @xref{Options/Recursion}. @end ifnottex Variables are @emph{not} normally passed down if they were created by default by @code{make} (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}). The sub-@code{make} will define these for itself.@refill @findex export If you want to export specific variables to a sub-@code{make}, use the @code{export} directive, like this: @example export @var{variable} @dots{} @end example @noindent @findex unexport If you want to @emph{prevent} a variable from being exported, use the @code{unexport} directive, like this: @example unexport @var{variable} @dots{} @end example @noindent In both of these forms, the arguments to @code{export} and @code{unexport} are expanded, and so could be variables or functions which expand to a (list of) variable names to be (un)exported. As a convenience, you can define a variable and export it at the same time by doing: @example export @var{variable} = value @end example @noindent has the same result as: @example @var{variable} = value export @var{variable} @end example @noindent and @example export @var{variable} := value @end example @noindent has the same result as: @example @var{variable} := value export @var{variable} @end example Likewise, @example export @var{variable} += value @end example @noindent is just like: @example @var{variable} += value export @var{variable} @end example @noindent @xref{Appending, ,Appending More Text to Variables}. You may notice that the @code{export} and @code{unexport} directives work in @code{make} in the same way they work in the shell, @code{sh}. If you want all variables to be exported by default, you can use @code{export} by itself: @example export @end example @noindent This tells @code{make} that variables which are not explicitly mentioned in an @code{export} or @code{unexport} directive should be exported. Any variable given in an @code{unexport} directive will still @emph{not} be exported. If you use @code{export} by itself to export variables by default, variables whose names contain characters other than alphanumerics and underscores will not be exported unless specifically mentioned in an @code{export} directive.@refill @findex .EXPORT_ALL_VARIABLES The behavior elicited by an @code{export} directive by itself was the default in older versions of GNU @code{make}. If your makefiles depend on this behavior and you want to be compatible with old versions of @code{make}, you can write a rule for the special target @code{.EXPORT_ALL_VARIABLES} instead of using the @code{export} directive. This will be ignored by old @code{make}s, while the @code{export} directive will cause a syntax error.@refill @cindex compatibility in exporting Likewise, you can use @code{unexport} by itself to tell @code{make} @emph{not} to export variables by default. Since this is the default behavior, you would only need to do this if @code{export} had been used by itself earlier (in an included makefile, perhaps). You @strong{cannot} use @code{export} and @code{unexport} by themselves to have variables exported for some commands and not for others. The last @code{export} or @code{unexport} directive that appears by itself determines the behavior for the entire run of @code{make}.@refill @vindex MAKELEVEL @cindex recursion, level of As a special feature, the variable @code{MAKELEVEL} is changed when it is passed down from level to level. This variable's value is a string which is the depth of the level as a decimal number. The value is @samp{0} for the top-level @code{make}; @samp{1} for a sub-@code{make}, @samp{2} for a sub-sub-@code{make}, and so on. The incrementation happens when @code{make} sets up the environment for a command.@refill The main use of @code{MAKELEVEL} is to test it in a conditional directive (@pxref{Conditionals, ,Conditional Parts of Makefiles}); this way you can write a makefile that behaves one way if run recursively and another way if run directly by you.@refill @vindex MAKEFILES You can use the variable @code{MAKEFILES} to cause all sub-@code{make} commands to use additional makefiles. The value of @code{MAKEFILES} is a whitespace-separated list of file names. This variable, if defined in the outer-level makefile, is passed down through the environment; then it serves as a list of extra makefiles for the sub-@code{make} to read before the usual or specified ones. @xref{MAKEFILES Variable, ,The Variable @code{MAKEFILES}}.@refill @node Options/Recursion, -w Option, Variables/Recursion, Recursion @subsection Communicating Options to a Sub-@code{make} @cindex options, and recursion @cindex recursion, and options @vindex MAKEFLAGS Flags such as @samp{-s} and @samp{-k} are passed automatically to the sub-@code{make} through the variable @code{MAKEFLAGS}. This variable is set up automatically by @code{make} to contain the flag letters that @code{make} received. Thus, if you do @w{@samp{make -ks}} then @code{MAKEFLAGS} gets the value @samp{ks}.@refill As a consequence, every sub-@code{make} gets a value for @code{MAKEFLAGS} in its environment. In response, it takes the flags from that value and processes them as if they had been given as arguments. @xref{Options Summary, ,Summary of Options}. @cindex command line variable definitions, and recursion @cindex variables, command line, and recursion @cindex recursion, and command line variable definitions Likewise variables defined on the command line are passed to the sub-@code{make} through @code{MAKEFLAGS}. Words in the value of @code{MAKEFLAGS} that contain @samp{=}, @code{make} treats as variable definitions just as if they appeared on the command line. @xref{Overriding, ,Overriding Variables}. @cindex @code{-C}, and recursion @cindex @code{-f}, and recursion @cindex @code{-o}, and recursion @cindex @code{-W}, and recursion @cindex @code{--directory}, and recursion @cindex @code{--file}, and recursion @cindex @code{--old-file}, and recursion @cindex @code{--assume-old}, and recursion @cindex @code{--assume-new}, and recursion @cindex @code{--new-file}, and recursion @cindex recursion, and @code{-C} @cindex recursion, and @code{-f} @cindex recursion, and @code{-o} @cindex recursion, and @code{-W} The options @samp{-C}, @samp{-f}, @samp{-o}, and @samp{-W} are not put into @code{MAKEFLAGS}; these options are not passed down.@refill @cindex @code{-j}, and recursion @cindex @code{--jobs}, and recursion @cindex recursion, and @code{-j} @cindex job slots, and recursion The @samp{-j} option is a special case (@pxref{Parallel, ,Parallel Execution}). If you set it to some numeric value @samp{N} and your operating system supports it (most any UNIX system will; others typically won't), the parent @code{make} and all the sub-@code{make}s will communicate to ensure that there are only @samp{N} jobs running at the same time between them all. Note that any job that is marked recursive (@pxref{Instead of Execution, ,Instead of Executing the Commands}) doesn't count against the total jobs (otherwise we could get @samp{N} sub-@code{make}s running and have no slots left over for any real work!) If your operating system doesn't support the above communication, then @samp{-j 1} is always put into @code{MAKEFLAGS} instead of the value you specified. This is because if the @w{@samp{-j}} option were passed down to sub-@code{make}s, you would get many more jobs running in parallel than you asked for. If you give @samp{-j} with no numeric argument, meaning to run as many jobs as possible in parallel, this is passed down, since multiple infinities are no more than one.@refill If you do not want to pass the other flags down, you must change the value of @code{MAKEFLAGS}, like this: @example subsystem: cd subdir && $(MAKE) MAKEFLAGS= @end example @vindex MAKEOVERRIDES The command line variable definitions really appear in the variable @code{MAKEOVERRIDES}, and @code{MAKEFLAGS} contains a reference to this variable. If you do want to pass flags down normally, but don't want to pass down the command line variable definitions, you can reset @code{MAKEOVERRIDES} to empty, like this: @example MAKEOVERRIDES = @end example @noindent @cindex Arg list too long @cindex E2BIG This is not usually useful to do. However, some systems have a small fixed limit on the size of the environment, and putting so much information into the value of @code{MAKEFLAGS} can exceed it. If you see the error message @samp{Arg list too long}, this may be the problem. @findex .POSIX @cindex POSIX.2 (For strict compliance with POSIX.2, changing @code{MAKEOVERRIDES} does not affect @code{MAKEFLAGS} if the special target @samp{.POSIX} appears in the makefile. You probably do not care about this.) @vindex MFLAGS A similar variable @code{MFLAGS} exists also, for historical compatibility. It has the same value as @code{MAKEFLAGS} except that it does not contain the command line variable definitions, and it always begins with a hyphen unless it is empty (@code{MAKEFLAGS} begins with a hyphen only when it begins with an option that has no single-letter version, such as @samp{--warn-undefined-variables}). @code{MFLAGS} was traditionally used explicitly in the recursive @code{make} command, like this: @example subsystem: cd subdir && $(MAKE) $(MFLAGS) @end example @noindent but now @code{MAKEFLAGS} makes this usage redundant. If you want your makefiles to be compatible with old @code{make} programs, use this technique; it will work fine with more modern @code{make} versions too. @cindex setting options from environment @cindex options, setting from environment @cindex setting options in makefiles @cindex options, setting in makefiles The @code{MAKEFLAGS} variable can also be useful if you want to have certain options, such as @samp{-k} (@pxref{Options Summary, ,Summary of Options}), set each time you run @code{make}. You simply put a value for @code{MAKEFLAGS} in your environment. You can also set @code{MAKEFLAGS} in a makefile, to specify additional flags that should also be in effect for that makefile. (Note that you cannot use @code{MFLAGS} this way. That variable is set only for compatibility; @code{make} does not interpret a value you set for it in any way.) When @code{make} interprets the value of @code @samp{-C}, @samp{-f}, @samp{-h}, @samp{-o}, @samp{-W}, and their long-named versions are ignored; and there is no error for an invalid option). If you do put @code{MAKEFLAGS} in your environment, you should be sure not to include any options that will drastically affect the actions of @code{make} and undermine the purpose of makefiles and of @code{make} itself. For instance, the @samp{-t}, @samp{-n}, and @samp{-q} options, if put in one of these variables, could have disastrous consequences and would certainly have at least surprising and probably annoying effects.@refill @node -w Option, , Options/Recursion, Recursion @subsection The @samp{--print-directory} Option @cindex directories, printing them @cindex printing directories @cindex recursion, and printing directories If you use several levels of recursive @code{make} invocations, the @samp{-w} or @w{@samp{--print-directory}} option can make the output a lot easier to understand by showing each directory as @code{make} starts processing it and as @code{make} finishes processing it. For example, if @samp{make -w} is run in the directory @file{/u/gnu/make}, @code{make} will print a line of the form:@refill @example make: Entering directory `/u/gnu/make'. @end example @noindent before doing anything else, and a line of the form: @example make: Leaving directory `/u/gnu/make'. @end example @noindent when processing is completed. @cindex @code{-C}, and @code{-w} @cindex @code{--directory}, and @code{--print-directory} @cindex recursion, and @code{-w} @cindex @code{-w}, and @code{-C} @cindex @code{-w}, and recursion @cindex @code{--print-directory}, and @code{--directory} @cindex @code{--print-directory}, and recursion @cindex @code{--no-print-directory} @cindex @code{--print-directory}, disabling @cindex @code{-w}, disabling Normally, you do not need to specify this option because @samp{make} does it for you: @samp{-w} is turned on automatically when you use the @samp{-C} option, and in sub-@code{make}s. @code{make} will not automatically turn on @samp{-w} if you also use @samp{-s}, which says to be silent, or if you use @samp{--no-print-directory} to explicitly disable it. @node Sequences, Empty Commands, Recursion, Commands @section Defining Canned Command Sequences @cindex sequences of commands @cindex commands, sequences of When the same sequence of commands is useful in making various targets, you can define it as a canned sequence with the @code{define} directive, and refer to the canned sequence from the rules for those targets. The canned sequence is actually a variable, so the name must not conflict with other variable names. Here is an example of defining a canned sequence of commands: @example define run-yacc yacc $(firstword $^) mv y.tab.c $@@ endef @end example @cindex @code{yacc} @noindent Here @code{run-yacc} is the name of the variable being defined; @code{endef} marks the end of the definition; the lines in between are the commands. The @code{define} directive does not expand variable references and function calls in the canned sequence; the @samp{$} characters, parentheses, variable names, and so on, all become part of the value of the variable you are defining. @xref{Defining, ,Defining Variables Verbatim}, for a complete explanation of @code{define}. The first command in this example runs Yacc on the first prerequisite of whichever rule uses the canned sequence. The output file from Yacc is always named @file{y.tab.c}. The second command moves the output to the rule's target file name. To use the canned sequence, substitute the variable into the commands of a rule. You can substitute it like any other variable (@pxref{Reference, ,Basics of Variable References}). Because variables defined by @code{define} are recursively expanded variables, all the variable references you wrote inside the @code{define} are expanded now. For example: @example foo.c : foo.y $(run-yacc) @end example @noindent @samp{foo.y} will be substituted for the variable @samp{$^} when it occurs in @code{run-yacc}'s value, and @samp{foo.c} for @samp{$@@}.@refill This is a realistic example, but this particular one is not needed in practice because @code{make} has an implicit rule to figure out these commands based on the file names involved (@pxref{Implicit Rules, ,Using Implicit Rules}). @cindex @@, and @code{define} @cindex -, and @code{define} @cindex +, and @code{define} In command execution, each line of a canned sequence is treated just as if the line appeared on its own in the rule, preceded by a tab. In particular, @code{make} invokes a separate subshell for each line. You can use the special prefix characters that affect command lines (@samp{@@}, @samp{-}, and @samp{+}) on each line of a canned sequence. @xref{Commands, ,Writing the Commands in Rules}. For example, using this canned sequence: @example define frobnicate @@echo "frobnicating target $@@" frob-step-1 $< -o $@@-step-1 frob-step-2 $@@-step-1 -o $@@ endef @end example @noindent @code{make} will not echo the first line, the @code{echo} command. But it @emph{will} echo the following two command lines. On the other hand, prefix characters on the command line that refers to a canned sequence apply to every line in the sequence. So the rule: @example frob.out: frob.in @@$(frobnicate) @end example @noindent does not echo @emph{any} commands. (@xref{Echoing, ,Command Echoing}, for a full explanation of @samp{@@}.) @node Empty Commands, , Sequences, Commands @section Using Empty Commands @cindex empty commands @cindex commands, empty It is sometimes useful to define commands which do nothing. This is done simply by giving a command that consists of nothing but whitespace. For example: @example target: ; @end example @noindent defines an empty command string for @file{target}. You could also use a line beginning with a tab character to define an empty command string, but this would be confusing because such a line looks empty. @findex .DEFAULT@r{, and empty commands} You may be wondering why you would want to define a command string that does nothing. The only reason this is useful is to prevent a target from getting implicit commands (from implicit rules or the @code{.DEFAULT} special target; @pxref{Implicit Rules} and @pxref{Last Resort, ,Defining Last-Resort Default Rules}).@refill @c !!! another reason is for canonical stamp files: @ignore @example foo: stamp-foo ; stamp-foo: foo.in create foo frm foo.in touch $@ @end example @end ignore. @xref{Phony Targets, ,Phony Targets}, for a better way to do this. @node Using Variables, Conditionals, Commands, Top @chapter How to Use Variables @cindex variable @cindex value @cindex recursive variable expansion @cindex simple variable expansion A @dfn{variable} is a name defined in a makefile to represent a string of text, called the variable's @dfn{value}. These values are substituted by explicit request into targets, prerequisites, commands, and other parts of the makefile. (In some other versions of @code{make}, variables are called @dfn{macros}.) @cindex macro Variables and functions in all parts of a makefile are expanded when read, except for the shell commands in rules, the right-hand sides of variable definitions using @samp{=}, and the bodies of variable definitions using the @code{define} directive.@refill Variables can represent lists of file names, options to pass to compilers, programs to run, directories to look in for source files, directories to write output in, or anything else you can imagine. A variable name may be any sequence of characters not containing @samp{:}, @samp{#}, @samp{=}, or leading or trailing whitespace. However, variable names containing characters other than letters, numbers, and underscores should be avoided, as they may be given special meanings in the future, and with some shells they cannot be passed through the environment to a sub-@code{make} (@pxref{Variables/Recursion, ,Communicating Variables to a Sub-@code{make}}). Variable names are case-sensitive. The names @samp{foo}, @samp{FOO}, and @samp (@pxref{Overriding, ,Overriding Variables}). A few variables have names that are a single punctuation character or just a few characters. These are the @dfn{automatic variables}, and they have particular specialized uses. @xref. @end menu @node Reference, Flavors, Using Variables, Using Variables @section Basics of Variable References @cindex variables, how to reference @cindex reference to variables @cindex @code{$}, in variable reference @cindex dollar sign (@code{$}), in variable reference To substitute a variable's value, write a dollar sign followed by the name of the variable in parentheses or braces: either @samp{$(foo)} or @samp{$@{foo@}} is a valid reference to the variable @code{foo}. This special significance of @samp{$} is why you must write @samp{$$}: @example @group objects = program.o foo.o utils.o program : $(objects) cc -o program $(objects) $(objects) : defs.h @end group @end example Variable references work by strict textual substitution. Thus, the rule @example @group foo = c prog.o : prog.$(foo) $(foo)$(foo) -$(foo) prog.$(foo) @end group @end example @noindent could be used to compile a C program @file{prog.c}. Since spaces before the variable value are ignored in variable assignments, the value of @code{foo} is precisely @samp{c}. (Don't actually write your makefiles this way!) A dollar sign followed by a character other than a dollar sign, open-parenthesis or open-brace treats that single character as the variable name. Thus, you could reference the variable @code{x} with @samp{$x}. However, this practice is strongly discouraged, except in the case of the automatic variables (@pxref{Automatic Variables}). @node Flavors, Advanced, Reference, Using Variables @section The Two Flavors of Variables @cindex flavors of variables @cindex recursive variable expansion @cindex variables, flavors @cindex recursively expanded variables @cindex variables, recursively expanded There are two ways that a variable in GNU @code{make} can have a value; we call them the two @dfn{flavors} of variables. The two flavors are distinguished in how they are defined and in what they do when expanded. @cindex = The first flavor of variable is a @dfn{recursively expanded} variable. Variables of this sort are defined by lines using @samp{=} (@pxref{Setting, ,Setting Variables}) or by the @code{define} directive (@pxref{Defining, ,Defining Variables Verbatim}). The value you specify is installed verbatim; if it contains references to other variables, these references are expanded whenever this variable is substituted (in the course of expanding some other string). When this happens, it is called @dfn{recursive expansion}.@refill For example, @example foo = $(bar) bar = $(ugh) ugh = Huh? all:;echo $(foo) @end example @noindent will echo @samp{Huh?}: @samp{$(foo)} expands to @samp{$(bar)} which expands to @samp{$(ugh)} which finally expands to @samp{Huh?}.@refill This flavor of variable is the only sort supported by other versions of @code{make}. It has its advantages and its disadvantages. An advantage (most would say) is that: @example CFLAGS = $(include_dirs) -O include_dirs = -Ifoo -Ibar @end example @noindent will do what was intended: when @samp{CFLAGS} is expanded in a command, it will expand to @samp{-Ifoo -Ibar -O}. A major disadvantage is that you cannot append something on the end of a variable, as in @example CFLAGS = $(CFLAGS) -O @end example @noindent because it will cause an infinite loop in the variable expansion. (Actually @code{make} detects the infinite loop and reports an error.) @cindex loops in variable expansion @cindex variables, loops in expansion Another disadvantage is that any functions (@pxref{Functions, ,Functions for Transforming Text}) referenced in the definition will be executed every time the variable is expanded. This makes @code{make} run slower; worse, it causes the @code{wildcard} and @code{shell} functions to give unpredictable results because you cannot easily control when they are called, or even how many times. To avoid all the problems and inconveniences of recursively expanded variables, there is another flavor: simply expanded variables. @cindex simply expanded variables @cindex variables, simply expanded @cindex := @dfn{Simply expanded variables} are defined by lines using @samp{:=} (@pxref{Setting, @emph{as of the time this variable was defined}. Therefore, @example x := foo y := $(x) bar x := later @end example @noindent is equivalent to @example y := foo bar x := later @end example When a simply expanded variable is referenced, its value is substituted verbatim. Here is a somewhat more complicated example, illustrating the use of @samp{:=} in conjunction with the @code{shell} function. (@xref{Shell Function, , The @code{shell} Function}.) This example also shows use of the variable @code{MAKELEVEL}, which is changed when it is passed down from level to level. (@xref{Variables/Recursion, , Communicating Variables to a Sub-@code{make}}, for information about @code{MAKELEVEL}.) @vindex MAKELEVEL @vindex MAKE @example @group ifeq (0,$@{MAKELEVEL@}) whoami := $(shell whoami) host-type := $(shell arch) MAKE := $@{MAKE@} host-type=$@{host-type@} whoami=$@{whoami@} endif @end group @end example @noindent An advantage of this use of @samp{:=} is that a typical `descend into a directory' command then looks like this: @example @group $@{subdirs@}: $@{MAKE@} -C $@@ all @end group @end example (@pxref{Functions, ,Functions for Transforming Text}). @cindex spaces, in variable values @cindex whitespace, in variable values @cindex variables, spaces in values: @example nullstring := space := $(nullstring) # end of the line @end example @noindent Here the value of the variable @code{space} is precisely one space. The comment @w{@samp{# end of the line}} is included here just for clarity. Since trailing space characters are @emph @emph{not} want any whitespace characters at the end of your variable value, you must remember not to put a random comment on the end of the line after some whitespace, such as this: @example dir := /foo/bar # directory to put the frobs in @end example @noindent Here the value of the variable @code{dir} is @w{@samp{/foo/bar }} (with four trailing spaces), which was probably not the intention. (Imagine something like @w{@samp{$(dir)/file}} with this definition!) @cindex conditional variable assignment @cindex variables, conditional assignment @cindex ?= There is another assignment operator for variables, @samp{?=}. This is called a conditional variable assignment operator, because it only has an effect if the variable is not yet defined. This statement: @example FOO ?= bar @end example @noindent is exactly equivalent to this (@pxref{Origin Function, ,The @code{origin} Function}): @example ifeq ($(origin FOO), undefined) FOO = bar endif @end example Note that a variable set to an empty value is still defined, so @samp{?=} will not set that variable. @node Advanced, Values, Flavors, Using Variables @section Advanced Features for Reference to Variables @cindex reference to variables This section describes some advanced features you can use to reference variables in more flexible ways. @menu * Substitution Refs:: Referencing a variable with substitutions on the value. * Computed Names:: Computing the name of the variable to refer to. @end menu @node Substitution Refs, Computed Names, Advanced, Advanced @subsection Substitution References @cindex modified variable reference @cindex substitution variable reference @cindex variables, modified reference @cindex variables, substitution reference @cindex variables, substituting suffix in @cindex suffix, substituting in variables A @dfn{substitution reference} substitutes the value of a variable with alterations that you specify. It has the form @samp{$(@var{var}:@var{a}=@var{b})} (or @samp{$@{@var{var}:@var{a}=@var{b}@}}) and its meaning is to take the value of the variable @var{var}, replace every @var{a} at the end of a word with @var{b} in that value, and substitute the resulting string. When we say ``at the end of a word'', we mean that @var{a} must appear either followed by whitespace or at the end of the value in order to be replaced; other occurrences of @var{a} in the value are unaltered. For example:@refill @example foo := a.o b.o c.o bar := $(foo:.o=.c) @end example @noindent sets @samp{bar} to @samp{a.c b.c c.c}. @xref{Setting, ,Setting Variables}. A substitution reference is actually an abbreviation for use of the @code{patsubst} expansion function (@pxref{Text Functions, ,Functions for String Substitution and Analysis}). We provide substitution references as well as @code{patsubst} for compatibility with other implementations of @code{make}. @findex patsubst Another type of substitution reference lets you use the full power of the @code{patsubst} function. It has the same form @samp{$(@var{var}:@var{a}=@var{b})} described above, except that now @var{a} must contain a single @samp{%} character. This case is equivalent to @samp{$(patsubst @var{a},@var{b},$(@var{var}))}. @xref{Text Functions, ,Functions for String Substitution and Analysis}, for a description of the @code{patsubst} function.@refill @example @group @exdent For example: foo := a.o b.o c.o bar := $(foo:%.o=%.c) @end group @end example @noindent sets @samp{bar} to @samp{a.c b.c c.c}. @node Computed Names, , Substitution Refs, Advanced @subsection Computed Variable Names @cindex nested variable reference @cindex computed variable name @cindex variables, computed names @cindex variables, nested references @cindex variables, @samp{$} in name @cindex @code{$}, in variable name @cindex dollar sign (@code{$}), in variable name @dfn{computed variable name} or a @dfn{nested variable reference}. For example, @example x = y y = z a := $($(x)) @end example @noindent defines @code{a} as @samp{z}: the @samp{$(x)} inside @samp{$($(x))} expands to @samp{y}, so @samp{$($(x))} expands to @samp{$(y)} which in turn expands to @samp{z}. Here the name of the variable to reference is not stated explicitly; it is computed by expansion of @samp{$(x)}. The reference @samp{$(x)} here is nested within the outer variable reference. The previous example shows two levels of nesting, but any number of levels is possible. For example, here are three levels: @example x = y y = z z = u a := $($($(x))) @end example @noindent Here the innermost @samp{$(x)} expands to @samp{y}, so @samp{$($(x))} expands to @samp{$(y)} which in turn expands to @samp{z}; now we have @samp{$(z)}, which becomes @samp{u}. References to recursively-expanded variables within a variable name are reexpanded in the usual fashion. For example: @example x = $(y) y = z z = Hello a := $($(x)) @end example @noindent defines @code{a} as @samp{Hello}: @samp{$($(x))} becomes @samp{$($(y))} which becomes @samp{$(z)} which becomes @samp{Hello}. Nested variable references can also contain modified references and function invocations (@pxref{Functions, ,Functions for Transforming Text}), just like any other reference. For example, using the @code{subst} function (@pxref{Text Functions, ,Functions for String Substitution and Analysis}): @example @group x = variable1 variable2 := Hello y = $(subst 1,2,$(x)) z = y a := $($($(z))) @end group @end example @noindent eventually defines @code{a} as @samp{Hello}. It is doubtful that anyone would ever want to write a nested reference as convoluted as this one, but it works: @samp{$($($(z)))} expands to @samp{$($(y))} which becomes @samp{$($(subst 1,2,$(x)))}. This gets the value @samp{variable1} from @code{x} and changes it by substitution to @samp{variable2}, so that the entire string becomes @samp{$(variable2)}, a simple variable reference whose value is @samp{Hello}.@refill A computed variable name need not consist entirely of a single variable reference. It can contain several variable references, as well as some invariant text. For example, @example @group a_dirs := dira dirb 1_dirs := dir1 dir2 @end group @group a_files := filea fileb 1_files := file1 file2 @end group @group ifeq "$(use_a)" "yes" a1 := a else a1 := 1 endif @end group @group ifeq "$(use_dirs)" "yes" df := dirs else df := files endif dirs := $($(a1)_$(df)) @end group @end example @noindent will give @code{dirs} the same value as @code{a_dirs}, @code{1_dirs}, @code{a_files} or @code{1_files} depending on the settings of @code{use_a} and @code{use_dirs}.@refill Computed variable names can also be used in substitution references: @example @group a_objects := a.o b.o c.o 1_objects := 1.o 2.o 3.o sources := $($(a1)_objects:.o=.c) @end group @end example @noindent defines @code{sources} as either @samp{a.c b.c c.c} or @samp{1.c 2.c 3.c}, depending on the value of @code{a1}. The only restriction on this sort of use of nested variable references is that they cannot specify part of the name of a function to be called. This is because the test for a recognized function name is done before the expansion of nested references. For example, @example @group ifdef do_sort func := sort else func := strip endif @end group @group bar := a d b g q c @end group @group foo := $($(func) $(bar)) @end group @end example @noindent attempts to give @samp{foo} the value of the variable @samp{sort a d b g q c} or @samp{strip a d b g q c}, rather than giving @samp{a d b g q c} as the argument to either the @code{sort} or the @code{strip} function. This restriction could be removed in the future if that change is shown to be a good idea. You can also use computed variable names in the left-hand side of a variable assignment, or in a @code{define} directive, as in: @example dir = foo $(dir)_sources := $(wildcard $(dir)/*.c) define $(dir)_print lpr $($(dir)_sources) endef @end example @noindent This example defines the variables @samp{dir}, @samp{foo_sources}, and @samp{foo_print}. Note that @dfn{nested variable references} are quite different from @dfn{recursively expanded variables} (@pxref{Flavors, ,The Two Flavors of Variables}), though both are used together in complex ways when doing makefile programming.@refill @node Values, Setting, Advanced, Using Variables @section How Variables Get Their Values @cindex variables, how they get their values @cindex value, how a variable gets it Variables can get values in several different ways: @itemize @bullet @item You can specify an overriding value when you run @code{make}. @xref{Overriding, ,Overriding Variables}. @item You can specify a value in the makefile, either with an assignment (@pxref{Setting, ,Setting Variables}) or with a verbatim definition (@pxref{Defining, ,Defining Variables Verbatim}).@refill @item Variables in the environment become @code{make} variables. @xref{Environment, ,Variables from the Environment}. @item Several @dfn{automatic} variables are given new values for each rule. Each of these has a single conventional use. @xref{Automatic Variables}. @item Several variables have constant initial values. @xref{Implicit Variables, ,Variables Used by Implicit Rules}. @end itemize @node Setting, Appending, Values, Using Variables @section Setting Variables @cindex setting variables @cindex variables, setting @cindex = @cindex := @cindex ?= To set a variable from the makefile, write a line starting with the variable name followed by @samp{=} or @samp{:=}. Whatever follows the @samp{=} or @samp{:=} on the line becomes the value. For example, @example objects = main.o foo.o bar.o utils.o @end example @noindent defines a variable named @code{objects}. Whitespace around the variable name and immediately after the @samp{=} is ignored. Variables defined with @samp{=} are @dfn{recursively expanded} variables. Variables defined with @samp{:=} are @dfn{simply expanded} variables; these definitions can contain variable references which will be expanded before the definition is made. @xref{Flavors, @code (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}). Several special variables are set automatically to a new value for each rule; these are called the @dfn{automatic} variables (@pxref{Automatic Variables}). If you'd like a variable to be set to a value only if it's not already set, then you can use the shorthand operator @samp{?=} instead of @samp{=}. These two settings of the variable @samp{FOO} are identical (@pxref{Origin Function, ,The @code{origin} Function}): @example FOO ?= bar @end example @noindent and @example ifeq ($(origin FOO), undefined) FOO = bar endif @end example @node Appending, Override Directive, Setting, Using Variables @section Appending More Text to Variables @cindex += @cindex appending to variables @cindex variables, appending to Often it is useful to add more text to the value of a variable already defined. You do this with a line containing @samp{+=}, like this: @example objects += another.o @end example @noindent This takes the value of the variable @code{objects}, and adds the text @samp{another.o} to it (preceded by a single space). Thus: @example objects = main.o foo.o bar.o utils.o objects += another.o @end example @noindent sets @code{objects} to @samp{main.o foo.o bar.o utils.o another.o}. Using @samp{+=} is similar to: @example objects = main.o foo.o bar.o utils.o objects := $(objects) another.o @end example @noindent but differs in ways that become important when you use more complex values. When the variable in question has not been defined before, @samp{+=} acts just like normal @samp{=}: it defines a recursively-expanded variable. However, when there @emph{is} a previous definition, exactly what @samp{+=} does depends on what flavor of variable you defined originally. @xref{Flavors, ,The Two Flavors of Variables}, for an explanation of the two flavors of variables. When you add to a variable's value with @samp{+=}, @code{make} acts essentially as if you had included the extra text in the initial definition of the variable. If you defined it first with @samp{:=}, making it a simply-expanded variable, @samp{+=} adds to that simply-expanded definition, and expands the new text before appending it to the old value just as @samp{:=} does (see @ref{Setting, ,Setting Variables}, for a full explanation of @samp{:=}). In fact, @example variable := value variable += more @end example @noindent is exactly equivalent to: @noindent @example variable := value variable := $(variable) more @end example On the other hand, when you use @samp{+=} with a variable that you defined first to be recursively-expanded using plain @samp{=}, @code{make} does something a bit different. Recall that when you define a recursively-expanded variable, @code{make} does not expand the value you set for variable and function references immediately. Instead it stores the text verbatim, and saves these variable and function references to be expanded later, when you refer to the new variable (@pxref{Flavors, ,The Two Flavors of Variables}). When you use @samp{+=} on a recursively-expanded variable, it is this unexpanded text to which @code{make} appends the new text you specify. @example @group variable = value variable += more @end group @end example @noindent is roughly equivalent to: @example @group temp = value variable = $(temp) more @end group @end example @noindent except that of course it never defines a variable called @code{temp}. The importance of this comes when the variable's old value contains variable references. Take this common example: @example CFLAGS = $(includes) -O @dots{} CFLAGS += -pg # enable profiling @end example @noindent The first line defines the @code{CFLAGS} variable with a reference to another variable, @code{includes}. (@code{CFLAGS} is used by the rules for C compilation; @pxref{Catalogue of Rules, ,Catalogue of Implicit Rules}.) Using @samp{=} for the definition makes @code{CFLAGS} a recursively-expanded variable, meaning @w{@samp{$(includes) -O}} is @emph{not} expanded when @code{make} processes the definition of @code{CFLAGS}. Thus, @code{includes} need not be defined yet for its value to take effect. It only has to be defined before any reference to @code{CFLAGS}. If we tried to append to the value of @code{CFLAGS} without using @samp{+=}, we might do it like this: @example CFLAGS := $(CFLAGS) -pg # enable profiling @end example @noindent This is pretty close, but not quite what we want. Using @samp{:=} redefines @code{CFLAGS} as a simply-expanded variable; this means @code{make} expands the text @w{@samp{$(CFLAGS) -pg}} before setting the variable. If @code{includes} is not yet defined, we get @w{@samp{ -O -pg}}, and a later definition of @code{includes} will have no effect. Conversely, by using @samp{+=} we set @code{CFLAGS} to the @emph{unexpanded} value @w{@samp{$(includes) -O -pg}}. Thus we preserve the reference to @code{includes}, so if that variable gets defined at any later point, a reference like @samp{$(CFLAGS)} still uses its value. @node Override Directive, Defining, Appending, Using Variables @section The @code{override} Directive @findex override @cindex overriding with @code{override} @cindex variables, overriding If a variable has been set with a command argument (@pxref{Overriding, ,Overriding Variables}), then ordinary assignments in the makefile are ignored. If you want to set the variable in the makefile even though it was set with a command argument, you can use an @code{override} directive, which is a line that looks like this:@refill @example override @var{variable} = @var{value} @end example @noindent or @example override @var{variable} := @var{value} @end example To append more text to a variable defined on the command line, use: @example override @var{variable} += @var{more text} @end example @noindent @xref{Appending, ,Appending More Text to Variables}. The @code{override} directive was not invented for escalation in the war between makefiles and command arguments. It was invented so you can alter and add to values that the user specifies with command arguments. For example, suppose you always want the @samp{-g} switch when you run the C compiler, but you would like to allow the user to specify the other switches with a command argument just as usual. You could use this @code{override} directive: @example override CFLAGS += -g @end example You can also use @code{override} directives with @code{define} directives. This is done as you might expect: @example override define foo bar endef @end example @noindent @iftex See the next section for information about @code{define}. @end iftex @ifnottex @xref{Defining, ,Defining Variables Verbatim}. @end ifnottex @node Defining, Environment, Override Directive, Using Variables @section Defining Variables Verbatim @findex define @findex endef @cindex verbatim variable definition @cindex defining variables verbatim @cindex variables, defining verbatim Another way to set the value of a variable is to use the @code{define} directive. This directive has an unusual syntax which allows newline characters to be included in the value, which is convenient for defining both canned sequences of commands (@pxref{Sequences, ,Defining Canned Command Sequences}), and also sections of makefile syntax to use with @code{eval} (@pxref{Eval Function}). The @code{define} directive is followed on the same line by the name of the variable and nothing more. The value to give the variable appears on the following lines. The end of the value is marked by a line containing just the word @code{endef}. Aside from this difference in syntax, @code{define} works just like @samp{=}: it creates a recursively-expanded variable (@pxref{Flavors, ,The Two Flavors of Variables}). The variable name may contain function and variable references, which are expanded when the directive is read to find the actual variable name to use. You may nest @code{define} directives: @code{make} will keep track of nested directives and report an error if they are not all properly closed with @code{endef}. Note that lines beginning with tab characters are considered part of a command script, so any @code{define} or @code{endef} strings appearing on such a line will not be considered @code{make} operators. @example define two-lines echo foo echo $(bar) endef @end example The value in an ordinary assignment cannot contain a newline; but the newlines that separate the lines of the value in a @code{define} become part of the variable's value (except for the final newline which precedes the @code{endef} and is not considered part of the value).@refill @need 800 When used in a command script, the previous example is functionally equivalent to this: @example two-lines = echo foo; echo $(bar) @end example @noindent since two commands separated by semicolon behave much like two separate shell commands. However, note that using two separate lines means @code{make} will invoke the shell twice, running an independent subshell for each line. @xref{Execution, ,Command Execution}. If you want variable definitions made with @code{define} to take precedence over command-line variable definitions, you can use the @code{override} directive together with @code{define}: @example override define two-lines foo $(bar) endef @end example @noindent @xref{Override Directive, ,The @code{override} Directive}. @node Environment, Target-specific, Defining, Using Variables @section Variables from the Environment @cindex variables, environment @cindex environment Variables in @code{make} can come from the environment in which @code{make} is run. Every environment variable that @code{make} sees when it starts up is transformed into a @code{make} variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the @samp{-e} flag is specified, then values from the environment override assignments in the makefile. @xref{Options Summary, ,Summary of Options}. But this is not recommended practice.) Thus, by setting the variable @code @code{CFLAGS} explicitly and therefore are not affected by the value in the environment.) When @code{make} runs a command script, variables defined in the makefile are placed into the environment of that command. This allows you to pass values to sub-@code{make} invocations (@pxref{Recursion, ,Recursive Use of @code{make}}). By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the @code{export} directive to pass other variables. @xref{Variables/Recursion, , Communicating Variables to a Sub-@code. @cindex SHELL, import from environment Such problems would be especially likely with the variable @code{SHELL}, which is normally present in the environment to specify the user's choice of interactive shell. It would be very undesirable for this choice to affect @code{make}; so, @code{make} handles the @code{SHELL} environment variable in a special way; see @ref{Choosing the Shell}.@refill @node Target-specific, Pattern-specific, Environment, Using Variables @section Target-specific Variable Values @cindex target-specific variables @cindex variables, target-specific Variable values in @code{make} are usually global; that is, they are the same regardless of where they are evaluated (unless they're reset, of course). One exception to that is automatic variables (@pxref{Automatic Variables}). The other exception is @dfn{target-specific variable values}. This feature allows you to define different values for the same variable, based on the target that @code{make} is currently building. As with automatic variables, these values are only available within the context of a target's command script (and in other target-specific assignments). Set a target-specific variable value like this: @example @var{target} @dots{} : @var{variable-assignment} @end example @noindent or like this: @example @var{target} @dots{} : override @var{variable-assignment} @end example @noindent or like this: @example @var{target} @dots{} : export @var{variable-assignment} @end example Multiple @var{target} values create a target-specific variable value for each member of the target list individually. The @var{variable-assignment} can be any valid form of assignment; recursive (@samp{=}), static (@samp{:=}), appending (@samp{+=}), or conditional (@samp{?=}). All variables that appear within the @var @samp{-e} option is in force) will take precedence. Specifying the @code: @example prog : CFLAGS = -g prog : prog.o foo.o bar.o @end example @noindent will set @code{CFLAGS} to @samp{-g} in the command script for @file{prog}, but it will also set @code{CFLAGS} to @samp{-g} in the command scripts that create @file{prog.o}, @file{foo.o}, and @file. @node Pattern-specific, , Target-specific, Using Variables @section Pattern-specific Variable Values @cindex pattern-specific variables @cindex variables, pattern-specific In addition to target-specific variable values (@pxref{Target-specific, ,Target-specific Variable Values}), GNU @code: @example @var{pattern} @dots{} : @var{variable-assignment} @end example @noindent or like this: @example @var{pattern} @dots{} : override @var{variable-assignment} @end example @noindent where @var{pattern} is a %-pattern. As with target-specific variable values, multiple @var{pattern} values create a pattern-specific variable value for each pattern individually. The @var{variable-assignment} can be any valid form of assignment. Any command-line variable setting will take precedence, unless @code{override} is specified. For example: @example %.o : CFLAGS = -O @end example @noindent will assign @code{CFLAGS} the value of @samp{-O} for all targets matching the pattern @code{%.o}. @node Conditionals, Functions, Using Variables, Top @chapter Conditional Parts of Makefiles @cindex conditionals A @dfn{conditional} causes part of a makefile to be obeyed or ignored depending on the values of variables. Conditionals can compare the value of one variable to another, or the value of a variable to a constant string. Conditionals control what @code{make} actually ``sees'' in the makefile, so they @emph{cannot} be used to control shell commands at the time of execution.@refill @menu * Conditional Example:: Example of a conditional * Conditional Syntax:: The syntax of conditionals. * Testing Flags:: Conditionals that test flags. @end menu @node Conditional Example, Conditional Syntax, Conditionals, Conditionals @section Example of a Conditional The following example of a conditional tells @code{make} to use one set of libraries if the @code{CC} variable is @samp{gcc}, and a different set of libraries otherwise. It works by controlling which of two command lines will be used as the command for a rule. The result is that @samp{CC=gcc} as an argument to @code{make} changes not only which compiler is used but also which libraries are linked. @example libs_for_gcc = -lgnu normal_libs = foo: $(objects) ifeq ($(CC),gcc) $(CC) -o foo $(objects) $(libs_for_gcc) else $(CC) -o foo $(objects) $(normal_libs) endif @end example This conditional uses three directives: one @code{ifeq}, one @code{else} and one @code{endif}. The @code{ifeq} directive begins the conditional, and specifies the condition. It contains two arguments, separated by a comma and surrounded by parentheses. Variable substitution is performed on both arguments and then they are compared. The lines of the makefile following the @code{ifeq} are obeyed if the two arguments match; otherwise they are ignored. The @code{else} directive causes the following lines to be obeyed if the previous conditional failed. In the example above, this means that the second alternative linking command is used whenever the first alternative is not used. It is optional to have an @code{else} in a conditional. The @code{endif} directive ends the conditional. Every conditional must end with an @code @code{CC} has the value @samp{gcc}, the above example has this effect: @example foo: $(objects) $(CC) -o foo $(objects) $(libs_for_gcc) @end example @noindent When the variable @code{CC} has any other value, the effect is this: @example foo: $(objects) $(CC) -o foo $(objects) $(normal_libs) @end example Equivalent results can be obtained in another way by conditionalizing a variable assignment and then using the variable unconditionally: @example libs_for_gcc = -lgnu normal_libs = ifeq ($(CC),gcc) libs=$(libs_for_gcc) else libs=$(normal_libs) endif foo: $(objects) $(CC) -o foo $(objects) $(libs) @end example @node Conditional Syntax, Testing Flags, Conditional Example, Conditionals @section Syntax of Conditionals @findex ifdef @findex ifeq @findex ifndef @findex ifneq @findex else @findex endif The syntax of a simple conditional with no @code{else} is as follows: @example @var{conditional-directive} @var{text-if-true} endif @end example @noindent The @var{text-if-true} may be any lines of text, to be considered as part of the makefile if the condition is true. If the condition is false, no text is used instead. The syntax of a complex conditional is as follows: @example @var{conditional-directive} @var{text-if-true} else @var{text-if-false} endif @end example or: @example @var{conditional-directive} @var{text-if-one-is-true} else @var{conditional-directive} @var{text-if-true} else @var{text-if-false} endif @end example @noindent There can be as many ``@code{else} @var{conditional-directive}'' clauses as necessary. Once a given condition is true, @var{text-if-true} is used and no other clause is used; if no condition is true then @var{text-if-false} is used. The @var{text-if-true} and @var{text-if-false} can be any number of lines of text. The syntax of the @var{conditional-directive} is the same whether the conditional is simple or complex; after an @code{else} or not. There are four different directives that test different conditions. Here is a table of them: @table @code @item ifeq (@var{arg1}, @var{arg2}) @itemx ifeq '@var{arg1}' '@var{arg2}' @itemx ifeq "@var{arg1}" "@var{arg2}" @itemx ifeq "@var{arg1}" '@var{arg2}' @itemx ifeq '@var{arg1}' "@var{arg2}" Expand all variable references in @var{arg1} and @var{arg2} and compare them. If they are identical, the @var{text-if-true} is effective; otherwise, the @var @code{strip} function (@pxref{Text Functions}) to avoid interpreting whitespace as a non-empty value. For example: @example @group ifeq ($(strip $(foo)),) @var{text-if-empty} endif @end group @end example @noindent will evaluate @var{text-if-empty} even if the expansion of @code{$(foo)} contains whitespace characters. @item ifneq (@var{arg1}, @var{arg2}) @itemx ifneq '@var{arg1}' '@var{arg2}' @itemx ifneq "@var{arg1}" "@var{arg2}" @itemx ifneq "@var{arg1}" '@var{arg2}' @itemx ifneq '@var{arg1}' "@var{arg2}" Expand all variable references in @var{arg1} and @var{arg2} and compare them. If they are different, the @var{text-if-true} is effective; otherwise, the @var{text-if-false}, if any, is effective. @item ifdef @var{variable-name} The @code{ifdef} form takes the @emph{name} of a variable as its argument, not a reference to a variable. The value of that variable has a non-empty value, the @var{text-if-true} is effective; otherwise, the @var{text-if-false}, if any, is effective. Variables that have never been defined have an empty value. The text @var{variable-name} is expanded, so it could be a variable or function that expands to the name of a variable. For example: @example bar = true foo = bar ifdef $(foo) frobozz = yes endif @end example The variable reference @code{$(foo)} is expanded, yielding @code{bar}, which is considered to be the name of a variable. The variable @code{bar} is not expanded, but its value is examined to determine if it is non-empty. Note that @code{ifdef} only tests whether a variable has a value. It does not expand the variable to see if that value is nonempty. Consequently, tests using @code{ifdef} return true for all definitions except those like @code{foo =}. To test for an empty value, use @w{@code{ifeq ($(foo),)}}. For example, @example bar = foo = $(bar) ifdef foo frobozz = yes else frobozz = no endif @end example @noindent sets @samp{frobozz} to @samp{yes}, while: @example foo = ifdef foo frobozz = yes else frobozz = no endif @end example @noindent sets @samp{frobozz} to @samp{no}. @item ifndef @var{variable-name} If the variable @var{variable-name} has an empty value, the @var{text-if-true} is effective; otherwise, the @var{text-if-false}, if any, is effective. The rules for expansion and testing of @var{variable-name} are identical to the @code{ifdef} directive. @end table @samp{#} may appear at the end of the line. The other two directives that play a part in a conditional are @code{else} and @code{endif}. Each of these directives is written as one word, with no arguments. Extra spaces are allowed and ignored at the beginning of the line, and spaces or tabs at the end. A comment starting with @samp{#} may appear at the end of the line. Conditionals affect which lines of the makefile @code{make} uses. If the condition is true, @code{make} reads the lines of the @var{text-if-true} as part of the makefile; if the condition is false, @code{make} ignores those lines completely. It follows that syntactic units of the makefile, such as rules, may safely be split across the beginning or the end of the conditional.@refill @code{make} evaluates conditionals when it reads a makefile. Consequently, you cannot use automatic variables in the tests of conditionals because they are not defined until commands are run (@pxref{Automatic Variables}). To prevent intolerable confusion, it is not permitted to start a conditional in one makefile and end it in another. However, you may write an @code{include} directive within a conditional, provided you do not attempt to terminate the conditional inside the included file. @node Testing Flags, , Conditional Syntax, Conditionals @section Conditionals that Test Flags You can write a conditional that tests @code{make} command flags such as @samp{-t} by using the variable @code{MAKEFLAGS} together with the @code{findstring} function (@pxref{Text Functions, , Functions for String Substitution and Analysis}). This is useful when @code{touch} is not enough to make a file appear up to date. The @code{findstring} function determines whether one string appears as a substring of another. If you want to test for the @samp{-t} flag, use @samp{t} as the first string and the value of @code{MAKEFLAGS} as the other. For example, here is how to arrange to use @samp{ranlib -t} to finish marking an archive file up to date: @example archive.a: @dots{} ifneq (,$(findstring t,$(MAKEFLAGS))) +touch archive.a +ranlib -t archive.a else ranlib archive.a endif @end example @noindent The @samp{+} prefix marks those command lines as ``recursive'' so that they will be executed despite use of the @samp{-t} flag. @xref{Recursion, ,Recursive Use of @code{make}}. @node Functions, Running, Conditionals, Top @chapter Functions for Transforming Text @cindex functions @dfn{Functions} allow you to do text processing in the makefile to compute the files to operate on or the commands to use. You use a function in a @dfn{function call}, where you give the name of the function and some text (the @dfn. @end menu @node Syntax of Functions, Text Functions, Functions, Functions @section Function Call Syntax @cindex @code{$}, in function call @cindex dollar sign (@code{$}), in function call @cindex arguments of functions @cindex functions, syntax of A function call resembles a variable reference. It looks like this: @example $(@var{function} @var{arguments}) @end example @noindent or like this: @example $@{@var{function} @var{arguments}@} @end example Here @var{function} is a function name; one of a short list of names that are part of @code{make}. You can also essentially create your own functions by using the @code{call} builtin function. The @var @w{@samp{$(subst a,b,$(x))}}, not @w{@samp{$ @code{comma} and @code{space} whose values are isolated comma and space characters, then substitute these variables where such characters are wanted, like this: @example @group comma:= , empty:= space:= $(empty) $(empty) foo:= a b c bar:= $(subst $(space),$(comma),$(foo)) # @r{bar is now `a,b,c'.} @end group @end example @noindent Here the @code{subst} function replaces each space with a comma, through the value of @code{foo}, and substitutes the result. @node Text Functions, File Name Functions, Syntax of Functions, Functions @section Functions for String Substitution and Analysis @cindex functions, for text Here are some functions that operate on strings: @table @code @item $(subst @var{from},@var{to},@var{text}) @findex subst Performs a textual replacement on the text @var{text}: each occurrence of @var{from} is replaced by @var{to}. The result is substituted for the function call. For example, @example $(subst ee,EE,feet on the street) @end example substitutes the string @samp{fEEt on the strEEt}. @item $(patsubst @var{pattern},@var{replacement},@var{text}) @findex patsubst Finds whitespace-separated words in @var{text} that match @var{pattern} and replaces them with @var{replacement}. Here @var{pattern} may contain a @samp{%} which acts as a wildcard, matching any number of any characters within a word. If @var{replacement} also contains a @samp{%}, the @samp{%} is replaced by the text that matched the @samp{%} in @var{pattern}. Only the first @samp{%} in the @var{pattern} and @var{replacement} is treated this way; any subsequent @samp{%} is unchanged.@refill @cindex @code{%}, quoting in @code{patsubst} @cindex @code{%}, quoting with @code{\} (backslash) @cindex @code{\} (backslash), to quote @code{%} @cindex backslash (@code{\}), to quote @code{%} @cindex quoting @code{%}, in @code{patsubst} @samp{%} characters in @code{patsubst} function invocations Whitespace between words is folded into single space characters; leading and trailing whitespace is discarded. For example, @example $(patsubst %.c,%.o,x.c.c bar.c) @end example @noindent produces the value @samp{x.c.o bar.o}. Substitution references (@pxref{Substitution Refs, ,Substitution References}) are a simpler way to get the effect of the @code{patsubst} function: @example $(@var{var}:@var{pattern}=@var{replacement}) @end example @noindent is equivalent to @example $(patsubst @var{pattern},@var{replacement},$(@var{var})) @end example The second shorthand simplifies one of the most common uses of @code{patsubst}: replacing the suffix at the end of file names. @example $(@var{var}:@var{suffix}=@var{replacement}) @end example @noindent is equivalent to @example $(patsubst %@var{suffix},%@var{replacement},$(@var{var})) @end example @noindent For example, you might have a list of object files: @example objects = foo.o bar.o baz.o @end example @noindent To get the list of corresponding source files, you could simply write: @example $(objects:.o=.c) @end example @noindent instead of using the general form: @example $(patsubst %.o,%.c,$(objects)) @end example @item $(strip @var{string}) @cindex stripping whitespace @cindex whitespace, stripping @cindex spaces, stripping @findex strip Removes leading and trailing whitespace from @var{string} and replaces each internal sequence of one or more whitespace characters with a single space. Thus, @samp{$(strip a b c )} results in @w{@samp{a b c}}. The function @code{strip} can be very useful when used in conjunction with conditionals. When comparing something with the empty string @samp{} using @code{ifeq} or @code{ifneq}, you usually want a string of just whitespace to match the empty string (@pxref{Conditionals}). Thus, the following may fail to have the desired results: @example .PHONY: all ifneq "$(needs_made)" "" all: $(needs_made) else all:;@@echo 'Nothing to make!' endif @end example @noindent Replacing the variable reference @w{@samp{$(needs_made)}} with the function call @w{@samp{$(strip $(needs_made))}} in the @code{ifneq} directive would make it more robust.@refill @item $(findstring @var{find},@var{in}) @findex findstring @cindex searching for strings @cindex finding strings @cindex strings, searching for Searches @var{in} for an occurrence of @var{find}. If it occurs, the value is @var{find}; otherwise, the value is empty. You can use this function in a conditional to test for the presence of a specific substring in a given string. Thus, the two examples, @example $(findstring a,a b c) $(findstring a,b c) @end example @noindent produce the values @samp{a} and @samp{} (the empty string), respectively. @xref{Testing Flags}, for a practical application of @code{findstring}.@refill @need 750 @findex filter @cindex filtering words @cindex words, filtering @item $(filter @var{pattern}@dots{},@var{text}) Returns all whitespace-separated words in @var{text} that @emph{do} match any of the @var{pattern} words, removing any words that @emph{do not} match. The patterns are written using @samp{%}, just like the patterns used in the @code{patsubst} function above.@refill The @code{filter} function can be used to separate out different types of strings (such as file names) in a variable. For example: @example sources := foo.c bar.c baz.s ugh.h foo: $(sources) cc $(filter %.c %.s,$(sources)) -o foo @end example @noindent says that @file{foo} depends of @file{foo.c}, @file{bar.c}, @file{baz.s} and @file{ugh.h} but only @file{foo.c}, @file{bar.c} and @file{baz.s} should be specified in the command to the compiler.@refill @item $(filter-out @var{pattern}@dots{},@var{text}) @findex filter-out @cindex filtering out words @cindex words, filtering out Returns all whitespace-separated words in @var{text} that @emph{do not} match any of the @var{pattern} words, removing the words that @emph{do} match one or more. This is the exact opposite of the @code{filter} function.@refill For example, given: @example @group objects=main1.o foo.o main2.o bar.o mains=main1.o main2.o @end group @end example @noindent the following generates a list which contains all the object files not in @samp{mains}: @example $(filter-out $(mains),$(objects)) @end example @need 1500 @findex sort @cindex sorting words @item $(sort @var{list}) Sorts the words of @var{list} in lexical order, removing duplicate words. The output is a list of words separated by single spaces. Thus, @example $(sort foo bar lose) @end example @noindent returns the value @samp{bar foo lose}. @cindex removing duplicate words @cindex duplicate words, removing @cindex words, removing duplicates Incidentally, since @code{sort} removes duplicate words, you can use it for this purpose even if you don't care about the sort order. @item $(word @var{n},@var{text}) @findex word @cindex word, selecting a @cindex selecting a word Returns the @var{n}th word of @var{text}. The legitimate values of @var{n} start from 1. If @var{n} is bigger than the number of words in @var{text}, the value is empty. For example, @example $(word 2, foo bar baz) @end example @noindent returns @samp{bar}. @item $(wordlist @var{s},@var{e},@var{text}) @findex wordlist @cindex words, selecting lists of @cindex selecting word lists Returns the list of words in @var{text} starting with word @var{s} and ending with word @var{e} (inclusive). The legitimate values of @var{s} start from 1; @var{e} may start from 0. If @var{s} is bigger than the number of words in @var{text}, the value is empty. If @var{e} is bigger than the number of words in @var{text}, words up to the end of @var{text} are returned. If @var{s} is greater than @var{e}, nothing is returned. For example, @example $(wordlist 2, 3, foo bar baz) @end example @noindent returns @samp{bar baz}. @c Following item phrased to prevent overfull hbox. --RJC 17 Jul 92 @item $(words @var{text}) @findex words @cindex words, finding number Returns the number of words in @var{text}. Thus, the last word of @var{text} is @w{@code{$(word $(words @var{text}),@var{text})}}.@refill @item $(firstword @var{names}@dots{}) @findex firstword @cindex words, extracting first The argument @var{names} is regarded as a series of names, separated by whitespace. The value is the first name in the series. The rest of the names are ignored. For example, @example $(firstword foo bar) @end example @noindent produces the result @samp{foo}. Although @code{$(firstword @var{text})} is the same as @code{$(word 1,@var{text})}, the @code{firstword} function is retained for its simplicity.@refill @item $(lastword @var{names}@dots{}) @findex lastword @cindex words, extracting last The argument @var{names} is regarded as a series of names, separated by whitespace. The value is the last name in the series. For example, @example $(lastword foo bar) @end example @noindent produces the result @samp{bar}. Although @code{$(lastword @var{text})} is the same as @code{$(word $(words @var{text}),@var{text})}, the @code{lastword} function was added for its simplicity and better performance.@refill @end table Here is a realistic example of the use of @code{subst} and @code{patsubst}. Suppose that a makefile uses the @code{VPATH} variable to specify a list of directories that @code{make} should search for prerequisite files (@pxref{General Search, , @code{VPATH} Search Path for All Prerequisites}). This example shows how to tell the C compiler to search for header files in the same list of directories.@refill The value of @code{VPATH} is a list of directories separated by colons, such as @samp{src:../headers}. First, the @code{subst} function is used to change the colons to spaces: @example $(subst :, ,$(VPATH)) @end example @noindent This produces @samp{src ../headers}. Then @code{patsubst} is used to turn each directory name into a @samp{-I} flag. These can be added to the value of the variable @code{CFLAGS}, which is passed automatically to the C compiler, like this: @example override CFLAGS += $(patsubst %,-I%,$(subst :, ,$(VPATH))) @end example @noindent The effect is to append the text @samp{-Isrc -I../headers} to the previously given value of @code{CFLAGS}. The @code{override} directive is used so that the new value is assigned even if the previous value of @code{CFLAGS} was specified with a command argument (@pxref{Override Directive, , The @code{override} Directive}). @node File Name Functions, Conditional Functions, Text Functions, Functions @section Functions for File Names @cindex functions, for file names @cindex file name functions. @table @code @item $(dir @var{names}@dots{}) @findex dir @cindex directory part @cindex file name, directory part Extracts the directory-part of each file name in @var{names}. The directory-part of the file name is everything up through (and including) the last slash in it. If the file name contains no slash, the directory part is the string @samp{./}. For example, @example $(dir src/foo.c hacks) @end example @noindent produces the result @samp{src/ ./}. @item $(notdir @var{names}@dots{}) @findex notdir @cindex file name, nondirectory part @cindex nondirectory part Extracts all but the directory-part of each file name in @var, @example $(notdir src/foo.c hacks) @end example @noindent produces the result @samp{foo.c hacks}. @item $(suffix @var{names}@dots{}) @findex suffix @cindex suffix, function to find @cindex file name suffix Extracts the suffix of each file name in @var{names}. If the file name contains a period, the suffix is everything starting with the last period. Otherwise, the suffix is the empty string. This frequently means that the result will be empty when @var{names} is not, and if @var{names} contains multiple file names, the result may contain fewer file names. For example, @example $(suffix src/foo.c src-1.0/bar.c hacks) @end example @noindent produces the result @samp{.c .c}. @item $(basename @var{names}@dots{}) @findex basename @cindex basename @cindex file name, basename of Extracts all but the suffix of each file name in @var{names}. If the file name contains a period, the basename is everything starting up to (and not including) the last period. Periods in the directory part are ignored. If there is no period, the basename is the entire file name. For example, @example $(basename src/foo.c src-1.0/bar hacks) @end example @noindent produces the result @samp{src/foo src-1.0/bar hacks}. @c plural convention with dots (be consistent) @item $(addsuffix @var{suffix},@var{names}@dots{}) @findex addsuffix @cindex suffix, adding @cindex file name suffix, adding The argument @var{names} is regarded as a series of names, separated by whitespace; @var{suffix} is used as a unit. The value of @var{suffix} is appended to the end of each individual name and the resulting larger names are concatenated with single spaces between them. For example, @example $(addsuffix .c,foo bar) @end example @noindent produces the result @samp{foo.c bar.c}. @item $(addprefix @var{prefix},@var{names}@dots{}) @findex addprefix @cindex prefix, adding @cindex file name prefix, adding The argument @var{names} is regarded as a series of names, separated by whitespace; @var{prefix} is used as a unit. The value of @var{prefix} is prepended to the front of each individual name and the resulting larger names are concatenated with single spaces between them. For example, @example $(addprefix src/,foo bar) @end example @noindent produces the result @samp{src/foo src/bar}. @item $(join @var{list1},@var{list2}) @findex join @cindex joining lists of words @cindex words, joining lists Concatenates the two arguments word by word: the two first words (one from each argument) concatenated form the first word of the result, the two second words form the second word of the result, and so on. So the @var{n}th word of the result comes from the @var{n}th word of each argument. If one argument has more words that the other, the extra words are copied unchanged into the result. For example, @samp{$(join a b,.c .o)} produces @samp{a.c b.o}. Whitespace between the words in the lists is not preserved; it is replaced with a single space. This function can merge the results of the @code{dir} and @code{notdir} functions, to produce the original list of files which was given to those two functions.@refill @item $(wildcard @var{pattern}) @findex wildcard @cindex wildcard, function The argument @var{pattern} is a file name pattern, typically containing wildcard characters (as in shell file name patterns). The result of @code{wildcard} is a space-separated list of the names of existing files that match the pattern. @xref{Wildcards, ,Using Wildcard Characters in File Names}. @item $(realpath @var{names}@dots{}) @findex realpath @cindex realpath @cindex file name, realpath of For each file name in @var{names} return the canonical absolute name. A canonical name does not contain any @code{.} or @code{..} components, nor any repeated path separators (@code{/}) or symlinks. In case of a failure the empty string is returned. Consult the @code{realpath(3)} documentation for a list of possible failure causes. @item $(abspath @var{names}@dots{}) @findex abspath @cindex abspath @cindex file name, abspath of For each file name in @var{names} return an absolute name that does not contain any @code{.} or @code{..} components, nor any repeated path separators (@code{/}). Note that, in contrast to @code{realpath} function, @code{abspath} does not resolve symlinks and does not require the file names to refer to an existing file or directory. Use the @code{wildcard} function to test for existence. @end table @node Conditional Functions, Foreach Function, File Name Functions, Functions @section Functions for Conditionals @findex if @cindex conditional expansion There are three functions that provide conditional expansion. A key aspect of these functions is that not all of the arguments are expanded initially. Only those arguments which need to be expanded, will be expanded. @table @code @item $(if @var{condition},@var{then-part}[,@var{else-part}]) @findex if The @code{if} function provides support for conditional expansion in a functional context (as opposed to the GNU @code{make} makefile conditionals such as @code{ifeq} (@pxref{Conditional Syntax, ,Syntax of Conditionals}). The first argument, @var, @var{then-part}, is evaluated and this is used as the result of the evaluation of the entire @code{if} function. If the condition is false then the third argument, @var{else-part}, is evaluated and this is the result of the @code{if} function. If there is no third argument, the @code{if} function evaluates to nothing (the empty string). Note that only one of the @var{then-part} or the @var{else-part} will be evaluated, never both. Thus, either can contain side-effects (such as @code{shell} function calls, etc.) @item $(or @var{condition1}[,@var{condition2}[,@var{condition3}@dots{}]]) @findex or The @code. @item $(and @var{condition1}[,@var{condition2}[,@var{condition3}@dots{}]]) @findex and The @code. @end table @node Foreach Function, Call Function, Conditional Functions, Functions @section The @code{foreach} Function @findex foreach @cindex words, iterating over The @code{foreach} function is very different from other functions. It causes one piece of text to be used repeatedly, each time with a different substitution performed on it. It resembles the @code{for} command in the shell @code{sh} and the @code{foreach} command in the C-shell @code{csh}. The syntax of the @code{foreach} function is: @example $(foreach @var{var},@var{list},@var{text}) @end example @noindent The first two arguments, @var{var} and @var{list}, are expanded before anything else is done; note that the last argument, @var{text}, is @strong{not} expanded at the same time. Then for each word of the expanded value of @var{list}, the variable named by the expanded value of @var{var} is set to that word, and @var{text} is expanded. Presumably @var{text} contains references to that variable, so its expansion will be different each time. The result is that @var{text} is expanded as many times as there are whitespace-separated words in @var{list}. The multiple expansions of @var{text} are concatenated, with spaces between them, to make the result of @code{foreach}. This simple example sets the variable @samp{files} to the list of all files in the directories in the list @samp{dirs}: @example dirs := a b c d files := $(foreach dir,$(dirs),$(wildcard $(dir)/*)) @end example Here @var{text} is @samp{$(wildcard $(dir)/*)}. The first repetition finds the value @samp{a} for @code{dir}, so it produces the same result as @samp{$(wildcard a/*)}; the second repetition produces the result of @samp{$(wildcard b/*)}; and the third, that of @samp{$(wildcard c/*)}. This example has the same result (except for setting @samp{dirs}) as the following example: @example files := $(wildcard a/* b/* c/* d/*) @end example When @var{text} is complicated, you can improve readability by giving it a name, with an additional variable: @example find_files = $(wildcard $(dir)/*) dirs := a b c d files := $(foreach dir,$(dirs),$(find_files)) @end example @noindent Here we use the variable @code{find_files} this way. We use plain @samp{=} to define a recursively-expanding variable, so that its value contains an actual function call to be reexpanded under the control of @code{foreach}; a simply-expanded variable would not do, since @code{wildcard} would be called only once at the time of defining @code{find_files}. The @code{foreach} function has no permanent effect on the variable @var{var}; its value and flavor after the @code{foreach} function call are the same as they were beforehand. The other values which are taken from @var{list} are in effect only temporarily, during the execution of @code{foreach}. The variable @var{var} is a simply-expanded variable during the execution of @code{foreach}. If @var{var} was undefined before the @code{foreach} function call, it is undefined after the call. @xref{Flavors, ,The Two Flavors of Variables}.@refill You must take care when using complex variable expressions that result in variable names because many strange things are valid variable names, but are probably not what you intended. For example, @smallexample files := $(foreach Esta escrito en espanol!,b c ch,$(find_files)) @end smallexample @noindent might be useful if the value of @code{find_files} references the variable whose name is @samp{Esta escrito en espanol!} (es un nombre bastante largo, no?), but it is more likely to be a mistake. @node Call Function, Value Function, Foreach Function, Functions @section The @code{call} Function @findex call @cindex functions, user defined @cindex user defined functions The @code{call} function is unique in that it can be used to create new parameterized functions. You can write a complex expression as the value of a variable, then use @code{call} to expand it with different values. The syntax of the @code{call} function is: @example $(call @var{variable},@var{param},@var{param},@dots{}) @end example When @code{make} expands this function, it assigns each @var{param} to temporary variables @code{$(1)}, @code{$(2)}, etc. The variable @code{$(0)} will contain @var{variable}. There is no maximum number of parameter arguments. There is no minimum, either, but it doesn't make sense to use @code{call} with no parameters. Then @var{variable} is expanded as a @code{make} variable in the context of these temporary assignments. Thus, any reference to @code{$(1)} in the value of @var{variable} will resolve to the first @var{param} in the invocation of @code{call}..) If @var{variable} is the name of a builtin function, the builtin function is always invoked (even if a @code{make} variable by that name also exists). The @code{call} function expands the @var{param} arguments before assigning them to temporary variables. This means that @var{variable} values containing references to builtin functions that have special expansion rules, like @code{foreach} or @code{if}, may not work as you expect. Some examples may make this clearer. This macro simply reverses its arguments: @smallexample reverse = $(2) $(1) foo = $(call reverse,a,b) @end smallexample @noindent Here @var{foo} will contain @samp{b a}. This one is slightly more interesting: it defines a macro to search for the first instance of a program in @code{PATH}: @smallexample pathsearch = $(firstword $(wildcard $(addsuffix /$(1),$(subst :, ,$(PATH))))) LS := $(call pathsearch,ls) @end smallexample @noindent Now the variable LS contains @code{/bin/ls} or similar. The @code{call} function can be nested. Each recursive invocation gets its own local values for @code{$(1)}, etc.@: that mask the values of higher-level @code{call}. For example, here is an implementation of a @dfn{map} function: @smallexample map = $(foreach a,$(2),$(call $(1),$(a))) @end smallexample Now you can @var{map} a function that normally takes only one argument, such as @code{origin}, to multiple values in one step: @smallexample o = $(call map,origin,o map MAKE) @end smallexample and end up with @var{o} containing something like @samp{file file default}. A final caution: be careful when adding whitespace to the arguments to @code{call}. As with other functions, any whitespace contained in the second and subsequent arguments is kept; this can cause strange effects. It's generally safest to remove all extraneous whitespace when providing parameters to @code{call}. @node Value Function, Eval Function, Call Function, Functions @comment node-name, next, previous, up @section The @code{value} Function @findex value @cindex variables, unexpanded value The @code{value} function provides a way for you to use the value of a variable @emph{without} having it expanded. Please note that this does not undo expansions which have already occurred; for example if you create a simply expanded variable its value is expanded during the definition; in that case the @code{value} function will return the same result as using the variable directly. The syntax of the @code{value} function is: @example $(value @var{variable}) @end example.) The result of this function is a string containing the value of @var{variable}, without any expansion occurring. For example, in this makefile: @example @group FOO = $PATH all: @@echo $(FOO) @@echo $(value FOO) @end group @end example @noindent The first output line would be @code{ATH}, since the ``$P'' would be expanded as a @code{make} variable, while the second output line would be the current value of your @code{$PATH} environment variable, since the @code{value} function avoided the expansion. The @code{value} function is most often used in conjunction with the @code{eval} function (@pxref{Eval Function}). @node Eval Function, Origin Function, Value Function, Functions @comment node-name, next, previous, up @section The @code{eval} Function @findex eval @cindex evaluating makefile syntax @cindex makefile syntax, evaluating The @code{eval} function is very special: it allows you to define new makefile constructs that are not constant; which are the result of evaluating other variables and functions. The argument to the @code{eval} function is expanded, then the results of that expansion are parsed as makefile syntax. The expanded results can define new @code{make} variables, targets, implicit or explicit rules, etc. The result of the @code{eval} function is always the empty string; thus, it can be placed virtually anywhere in a makefile without causing syntax errors. It's important to realize that the @code{eval} argument is expanded @emph{twice}; first by the @code{eval} function, then the results of that expansion are expanded again when they are parsed as makefile syntax. This means you may need to provide extra levels of escaping for ``$'' characters when using @code{eval}. The @code{value} function (@pxref{Value Function}) can sometimes be useful in these situations, to circumvent unwanted expansions. Here is an example of how @code{eval} can be used; this example combines a number of concepts and other functions. Although it might seem overly complex to use @code{eval} in this example, rather than just writing out the rules, consider two things: first, the template definition (in @code{PROGRAM_template}) could need to be much more complex than it is here; and second, you might put the complex, ``generic'' part of this example into another makefile, then include it in all the individual makefiles. Now your individual makefiles are quite straightforward. @example @group) @end group @end example @node Origin Function, Flavor Function, Eval Function, Functions @section The @code{origin} Function @findex origin @cindex variables, origin of @cindex origin of variable The @code{origin} function is unlike most other functions in that it does not operate on the values of variables; it tells you something @emph{about} a variable. Specifically, it tells you where it came from. The syntax of the @code{origin} function is: @example $(origin telling you how the variable @var{variable} was defined: @table @samp @item undefined if @var{variable} was never defined. @item default if @var{variable} has a default definition, as is usual with @code{CC} and so on. @xref{Implicit Variables, ,Variables Used by Implicit Rules}. Note that if you have redefined a default variable, the @code{origin} function will return the origin of the later definition. @item environment if @var{variable} was defined as an environment variable and the @samp{-e} option is @emph{not} turned on (@pxref{Options Summary, ,Summary of Options}). @item environment override if @var{variable} was defined as an environment variable and the @w{@samp{-e}} option @emph{is} turned on (@pxref{Options Summary, ,Summary of Options}).@refill @item file if @var{variable} was defined in a makefile. @item command line if @var{variable} was defined on the command line. @item override if @var{variable} was defined with an @code{override} directive in a makefile (@pxref{Override Directive, ,The @code{override} Directive}). @item automatic if @var{variable} is an automatic variable defined for the execution of the commands for each rule (@pxref{Automatic Variables}). @end table This information is primarily useful (other than for your curiosity) to determine if you want to believe the value of a variable. For example, suppose you have a makefile @file{foo} that includes another makefile @file{bar}. You want a variable @code{bletch} to be defined in @file{bar} if you run the command @w{@samp{make -f bar}}, even if the environment contains a definition of @code{bletch}. However, if @file{foo} defined @code{bletch} before including @file{bar}, you do not want to override that definition. This could be done by using an @code{override} directive in @file{foo}, giving that definition precedence over the later definition in @file{bar}; unfortunately, the @code{override} directive would also override any command line definitions. So, @file{bar} could include:@refill @example @group ifdef bletch ifeq "$(origin bletch)" "environment" bletch = barf, gag, etc. endif endif @end group @end example @noindent If @code{bletch} has been defined from the environment, this will redefine it. If you want to override a previous definition of @code{bletch} if it came from the environment, even under @samp{-e}, you could instead write: @example @group ifneq "$(findstring environment,$(origin bletch))" "" bletch = barf, gag, etc. endif @end group @end example Here the redefinition takes place if @samp{$(origin bletch)} returns either @samp{environment} or @samp{environment override}. @xref{Text Functions, , Functions for String Substitution and Analysis}. @node Flavor Function, Shell Function, Origin Function, Functions @section The @code{flavor} Function @findex flavor @cindex variables, flavor of @cindex flavor of variable The @code{flavor} function is unlike most other functions (and like @code{origin} function) in that it does not operate on the values of variables; it tells you something @emph{about} a variable. Specifically, it tells you the flavor of a variable (@pxref{Flavors, ,The Two Flavors of Variables}). The syntax of the @code{flavor} function is: @example $(flavor that identifies the flavor of the variable @var{variable}: @table @samp @item undefined if @var{variable} was never defined. @item recursive if @var{variable} is a recursively expanded variable. @item simple if @var{variable} is a simply expanded variable. @end table @node Shell Function, Make Control Functions, Flavor Function, Functions @section The @code{shell} Function @findex shell @cindex commands, expansion @cindex backquotes @cindex shell command, function for The @code{shell} function is unlike any other function other than the @code{wildcard} function (@pxref{Wildcard Function, ,The Function @code{wildcard}}) in that it communicates with the world outside of @code{make}. The @code{shell} function performs the same function that backquotes (@samp{`}) perform in most shells: it does @dfn{command expansion}. This means that it takes as an argument a shell command and evaluates to the output of the command. The only processing @code{make} does on the result is to convert each newline (or carriage-return / newline pair) to a single space. If there is a trailing (carriage-return and) newline it will simply be removed.@refill The commands run by calls to the @code{shell} function are run when the function calls are expanded (@pxref{Reading Makefiles, , How @code{make} Reads a Makefile}). Because this function involves spawning a new shell, you should carefully consider the performance implications of using the @code{shell} function within recursively expanded variables vs.@: simply expanded variables (@pxref{Flavors, ,The Two Flavors of Variables}). Here are some examples of the use of the @code{shell} function: @example contents := $(shell cat foo) @end example @noindent sets @code{contents} to the contents of the file @file{foo}, with a space (rather than a newline) separating each line. @example files := $(shell echo *.c) @end example @noindent sets @code{files} to the expansion of @samp{*.c}. Unless @code{make} is using a very strange shell, this has the same result as @w{@samp{$(wildcard *.c)}} (as long as at least one @samp{.c} file exists).@refill @node Make Control Functions, , Shell Function, Functions @section Functions That Control Make @cindex functions, for controlling make @cindex controlling make These functions control the way make runs. Generally, they are used to provide information to the user of the makefile or to cause make to stop if some sort of environmental error is detected. @table @code @item $(error @var{text}@dots{}) @findex error @cindex error, stopping on @cindex stopping make Generates a fatal error where the message is @var{text}. Note that the error is generated whenever this function is evaluated. So, if you put it inside a command script or on the right side of a recursive variable assignment, it won't be evaluated until later. The @var{text} will be expanded before the error is generated. For example, @example ifdef ERROR1 $(error error is $(ERROR1)) endif @end example @noindent will generate a fatal error during the read of the makefile if the @code{make} variable @code{ERROR1} is defined. Or, @example ERR = $(error found an error!) .PHONY: err err: ; $(ERR) @end example @noindent will generate a fatal error while @code{make} is running, if the @code{err} target is invoked. @item $(warning @var{text}@dots{}) @findex warning @cindex warnings, printing @cindex printing user warnings This function works similarly to the @code{error} function, above, except that @code{make} doesn't exit. Instead, @var{text} is expanded and the resulting message is displayed, but processing of the makefile continues. The result of the expansion of this function is the empty string. @item $(info @var{text}@dots{}) @findex info @cindex printing messages This function does nothing more than print its (expanded) argument(s) to standard output. No makefile name or line number is added. The result of the expansion of this function is the empty string. @end table @node Running, Implicit Rules, Functions, Top @chapter How to Run @code{make} A makefile that says how to recompile a program can be used in more than one way. The simplest use is to recompile every file that is out of date. Usually, makefiles are written so that if you run @code @code{make}, you can do any of these things and many others. @cindex exit status of make The exit status of @code{make} is always one of three values: @table @code @item 0 The exit status is zero if @code{make} is successful. @item 2 The exit status is two if @code{make} encounters any errors. It will print messages describing the particular errors. @item 1 The exit status is one if you use the @samp{-q} flag and @code{make} determines that some target is not already up to date. @xref{Instead of Execution, ,Instead of Executing the Commands}. @end table @end menu @node Makefile Arguments, Goals, Running, Running @section Arguments to Specify the Makefile @cindex @code{--file} @cindex @code{--makefile} @cindex @code{-f} The way to specify the name of the makefile is with the @samp{-f} or @samp{--file} option (@samp{--makefile} also works). For example, @samp{-f altmake} says to use the file @file{altmake} as the makefile. If you use the @samp{-f} flag several times and follow each @samp{-f} with an argument, all the specified files are used jointly as makefiles. If you do not use the @samp{-f} or @samp{--file} flag, the default is to try @file{GNUmakefile}, @file{makefile}, and @file{Makefile}, in that order, and use the first of these three which exists or can be made (@pxref{Makefiles, ,Writing Makefiles}).@refill @node Goals, Instead of Execution, Makefile Arguments, Running @section Arguments to Specify the Goals @cindex goal, how to specify The @dfn{goals} are the targets that @code @code{.DEFAULT_GOAL} variable (@pxref{Special Variables, , Other Special Variables}). You can also specify a different goal or goals with command-line arguments to @code{make}. Use the name of the goal as an argument. If you specify several goals, @code{make} processes each of them in turn, in the order you name them. Any target in the makefile may be specified as a goal (unless it starts with @samp{-} or contains an @samp{=}, in which case it will be parsed as a switch or variable definition, respectively). Even targets not in the makefile may be specified, if @code{make} can find implicit rules that say how to make them. @vindex MAKECMDGOALS @code{Make} will set the special variable @code{MAKECMDGOALS} to the list of goals you specified on the command line. If no goals were given on the command line, this variable is empty. Note that this variable should be used only in special circumstances. An example of appropriate use is to avoid including @file{.d} files during @code{clean} rules (@pxref{Automatic Prerequisites}), so @code{make} won't create them only to immediately remove them again:@refill @example @group sources = foo.c bar.c ifneq ($(MAKECMDGOALS),clean) include $(sources:.c=.d) endif @end group @end example: @example .PHONY: all all: size nm ld ar as @end example If you are working on the program @code{size}, you might want to say @w{@samp (@pxref{Phony Targets}) or empty target (@pxref{Empty Targets, ,Empty Target Files to Record Events}). Many makefiles contain a phony target named @file{clean} which deletes everything except source files. Naturally, this is done only if you request it explicitly with @w{@samp{make clean}}. Following is a list of typical phony and empty target names. @xref{Standard Targets}, for a detailed list of all the standard target names which GNU software packages use. @table @file @item all @cindex @code{all} @r{(standard target)} Make all the top-level targets the makefile knows about. @item clean @cindex @code{clean} @r{(standard target)} Delete all files that are normally created by running @code{make}. @item mostlyclean @cindex @code{mostlyclean} @r{(standard target)} distclean @cindex @code{distclean} @r{(standard target)} @itemx realclean @cindex @code{realclean} @r{(standard target)} @itemx clobber @cindex @code{clobber} @r{(standard target)} Any of these targets might be defined to delete @emph{more} files than @samp{clean} does. For example, this would delete configuration files or links that you would normally create as preparation for compilation, even if the makefile itself cannot create these files. @item install @cindex @code{install} @r{(standard target)} Copy the executable file into a directory that users typically search for commands; copy any auxiliary files that the executable uses into the directories where it will look for them. @item print @cindex @code{print} @r{(standard target)} Print listings of the source files that have changed. @item tar @cindex @code{tar} @r{(standard target)} Create a tar file of the source files. @item shar @cindex @code{shar} @r{(standard target)} Create a shell archive (shar file) of the source files. @item dist @cindex @code{dist} @r{(standard target)} Create a distribution file of the source files. This might be a tar file, or a shar file, or a compressed version of one of the above, or even more than one of the above. @item TAGS @cindex @code{TAGS} @r{(standard target)} Update a tags table for this program. @item check @cindex @code{check} @r{(standard target)} @itemx test @cindex @code{test} @r{(standard target)} Perform self tests on the program this makefile builds. @end table @node Instead of Execution, Avoiding Compilation, Goals, Running @section Instead of Executing the Commands @cindex execution, instead of @cindex commands, instead of executing The makefile tells @code{make} how to tell whether a target is up to date, and how to update each target. But updating the targets is not always what you want. Certain options specify other activities for @code{make}. @comment Extra blank lines make it print better. @table @samp @item -n @itemx --just-print @itemx --dry-run @itemx --recon @cindex @code{--just-print} @cindex @code{--dry-run} @cindex @code{--recon} @cindex @code{-n} ``No-op''. The activity is to print what commands would be used to make the targets up to date, but not actually execute them. @item -t @itemx --touch @cindex @code{--touch} @cindex touching files @cindex target, touching @cindex @code{-t} ``Touch''. The activity is to mark the targets as up to date without actually changing them. In other words, @code{make} pretends to compile the targets but does not really change their contents. @item -q @itemx --question @cindex @code{--question} @cindex @code{-q} @cindex question mode ``Question''. The activity is to find out silently whether the targets are up to date already; but execute no commands in either case. In other words, neither compilation nor output will occur. @item -W @var{file} @itemx --what-if=@var{file} @itemx --assume-new=@var{file} @itemx --new-file=@var{file} @cindex @code{--what-if} @cindex @code{-W} @cindex @code{--assume-new} @cindex @code{--new-file} @cindex what if @cindex files, assuming new ``What if''. Each @samp{-W} flag is followed by a file name. The given files' modification times are recorded by @code{make} as being the present time, although the actual modification times remain the same. You can use the @samp{-W} flag in conjunction with the @samp{-n} flag to see what would happen if you were to modify specific files.@refill @end table With the @samp{-n} flag, @code{make} prints the commands that it would normally execute but does not execute them. With the @samp{-t} flag, @code{make} ignores the commands in the rules and uses (in effect) the command @code{touch} for each target that needs to be remade. The @code{touch} command is also printed, unless @samp{-s} or @code{.SILENT} is used. For speed, @code{make} does not actually invoke the program @code{touch}. It does the work directly. With the @samp{-q} flag, @code{make} prints nothing and executes no commands, but the exit status code it returns is zero if and only if the targets to be considered are already up to date. If the exit status is one, then some updating needs to be done. If @code{make} encounters an error, the exit status is two, so you can distinguish an error from a target that is not up to date. It is an error to use more than one of these three flags in the same invocation of @code{make}. @cindex +, and command execution The @samp{-n}, @samp{-t}, and @samp{-q} options do not affect command lines that begin with @samp{+} characters or contain the strings @samp{$(MAKE)} or @samp{$@{MAKE@}}. Note that only the line containing the @samp{+} character or the strings @samp{$(MAKE)} or @samp{$@{MAKE@}} is run regardless of these options. Other lines in the same rule are not run unless they too begin with @samp{+} or contain @samp{$(MAKE)} or @samp{$@{MAKE@}} (@xref{MAKE Variable, ,How the @code{MAKE} Variable Works}.) The @samp{-W} flag provides two features: @itemize @bullet @item If you also use the @samp{-n} or @samp{-q} flag, you can see what @code{make} would do if you were to modify some files. @item Without the @samp{-n} or @samp{-q} flag, when @code{make} is actually executing commands, the @samp{-W} flag can direct @code{make} to act as if some files had been modified, without actually modifying the files.@refill @end itemize Note that the options @samp{-p} and @samp{-v} allow you to obtain other information about @code{make} or about the makefiles in use (@pxref{Options Summary, ,Summary of Options}).@refill @node Avoiding Compilation, Overriding, Instead of Execution, Running @section Avoiding Recompilation of Some Files @cindex @code{-o} @cindex @code{--old-file} @cindex @code{--assume-old} @cindex files, assuming old @cindex files, avoiding recompilation of @cindex recompilation, avoiding Sometimes you may have changed a source file but you do not want to recompile all the files that depend on it. For example, suppose you add a macro or a declaration to a header file that many other files depend on. Being conservative, @code @samp{-t} flag. This flag tells @code{make} not to run the commands in the rules, but rather to mark the target up to date by changing its last-modification date. You would follow this procedure: @enumerate @item Use the command @samp{make} to recompile the source files that really need recompilation, ensuring that the object files are up-to-date before you begin. @item Make the changes in the header files. @item Use the command @samp{make -t} to mark all the object files as up to date. The next time you run @code{make}, the changes in the header files will not cause any recompilation. @end enumerate If you have already changed the header file at a time when some files do need recompilation, it is too late to do this. Instead, you can use the @w{@samp{-o @var{file}}} flag, which marks a specified file as ``old'' (@pxref{Options Summary, ,Summary of Options}). This means that the file itself will not be remade, and nothing else will be remade on its account. Follow this procedure: @enumerate @item Recompile the source files that need compilation for reasons independent of the particular header file, with @samp{make -o @var{headerfile}}. If several header files are involved, use a separate @samp{-o} option for each header file. @item Touch all the object files with @samp{make -t}. @end enumerate @node Overriding, Testing, Avoiding Compilation, Running @section Overriding Variables @cindex overriding variables with arguments @cindex variables, overriding with arguments @cindex command line variables @cindex variables, command line An argument that contains @samp{=} specifies the value of a variable: @samp{@var{v}=@var{x}} sets the value of the variable @var{v} to @var{x}. If you specify a value in this way, all ordinary assignments of the same variable in the makefile are ignored; we say they have been @dfn{overridden} by the command line argument. The most common way to use this facility is to pass extra flags to compilers. For example, in a properly written makefile, the variable @code{CFLAGS} is included in each command that runs the C compiler, so a file @file{foo.c} would be compiled something like this: @example cc -c $(CFLAGS) foo.c @end example Thus, whatever value you set for @code{CFLAGS} affects each compilation that occurs. The makefile probably specifies the usual value for @code{CFLAGS}, like this: @example CFLAGS=-g @end example Each time you run @code{make}, you can override this value if you wish. For example, if you say @samp{make CFLAGS='-g -O'}, each C compilation will be done with @samp{cc -c -g -O}. (This also illustrates how you can use quoting in the shell to enclose spaces and other special characters in the value of a variable when you override it.) The variable @code{CFLAGS} is only one of many standard variables that exist just so that you can change them this way. @xref{Implicit Variables, , @samp{:=} instead of @samp{=}. But, unless you want to include a variable reference or function call in the @emph{value} that you specify, it makes no difference which kind of variable you create. There is one way that the makefile can change a variable that you have overridden. This is to use the @code{override} directive, which is a line that looks like this: @samp{override @var{variable} = @var{value}} (@pxref{Override Directive, ,The @code{override} Directive}). @node Testing, Options Summary, Overriding, Running @section Testing the Compilation of a Program @cindex testing compilation @cindex compilation, testing Normally, when an error happens in executing a shell command, @code{make} gives up immediately, returning a nonzero status. No further commands are executed for any target. The error implies that the goal cannot be correctly remade, and @code{make} reports this as soon as it knows. When you are compiling a program that you have just changed, this is not what you want. Instead, you would rather that @code{make} try compiling every file that can be tried, to show you as many compilation errors as possible. @cindex @code{-k} @cindex @code{--keep-going} On these occasions, you should use the @samp{-k} or @samp{--keep-going} flag. This tells @code{make} to continue. In addition to continuing after failed shell commands, @samp{make -k} will continue as much as possible after discovering that it does not know how to make a target or prerequisite file. This will always cause an error message, but without @samp{-k}, it is a fatal error (@pxref{Options Summary, ,Summary of Options}).@refill The usual behavior of @code{make} assumes that your purpose is to get the goals up to date; once @code{make} learns that this is impossible, it might as well report the failure immediately. The @samp{-k} flag says that the real purpose is to test as much as possible of the changes made in the program, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs' @kbd{M-x compile} command passes the @samp{-k} flag by default. @node Options Summary, , Testing, Running @section Summary of Options @cindex options @cindex flags @cindex switches Here is a table of all the options @code{make} understands: @table @samp @item -b @cindex @code{-b} @itemx -m @cindex @code{-m} These options are ignored for compatibility with other versions of @code{make}. @item -B @cindex @code{-B} @itemx --always-make @cindex @code{--always-make} Consider all targets out-of-date. GNU @code{make} proceeds to consider targets and their prerequisites using the normal algorithms; however, all targets so considered are always remade regardless of the status of their prerequisites. To avoid infinite recursion, if @code{MAKE_RESTARTS} (@pxref{Special Variables, , Other Special Variables}) is set to a number greater than 0 this option is disabled when considering whether to remake makefiles (@pxref{Remaking Makefiles, , How Makefiles Are Remade}). @item -C @var{dir} @cindex @code{-C} @itemx --directory=@var{dir} @cindex @code{--directory} Change to directory @var{dir} before reading the makefiles. If multiple @samp{-C} options are specified, each is interpreted relative to the previous one: @samp{-C / -C etc} is equivalent to @samp{-C /etc}. This is typically used with recursive invocations of @code{make} (@pxref{Recursion, ,Recursive Use of @code{make}}). @item -d @cindex @code{-d} @c Extra blank line here makes the table look better. @code{make} decides what to do. The @code{-d} option is equivalent to @samp{--debug=a} (see below). @item --debug[=@var{options}] @cindex @code{--debug} @c Extra blank line here makes the table look better. Print debugging information in addition to normal processing. Various levels and types of output can be chosen. With no arguments, print the ``basic'' level of debugging. Possible arguments are below; only the first character is considered, and values must be comma- or space-separated. @table @code @item a (@i{all}) All types of debugging output are enabled. This is equivalent to using @samp{-d}. @item b (@i{basic}) Basic debugging prints each target that was found to be out-of-date, and whether the build was successful or not. @item v (@i{verbose}) A level above @samp{basic}; includes messages about which makefiles were parsed, prerequisites that did not need to be rebuilt, etc. This option also enables @samp{basic} messages. @item i (@i{implicit}) Prints messages describing the implicit rule searches for each target. This option also enables @samp{basic} messages. @item j (@i{jobs}) Prints messages giving details on the invocation of specific subcommands. @item m (@i{makefile}) By default, the above messages are not enabled while trying to remake the makefiles. This option enables messages while rebuilding makefiles, too. Note that the @samp{all} option does enable this option. This option also enables @samp{basic} messages. @end table @item -e @cindex @code{-e} @itemx --environment-overrides @cindex @code{--environment-overrides} Give variables taken from the environment precedence over variables from makefiles. @xref{Environment, ,Variables from the Environment}. @item -f @var{file} @cindex @code{-f} @itemx --file=@var{file} @cindex @code{--file} @itemx --makefile=@var{file} @cindex @code{--makefile} Read the file named @var{file} as a makefile. @xref{Makefiles, ,Writing Makefiles}. @item -h @cindex @code{-h} @itemx --help @cindex @code{--help} @c Extra blank line here makes the table look better. Remind you of the options that @code{make} understands and then exit. @item -i @cindex @code{-i} @itemx --ignore-errors @cindex @code{--ignore-errors} Ignore all errors in commands executed to remake files. @xref{Errors, ,Errors in Commands}. @item -I @var{dir} @cindex @code{-I} @itemx --include-dir=@var{dir} @cindex @code{--include-dir} Specifies a directory @var{dir} to search for included makefiles. @xref{Include, ,Including Other Makefiles}. If several @samp{-I} options are used to specify several directories, the directories are searched in the order specified. @item -j [@var{jobs}] @cindex @code{-j} @itemx --jobs[=@var{jobs}] @cindex @code{--jobs} Specifies the number of jobs (commands) to run simultaneously. With no argument, @code{make} runs as many jobs simultaneously as possible. If there is more than one @samp{-j} option, the last one is effective. @xref{Parallel, ,Parallel Execution}, for more information on how commands are run. Note that this option is ignored on MS-DOS. @item -k @cindex @code{-k} @itemx --keep-going @cindex @code{--keep-going} Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other prerequisites of these targets can be processed all the same. @xref{Testing, ,Testing the Compilation of a Program}. @item -l [@var{load}] @cindex @code{-l} @itemx --load-average[=@var{load}] @cindex @code{--load-average} @itemx --max-load[=@var{load}] @cindex @code{--max-load} Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least @var{load} (a floating-point number). With no argument, removes a previous load limit. @xref{Parallel, ,Parallel Execution}. @item -L @cindex @code{-L} @itemx --check-symlink-times @cindex @code{--check-symlink-times} On systems that support symbolic links, this option causes @code{make} to consider the timestamps on any symbolic links in addition to the timestamp on the file referenced by those links. When this option is provided, the most recent timestamp among the file and the symbolic links is taken as the modification time for this target file. @item -n @cindex @code{-n} @itemx --just-print @cindex @code{--just-print} @itemx --dry-run @cindex @code{--dry-run} @itemx --recon @cindex @code{--recon} @c Extra blank line here makes the table look better. Print the commands that would be executed, but do not execute them. @xref{Instead of Execution, ,Instead of Executing the Commands}. @item -o @var{file} @cindex @code{-o} @itemx --old-file=@var{file} @cindex @code{--old-file} @itemx --assume-old=@var{file} @cindex @code{--assume-old} Do not remake the file @var{file} even if it is older than its prerequisites, and do not remake anything on account of changes in @var{file}. Essentially the file is treated as very old and its rules are ignored. @xref{Avoiding Compilation, ,Avoiding Recompilation of Some Files}.@refill @item -p @cindex @code{-p} @itemx --print-data-base @cindex @code{--print-data-base} @cindex data base of @code{make} rules @cindex predefined rules and variables, printing Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the @samp{-v} switch (see below). To print the data base without trying to remake any files, use @w{@samp{make -qp}}. To print the data base of predefined rules and variables, use @w{@samp{make -p -f /dev/null}}. The data base output contains filename and linenumber information for command and variable definitions, so it can be a useful debugging tool in complex environments. @item -q @cindex @code{-q} @itemx --question @cindex @code{--question} ``Question mode''. Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, one if any remaking is required, or two if an error is encountered. @xref{Instead of Execution, ,Instead of Executing the Commands}.@refill @item -r @cindex @code{-r} @itemx --no-builtin-rules @cindex @code{--no-builtin-rules} Eliminate use of the built-in implicit rules (@pxref{Implicit Rules, ,Using Implicit Rules}). You can still define your own by writing pattern rules (@pxref{Pattern Rules, ,Defining and Redefining Pattern Rules}). The @samp{-r} option also clears out the default list of suffixes for suffix rules (@pxref{Suffix Rules, ,Old-Fashioned Suffix Rules}). But you can still define your own suffixes with a rule for @code{.SUFFIXES}, and then define your own suffix rules. Note that only @emph{rules} are affected by the @code{-r} option; default variables remain in effect (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}); see the @samp{-R} option below. @item -R @cindex @code{-R} @itemx --no-builtin-variables @cindex @code{--no-builtin-variables} Eliminate use of the built-in rule-specific variables (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}). You can still define your own, of course. The @samp{-R} option also automatically enables the @samp{-r} option (see above), since it doesn't make sense to have implicit rules without any definitions for the variables that they use. @item -s @cindex @code{-s} @itemx --silent @cindex @code{--silent} @itemx --quiet @cindex @code{--quiet} @c Extra blank line here makes the table look better. Silent operation; do not print the commands as they are executed. @xref{Echoing, ,Command Echoing}. @item -S @cindex @code{-S} @itemx --no-keep-going @cindex @code{--no-keep-going} @itemx --stop @cindex @code{--stop} @c Extra blank line here makes the table look better. Cancel the effect of the @samp{-k} option. This is never necessary except in a recursive @code{make} where @samp{-k} might be inherited from the top-level @code{make} via @code{MAKEFLAGS} (@pxref{Recursion, ,Recursive Use of @code{make}}) or if you set @samp{-k} in @code{MAKEFLAGS} in your environment.@refill @item -t @cindex @code{-t} @itemx --touch @cindex @code{--touch} @c Extra blank line here makes the table look better. Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of @code{make}. @xref{Instead of Execution, ,Instead of Executing the Commands}. @item -v @cindex @code{-v} @itemx --version @cindex @code{--version} Print the version of the @code{make} program plus a copyright, a list of authors, and a notice that there is no warranty; then exit. @item -w @cindex @code{-w} @itemx --print-directory @cindex @code{--print-directory} Print a message containing the working directory both before and after executing the makefile. This may be useful for tracking down errors from complicated nests of recursive @code{make} commands. @xref{Recursion, ,Recursive Use of @code{make}}. (In practice, you rarely need to specify this option since @samp{make} does it for you; see @ref{-w Option, ,The @samp{--print-directory} Option}.) @itemx --no-print-directory @cindex @code{--no-print-directory} Disable printing of the working directory under @code{-w}. This option is useful when @code{-w} is turned on automatically, but you do not want to see the extra messages. @xref{-w Option, ,The @samp{--print-directory} Option}. @item -W @var{file} @cindex @code{-W} @itemx --what-if=@var{file} @cindex @code{--what-if} @itemx --new-file=@var{file} @cindex @code{--new-file} @itemx --assume-new=@var{file} @cindex @code{--assume-new} Pretend that the target @var{file} has just been modified. When used with the @samp{-n} flag, this shows you what would happen if you were to modify that file. Without @samp{-n}, it is almost the same as running a @code{touch} command on the given file before running @code{make}, except that the modification time is changed only in the imagination of @code{make}. @xref{Instead of Execution, ,Instead of Executing the Commands}. @item --warn-undefined-variables @cindex @code{--warn-undefined-variables} @cindex variables, warning for undefined @cindex undefined variables, warning message Issue a warning message whenever @code{make} sees a reference to an undefined variable. This can be helpful when you are trying to debug makefiles which use variables in complex ways. @end table @node Implicit Rules, Archives, Running, Top @chapter Using Implicit Rules @cindex implicit rule @cindex rule, implicit Certain standard ways of remaking target files are used very often. For example, one customary way to make an object file is from a C source file using the C compiler, @code{cc}. @dfn{Implicit rules} tell @code{make} how to use customary techniques so that you do not have to specify them in detail when you want to use them. For example, there is an implicit rule for C compilation. File names determine which implicit rules are run. For example, C compilation typically takes a @file{.c} file and makes a @file{.o} file. So @code{make} applies the implicit rule for C compilation when it sees this combination of file name endings.@refill A chain of implicit rules can apply in sequence; for example, @code{make} will remake a @file{.o} file from a @file{.y} file by way of a @file{.c} file. @iftex @xref{Chained Rules, ,Chains of Implicit Rules}. @end iftex The built-in implicit rules use several variables in their commands so that, by changing the values of the variables, you can change the way the implicit rule works. For example, the variable @code{CFLAGS} controls the flags given to the C compiler by the implicit rule for C compilation. @iftex @xref{Implicit Variables, ,Variables Used by Implicit Rules}. @end iftex You can define your own implicit rules by writing @dfn{pattern rules}. @iftex @xref{Pattern Rules, ,Defining and Redefining Pattern Rules}. @end iftex @dfn{Suffix rules} are a more limited way to define implicit rules. Pattern rules are more general and clearer, but suffix rules are retained for compatibility. @iftex @xref{Suffix Rules, ,Old-Fashioned Suffix Rules}. @end iftex . @end menu @node Using Implicit, Catalogue of Rules, Implicit Rules, Implicit Rules @section Using Implicit Rules @cindex implicit rule, how to use @cindex rule, implicit, how to use To allow @code{make} to find a customary method for updating a target file, all you have to do is refrain from specifying commands yourself. Either write a rule with no command lines, or don't write a rule at all. Then @code{make} will figure out which implicit rule to use based on which kind of source file exists or can be made. For example, suppose the makefile looks like this: @example foo : foo.o bar.o cc -o foo foo.o bar.o $(CFLAGS) $(LDFLAGS) @end example @noindent Because you mention @file{foo.o} but do not give a rule for it, @code{make} will automatically look for an implicit rule that tells how to update it. This happens whether or not the file @file{foo.o} currently exists. If an implicit rule is found, it can supply both commands and one or more prerequisites (the source files). You would want to write a rule for @file @samp{.o} files: one, from a @samp{.c} file with the C compiler; another, from a @samp{.p} file with the Pascal compiler; and so on. The rule that actually applies is the one whose prerequisites exist or can be made. So, if you have a file @file{foo.c}, @code{make} will run the C compiler; otherwise, if you have a file @file{foo.p}, @code{make} will run the Pascal compiler; and so on. Of course, when you write the makefile, you know which implicit rule you want @code{make} to use, and you know it will choose that one because you know which possible prerequisite files are supposed to exist. @xref{Catalogue of Rules, ,Catalogue of Implicit @dfn{chaining} is occurring. @xref{Chained Rules, ,Chains of Implicit Rules}. In general, @code{make} searches for an implicit rule for each target, and for each double-colon rule, that has no commands. A file that is mentioned only as a prerequisite is considered a target whose rule specifies nothing, so implicit rule search happens for it. @xref{Implicit Rule Search, ,Implicit Rule Search Algorithm}, for the details of how the search is done. Note that explicit prerequisites do not influence implicit rule search. For example, consider this explicit rule: @example foo.o: foo.p @end example @noindent The prerequisite on @file{foo.p} does not necessarily mean that @code{make} will remake @file{foo.o} according to the implicit rule to make an object file, a @file{.o} file, from a Pascal source file, a @file{.p} file. For example, if @file{foo.c} also exists, the implicit rule to make an object file from a C source file is used instead, because it appears before the Pascal rule in the list of predefined implicit rules (@pxref{Catalogue of Rules, , Catalogue of Implicit Rules}). If you do not want an implicit rule to be used for a target that has no commands, you can give that target empty commands by writing a semicolon (@pxref{Empty Commands, ,Defining Empty Commands}). @node Catalogue of Rules, Implicit Variables, Using Implicit, Implicit Rules @section Catalogue of Implicit Rules @cindex implicit rule, predefined @cindex rule, implicit, predefined Here is a catalogue of predefined implicit rules which are always available unless the makefile explicitly overrides or cancels them. @xref{Canceling Rules, ,Canceling Implicit Rules}, for information on canceling or overriding an implicit rule. The @samp{-r} or @samp{- @code{make}, run @samp{make -p} in a directory with no makefile. Not all of these rules will always be defined, even when the @samp{-r} option is not given. Many of the predefined implicit rules are implemented in @code{make} as suffix rules, so which ones will be defined depends on the @dfn{suffix list} (the list of prerequisites of the special target @code{.SUFFIXES}). The default suffix list is: @code{.out}, @code{.a}, @code{.ln}, @code{.o}, @code{.c}, @code{.cc}, @code{.C}, @code{.cpp}, @code{.p}, @code{.f}, @code{.F}, @code{.r}, @code{.y}, @code{.l}, @code{.s}, @code{.S}, @code{.mod}, @code{.sym}, @code{.def}, @code{.h}, @code{.info}, @code{.dvi}, @code{.tex}, @code{.texinfo}, @code{.texi}, @code{.txinfo}, @code{.w}, @code{.ch} @code{.web}, @code{.sh}, @code{.elc}, @code{. @xref{Suffix Rules, ,Old-Fashioned Suffix Rules}, for full details on suffix rules. @table @asis @item Compiling C programs @cindex C, rule to compile @pindex cc @pindex gcc @pindex .o @pindex .c @file{@var{n}.o} is made automatically from @file{@var{n}.c} with a command of the form @samp{$(CC) -c $(CPPFLAGS) $(CFLAGS)}.@refill @item Compiling C++ programs @cindex C++, rule to compile @pindex g++ @pindex .cc @pindex .cpp @pindex .C @file{@var{n}.o} is made automatically from @file{@var{n}.cc}, @file{@var{n}.cpp}, or @file{@var{n}.C} with a command of the form @samp{$(CXX) -c $(CPPFLAGS) $(CXXFLAGS)}. We encourage you to use the suffix @samp{.cc} for C++ source files instead of @samp{.C}.@refill @item Compiling Pascal programs @cindex Pascal, rule to compile @pindex pc @pindex .p @file{@var{n}.o} is made automatically from @file{@var{n}.p} with the command @samp{$(PC) -c $(PFLAGS)}.@refill @item Compiling Fortran and Ratfor programs @cindex Fortran, rule to compile @cindex Ratfor, rule to compile @pindex f77 @pindex .f @pindex .r @pindex .F @file{@var{n}.o} is made automatically from @file{@var{n}.r}, @file{@var{n}.F} or @file{@var{n}.f} by running the Fortran compiler. The precise command used is as follows:@refill @table @samp @item .f @samp{$(FC) -c $(FFLAGS)}. @item .F @samp{$(FC) -c $(FFLAGS) $(CPPFLAGS)}. @item .r @samp{$(FC) -c $(FFLAGS) $(RFLAGS)}. @end table @item Preprocessing Fortran and Ratfor programs @file{@var{n}.f} is made automatically from @file{@var{n}.r} or @file{@var{n}.F}. This rule runs just the preprocessor to convert a Ratfor or preprocessable Fortran program into a strict Fortran program. The precise command used is as follows:@refill @table @samp @item .F @samp{$(FC) -F $(CPPFLAGS) $(FFLAGS)}. @item .r @samp{$(FC) -F $(FFLAGS) $(RFLAGS)}. @end table @item Compiling Modula-2 programs @cindex Modula-2, rule to compile @pindex m2c @pindex .sym @pindex .def @pindex .mod @file{@var{n}.sym} is made from @file{@var{n}.def} with a command of the form @samp{$(M2C) $(M2FLAGS) $(DEFFLAGS)}. @file{@var{n}.o} is made from @file{@var{n}.mod}; the form is: @w{@samp{$(M2C) $(M2FLAGS) $(MODFLAGS)}}.@refill @need 1200 @item Assembling and preprocessing assembler programs @cindex assembly, rule to compile @pindex as @pindex .s @file{@var{n}.o} is made automatically from @file{@var{n}.s} by running the assembler, @code{as}. The precise command is @samp{$(AS) $(ASFLAGS)}.@refill @pindex .S @file{@var{n}.s} is made automatically from @file{@var{n}.S} by running the C preprocessor, @code{cpp}. The precise command is @w{@samp{$(CPP) $(CPPFLAGS)}}. @item Linking a single object file @cindex linking, predefined rule for @pindex ld @pindex .o @file{@var{n}} is made automatically from @file{@var{n}.o} by running the linker (usually called @code{ld}) via the C compiler. The precise command used is @w{@samp{$(CC) $(LDFLAGS) @var, @example x: y.o z.o @end example @noindent when @file{x.c}, @file{y.c} and @file{z.c} all exist will execute: @example @group cc -c x.c -o x.o cc -c y.c -o y.o cc -c z.c -o z.o cc x.o y.o z.o -o x rm -f x.o rm -f y.o rm -f z.o @end group @end example @noindent In more complicated cases, such as when there is no object file whose name derives from the executable file name, you must write an explicit command for linking. Each kind of file automatically made into @samp{.o} object files will be automatically linked by using the compiler (@samp{$(CC)}, @samp{$(FC)} or @samp{$(PC)}; the C compiler @samp{$(CC)} is used to assemble @samp{.s} files) without the @samp{-c} option. This could be done by using the @samp{.o} object files as intermediates, but it is faster to do the compiling and linking in one step, so that's how it's done.@refill @item Yacc for C programs @pindex yacc @cindex Yacc, rule to run @pindex .y @file{@var{n}.c} is made automatically from @file{@var{n}.y} by running Yacc with the command @samp{$(YACC) $(YFLAGS)}. @item Lex for C programs @pindex lex @cindex Lex, rule to run @pindex .l @file{@var{n}.c} is made automatically from @file{@var{n}.l} by running Lex. The actual command is @samp{$(LEX) $(LFLAGS)}. @item Lex for Ratfor programs @file{@var{n}.r} is made automatically from @file{@var{n}.l} by running Lex. The actual command is @samp{$(LEX) $(LFLAGS)}. The convention of using the same suffix @samp{.l} for all Lex files regardless of whether they produce C code or Ratfor code makes it impossible for @code{make} to determine automatically which of the two languages you are using in any particular case. If @code{make} is called upon to remake an object file from a @samp{.l} file, it must guess which compiler to use. It will guess the C compiler, because that is more common. If you are using Ratfor, make sure @code{make} knows this by mentioning @file{@var{n}.r} in the makefile. Or, if you are using Ratfor exclusively, with no C files, remove @samp{.c} from the list of implicit rule suffixes with:@refill @example @group .SUFFIXES: .SUFFIXES: .o .r .f .l @dots{} @end group @end example @item Making Lint Libraries from C, Yacc, or Lex programs @pindex lint @cindex @code{lint}, rule to run @pindex .ln @file{@var{n}.ln} is made from @file{@var{n}.c} by running @code{lint}. The precise command is @w{@samp{$(LINT) $(LINTFLAGS) $(CPPFLAGS) -i}}. The same command is used on the C code produced from @file{@var{n}.y} or @file{@var{n}.l}.@refill @item @TeX{} and Web @cindex @TeX{}, rule to run @cindex Web, rule to run @pindex tex @pindex cweave @pindex weave @pindex tangle @pindex ctangle @pindex .dvi @pindex .tex @pindex .web @pindex .w @pindex .ch @file{@var{n}.dvi} is made from @file{@var{n}.tex} with the command @samp{$(TEX)}. @file{@var{n}.tex} is made from @file{@var{n}.web} with @samp{$(WEAVE)}, or from @file{@var{n}.w} (and from @file{@var{n}.ch} if it exists or can be made) with @samp{$(CWEAVE)}. @file{@var{n}.p} is made from @file{@var{n}.web} with @samp{$(TANGLE)} and @file{@var{n}.c} is made from @file{@var{n}.w} (and from @file{@var{n}.ch} if it exists or can be made) with @samp{$(CTANGLE)}.@refill @item Texinfo and Info @cindex Texinfo, rule to format @cindex Info, rule to format @pindex texi2dvi @pindex makeinfo @pindex .texinfo @pindex .info @pindex .texi @pindex .txinfo @file{@var{n}.dvi} is made from @file{@var{n}.texinfo}, @file{@var{n}.texi}, or @file{@var{n}.txinfo}, with the command @w{@samp{$(TEXI2DVI) $(TEXI2DVI_FLAGS)}}. @file{@var{n}.info} is made from @file{@var{n}.texinfo}, @file{@var{n}.texi}, or @file{@var{n}.txinfo}, with the command @w{@samp{$(MAKEINFO) $(MAKEINFO_FLAGS)}}. @item RCS @cindex RCS, rule to extract from @pindex co @pindex ,v @r{(RCS file extension)} Any file @file{@var{n}} is extracted if necessary from an RCS file named either @file{@var{n},v} or @file{RCS/@var{n},v}. The precise command used is @w{@samp{$(CO) $(COFLAGS)}}. @file{@var{n}} will not be extracted from RCS if it already exists, even if the RCS file is newer. The rules for RCS are terminal (@pxref{Match-Anything Rules, ,Match-Anything Pattern Rules}), so RCS files cannot be generated from another source; they must actually exist.@refill @item SCCS @cindex SCCS, rule to extract from @pindex get @pindex s. @r{(SCCS file prefix)} Any file @file{@var{n}} is extracted if necessary from an SCCS file named either @file{s.@var{n}} or @file{SCCS/s.@var{n}}. The precise command used is @w{@samp{$(GET) $(GFLAGS)}}. The rules for SCCS are terminal (@pxref{Match-Anything Rules, ,Match-Anything Pattern Rules}), so SCCS files cannot be generated from another source; they must actually exist.@refill @pindex .sh For the benefit of SCCS, a file @file{@var{n}} is copied from @file{@var{n}.sh} and made executable (by everyone). This is for shell scripts that are checked into SCCS. Since RCS preserves the execution permission of a file, you do not need to use this feature with RCS.@refill We recommend that you avoid using of SCCS. RCS is widely held to be superior, and is also free. By choosing free software in place of comparable (or inferior) proprietary software, you support the free software movement. @end table Usually, you want to change only the variables listed in the table above, which are documented in the following section. However, the commands in built-in implicit rules actually use variables such as @code{COMPILE.c}, @code{LINK.p}, and @code{PREPROCESS.S}, whose values contain the commands listed above. @code{make} follows the convention that the rule to compile a @file{.@var{x}} source file uses the variable @code{COMPILE.@var{x}}. Similarly, the rule to produce an executable from a @file{.@var{x}} file uses @code{LINK.@var{x}}; and the rule to preprocess a @file{.@var{x}} file uses @code{PREPROCESS.@var{x}}. @vindex OUTPUT_OPTION Every rule that produces an object file uses the variable @code{OUTPUT_OPTION}. @code{make} defines this variable either to contain @samp{-o $@@}, or to be empty, depending on a compile-time option. You need the @samp{-o} option to ensure that the output goes into the right file when the source file is in a different directory, as when using @code{VPATH} (@pxref{Directory Search}). However, compilers on some systems do not accept a @samp{-o} switch for object files. If you use such a system, and use @code{VPATH}, some compilations will put their output in the wrong place. A possible workaround for this problem is to give @code{OUTPUT_OPTION} the value @w{@samp{; mv $*.o $@@}}. @node Implicit Variables, Chained Rules, Catalogue of Rules, Implicit Rules @section Variables Used by Implicit Rules @cindex flags for compilers The commands in built-in implicit rules make liberal use of certain predefined variables. You can alter the values of these variables in the makefile, with arguments to @code{make}, or in the environment to alter how the implicit rules work without redefining the rules themselves. You can cancel all variables used by implicit rules with the @samp{-R} or @samp{--no-builtin-variables} option. For example, the command used to compile a C source file actually says @samp{$(CC) -c $(CFLAGS) $(CPPFLAGS)}. The default values of the variables used are @samp{cc} and nothing, resulting in the command @samp{cc -c}. By redefining @samp{CC} to @samp{ncc}, you could cause @samp{ncc} to be used for all C compilations performed by the implicit rule. By redefining @samp{CFLAGS} to be @samp{-g}, you could pass the @samp{-g} option to each compilation. @emph{All} implicit rules that do C compilation use @samp{$(CC)} to get the program name for the compiler and @emph{all} include @samp{$(CFLAGS)} among the arguments given to the compiler.@refill The variables used in implicit rules fall into two classes: those that are names of programs (like @code{CC}) and those that contain arguments for the programs (like @code @code{make} for your environment. To see the complete list of predefined variables for your instance of GNU @code{make} you can run @samp{make -p} in a directory with no makefiles. Here is a table of some of the more common variables used as names of programs in built-in rules: makefiles. @table @code @item AR @vindex AR Archive-maintaining program; default @samp{ar}. @pindex ar @item AS @vindex AS Program for compiling assembly files; default @samp{as}. @pindex as @item CC @vindex CC Program for compiling C programs; default @samp{cc}. @pindex cc @item CO @vindex CO Program for checking out files from RCS; default @samp{co}. @pindex cc @item CXX @vindex CXX Program for compiling C++ programs; default @samp{g++}. @pindex g++ @item CO @vindex CO Program for extracting a file from RCS; default @samp{co}. @pindex co @item CPP @vindex CPP Program for running the C preprocessor, with results to standard output; default @samp{$(CC) -E}. @item FC @vindex FC Program for compiling or preprocessing Fortran and Ratfor programs; default @samp{f77}. @pindex f77 @item GET @vindex GET Program for extracting a file from SCCS; default @samp{get}. @pindex get @item LEX @vindex LEX Program to use to turn Lex grammars into source code; default @samp{lex}. @pindex lex @item YACC @vindex YACC Program to use to turn Yacc grammars into source code; default @samp{yacc}. @pindex yacc @item LINT @vindex LINT Program to use to run lint on source code; default @samp{lint}. @pindex lint @item M2C @vindex M2C Program to use to compile Modula-2 source code; default @samp{m2c}. @pindex m2c @item PC @vindex PC Program for compiling Pascal programs; default @samp{pc}. @pindex pc @item MAKEINFO @vindex MAKEINFO Program to convert a Texinfo source file into an Info file; default @samp{makeinfo}. @pindex makeinfo @item TEX @vindex TEX Program to make @TeX{} @sc{dvi} files from @TeX{} source; default @samp{tex}. @pindex tex @item TEXI2DVI @vindex TEXI2DVI Program to make @TeX{} @sc{dvi} files from Texinfo source; default @samp{texi2dvi}. @pindex texi2dvi @item WEAVE @vindex WEAVE Program to translate Web into @TeX{}; default @samp{weave}. @pindex weave @item CWEAVE @vindex CWEAVE Program to translate C Web into @TeX{}; default @samp{cweave}. @pindex cweave @item TANGLE @vindex TANGLE Program to translate Web into Pascal; default @samp{tangle}. @pindex tangle @item CTANGLE @vindex CTANGLE Program to translate C Web into C; default @samp{ctangle}. @pindex ctangle @item RM @vindex RM Command to remove a file; default @samp{rm -f}. @pindex rm @end table Here is a table of variables whose values are additional arguments for the programs above. The default values for all of these is the empty string, unless otherwise noted. @table @code @item ARFLAGS @vindex ARFLAGS Flags to give the archive-maintaining program; default @samp{rv}. @item ASFLAGS @vindex ASFLAGS Extra flags to give to the assembler (when explicitly invoked on a @samp{.s} or @samp{.S} file). @item CFLAGS @vindex CFLAGS Extra flags to give to the C compiler. @item CXXFLAGS @vindex CXXFLAGS Extra flags to give to the C++ compiler. @item COFLAGS @vindex COFLAGS Extra flags to give to the RCS @code{co} program. @item CPPFLAGS @vindex CPPFLAGS Extra flags to give to the C preprocessor and programs that use it (the C and Fortran compilers). @item FFLAGS @vindex FFLAGS Extra flags to give to the Fortran compiler. @item GFLAGS @vindex GFLAGS Extra flags to give to the SCCS @code{get} program. @item LDFLAGS @vindex LDFLAGS Extra flags to give to compilers when they are supposed to invoke the linker, @samp{ld}. @item LFLAGS @vindex LFLAGS Extra flags to give to Lex. @item YFLAGS @vindex YFLAGS Extra flags to give to Yacc. @item PFLAGS @vindex PFLAGS Extra flags to give to the Pascal compiler. @item RFLAGS @vindex RFLAGS Extra flags to give to the Fortran compiler for Ratfor programs. @item LINTFLAGS @vindex LINTFLAGS Extra flags to give to lint. @end table @node Chained Rules, Pattern Rules, Implicit Variables, Implicit Rules @section Chains of Implicit Rules @cindex chains of rules @cindex rule, implicit, chains of Sometimes a file can be made by a sequence of implicit rules. For example, a file @file{@var{n}.o} could be made from @file{@var{n}.y} by running first Yacc and then @code{cc}. Such a sequence is called a @dfn{chain}. If the file @file{@var{n}.c} exists, or is mentioned in the makefile, no special searching is required: @code{make} finds that the object file can be made by C compilation from @file{@var{n}.c}; later on, when considering how to make @file{@var{n}.c}, the rule for running Yacc is used. Ultimately both @file{@var{n}.c} and @file{@var{n}.o} are updated.@refill @cindex intermediate files @cindex files, intermediate However, even if @file{@var{n}.c} does not exist and is not mentioned, @code{make} knows how to envision it as the missing link between @file{@var{n}.o} and @file{@var{n}.y}! In this case, @file{@var{n}.c} is called an @dfn{intermediate file}. Once @code{make} has decided to use the intermediate file, it is entered in the data base as if it had been mentioned in the makefile, along with the implicit rule that says how to create it.@refill Intermediate files are remade using their rules just like all other files. But intermediate files are treated differently in two ways. The first difference is what happens if the intermediate file does not exist. If an ordinary file @var{b} does not exist, and @code{make} considers a target that depends on @var{b}, it invariably creates @var{b} and then updates the target from @var{b}. But if @var{b} is an intermediate file, then @code{make} can leave well enough alone. It won't bother updating @var{b}, or the ultimate target, unless some prerequisite of @var{b} is newer than that target or there is some other reason to update that target. The second difference is that if @code{make} @emph{does} create @var{b} in order to update something else, it deletes @var{b} later on after it is no longer needed. Therefore, an intermediate file which did not exist before @code{make} also does not exist after @code{make}. @code{make} reports the deletion to you by printing a @samp{rm -f} command showing which file it is deleting. Ordinarily, a file cannot be intermediate if it is mentioned in the makefile as a target or prerequisite. However, you can explicitly mark a file as intermediate by listing it as a prerequisite of the special target @code{.INTERMEDIATE}. This takes effect even if the file is mentioned explicitly in some other way. @cindex intermediate files, preserving @cindex preserving intermediate files @cindex secondary files You can prevent automatic deletion of an intermediate file by marking it as a @dfn{secondary} file. To do this, list it as a prerequisite of the special target @code{.SECONDARY}. When a file is secondary, @code{make} will not create the file merely because it does not already exist, but @code{make} does not automatically delete the file. Marking a file as secondary also marks it as intermediate. You can list the target pattern of an implicit rule (such as @samp{%.o}) as a prerequisite of the special target @code{.PRECIOUS} to preserve intermediate files made by implicit rules whose target patterns match that file's name; see @ref{Interrupts}.@refill @cindex preserving with @code{.PRECIOUS} @cindex @code{.PRECIOUS} intermediate files A chain can involve more than two implicit rules. For example, it is possible to make a file @file{foo} from @file{RCS/foo.y,v} by running RCS, Yacc and @code{cc}. Then both @file{foo.y} and @file{foo.c} are intermediate files that are deleted at the end.@refill No single implicit rule can appear more than once in a chain. This means that @code{make} will not even consider such a ridiculous thing as making @file{foo} from @file{foo.o.o} by running the linker twice. This constraint has the added benefit of preventing any infinite loop in the search for an implicit rule chain. There are some special implicit rules to optimize certain cases that would otherwise be handled by rule chains. For example, making @file{foo} from @file{foo.c} could be handled by compiling and linking with separate chained rules, using @file{foo.o} as an intermediate file. But what actually happens is that a special rule for this case does the compilation and linking with a single @code{cc} command. The optimized rule is used in preference to the step-by-step chain because it comes earlier in the ordering of rules. @node Pattern Rules, Last Resort, Chained Rules, Implicit Rules @section Defining and Redefining Pattern Rules You define an implicit rule by writing a @dfn{pattern rule}. A pattern rule looks like an ordinary rule, except that its target contains the character @samp{%} (exactly one of them). The target is considered a pattern for matching file names; the @samp{%} can match any nonempty substring, while other characters match only themselves. The prerequisites likewise use @samp{%} to show how their names relate to the target name. Thus, a pattern rule @samp{%.o : %.c} says how to make any file @file{@var{stem}.o} from another file @file{@var{stem}.c}.@refill Note that expansion using @samp{%} in pattern rules occurs @strong{after} any variable or function expansions, which take place when the makefile is read. @xref{Using Variables, , How to Use Variables}, and @ref{Functions, ,Functions for Transforming Text}. . @end menu @node Pattern Intro, Pattern Examples, Pattern Rules, Pattern Rules @subsection Introduction to Pattern Rules @cindex pattern rule @cindex rule, pattern A pattern rule contains the character @samp{%} (exactly one of them) in the target; otherwise, it looks exactly like an ordinary rule. The target is a pattern for matching file names; the @samp{%} matches any nonempty substring, while other characters match only themselves. @cindex target pattern, implicit @cindex @code{%}, in pattern rules For example, @samp{%.c} as a pattern matches any file name that ends in @samp{.c}. @samp{s.%.c} as a pattern matches any file name that starts with @samp{s.}, ends in @samp{.c} and is at least five characters long. (There must be at least one character to match the @samp{%}.) The substring that the @samp{%} matches is called the @dfn{stem}.@refill @samp{%} in a prerequisite of a pattern rule stands for the same stem that was matched by the @samp{%} in the target. In order for the pattern rule to apply, its target pattern must match the file name under consideration and all of its prerequisites (after pattern substitution) must name files that exist or can be made. These files become prerequisites of the target. @cindex prerequisite pattern, implicit Thus, a rule of the form @example %.o : %.c ; @var{command}@dots{} @end example @noindent specifies how to make a file @file{@var{n}.o}, with another file @file{@var{n}.c} as its prerequisite, provided that @file{@var{n}.c} exists or can be made. There may also be prerequisites that do not use @samp{%}; such a prerequisite attaches to every file made by this pattern rule. These unvarying prerequisites are useful occasionally. A pattern rule need not have any prerequisites that contain @samp{%}, or in fact any prerequisites at all. Such a rule is effectively a general wildcard. It provides a way to make any file that matches the target pattern. @xref{Last Resort}. @c !!! The end of of this paragraph should be rewritten. --bob Pattern rules may have more than one target. Unlike normal rules, this does not act as many different rules with the same prerequisites and commands. If a pattern rule has multiple targets, @code: @code{make} worries only about giving commands and prerequisites to the file presently in question. However, when this file's commands are run, the other targets are marked as having been updated themselves. @cindex multiple targets, in pattern rule @cindex target, multiple in pattern rule. @cindex pattern rules, order of @cindex order of pattern rules @node Pattern Examples, Automatic Variables, Pattern Intro, Pattern Rules @subsection Pattern Rule Examples Here are some examples of pattern rules actually predefined in @code{make}. First, the rule that compiles @samp{.c} files into @samp{.o} files:@refill @example %.o : %.c $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@@ @end example @noindent defines a rule that can make any file @file{@var{x}.o} from @file{@var{x}.c}. The command uses the automatic variables @samp{$@@} and @samp{$<} to substitute the names of the target file and the source file in each case where the rule applies (@pxref{Automatic Variables}).@refill Here is a second built-in rule: @example % :: RCS/%,v $(CO) $(COFLAGS) $< @end example @noindent defines a rule that can make any file @file{@var{x}} whatsoever from a corresponding file @file{@var{x},v} in the subdirectory @file{RCS}. Since the target is @samp{%}, this rule will apply to any file whatever, provided the appropriate prerequisite file exists. The double colon makes the rule @dfn{terminal}, which means that its prerequisite may not be an intermediate file (@pxref{Match-Anything Rules, ,Match-Anything Pattern Rules}).@refill @need 500 This pattern rule has two targets: @example @group %.tab.c %.tab.h: %.y bison -d $< @end group @end example @noindent @c The following paragraph is rewritten to avoid overfull hboxes This tells @code{make} that the command @samp{bison -d @var{x}.y} will make both @file{@var{x}.tab.c} and @file{@var{x}.tab.h}. If the file @file{foo} depends on the files @file{parse.tab.o} and @file{scan.o} and the file @file{scan.o} depends on the file @file{parse.tab.h}, when @file{parse.y} is changed, the command @samp{bison -d parse.y} will be executed only once, and the prerequisites of both @file{parse.tab.o} and @file{scan.o} will be satisfied. (Presumably the file @file{parse.tab.o} will be recompiled from @file{parse.tab.c} and the file @file{scan.o} from @file{scan.c}, while @file{foo} is linked from @file{parse.tab.o}, @file{scan.o}, and its other prerequisites, and it will execute happily ever after.)@refill @node Automatic Variables, Pattern Match, Pattern Examples, Pattern Rules @subsection Automatic Variables @cindex automatic variables @cindex variables, automatic @cindex variables, and implicit rule Suppose you are writing a pattern rule to compile a @samp{.c} file into a @samp{.o} file: how do you write the @samp{cc} command so that it operates on the right source file name? You cannot write the name in the command, because the name is different each time the implicit rule is applied. What you do is use a special feature of @code{make}, the @dfn{automatic variables}. These variables have values computed afresh for each rule that is executed, based on the target and prerequisites of the rule. In this example, you would use @samp{$@@} for the object file name and @samp{$<} for the source file name. @cindex automatic variables in prerequisites @cindex prerequisites, and automatic variables @code{$@@} within the prerequisites list; this will not work. However, there is a special feature of GNU @code{make}, secondary expansion (@pxref{Secondary Expansion}), which will allow automatic variable values to be used in prerequisite lists. Here is a table of automatic variables: @table @code @vindex $@@ @vindex @@ @r{(automatic variable)} @item $@@ The file name of the target of the rule. If the target is an archive member, then @samp{$@@} is the name of the archive file. In a pattern rule that has multiple targets (@pxref{Pattern Intro, ,Introduction to Pattern Rules}), @samp{$@@} is the name of whichever target caused the rule's commands to be run. @vindex $% @vindex % @r{(automatic variable)} @item $% The target member name, when the target is an archive member. @xref{Archives}. For example, if the target is @file{foo.a(bar.o)} then @samp{$%} is @file{bar.o} and @samp{$@@} is @file{foo.a}. @samp{$%} is empty when the target is not an archive member. @vindex $< @vindex < @r{(automatic variable)} @item $< The name of the first prerequisite. If the target got its commands from an implicit rule, this will be the first prerequisite added by the implicit rule (@pxref{Implicit Rules}). @vindex $? @vindex ? @r{(automatic variable)} @item $? The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (@pxref{Archives}). @cindex prerequisites, list of changed @cindex list of changed prerequisites @vindex $^ @vindex ^ @r{(automatic variable)} @item $^ The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (@pxref{Archives}). A target has only one prerequisite on each other file it depends on, no matter how many times each file is listed as a prerequisite. So if you list a prerequisite more than once for a target, the value of @code{$^} contains just one copy of the name. This list does @strong{not} contain any of the order-only prerequisites; for those see the @samp{$|} variable, below. @cindex prerequisites, list of all @cindex list of all prerequisites @vindex $+ @vindex + @r{(automatic variable)} @item $+ This is like @samp{$^}, but prerequisites listed more than once are duplicated in the order they were listed in the makefile. This is primarily useful for use in linking commands where it is meaningful to repeat library file names in a particular order. @vindex $| @vindex | @r{(automatic variable)} @item $| The names of all the order-only prerequisites, with spaces between them. @vindex $* @vindex * @r{(automatic variable)} @item $* The stem with which an implicit rule matches (@pxref{Pattern Match, ,How Patterns Match}). If the target is @file{dir/a.foo.b} and the target pattern is @file{a.%.b} then the stem is @file{dir/foo}. The stem is useful for constructing names of related files.@refill @cindex stem, variable for In a static pattern rule, the stem is part of the file name that matched the @samp{%} in the target pattern. In an explicit rule, there is no stem; so @samp{$*} cannot be determined in that way. Instead, if the target name ends with a recognized suffix (@pxref{Suffix Rules, ,Old-Fashioned Suffix Rules}), @samp{$*} is set to the target name minus the suffix. For example, if the target name is @samp{foo.c}, then @samp{$*} is set to @samp{foo}, since @samp{.c} is a suffix. GNU @code{make} does this bizarre thing only for compatibility with other implementations of @code{make}. You should generally avoid using @samp{$*} except in implicit rules or static pattern rules.@refill If the target name in an explicit rule does not end with a recognized suffix, @samp{$*} is set to the empty string for that rule. @end table @samp{$?} is useful even in explicit rules when you wish to operate on only the prerequisites that have changed. For example, suppose that an archive named @file{lib} is supposed to contain copies of several object files. This rule copies just the changed object files into the archive: @example @group lib: foo.o bar.o lose.o win.o ar r lib $? @end group @end example @samp{D} or @samp{F}, respectively. These variants are semi-obsolete in GNU @code{make} since the functions @code{dir} and @code{notdir} can be used to get a similar effect (@pxref{File Name Functions, , Functions for File Names}). Note, however, that the @samp{D} variants all omit the trailing slash which always appears in the output of the @code{dir} function. Here is a table of the variants: @table @samp @vindex $(@@D) @vindex @@D @r{(automatic variable)} @item $(@@D) The directory part of the file name of the target, with the trailing slash removed. If the value of @samp{$@@} is @file{dir/foo.o} then @samp{$(@@D)} is @file{dir}. This value is @file{.} if @samp{$@@} does not contain a slash. @vindex $(@@F) @vindex @@F @r{(automatic variable)} @item $(@@F) The file-within-directory part of the file name of the target. If the value of @samp{$@@} is @file{dir/foo.o} then @samp{$(@@F)} is @file{foo.o}. @samp{$(@@F)} is equivalent to @samp{$(notdir $@@)}. @vindex $(*D) @vindex *D @r{(automatic variable)} @item $(*D) @vindex $(*F) @vindex *F @r{(automatic variable)} @itemx $(*F) The directory part and the file-within-directory part of the stem; @file{dir} and @file{foo} in this example. @vindex $(%D) @vindex %D @r{(automatic variable)} @item $(%D) @vindex $(%F) @vindex %F @r{(automatic variable)} @itemx $(%F) The directory part and the file-within-directory part of the target archive member name. This makes sense only for archive member targets of the form @file{@var{archive}(@var{member})} and is useful only when @var{member} may contain a directory name. (@xref{Archive Members, ,Archive Members as Targets}.) @vindex $(<D) @vindex <D @r{(automatic variable)} @item $(<D) @vindex $(<F) @vindex <F @r{(automatic variable)} @itemx $(<F) The directory part and the file-within-directory part of the first prerequisite. @vindex $(^D) @vindex ^D @r{(automatic variable)} @item $(^D) @vindex $(^F) @vindex ^F @r{(automatic variable)} @itemx $(^F) Lists of the directory parts and the file-within-directory parts of all prerequisites. @vindex $(+D) @vindex +D @r{(automatic variable)} @item $(+D) @vindex $(+F) @vindex +F @r{(automatic variable)} @itemx $(+F) Lists of the directory parts and the file-within-directory parts of all prerequisites, including multiple instances of duplicated prerequisites. @vindex $(?D) @vindex ?D @r{(automatic variable)} @item $(?D) @vindex $(?F) @vindex ?F @r{(automatic variable)} @itemx $(?F) Lists of the directory parts and the file-within-directory parts of all prerequisites that are newer than the target. @end table Note that we use a special stylistic convention when we talk about these automatic variables; we write ``the value of @samp{$<}'', rather than @w{``the variable @code{<}''} as we would write for ordinary variables such as @code{objects} and @code{CFLAGS}. We think this convention looks more natural in this special case. Please do not assume it has a deep significance; @samp{$<} refers to the variable named @code{<} just as @samp{$(CFLAGS)} refers to the variable named @code{CFLAGS}. You could just as well use @samp{$(<)} in place of @samp{$<}. @node Pattern Match, Match-Anything Rules, Automatic Variables, Pattern Rules @subsection How Patterns Match @cindex stem A target pattern is composed of a @samp{%} between a prefix and a suffix, either or both of which may be empty. The pattern matches a file name only if the file name starts with the prefix and ends with the suffix, without overlap. The text between the prefix and the suffix is called the @dfn{stem}. Thus, when the pattern @samp{%.o} matches the file name @file{test.o}, the stem is @samp{test}. The pattern rule prerequisites are turned into actual file names by substituting the stem for the character @samp{%}. Thus, if in the same example one of the prerequisites is written as @samp{%.c}, it expands to @samp{test.c}.@refill, @samp{e%t} matches the file name @file{src/eat}, with @samp{src/a} as the stem. When prerequisites are turned into file names, the directories from the stem are added at the front, while the rest of the stem is substituted for the @samp{%}. The stem @samp{src/a} with a prerequisite pattern @samp{c%r} gives the file name @file{src/car}.@refill @node Match-Anything Rules, Canceling Rules, Pattern Match, Pattern Rules @subsection Match-Anything Pattern Rules @cindex match-anything rule @cindex terminal rule When a pattern rule's target is just @samp{%}, it matches any file name whatever. We call these rules @dfn{match-anything} rules. They are very useful, but it can take a lot of time for @code{make} to think about them, because it must consider every such rule for each file name listed either as a target or as a prerequisite. Suppose the makefile mentions @file{foo.c}. For this target, @code{make} would have to consider making it by linking an object file @file{foo.c.o}, or by C compilation-and-linking in one step from @file{foo.c.c}, or by Pascal compilation-and-linking from @file{foo.c.p}, and many other possibilities. We know these possibilities are ridiculous since @file{foo.c} is a C source file, not an executable. If @code{make} did consider these possibilities, it would ultimately reject them, because files such as @file{foo.c.o} and @file{foo.c.p} would not exist. But these possibilities are so numerous that @code{make} would run very slowly if it had to consider them.@refill To gain speed, we have put various constraints on the way @code{make} considers match-anything rules. There are two different constraints that can be applied, and each time you define a match-anything rule you must choose one or the other for that rule. One choice is to mark the match-anything rule as @dfn @file{foo.c,v} does not exist, @code{make} will not even consider trying to make it as an intermediate file from @file{foo.c,v.o} or from @file{RCS/SCCS/s.foo.c,v}. RCS and SCCS files are generally ultimate source files, which should not be remade from any other files; therefore, @code{make} can save time by not looking for ways to remake them.@refill @file{foo.c} matches the target for the pattern rule @samp{%.c : %.y} (the rule to run Yacc). Regardless of whether this rule is actually applicable (which happens only if there is a file @file{foo.y}), the fact that its target matches is enough to prevent consideration of any nonterminal match-anything rules for the file @file{foo.c}. Thus, @code{make} will not even consider trying to make @file{foo.c} as an executable file from @file{foo.c.o}, @file{foo.c.c}, @file{foo.c.p}, etc.@refill @example %.p : @end example @noindent exists to make sure that Pascal source files such as @file{foo.p} match a specific target pattern and thereby prevent time from being wasted looking for @file{foo.p.o} or @file{foo.p.c}. Dummy pattern rules such as the one for @samp{%.p} are made for every suffix listed as valid for use in suffix rules (@pxref{Suffix Rules, ,Old-Fashioned Suffix Rules}). @node Canceling Rules, , Match-Anything Rules, Pattern Rules @subsection: @example %.o : %.s @end example @node Last Resort, Suffix Rules, Pattern Rules, Implicit Rules @section Defining Last-Resort Default Rules @cindex last-resort default rules @cindex default rules, last-resort You can define a last-resort implicit rule by writing a terminal match-anything pattern rule with no prerequisites (@pxref: @example %:: touch $@@ @end example @noindent to cause all the source files needed (as prerequisites) to be created automatically. @findex .DEFAULT You can instead define commands to be used for targets for which there are no rules at all, even ones which don't specify commands. You do this by writing a rule for the target @code{.DEFAULT}. Such a rule's commands are used for all prerequisites which do not appear as targets in any explicit rule, and for which no implicit rule applies. Naturally, there is no @code{.DEFAULT} rule unless you write one. If you use @code{.DEFAULT} with no commands or prerequisites: @example .DEFAULT: @end example @noindent the commands previously stored for @code{.DEFAULT} are cleared. Then @code{make} acts as if you had never defined @code{.DEFAULT} at all. If you do not want a target to get the commands from a match-anything pattern rule or @code{.DEFAULT}, but you also do not want any commands to be run for the target, you can give it empty commands (@pxref{Empty Commands, ,Defining Empty Commands}).@refill You can use a last-resort rule to override part of another makefile. @xref{Overriding Makefiles, , Overriding Part of Another Makefile}. @node Suffix Rules, Implicit Rule Search, Last Resort, Implicit Rules @section Old-Fashioned Suffix Rules @cindex old-fashioned suffix rules @cindex suffix rule @dfn{Suffix rules} are the old-fashioned way of defining implicit rules for @code{make}. Suffix rules are obsolete because pattern rules are more general and clearer. They are supported in GNU @code{make} for compatibility with old makefiles. They come in two kinds: @dfn{double-suffix} and @dfn{single-suffix}.@refill @samp{.o} and @samp{.c} is equivalent to the pattern rule @samp{%.o : %.c}. A single-suffix rule is defined by a single suffix, which is the source suffix. It matches any file name, and the corresponding implicit prerequisite name is made by appending the source suffix. A single-suffix rule whose source suffix is @samp{.c} is equivalent to the pattern rule @samp{% : %.c}. Suffix rule definitions are recognized by comparing each rule's target against a defined list of known suffixes. When @code{make} sees a rule whose target is a known suffix, this rule is considered a single-suffix rule. When @code{make} sees a rule whose target is two known suffixes concatenated, this rule is taken as a double-suffix rule. For example, @samp{.c} and @samp{.o} are both on the default list of known suffixes. Therefore, if you define a rule whose target is @samp{.c.o}, @code{make} takes it to be a double-suffix rule with source suffix @samp{.c} and target suffix @samp{.o}. Here is the old-fashioned way to define the rule for compiling a C source file:@refill @example .c.o: $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@@ $< @end example Suffix rules cannot have any prerequisites of their own. If they have any, they are treated as normal files with funny names, not as suffix rules. Thus, the rule: @example .c.o: foo.h $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@@ $< @end example @noindent tells how to make the file @file{.c.o} from the prerequisite file @file{foo.h}, and is not at all like the pattern rule: @example %.o: %.c foo.h $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@@ $< @end example @noindent which tells how to make @samp{.o} files from @samp{.c} files, and makes all @samp{.o} files using this pattern rule also depend on @file{foo.h}. Suffix rules with no commands are also meaningless. They do not remove previous rules as do pattern rules with no commands (@pxref{Canceling Rules, , Canceling Implicit Rules}). They simply enter the suffix or pair of suffixes concatenated as a target in the data base.@refill @findex .SUFFIXES The known suffixes are simply the names of the prerequisites of the special target @code{.SUFFIXES}. You can add your own suffixes by writing a rule for @code{.SUFFIXES} that adds more prerequisites, as in: @example .SUFFIXES: .hack .win @end example @noindent which adds @samp{.hack} and @samp{.win} to the end of the list of suffixes. If you wish to eliminate the default known suffixes instead of just adding to them, write a rule for @code{.SUFFIXES} with no prerequisites. By special dispensation, this eliminates all existing prerequisites of @code{.SUFFIXES}. You can then write another rule to add the suffixes you want. For example, @example @group .SUFFIXES: # @r{Delete the default suffixes} .SUFFIXES: .c .o .h # @r{Define our suffix list} @end group @end example The @samp{-r} or @samp{--no-builtin-rules} flag causes the default list of suffixes to be empty. @vindex SUFFIXES The variable @code{SUFFIXES} is defined to the default list of suffixes before @code{make} reads any makefiles. You can change the list of suffixes with a rule for the special target @code{.SUFFIXES}, but that does not alter this variable. @node Implicit Rule Search, , Suffix Rules, Implicit Rules @section Implicit Rule Search Algorithm @cindex implicit rule, search algorithm @cindex search algorithm, implicit rule Here is the procedure @code{make} uses for searching for an implicit rule for a target @var @samp{@var{archive}(@var{member})}, the following algorithm is run twice, first using the entire target name @var{t}, and second using @samp{(@var{member})} as the target @var{t} if the first run found no rule.@refill @enumerate @item Split @var{t} into a directory part, called @var{d}, and the rest, called @var{n}. For example, if @var{t} is @samp{src/foo.o}, then @var{d} is @samp{src/} and @var{n} is @samp{foo.o}.@refill @item Make a list of all the pattern rules one of whose targets matches @var{t} or @var{n}. If the target pattern contains a slash, it is matched against @var{t}; otherwise, against @var{n}. @item If any rule in that list is @emph{not} a match-anything rule, then remove all nonterminal match-anything rules from the list. @item Remove from the list all rules with no commands. @item For each pattern rule in the list: @enumerate a @item Find the stem @var{s}, which is the nonempty part of @var{t} or @var{n} matched by the @samp{%} in the target pattern.@refill @item Compute the prerequisite names by substituting @var{s} for @samp{%}; if the target pattern does not contain a slash, append @var{d} to the front of each prerequisite name.@refill @item. @end enumerate @item If no pattern rule has been found so far, try harder. For each pattern rule in the list: @enumerate a @item If the rule is terminal, ignore it and go on to the next rule. @item Compute the prerequisite names as before. @item Test whether all the prerequisites exist or ought to exist. @item For each prerequisite that does not exist, follow this algorithm recursively to see if the prerequisite can be made by an implicit rule. @item If all prerequisites exist, ought to exist, or can be made by implicit rules, then this rule applies. @end enumerate @item If no implicit rule applies, the rule for @code{.DEFAULT}, if any, applies. In that case, give @var{t} the same commands that @code{.DEFAULT} has. Otherwise, there are no commands for @var{t}. @end enumerate Once a rule that applies has been found, for each target pattern of the rule other than the one that matched @var{t} or @var{n}, the @samp{%} in the pattern is replaced with @var{s} and the resultant file name is stored until the commands to remake the target file @var{t} are executed. After these commands are executed, each of these stored file names are entered into the data base and marked as having been updated and having the same update status as the file @var{t}. When the commands of a pattern rule are executed for @var{t}, the automatic variables are set corresponding to the target and prerequisites. @xref{Automatic Variables}. @node Archives, Features, Implicit Rules, Top @chapter Using @code{make} to Update Archive Files @cindex archive @dfn{Archive files} are files containing named subfiles called @dfn{members}; they are maintained with the program @code. @end menu @node Archive Members, Archive Update, Archives, Archives @section Archive Members as Targets @cindex archive member targets An individual member of an archive file can be used as a target or prerequisite in @code{make}. You specify the member named @var{member} in archive file @var{archive} as follows: @example @var{archive}(@var{member}) @end example @noindent This construct is available only in targets and prerequisites, not in commands! Most programs that you might use in commands do not support this syntax and cannot act directly on archive members. Only @code{ar} and other programs specifically designed to operate on archives can do so. Therefore, valid commands to update an archive member target probably must use @code{ar}. For example, this rule says to create a member @file{hack.o} in archive @file{foolib} by copying the file @file{hack.o}: @example foolib(hack.o) : hack.o ar cr foolib hack.o @end example In fact, nearly all archive member targets are updated in just this way and there is an implicit rule to do it for you. @strong{Please note:} The @samp{c} flag to @code{ar} is required if the archive file does not already exist. To specify several members in the same archive, you can write all the member names together between the parentheses. For example: @example foolib(hack.o kludge.o) @end example @noindent is equivalent to: @example foolib(hack.o) foolib(kludge.o) @end example @cindex wildcard, in archive member You can also use shell-style wildcards in an archive member reference. @xref{Wildcards, ,Using Wildcard Characters in File Names}. For example, @w{@samp{foolib(*.o)}} expands to all existing members of the @file{foolib} archive whose names end in @samp{.o}; perhaps @samp{@w{foolib(hack.o)} @w{foolib(kludge.o)}}. @node Archive Update, Archive Pitfalls, Archive Members, Archives @section Implicit Rule for Archive Member Targets Recall that a target that looks like @file{@var{a}(@var{m})} stands for the member named @var{m} in the archive file @var{a}. When @code{make} looks for an implicit rule for such a target, as a special feature it considers implicit rules that match @file{(@var{m})}, as well as those that match the actual target @file{@var{a}(@var{m})}. This causes one special rule whose target is @file{(%)} to match. This rule updates the target @file{@var{a}(@var{m})} by copying the file @var{m} into the archive. For example, it will update the archive member target @file{foo.a(bar.o)} by copying the @emph{file} @file{bar.o} into the archive @file{foo.a} as a @emph{member} named @file{bar.o}. When this rule is chained with others, the result is very powerful. Thus, @samp{make "foo.a(bar.o)"} (the quotes are needed to protect the @samp{(} and @samp{)} from being interpreted specially by the shell) in the presence of a file @file{bar.c} is enough to cause the following commands to be run, even without a makefile: @example cc -c bar.c -o bar.o ar r foo.a bar.o rm -f bar.o @end example @noindent Here @code{make} has envisioned the file @file{bar.o} as an intermediate file. @xref{Chained Rules, ,Chains of Implicit Rules}. Implicit rules such as this one are written using the automatic variable @samp{$%}. @xref{Automatic Variables}. An archive member name in an archive cannot contain a directory name, but it may be useful in a makefile to pretend that it does. If you write an archive member target @file{foo.a(dir/file.o)}, @code{make} will perform automatic updating with this command: @example ar r foo.a dir/file.o @end example @noindent which has the effect of copying the file @file{dir/file.o} into a member named @file{file.o}. In connection with such usage, the automatic variables @code{%D} and @code{%F} may be useful. @menu * Archive Symbols:: How to update archive symbol directories. @end menu @node Archive Symbols, , Archive Update, Archive Update @subsection Updating Archive Symbol Directories @cindex @code{__.SYMDEF} @cindex updating archive symbol directories @cindex archive symbol directory updating @cindex symbol directories, updating archive @cindex directories, updating archive symbol An archive file that is used as a library usually contains a special member named @file{__.SYMDEF} that contains a directory of the external symbol names defined by all the other members. After you update any other members, you need to update @file{__.SYMDEF} so that it will summarize the other members properly. This is done by running the @code{ranlib} program: @example ranlib @var{archivefile} @end example Normally you would put this command in the rule for the archive file, and make all the members of the archive file prerequisites of that rule. For example, @example libfoo.a: libfoo.a(x.o) libfoo.a(y.o) @dots{} ranlib libfoo.a @end example @noindent The effect of this is to update archive members @file{x.o}, @file{y.o}, etc., and then update the symbol directory member @file{__.SYMDEF} by running @code{ranlib}. The rules for updating the members are not shown here; most likely you can omit them and use the implicit rule which copies files into the archive, as described in the preceding section. This is not necessary when using the GNU @code{ar} program, which updates the @file{__.SYMDEF} member automatically. @node Archive Pitfalls, Archive Suffix Rules, Archive Update, Archives @section Dangers When Using Archives @cindex archive, and parallel execution @cindex parallel execution, and archive update @cindex archive, and @code{-j} @cindex @code{-j}, and archive update It is important to be careful when using parallel execution (the @code{-j} switch; @pxref{Parallel, ,Parallel Execution}) and archives. If multiple @code{ar} commands run at the same time on the same archive file, they will not know about each other and can corrupt the file. Possibly a future version of @code{make} will provide a mechanism to circumvent this problem by serializing all commands that operate on the same archive file. But for the time being, you must either write your makefiles to avoid this problem in some other way, or not use @code{-j}. @node Archive Suffix Rules, , Archive Pitfalls, Archives @section Suffix Rules for Archive Files @cindex suffix rule, for archive @cindex archive, suffix rule for @cindex library archive, suffix rule for @cindex @code{.a} (archives) You can write a special kind of suffix rule for dealing with archive files. @xref{Suffix Rules}, for a full explanation of suffix rules. Archive suffix rules are obsolete in GNU @code{make}, because pattern rules for archives are a more general mechanism (@pxref{Archive Update}). But they are retained for compatibility with other @code{make}s. To write a suffix rule for archives, you simply write a suffix rule using the target suffix @samp{.a} (the usual suffix for archive files). For example, here is the old-fashioned suffix rule to update a library archive from C source files: @example @group .c.a: $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $*.o $(AR) r $@@ $*.o $(RM) $*.o @end group @end example @noindent This works just as if you had written the pattern rule: @example @group (%.o): %.c $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $*.o $(AR) r $@@ $*.o $(RM) $*.o @end group @end example In fact, this is just what @code{make} does when it sees a suffix rule with @samp{.a} as the target suffix. Any double-suffix rule @w{@samp{.@var{x}.a}} is converted to a pattern rule with the target pattern @samp{(%.o)} and a prerequisite pattern of @samp{%.@var{x}}. Since you might want to use @samp{.a} as the suffix for some other kind of file, @code{make} also converts archive suffix rules to pattern rules in the normal way (@pxref{Suffix Rules}). Thus a double-suffix rule @w{@samp{.@var{x}.a}} produces two pattern rules: @samp{@w{(%.o):} @w{%.@var{x}}} and @samp{@w{%.a}: @w{%.@var{x}}}.@refill @node Features, Missing, Archives, Top @chapter Features of GNU @code{make} @cindex features of GNU @code{make} @cindex portability @cindex compatibility Here is a summary of the features of GNU @code{make}, for comparison with and credit to other versions of @code{make}. We consider the features of @code{make} in 4.2 BSD systems as a baseline. If you are concerned with writing portable makefiles, you should not use the features of @code{make} listed here, nor the ones in @ref{Missing}. Many features come from the version of @code{make} in System V. @itemize @bullet @item The @code{VPATH} variable and its special meaning. @xref{Directory Search, , Searching Directories for Prerequisites}. This feature exists in System V @code{make}, but is undocumented. It is documented in 4.3 BSD @code{make} (which says it mimics System V's @code{VPATH} feature).@refill @item Included makefiles. @xref{Include, ,Including Other Makefiles}. Allowing multiple files to be included with a single directive is a GNU extension. @item Variables are read from and communicated via the environment. @xref{Environment, ,Variables from the Environment}. @item Options passed through the variable @code{MAKEFLAGS} to recursive invocations of @code{make}. @xref{Options/Recursion, ,Communicating Options to a Sub-@code{make}}. @item The automatic variable @code{$%} is set to the member name in an archive reference. @xref{Automatic Variables}. @item The automatic variables @code{$@@}, @code{$*}, @code{$<}, @code{$%}, and @code{$?} have corresponding forms like @code{$(@@F)} and @code{$(@@D)}. We have generalized this to @code{$^} as an obvious extension. @xref{Automatic Variables}.@refill @item Substitution variable references. @xref{Reference, ,Basics of Variable References}. @item The command-line options @samp{-b} and @samp{-m}, accepted and ignored. In System V @code{make}, these options actually do something. @item Execution of recursive commands to run @code{make} via the variable @code{MAKE} even if @samp{-n}, @samp{-q} or @samp{-t} is specified. @xref{Recursion, ,Recursive Use of @code{make}}. @item Support for suffix @samp{.a} in suffix rules. @xref{Archive Suffix Rules}. This feature is obsolete in GNU @code{make}, because the general feature of rule chaining (@pxref{Chained Rules, ,Chains of Implicit Rules}) allows one pattern rule for installing members in an archive (@pxref{Archive Update}) to be sufficient. @item The arrangement of lines and backslash-newline combinations in commands is retained when the commands are printed, so they appear as they do in the makefile, except for the stripping of initial whitespace. @end itemize The following features were inspired by various other versions of @code{make}. In some cases it is unclear exactly which versions inspired which others. @itemize @bullet @item Pattern rules using @samp{%}. This has been implemented in several versions of @code{make}. We're not sure who invented it first, but it's been spread around a bit. @xref{Pattern Rules, ,Defining and Redefining Pattern Rules}.@refill @item Rule chaining and implicit intermediate files. This was implemented by Stu Feldman in his version of @code{make} for AT&T Eighth Edition Research Unix, and later by Andrew Hume of AT&T Bell Labs in his @code{mk} program (where he terms it ``transitive closure''). We do not really know if we got this from either of them or thought it up ourselves at the same time. @xref{Chained Rules, ,Chains of Implicit Rules}. @item The automatic variable @code{$^} containing a list of all prerequisites of the current target. We did not invent this, but we have no idea who did. @xref{Automatic Variables}. The automatic variable @code{$+} is a simple extension of @code{$^}. @item The ``what if'' flag (@samp{-W} in GNU @code{make}) was (as far as we know) invented by Andrew Hume in @code{mk}. @xref{Instead of Execution, ,Instead of Executing the Commands}. @item The concept of doing several things at once (parallelism) exists in many incarnations of @code{make} and similar programs, though not in the System V or BSD implementations. @xref{Execution, ,Command Execution}. @item Modified variable references using pattern substitution come from SunOS 4. @xref{Reference, ,Basics of Variable References}. This functionality was provided in GNU @code{make} by the @code{patsubst} function before the alternate syntax was implemented for compatibility with SunOS 4. It is not altogether clear who inspired whom, since GNU @code{make} had @code{patsubst} before SunOS 4 was released.@refill @item The special significance of @samp{+} characters preceding command lines (@pxref{Instead of Execution, ,Instead of Executing the Commands}) is mandated by @cite{IEEE Standard 1003.2-1992} (POSIX.2). @item The @samp{+=} syntax to append to the value of a variable comes from SunOS 4 @code{make}. @xref{Appending, , Appending More Text to Variables}. @item The syntax @w{@samp{@var{archive}(@var{mem1} @var{mem2}@dots{})}} to list multiple members in a single archive file comes from SunOS 4 @code{make}. @xref{Archive Members}. @item The @code{-include} directive to include makefiles with no error for a nonexistent file comes from SunOS 4 @code{make}. (But note that SunOS 4 @code{make} does not allow multiple makefiles to be specified in one @code{-include} directive.) The same feature appears with the name @code{sinclude} in SGI @code{make} and perhaps others. @end itemize The remaining features are inventions new in GNU @code{make}: @itemize @bullet @item Use the @samp{-v} or @samp{--version} option to print version and copyright information. @item Use the @samp{-h} or @samp{--help} option to summarize the options to @code{make}. @item Simply-expanded variables. @xref{Flavors, ,The Two Flavors of Variables}. @item Pass command-line variable assignments automatically through the variable @code{MAKE} to recursive @code{make} invocations. @xref{Recursion, ,Recursive Use of @code{make}}. @item Use the @samp{-C} or @samp{--directory} command option to change directory. @xref{Options Summary, ,Summary of Options}. @item Make verbatim variable definitions with @code{define}. @xref{Defining, ,Defining Variables Verbatim}. @item Declare phony targets with the special target @code{.PHONY}. Andrew Hume of AT&T Bell Labs implemented a similar feature with a different syntax in his @code{mk} program. This seems to be a case of parallel discovery. @xref{Phony Targets, ,Phony Targets}. @item Manipulate text by calling functions. @xref{Functions, ,Functions for Transforming Text}. @item Use the @samp{-o} or @samp{--old-file} option to pretend a file's modification-time is old. @xref{Avoiding Compilation, ,Avoiding Recompilation of Some Files}. @item Conditional execution. This feature has been implemented numerous times in various versions of @code{make}; it seems a natural extension derived from the features of the C preprocessor and similar macro languages and is not a revolutionary concept. @xref{Conditionals, ,Conditional Parts of Makefiles}. @item Specify a search path for included makefiles. @xref{Include, ,Including Other Makefiles}. @item Specify extra makefiles to read with an environment variable. @xref{MAKEFILES Variable, ,The Variable @code{MAKEFILES}}. @item Strip leading sequences of @samp{./} from file names, so that @file{./@var{file}} and @file{@var{file}} are considered to be the same file.@refill @item Use a special search method for library prerequisites written in the form @samp{-l@var{name}}. @xref{Libraries/Search, ,Directory Search for Link Libraries}. @item Allow suffixes for suffix rules (@pxref{Suffix Rules, ,Old-Fashioned Suffix Rules}) to contain any characters. In other versions of @code{make}, they must begin with @samp{.} and not contain any @samp{/} characters. @item Keep track of the current level of @code{make} recursion using the variable @code{MAKELEVEL}. @xref{Recursion, ,Recursive Use of @code{make}}. @item Provide any goals given on the command line in the variable @code{MAKECMDGOALS}. @xref{Goals, ,Arguments to Specify the Goals}. @item Specify static pattern rules. @xref{Static Pattern, ,Static Pattern Rules}. @item Provide selective @code{vpath} search. @xref{Directory Search, ,Searching Directories for Prerequisites}. @item Provide computed variable references. @xref{Reference, ,Basics of Variable References}. @item Update makefiles. @xref{Remaking Makefiles, ,How Makefiles Are Remade}. System V @code{make} has a very, very limited form of this functionality in that it will check out SCCS files for makefiles. @item Various new built-in implicit rules. @xref{Catalogue of Rules, ,Catalogue of Implicit Rules}. @item The built-in variable @samp{MAKE_VERSION} gives the version number of @code{make}. @vindex MAKE_VERSION @end itemize @node Missing, Makefile Conventions, Features, Top @chapter Incompatibilities and Missing Features @cindex incompatibilities @cindex missing features @cindex features, missing The @code{make} programs in various other systems support a few features that are not implemented in GNU @code{make}. The POSIX.2 standard (@cite{IEEE Standard 1003.2-1992}) which specifies @code{make} does not require any of these features.@refill @itemize @bullet @item A target of the form @samp{@var{file}((@var{entry}))} stands for a member of archive file @var{file}. The member is chosen, not by name, but by being an object file which defines the linker symbol @var{entry}.@refill This feature was not put into GNU @code{make} because of the nonmodularity of putting knowledge into @code{make} of the internal format of archive file symbol tables. @xref{Archive Symbols, ,Updating Archive Symbol Directories}. @item Suffixes (used in suffix rules) that end with the character @samp{~} have a special meaning to System V @code{make}; they refer to the SCCS file that corresponds to the file one would get without the @samp{~}. For example, the suffix rule @samp{.c~.o} would make the file @file{@var{n}.o} from the SCCS file @file{s.@var{n}.c}. For complete coverage, a whole series of such suffix rules is required. @xref{Suffix Rules, ,Old-Fashioned Suffix Rules}. In GNU @code{make}, this entire series of cases is handled by two pattern rules for extraction from SCCS, in combination with the general feature of rule chaining. @xref{Chained Rules, ,Chains of Implicit Rules}. @item In System V and 4.3 BSD @code{make}, files found by @code{VPATH} search (@pxref{Directory Search, ,Searching Directories for Prerequisites}) have their names changed inside command strings. We feel it is much cleaner to always use automatic variables and thus make this feature obsolete.@refill @item In some Unix @code{make}s, the automatic variable @code{$*} appearing in the prerequisites of a rule has the amazingly strange ``feature'' of expanding to the full name of the @emph{target of that rule}. We cannot imagine what went on in the minds of Unix @code{make} developers to do this; it is utterly inconsistent with the normal definition of @code{$*}. @vindex * @r{(automatic variable), unsupported bizarre usage} @item In some Unix @code{make}s, implicit rule search (@pxref{Implicit Rules, ,Using Implicit Rules}) is apparently done for @emph{all} targets, not just those without commands. This means you can do:@refill @example @group foo.o: cc -c foo.c @end group @end example @noindent and Unix @code{make} will intuit that @file{foo.o} depends on @file{foo.c}.@refill We feel that such usage is broken. The prerequisite properties of @code{make} are well-defined (for GNU @code{make}, at least), and doing such a thing simply does not fit the model.@refill @item GNU @code{make} does not include any built-in implicit rules for compiling or preprocessing EFL programs. If we hear of anyone who is using EFL, we will gladly add them. @item It appears that in SVR4 @code{make}, a suffix rule can be specified with no commands, and it is treated as if it had empty commands (@pxref{Empty Commands}). For example: @example .c.a: @end example @noindent will override the built-in @file{.c.a} suffix rule. We feel that it is cleaner for a rule without commands to always simply add to the prerequisite list for the target. The above example can be easily rewritten to get the desired behavior in GNU @code{make}: @example .c.a: ; @end example @item Some versions of @code{make} invoke the shell with the @samp{-e} flag, except under @samp{-k} (@pxref{Testing, ,Testing the Compilation of a Program}). The @samp{-e} flag tells the shell to exit as soon as any program it runs returns a nonzero status. We feel it is cleaner to write each shell command line to stand on its own and not require this special treatment. @end itemize @comment The makefile standards are in a separate file that is also @comment included by standards.texi. @include make-stds.texi @node Quick Reference, Error Messages, Makefile Conventions, Top @appendix Quick Reference This appendix summarizes the directives, text manipulation functions, and special variables which GNU @code{make} understands. @xref{Special Targets}, @ref{Catalogue of Rules, ,Catalogue of Implicit Rules}, and @ref{Options Summary, ,Summary of Options}, for other summaries. Here is a summary of the directives GNU @code{make} recognizes: @table @code @item define @var{variable} @itemx endef Define a multi-line, recursively-expanded variable.@* @xref{Sequences}. @item ifdef @var{variable} @itemx ifndef @var{variable} @itemx ifeq (@var{a},@var{b}) @itemx ifeq "@var{a}" "@var{b}" @itemx ifeq '@var{a}' '@var{b}' @itemx ifneq (@var{a},@var{b}) @itemx ifneq "@var{a}" "@var{b}" @itemx ifneq '@var{a}' '@var{b}' @itemx else @itemx endif Conditionally evaluate part of the makefile.@* @xref{Conditionals}. @item include @var{file} @itemx -include @var{file} @itemx sinclude @var{file} Include another makefile.@* @xref{Include, ,Including Other Makefiles}. @item override @var{variable} = @var{value} @itemx override @var{variable} := @var{value} @itemx override @var{variable} += @var{value} @itemx override @var{variable} ?= @var{value} @itemx override define @var{variable} @itemx endef Define a variable, overriding any previous definition, even one from the command line.@* @xref{Override Directive, ,The @code{override} Directive}. @item export Tell @code{make} to export all variables to child processes by default.@* @xref{Variables/Recursion, , Communicating Variables to a Sub-@code{make}}. @item export @var{variable} @itemx export @var{variable} = @var{value} @itemx export @var{variable} := @var{value} @itemx export @var{variable} += @var{value} @itemx export @var{variable} ?= @var{value} @itemx unexport @var{variable} Tell @code{make} whether or not to export a particular variable to child processes.@* @xref{Variables/Recursion, , Communicating Variables to a Sub-@code{make}}. @item vpath @var{pattern} @var{path} Specify a search path for files matching a @samp{%} pattern.@* @xref{Selective Search, , The @code{vpath} Directive}. @item vpath @var{pattern} Remove all search paths previously specified for @var{pattern}. @item vpath Remove all search paths previously specified in any @code{vpath} directive. @end table Here is a summary of the built-in functions (@pxref{Functions}): @table @code @item $(subst @var{from},@var{to},@var{text}) Replace @var{from} with @var{to} in @var{text}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(patsubst @var{pattern},@var{replacement},@var{text}) Replace words matching @var{pattern} with @var{replacement} in @var{text}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(strip @var{string}) Remove excess whitespace characters from @var{string}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(findstring @var{find},@var{text}) Locate @var{find} in @var{text}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(filter @var{pattern}@dots{},@var{text}) Select words in @var{text} that match one of the @var{pattern} words.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(filter-out @var{pattern}@dots{},@var{text}) Select words in @var{text} that @emph{do not} match any of the @var{pattern} words.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(sort @var{list}) Sort the words in @var{list} lexicographically, removing duplicates.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(word @var{n},@var{text}) Extract the @var{n}th word (one-origin) of @var{text}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(words @var{text}) Count the number of words in @var{text}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(wordlist @var{s},@var{e},@var{text}) Returns the list of words in @var{text} from @var{s} to @var{e}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(firstword @var{names}@dots{}) Extract the first word of @var{names}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(lastword @var{names}@dots{}) Extract the last word of @var{names}.@* @xref{Text Functions, , Functions for String Substitution and Analysis}. @item $(dir @var{names}@dots{}) Extract the directory part of each file name.@* @xref{File Name Functions, ,Functions for File Names}. @item $(notdir @var{names}@dots{}) Extract the non-directory part of each file name.@* @xref{File Name Functions, ,Functions for File Names}. @item $(suffix @var{names}@dots{}) Extract the suffix (the last @samp{.} and following characters) of each file name.@* @xref{File Name Functions, ,Functions for File Names}. @item $(basename @var{names}@dots{}) Extract the base name (name without suffix) of each file name.@* @xref{File Name Functions, ,Functions for File Names}. @item $(addsuffix @var{suffix},@var{names}@dots{}) Append @var{suffix} to each word in @var{names}.@* @xref{File Name Functions, ,Functions for File Names}. @item $(addprefix @var{prefix},@var{names}@dots{}) Prepend @var{prefix} to each word in @var{names}.@* @xref{File Name Functions, ,Functions for File Names}. @item $(join @var{list1},@var{list2}) Join two parallel lists of words.@* @xref{File Name Functions, ,Functions for File Names}. @item $(wildcard @var{pattern}@dots{}) Find file names matching a shell file name pattern (@emph{not} a @samp{%} pattern).@* @xref{Wildcard Function, ,The Function @code{wildcard}}. @item $(realpath @var{names}@dots{}) For each file name in @var{names}, expand to an absolute name that does not contain any @code{.}, @code{..}, nor symlinks.@* @xref{File Name Functions, ,Functions for File Names}. @item $(abspath @var{names}@dots{}) For each file name in @var{names}, expand to an absolute name that does not contain any @code{.} or @code{..} components, but preserves symlinks.@* @xref{File Name Functions, ,Functions for File Names}. @item $(error @var{text}@dots{}) When this function is evaluated, @code{make} generates a fatal error with the message @var{text}.@* @xref{Make Control Functions, ,Functions That Control Make}. @item $(warning @var{text}@dots{}) When this function is evaluated, @code{make} generates a warning with the message @var{text}.@* @xref{Make Control Functions, ,Functions That Control Make}. @item $(shell @var{command}) Execute a shell command and return its output.@* @xref{Shell Function, , The @code{shell} Function}. @item $(origin @var{variable}) Return a string describing how the @code{make} variable @var{variable} was defined.@* @xref{Origin Function, , The @code{origin} Function}. @item $(flavor @var{variable}) Return a string describing the flavor of the @code{make} variable @var{variable}.@* @xref{Flavor Function, , The @code{flavor} Function}. @item $(foreach @var{var},@var{words},@var{text}) Evaluate @var{text} with @var{var} bound to each word in @var{words}, and concatenate the results.@* @xref{Foreach Function, ,The @code{foreach} Function}. @item $(call @var{var},@var{param},@dots{}) Evaluate the variable @var{var} replacing any references to @code{$(1)}, @code{$(2)} with the first, second, etc.@: @var{param} values.@* @xref{Call Function, ,The @code{call} Function}. @item $(eval @var{text}) Evaluate @var{text} then read the results as makefile commands. Expands to the empty string.@* @xref{Eval Function, ,The @code{eval} Function}. @item $(value @var{var}) Evaluates to the contents of the variable @var{var}, with no expansion performed on it.@* @xref{Value Function, ,The @code{value} Function}. @end table Here is a summary of the automatic variables. @xref{Automatic Variables}, for full information. @table @code @item $@@ The file name of the target. @item $% The target member name, when the target is an archive member. @item $< The name of the first prerequisite. @item $? The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (@pxref{Archives}). @item $^ @itemx $+ The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (@pxref{Archives}). The value of @code{$^} omits duplicate prerequisites, while @code{$+} retains them and preserves their order. @item $* The stem with which an implicit rule matches (@pxref{Pattern Match, ,How Patterns Match}). {$?}. @end table These variables are used specially by GNU @code{make}: @table @code @item MAKEFILES Makefiles to be read on every invocation of @code{make}.@* @xref{MAKEFILES Variable, ,The Variable @code{MAKEFILES}}. @item VPATH Directory search path for files not found in the current directory.@* @xref{General Search, , @code{VPATH} Search Path for All Prerequisites}. @item SHELL The name of the system default command interpreter, usually @file{/bin/sh}. You can set @code{SHELL} in the makefile to change the shell used to run commands. @xref{Execution, ,Command Execution}. The @code{SHELL} variable is handled specially when importing from and exporting to the environment. @xref{Choosing the Shell}. @item MAKESHELL On MS-DOS only, the name of the command interpreter that is to be used by @code{make}. This value takes precedence over the value of @code{SHELL}. @xref{Execution, ,MAKESHELL variable}. @item MAKE The name with which @code{make} was invoked. Using this variable in commands has special meaning. @xref{MAKE Variable, ,How the @code{MAKE} Variable Works}. @item MAKELEVEL The number of levels of recursion (sub-@code{make}s).@* @xref{Variables/Recursion}. @item MAKEFLAGS The flags given to @code{make}. You can set this in the environment or a makefile to set flags.@* @xref{Options/Recursion, ,Communicating Options to a Sub-@code{make}}. It is @emph{never} appropriate to use @code{MAKEFLAGS} directly on a command line: its contents may not be quoted correctly for use in the shell. Always allow recursive @code{make}'s to obtain these values through the environment from its parent. @item MAKECMDGOALS The targets given to @code{make} on the command line. Setting this variable has no effect on the operation of @code{make}.@* @xref{Goals, ,Arguments to Specify the Goals}. @item CURDIR Set to the pathname of the current working directory (after all @code{-C} options are processed, if any). Setting this variable has no effect on the operation of @code{make}.@* @xref{Recursion, ,Recursive Use of @code{make}}. @item SUFFIXES The default list of suffixes before @code{make} reads any makefiles. @item .LIBPATTERNS Defines the naming of the libraries @code{make} searches for, and their order.@* @xref{Libraries/Search, ,Directory Search for Link Libraries}. @end table @node Error Messages, Complex Makefile, Quick Reference, Top @comment node-name, next, previous, up @appendix Errors Generated by Make Here is a list of the more common errors you might see generated by @code{make}, and some information about what they mean and how to fix them. Sometimes @code{make} errors are not fatal, especially in the presence of a @code{-} prefix on a command script line, or the @code{-k} command line option. Errors that are fatal are prefixed with the string @code{***}. Error messages are all either prefixed with the name of the program (usually @samp{make}), or, if the error is found in a makefile, the name of the file and linenumber containing the problem. In the table below, these common prefixes are left off. @table @samp @item [@var{foo}] Error @var{NN} @itemx [@var{foo}] @var{signal description} These errors are not really @code{make} errors at all. They mean that a program that @code{make} invoked as part of a command script returned a non-0 error code (@samp{Error @var{NN}}), which @code{make} interprets as failure, or it exited in some other abnormal fashion (with a signal of some type). @xref{Errors, ,Errors in Commands}. If no @code{***} is attached to the message, then the subprocess failed but the rule in the makefile was prefixed with the @code{-} special character, so @code{make} ignored the error. @item missing separator. Stop. @itemx missing separator (did you mean TAB instead of 8 spaces?). Stop. This means that @code{make} could not understand much of anything about the command line it just read. GNU @code{make} looks for various kinds of separators (@code{:}, @code{=},, @code{make} will use the second form of the error above. Remember that every line in the command script must begin with a TAB character. Eight spaces do not count. @xref{Rule Syntax}. @item commands commence before first target. Stop. @itemx missing rule before commands. Stop. This means the first thing in the makefile seems to be part of a command script: it begins with a TAB character and doesn't appear to be a legal @code{make} command (such as a variable assignment). Command scripts must always be associated with a target. The second form is generated if the line has a semicolon as the first non-whitespace character; @code{make} interprets this to mean you left out the "target: prerequisite" section of a rule. @xref{Rule Syntax}. @item No rule to make target `@var{xxx}'. @itemx No rule to make target `@var{xxx}', needed by `@var{yyy}'. This means that @code). @item No targets specified and no makefile found. Stop. @itemx No targets. Stop. The former means that you didn't provide any targets to be built on the command line, and @code{make} couldn't find any makefiles to read in. The latter means that some makefile was found, but it didn't contain any default goal and none was given on the command line. GNU @code{make} has nothing to do in these situations. @xref{Makefile Arguments, ,Arguments to Specify the Makefile}.@refill @item Makefile `@var{xxx}' was not found. @itemx Included makefile `@var{xxx}' was not found. A makefile specified on the command line (first form) or included (second form) was not found. @item warning: overriding commands for target `@var{xxx}' @itemx warning: ignoring old commands for target `@var{xxx}' GNU @code{make} allows commands to be specified only once per target (except for double-colon rules). If you give commands for a target which already has been defined to have commands, this warning is issued and the second set of commands will overwrite the first set. @xref{Multiple Rules, ,Multiple Rules for One Target}. @item Circular @var{xxx} <- @var{yyy} dependency dropped. This means that @code{make} detected a loop in the dependency graph: after tracing the prerequisite @var{yyy} of target @var{xxx}, and its prerequisites, etc., one of them depended on @var{xxx} again. @item Recursive variable `@var{xxx}' references itself (eventually). Stop. This means you've defined a normal (recursive) @code{make} variable @var{xxx} that, when it's expanded, will refer to itself (@var{xxx}). This is not allowed; either use simply-expanded variables (@code{:=}) or use the append operator (@code{+=}). @xref{Using Variables, ,How to Use Variables}. @item Unterminated variable reference. Stop. This means you forgot to provide the proper closing parenthesis or brace in your variable or function reference. @item insufficient arguments to function `@var{xxx}'. Stop. This means you haven't provided the requisite number of arguments for this function. See the documentation of the function for a description of its arguments. @xref{Functions, ,Functions for Transforming Text}. @item missing target pattern. Stop. @itemx multiple target patterns. Stop. @itemx target pattern contains no `%'. Stop. @itemx (@code{%}); and the fourth means that all three parts of the static pattern rule contain pattern characters (@code{%})--only the first two parts should. @xref{Static Usage, ,Syntax of Static Pattern Rules}. @item warning: -jN forced in submake: disabling jobserver mode. This warning and the next are generated if @code{make} detects error conditions related to parallel processing on systems where sub-@code{make}s can communicate (@pxref{Options/Recursion, ,Communicating Options to a Sub-@code{make}}). This warning is generated if a recursive invocation of a @code{make} process is forced to have @samp{-j@var{N}} in its argument list (where @var{N} is greater than one). This could happen, for example, if you set the @code{MAKE} environment variable to @samp{make -j2}. In this case, the sub-@code{make} doesn't communicate with other @code{make} processes and will simply pretend it has two jobs of its own. @item warning: jobserver unavailable: using -j1. Add `+' to parent make rule. In order for @code{make} processes to communicate, the parent will pass information to the child. Since this could result in problems if the child process isn't actually a @code{make}, the parent will only do this if it thinks the child is a @code{make}. The parent uses the normal algorithms to determine this (@pxref{MAKE Variable, ,How the @code{MAKE} Variable Works}). If the makefile is constructed such that the parent doesn't know the child is a @code{make} process, then the child will receive only part of the information necessary. In this case, the child will generate this warning message and proceed with its build in a sequential manner. @end table @node Complex Makefile, GNU Free Documentation License, Error Messages, Top @appendix Complex Makefile Example Here is the makefile for the GNU @code{tar} program. This is a moderately complex makefile. Because it is the first target, the default goal is @samp{all}. An interesting feature of this makefile is that @file{testpad.h} is a source file automatically created by the @code{testpad} program, itself compiled from @file{testpad.c}. If you type @samp{make} or @samp{make all}, then @code{make} creates the @file{tar} executable, the @file{rmt} daemon that provides remote tape access, and the @file{tar.info} Info file. If you type @samp{make install}, then @code{make} not only creates @file{tar}, @file{rmt}, and @file{tar.info}, but also installs them. If you type @samp{make clean}, then @code{make} removes the @samp{.o} files, and the @file{tar}, @file{rmt}, @file{testpad}, @file{testpad.h}, and @file{core} files. If you type @samp{make distclean}, then @code{make} not only removes the same files as does @samp{make clean} but also the @file{TAGS}, @file{Makefile}, and @file{config.status} files. (Although it is not evident, this makefile (and @file{config.status}) is generated by the user with the @code{configure} program, which is provided in the @code{tar} distribution, but is not shown here.) If you type @samp{make realclean}, then @code{make} removes the same files as does @samp{make distclean} and also removes the Info files generated from @file{tar.texinfo}. In addition, there are targets @code{shar} and @code{dist} that create distribution kits. @example @group # Generated automatically from Makefile.in by configure. # Un*x Makefile for GNU tar program. # Copyright (C) 1991 Free Software Foundation, Inc. @end group @group # This program is free software; you can redistribute # it and/or modify it under the terms of the GNU # General Public License @dots{} @dots{} @dots{} @end group SHELL = /bin/sh #### Start of system configuration section. #### srcdir = . @group # @end group # @group CDEBUG = -g CFLAGS = $(CDEBUG) -I. -I$(srcdir) $(DEFS) \ -DDEF_AR_FILE=\"$(DEF_AR_FILE)\" \ -DDEFBLOCKING=$(DEFBLOCKING) LDFLAGS = -g @end group @group prefix = /usr/local # Prefix for each installed program, # normally empty or `g'. binprefix = # The directory to install tar in. bindir = $(prefix)/bin # The directory to install the info files in. infodir = $(prefix)/info @end group ####) @group @end group .PHONY: all all: tar rmt tar.info @group .PHONY: tar tar: $(OBJS) $(CC) $(LDFLAGS) -o $@@ $(OBJS) $(LIBS) @end group @group rmt: rmt.c $(CC) $(CFLAGS) $(LDFLAGS) -o $@@ rmt.c @end group @group tar.info: tar.texinfo makeinfo tar.texinfo @end group @group .PHONY: install install: all $(INSTALL) tar $(bindir)/$(binprefix)tar -test ! -f rmt || $(INSTALL) rmt /etc/rmt $(INSTALLDATA) $(srcdir)/tar.info* $(infodir) @end group @group $(OBJS): tar.h port.h testpad.h regex.o buffer.o tar.o: regex.h # getdate.y has 8 shift/reduce conflicts. @end group @group testpad.h: testpad ./testpad @end group @group testpad: testpad.o $(CC) -o $@@ testpad.o @end group @group TAGS: $(SRCS) etags $(SRCS) @end group @group .PHONY: clean clean: rm -f *.o tar rmt testpad testpad.h core @end group @group .PHONY: distclean distclean: clean rm -f TAGS Makefile config.status @end group @group .PHONY: realclean realclean: distclean rm -f tar.info* @end group @group .PHONY: shar shar: $(SRCS) $(AUX) shar $(SRCS) $(AUX) | compress \ > tar-`sed -e '/version_string/!d' \ -e 's/[^0-9.]*\([0-9.]*\).*/\1/' \ -e q version.c`.shar.Z @end group @group @end group @group tar.zoo: $(SRCS) $(AUX) -rm -rf tmp.dir -mkdir tmp.dir -rm tar.zoo for X in $(SRCS) $(AUX) ; do \ echo $$X ; \ sed 's/$$/^M/' $$X \ > tmp.dir/$$X ; done cd tmp.dir ; zoo aM ../tar.zoo * -rm -rf tmp.dir @end group @end example @raisesections @include fdl.texi @lowersections @node Concept Index, Name Index, GNU Free Documentation License, Top @unnumbered Index of Concepts @printindex cp @node Name Index, , Concept Index, Top @unnumbered Index of Functions, Variables, & Directives @printindex fn @bye
http://opensource.apple.com/source/gnumake/gnumake-126.2/make/doc/make.texi
CC-MAIN-2014-52
refinedweb
55,884
54.32
On Thu, Mar 01, 2007 at 10:31:26AM -0600, Serge E. Hallyn wrote:> we've already had some trouble with nsproxy holding things with> different lifetimes. As it happens the solution this time was to put> the pid namespace where it belongs - not in nsproxy - so maybe moving> this info into nsproxies will be fine, it just rings a warning bell.nsproxy seems the best place to me to hang off the resource controlobjects (nsproxy->ctlr_data[]). These objects provide information likeresource limit etc required by the resource controllers.Alternately we could move them to task_struct which would cause unnecessary duplication of pointers (and wastage of space).-- Regards,vatsa-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2007/3/1/251
CC-MAIN-2016-36
refinedweb
137
64.1
1. Using DebuggerBrowsable Attributes If you want to customize the view on debugger window for any properties during debugging you can easily do it using DebuggerBrowsable attributes. You can apply this attributes for any properties, fields or for Indexer. DebuggerBrowsable attributes constructor takes DebuggerBrowsableState as argument. DebuggerBrowsableState is used to provide the instruction to debugger that how it going to be display in debugger window. We can provide three state for DebuggerBrowsable attributes. 1. Collapsed : If we set DebuggerBrowsableState as collapsed, then debugger window will show the element as collapsed. it’s the default behavior. 2. Never : It will never show the element in debugging window. 3. RootHidden : It will hide the root elements and display the all child items as expanded view. You can read the complete definition of these DebuggerBrowsableState at MSDN Now I am going to demonstrate the use of this DebuggerBrowsable Attributes and DebuggerBrowsableState using a example. Before starting, Let’s consider we are having the following code block. namespace DebuggerDemo { /// <summary> /// Student Info Demo Class /// </summary> class Program { /// <summary> /// Mains the specified args. /// </summary> /// <param name="args">The args.</param> static void Main(string[] args) { List<Student> student = new List<Student>(); student.Add(new Student { Roll = 1, Name = "Abhijit", Marks = 87, Addresses = new Address { Address1 = "add1", Address2 = "add2" } }); student.Add(new Student { Roll = 2, Name = "Abhishek", Marks = 41, Addresses = new Address { Address1 = "add3", Address2 = "add4" } }); student.Add(new Student { Roll = 3, Name = "Rahul", Marks = 67, Addresses = new Address { Address1 = "add5", Address2 = "" } }); student.Add(new Student { Roll = 4, Name = "Sunil", Marks = 91, Addresses = new Address { Address1 = "add11", Address2 = "add122" } }); student.Add(new Student { Roll = 5, Name = "Atul", Marks = 71, Addresses = new Address { Address1 = "add12", Address2 = "add222" } }); student.Add(new Student { Roll = 6, Name = "Kunal", Marks = 71, Addresses = new Address { Address1 = "add12", Address2 = "add222" } }); } } /// <summary> /// Student Class /// </summary> class Student { /// <summary> /// Gets or sets the roll. /// </summary> /// <value>The roll.</value> public int Roll { get; set; } /// <summary> /// Gets or sets the name. /// </summary> /// <value>The name.</value> public string Name { get; set; } /// <summary> /// Gets or sets the marks. /// </summary> /// <value>The marks.</value> public int Marks { get; set; } /// <summary> /// Gets or sets the addresses. /// </summary> /// <value>The addresses.</value> public Address Addresses { get; set; } } /// <summary> /// Address of Students /// </summary> class Address { /// <summary> /// Gets or sets the address1. /// </summary> /// <value>The address1.</value> public string Address1 { get; set; } /// <summary> /// Gets or sets the address2. /// </summary> /// <value>The address2.</value> public string Address2 { get; set; } } } Now, first let’s see, how the normal debugging window behaves. Just put a breakpoint at the end of main method and try to explore the debugging window, you will get debugging window as below picture, which is the expected debugging window view. In the above picture you can see, we are having 6 student object and each one having different value. As Addresses is a different class and used as properties with multiple value, hence it is in the collapsed mode. Now, I want to see all the address along with all other properties with expanded mode and also want to hide the Marks properties. To achieve the above requirement we have to add DebuggerBrowsable attrributes for the Marks and Addresses properties in the Student class Now if you put the breakpoint in the same location and explore the debugging window you will find the debugging window view as below picture So, from the above picture you can easily identify the changes in the debugging window view. 2. Use DebuggerDisplay attribute Here is the second tips. By using DebuggerDisplay attributes you can define how a class or field will be displayed in the debugger window. using DebuggerDisplay you can change the debugger window message and variables that you want to display. If you consider the above code sample and debug the application, by default you will get the below snaps of debugging Here, for each student object you are getting NameSpace.ClassName as display mesaage by default. Now we can customize the display using DebuggerDisplay attributes. DebuggerDisplay attributes constructors take display name as arguments or you can passed named parameter that you to display over there. After made the above changes if you run the same code you will find that custom display message with proper value of parameter that you have given in debuggerdisplay attributes. While using DebuggerDisplay, you have to make sure that you are giving proper field name as argument with in { }. Other wise you will get message like below. In this blog post I have explained how we can customize the debugging window’s view during debugging of our application using DebuggerBrowsable and DebuggerDisplay attributes. This is quite helpful while you are debugging a complex object and you want to make your debug window very simple. Hope above tips will helps you for customizing debugging windows. Cool Hi Abhijit, I would like to know how can we display the value of address in the DebuggerDisplay attributes without the root element. Is it possible Hi Jhon, Hope this post will give your answer Let me know if you have any more issue.
https://abhijitjana.net/2010/08/29/few-tips-on-customizing-debugging-window-view-in-visual-studio/
CC-MAIN-2018-30
refinedweb
842
55.74
keyboard input entered this frame. (Read Only) Only ASCII characters are contained in the inputString. The string can contain two special characters which should be handled: Character "\b" represents backspace. Character "\n" represents return or enter. // Reading typed input from the keyboard // (eg, the user entering his name). // You need to attach this script to an object with // a GUIText component. var gt: GUIText; function Start() { gt = GetComponent.<GUIText>(); } function Update () { for (var c : char in Input.inputString) { // Backspace - Remove the last character if (c == "\b"[0]) { if (gt.text.Length != 0) gt.text = gt.text.Substring(0, gt.text.Length - 1); } // End of entry else if (c == "\n"[0] || c == "\r"[0]) {// "\n" for Mac, "\r" for windows. print ("User entered his name: " + gt.text); } // Normal text input - just append to the end else { gt.text += c; } } } using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public GUIText gt; void Start() { gt = GetComponent<GUIText>(); } void Update() { foreach (char c in Input.inputString) { if (c == "\b"[0]) if (gt.text.Length != 0) gt.text = gt.text.Substring(0, gt.text.Length - 1); else if (c == "\n"[0] || c == "\r"[0]) print("User entered his name: " + gt.text); else gt.text += c; } } }
http://docs.unity3d.com/ScriptReference/Input-inputString.html
CC-MAIN-2015-18
refinedweb
201
61.43
Creating a 2D world in QT with bitmap Hi guys, I'm searching for how to put two images above eachother and then make a part of the foreground image transparant so that it is like a hole and you see the second image? Because I want to use this for my worms game if you shoot in the map that a part of the map is gone like a hole and you see the background image? I already got this: @#include <QApplication> #include <QGraphicsEllipseItem> #include <QGraphicsScene> #include <QGraphicsView> int main( int argc, char **argv ) { QApplication app(argc, argv); QGraphicsScene scene; scene.setSceneRect( -100.0, -100.0, 500.0, 500.0 ); QGraphicsView view( &scene ); view.setRenderHints( QPainter::Antialiasing ); QGraphicsPixmapItem item(QPixmap(":/images/test1.jpg")); QGraphicsPixmapItem item2(QPixmap(":/images/test2.jpg")); scene.addItem(&item2); scene.addItem(&item); view.show(); return app.exec(); }@ But I don't know how to make a part of it transparant? I tried with alpha and setmask but doens't work. Kind regards, Hi! Consider displaying the image ( crater, demolished area or whatever you like), over background. It'll be much easier, and for processor I would say that also much faster. Also, if you wrap your code in @, it will syntax highlight it. Regards, Jake Oké, but then we still need to show only the thing that is destroyed so we still need to see what is destroyed? That isn't possible if we just paste it over the other image because then if there is something small there is always a big crater over the whole place... But thanks for your help already :-) If you are having interactions with entities, than it isn't background any more. Have background completely separate ( for example sky). Land and other entities are then shown over background. Than, if you need to delete part of land, you only calculate are the effect, create new image, and copy that part of background onto the newly created image. Again, you show the image over land. You can also use polygons, for better collision detection etc. Or you could also resize the image of an crater and show it again :). Yes I get that, but my only point is how can I remove something of the first image, I don't find functions to do that in Qgraphics That's because, you can't just create a "hole" in the image. But, you can write your own function, which will change any necessary pixel at that image to transparent.
https://forum.qt.io/topic/13218/creating-a-2d-world-in-qt-with-bitmap
CC-MAIN-2018-09
refinedweb
417
72.05
How. What C++ needs most is operator[]= (lhs version of operator[]). Interestingly, research done by the Visual Studio team revealed that a lot of C++ are dyslexic, and/or have awful spelling, so the same experimental operators can also be enabled with the alternate directive #define __ENABEL_EPSPERAMENTLE_TAPDOLE_ORATORS Call me paranoid, but this looks like an April Fools post that would have been delayed for some reason. Anyway, the explanation for those who didn't follow: since in 2's complement -y is already equal to ~y + 1, then -~y would be equal to ~~y+1, that is to say, y+1, all with two unary operators, without a need for a compiler support. (insert Simpsons image macro here) So the Time Machine has finally been built – amazingly this also works in Visual C++ 6. From 1998! Is it a joke? What about ++ and — operators? ++operator modifies its argument. These do not. -Raymond] Please tell me this is a late April Fools thing. Wow! So much retard… Just wow! Wow! I just… Wow! I… Wow! I have to second Medinoc's April Fools' sentiment, especially since Google returns only this post for __ENABLE_EXPERIMENTAL_TADPOLE_OPERATORS. Since I couldn't find the documentation, an obvious question is why not simply ~-y and ~+y? @Boris: Look at the second part of my message, it explains that ~+y wouldn't work. Borland seems to be in the game, too. I compiled with Borland 5.5 the following code: #include <stdio.h> void main (void) { printf ("%in", -~3); printf ("%in", ~-3); } And it printed 4 and 2. The amazing thing is that the compiler I used is the command line version from 2000. Maybe the famous time machine is already working and nobody told us? s/Borland 5.5/Borland C++ 5.5/ This is hilarious. Will translate to my language and share later. Seems similar to the experimental arrow operator x –> 0 I am so sorry to read about a third of the comments :( Did you not show the downto operator ("–>") recently in this blog, too? All in vain. I rarely comment on these forums, but this just made me scream…well maybe not verbally, but in my head I am. This is atrocious. Readability is to be admired as well as convenience. I neither find this experimental syntax readable nor convenient nor obvious. Parenthesis do clarify intent, and it is readable, obvious, and not a burden on the compiler or developer. If you have to explain your new operators away in a manner that isn't obvious at first glance, it is likely a bad choice. If this is a c++ proposal, I would veto it. (x+1) and (x-1) aren't too hard to write. It only 'helps' in cases of x+1 or x-1. What about x+2 or x-3? Whoops, still need those parens again. This seems like a solution to a problem nobody thinks exists. Are there people out there actually asking for this syntax? really? ++operator that increments by 2. At least this operator can be stacked: -~-~xis (x+2). -Raymond] No thank you. Please note tadpoles are slow creatures thus using an tadpole operator can lead to additional processor instructions to be generated by the compiler being when compared to traditional (x+1) operation. Gotta love the party tricks you can come up with when you properly understand your operators. Reminds me of the minimum and maximum operators GCC had at some point: a <? b (minimum) a >? b (maximum) a <?= b (clamp a to a maximum of b) a >?= b (clamp a to a minimum of b) (they were removed later on) What I'd really like to see is some kind of implicit loop. Something like: vec[$i] += vec2[i]; automatically expanding to: for(size_t i=0; i < vec.size(); i++) vec[i] += vec2[i]; [By the same logic, we are missing a ++ operator that increments by 2. At least this operator can be stacked: -~-~x is (x+2). -Raymond] While there isn't a ++ operator that pre-increments by 2, we do have += . Your stacked example is another reason for not supporting this. It doesn't look well and isn't obvious. I would have to count the tadpoles to understand what is being done. I'd hate to see the tadpole syntax behind (x+1000). Granted I am using an absurd and extreme example, it begs the question: Would there be an upper limit to how many times the operator could be stacked? I haven't yet tried on the VS2015RC yet. -~operator that adds 2, we do have (x+2). So I don't see why the "doesn't generalize beyond 1" argument is so special about -~when it also applies to ++. -Raymond] tadpoles are slimy. Guess this operator is very well named. If this does go beyond experimental, I will FLUNK (with extreme prejudice) any code review I see that contains it. Seriously, "-~" means +1??? (At first I thought there was a typo, "where's the +~?", then I noticed after _very_ close inspection that my dyslexia could kick in full-time.) My compliments on your complements. Not liking it. Would prefer y++1, z–1,x++5,a–8 where the higher precedence ++ — are overloaded /repurposed for this 'feature' vec[$i] += vec2[i]; is already implemented under the name std::valarray. If the positives of valarray outweigh the negatives, that's what it's there for. Anyway, prefix ++ and — do sometimes scale in C++11 according to the famous "Undefined Behaviour and Sequence Points" post on StackOverflow. ++++a – b is okay. So is ++++a + a. What a crazy world we live in. On another note, it's good to see a joke convincing enough that people complain about it :) I wonder if the comments taking this as a serious discussion about experimental features are some sort of meta-level joke (See in particular @David above) :/ Anyhow, it's a cute 2's complement trick :) You had me intrigued there for a while. Then I thought about how this might affect compatibility for existing code bases… Given that tails wiggle I think the tilde should be the tail. It messes up the swimming towards means higher finger string however. I think this is a terrible idea. It's both completely unnecessary and worse worse than the problem it purports to fix. The joke has already been explained in the comments, but people apparently aren't reading them… We've had this in C# for ages. foreach (int number in Enumerable.Range(1,10)) { Debug.WriteLine(string.Format("Value: {0} Tadpole Value: {1}", number, -~number)); } You c++ folks need to get with the times. great idea to distract from the compiler lacking all the C++11 features. By "all the C++11 features", you mean expression SFINAE, some constexpr, and some C99 preprocessor (and the many bugs we know and love)? Hey Microsoft, which way is FAIL swimming? Towards you, or away from you? This gotta be one of my favorite posts ever. It even works if you are compiling from C. Man, Microsoft is really going all out on this one. I've read all the comments and I can't believe nobody's mentioned yet that this isn't guaranteed to work in C. Because, you know, C doesn't guarantee that integers are represented as two's complement. In polite society we're supposed to avoid mentioning this, though, since absolutely nobody likes to deal with ones' complement and sign/magnitude implementations. I don't know if C++ guarantees two's complement. And I dimly recall that C99 (or C11?) may have actually standardized on two's complement now. This is nearly as fun as the "approaches from above" operator, as in x=100; while (x –> 0) { doSomething(x); } MSVC hasn't shined in C++ standard compatibility, IMHO I'd concentrate more on getting "the old new things" right This also has the flaw of not supporting tadpoles which disagree on the direction (there is no clear winner as the standard says) int x = ~-~y % 10; // Where do I go? Omg Joshua: What on earth would []= do?? I can't believe this post. The tadpole operators have been implemented in gcc for like, forever. But you're making it sound like it's some novel idea? How dare you. Will this compose well with tail call optimizations or might they render the result ranine? I would also like two second Josh's compliment. You're joking, right? Especially the (x+1) case — you use minus as part of an expression to *add* something to a value??? So we are going to replace a well known mathematical notation, used since the 17th century, with an obscure notation which will only be used by C++. Sorry but I think this is a bad idea This is crazy. Not only because when these tadpoles grow up, your code will literally start JMPing all over the place, but they will also eat all your source flies. Give it 5 minutes. This is awesome. > heavily nested code that is hard to read Has someone, in fact, tried getting "-~x" into some code by claiming that it's more readable? Not sure I trust myself to correctly guess where the line between truth and leg-pulling is in this post; code reviews surprise me sometimes. This is a terrible idea. These so called 'tadpoles' look nothing like actual tadpoles. They should be called sea snakes. Also doesn't work on my CDC 6600. My first question – what is actually wrong with (y+1) in the first place? I would seriously question that bracketed code is hard to read and would like to see the study that was performed to confirm this. Given that many people who write code have some form of math training from high school and likely have seen operator precedence before. I would also like to introduce the topic of cognitive loading – I would make the assertion that the tadpole symbolic would increase visual cognitive load (they are not a normal everyday thing, so the reader has to mentally scan for them [which increases cognitive load]. High cognitive load leads to an increased defect count as a side effect of increased complexity. So I would question the value of these features – certainly I would not like them in complex production code. Even I am finding it hard to discern sarcasm from honest anguish and disgust in the comments! ~-s I'm all for the new operator… but IMHO the "head" and the "tail" of the tadpole as described above is **Exactly** backwards! twitter.com/…/602885761264799744 (pic of a tadpole for reference) Perl has also supported this since 1994 in perl 5. This is just another example of other languages catching up to Perl. #!/usr/bin/perl use integer; print ~-5, "n", -~5, "n"; However with Perl it hasn't been experimental. It's been a built in part of the language for over 20 years now. /s Some people here need to revisit C/C++ operator syntax. Kudos to Raymond for making me think, then making me laugh. For extra fun, see what happens when the tadpole is applied to a float. Seems to work with VC++ 6, too! To enable it, you must use #define __ENABLE_ESCOTIC_SQUIGGLE_OPERATORS Looks like the name is still in flux, though. Why on earth would you make it so easy to mistype "-~" and "~-", and on top of that force people to learn that minus will ADD SOMETHING???? Why not "+~" and "-~" to mean add and subtract? Because all the major JavaScript engines are written in C++, JavaScript has also inherited the tadpole operators from the underlying C++ implementation. It is still an experimental feature, but you can enable it with the "use tadpole"; directive: (function() { "use tadpole"; var n = 3; console.log( "3 + 1 = ", -~n ); console.log( "(3 – 1) * (3 + 1) ", ~-n * -~n ); })(); Replacing parentheses with tadpoles in C++ because they're considered hard to read??- That's crazy. What's next??- Removing the parentheses between a function's name and its arguments? With parenthesis pair highlighting feature in Visual Studio, I found using parenthesis to spell out precedence cleaner. Excellent post, I have question. I see these work on y, z, and n. Will you add support for i, j, and k? The thing that bothers me is that up till now, if we wanted to tell the compiler "apply the ~ operator, then apply the – operator," we simply wrote -~i. But now we'll have to write -(~i) to make clear that we don't want the new tadpole operator. So we're not really getting away from parentheses. We're just adding them where we formerly didn't need them. P.S. Raymond, I've been reading your blog almost since the beginning, and this is one of my all-time favorite posts. This is jargon. C++ needs changes that increase clarity — this just makes it less accessible. Perl calls these the "Inchworm-On-A-Stick" operators and they're considered secret (and listed as such in documentation). =( )= std::valarray doesn't let you do stuff like this: vec[$i] = table[(vec2[i] & 0xff) + (page << 8)]; printf(vec[$i]); vec[$i] = state += 0.5f * (vec[i] – state); vec[$i][$j] = vec2[j][i]; vec[$i] = sinf(i * (2 * 3.14159286f / 256.f)); Let's just write our programs in Ogh! It's more readable than this. *Ook!, not ogh! (!) So to cover a minor need you propose a disruption to the established idea that addition isassociated with the plus and negation with a minus? Instead you think it is reason enough to introduce a new and conflicting idea of tadpoles and direction of swim? Just because you can do something, you should really learn to self-censure _before_ you publish. My favorite is the BangBang operator, for those times when you just need to slam a value into being boolean. In other words, this is really an April pseudo-Fools' post, which only gets away with the tardiness because it's true in fact though not in spirit? x+~ or x-~ I'd say appear better and is easier to read. Great. Let's name operators after what they look like. NEVER name operators after what they do. For the sake of consistency please rename the following operators accordingly: + the cross operator – the line operator * the star operator / the slope operator Well, it seems that the Visual Studio team (or should I say MS policy in general?) reached a point where there is no room for improvement and, instead of releasing a service pack for bug fixes, decided to release another product, adding useless features that nobody will ever use. Even if you don't admit it publicly, Raymond (it's obvious why), you can't argue the fact that the expression (x+1)%y is MUCH easier and cleaner for humans to read. P.S.: It is possible to write a program in a single "line" of code. Should we adapt this way of writing code? OMG. Looks like the dismissed IE team put hands to C++. WTF, who is mastermind behind this? Although Python also supports those operators, I'm concerned about F# programmers. I mean, in F# the "~-" syntax is used to overload the negation operator. msdn.microsoft.com/…/dd233204%28v=vs.100%29.aspx It would be a shame if such users were scared off because of these operators. At least, the ~~~— and —~~~ syntaxes would be more familiar to them… Fortunately enough, they seem to work too. Is this a joke? What's wrong with: y = ++y % 10; ? @meme: The problem with y = ++y % 10 is that it's not valid C or C++. It has undefined behavior. (Also, yes, this is a joke, and it has been explained in the comments.) >> Also doesn't work on my CDC 6600. Can some one complement this ? @Mark Y: It would provide a convenient way to assign to an element of a class that emulates an array. Contrived example: A class for a variable-length array of pointers to reference-counted elements. Because your storage is raw pointers, you can safely realloc() it, but you have to take care to fix up the refcounts when you store a pointer to the array. Because there's no operator[]=, you have to write an operator[] that returns a reference to a helper type that does the underlying fixup in its assignment operator and has a typecast operator to retrieve the pointer. I dunno, these "tadpole" operators seem much less readable than parentheses. Shouldn't making code more readable involve getting rid of funky operators that don't speak for themselves? "Tadpole swimming toward a value makes it bigger", just reading that makes me wanna write some esoteric language that is based on ASCII art, although, I'm pretty sure something like that already exists. It's misleading, especially when the plus one operator uses the minus sign to perform the action. Clever girl..! Somebody has surely said this before me, but I'm not gonna read all those comments: this is basic two's complement math, and an absolutely epic troll. -x = ~x + 1 ~x = -x – 1 So -(~x) = ~(~x) + 1 = x + 1 ~(-x) = -(-x) – 1 = x – 1 This will NOT work on old obscure architectures that don't use two's complement math. Incidentally, your posting system doesn't work in Firefox. These operators are old. They work because "~" means "not". not 1 = 0xFFFFFFFE = -2, -~1 = -(not 1) = -(-2) = 2. ~-1 = not -1 = not 0xFFFFFFFF = 0. This has to be some sort of bad joke right? Why on earth do the designers of C++ constantly strive to make the language more terse and ever less human readable? This is just ridiculous. I like it! Now what we need is a "Custard Pie" operator. It's like the Tadpole operators but it only allows accretion! :D @Joshua You are aware, I hope, that ordinary overloads of [] can be used on the lhs, as long as the result is a (non-const) reference? Very nice. I'm still trying to figure out which is funnier though: the original post, or the comments (the one worrying about backward compatibility was *especially* good). Such moronic idea just shows the level of C++ team. Instead of graving C++ they play with the corpse, joining red nose and ledigaga's panties. Great job! I'd really like to see the results of whatever social experiment you are *really* running. This sounds a lot more like something Scott Adams would do. This is awesome!! I'm adopting this for our company coding guidelines, like, yesterday. I did note something really curious, though: apparently this operator has the unique property that intervening spaces between the '~' and the '~' (the tad and the pole???) are allowed, such that the expression "-~y" is interprested exactly the same as "- ~ y". Is that intentional? Anyone have some suggestions as to which style is better / more readable? …but, but these tadpole operators don't seem to swim well in an unsigned medium! This tadpole looks more like a spermatozoon. I also think it's a better name because it contains more letters anyway it's the right way to obtain (x+1) on architectures such as risc/mmx/gpu and has been so since the stone age. well actually it's (y==y)+x lol @Rob G: I refer the honorary gentleman to Neil's most excellent explanation. Raymond, this is _just_ what we need… more IOCCC fuel… NOT!. Yeah, I checked it out and it looks like you kooks have also added this to C as well. Next thing you know there will be crazy operators like "*++" and "–%" showing up in our beloved language. When will the madness stop?! Bjarne Stroustrup, call your office! @Stephan Leclercq: You win, here's one internet. Has anyone mentioned that the Tadpole operators could be written as ??– or -??- @Nargil: *Double Facepalm* so ridiculous. Can't believe it. so many things we need in c++ and visual studio and as if they have nothing else to do but add some stupid ideas that nobody needs and are already there like ++ –. really late april fools. just go discover internet and read what people complain and miss about in c++ and visual studio … you have to be dumb or ignore it if you don't do it. give us options to turn ALL your 'smart' and 'vs will auto do it for you' so we can turn it off or on (like mouse click in VS puts cursor in virtual space even though all virtual space is disabled etc.). give us decent libraries where we can customize easily and have freedom (unlike mfc, gdi etc) we want freedom c++ once had Gotta love you Raymond! There I said it :) Based on the comments, clear proof that the 10x programmer existence is true. (or maybe that the 0x programmer exists). Raymond, did you intend to perform a social experiment here? I always knew that some percentage of posters operated in write-only complain-only mode, but this… this is spectacular proof of just how high that percentage is. Wow Raymond, this is cruel :). I'm torn between thinking the comments are hilarious and hurting inside a bit for the people who think someone would design this as a language feature (and not an accident of two's complement math). Perhaps you should link to the second post before someone takes things too far? "The tilde is the tadpole's head and the hyphen is the tail." The ~ character far better represents a tadpole tail. How do you F that up? You seem to enjoy demonstrating how many of your readers are fools. mk: "You seem to enjoy demonstrating how many of your readers are fools." I think he's allowing them to demonstrate that for themselves. Brad: "Perhaps you should link to the second post before someone takes things too far?" There's ample information in the comments right here. "The problem with y = ++y % 10 is that it's not valid C or C++. It has undefined behavior." False. @McBucket I don't think you quite understand the full scope of the word "demonstrate". Me: "False." Never mind. (I'm an idiot too.) All, We need to get this post on Slashdot and watch mayhem ensue. The operator I'd like to see is the long operator-> "i->foo()" is equivalent to "(*i).foo()" "i–>foo()" is equivalent to "(**i).foo()" "i—>foo()" is equivalent to "(***i).foo()" etc. y = ++y % 10 is actually valid C++11. See stackoverflow.com/…/14005508 Both ISO C and ISO C++ do not guarantee this to work, since 1’s complement and signed magnitude representations for integral types are explicitly allowed. Since C++11, y = ++y % 10 does not have undefined behavior due to sequence point rules for built-in operators (btw, there is actually nothing normative about "sequence point" in C++11, it is superseded by…/n2239.html). It is actually y = (y += 1) % 10, and compound assignment in C++11 has stronger guarantee than previous versions. See ISO C++11 5.17, and wg21.cmeerw.net/…/issue637 for details. Nevertheless, in ISO C it is still undefined because of more than one side effects on the same scalar between sequence points (even if ISO C11 adopted the "sequence before/after" wording). "<strike>PERL</strike> C/C++: indistinguishable from line noise."
https://blogs.msdn.microsoft.com/oldnewthing/20150525-00/?p=45044
CC-MAIN-2018-05
refinedweb
3,912
74.29
I am going to create a graph representation using a adjacency list. How do I represent the vertices and the connected vertices??? I am thinking of using a set of hash map to represent all the vertices and linked list to represent the adjacency vertices. But my problem comes to how do I connect from the vertices in the hash map into the linked list?? Say that in the hash map I have the vertices number from 0 up to n as the key, and what is the value here?? how to create a graph with adjacency list Page 1 of 1 help me to start 2 Replies - 51009 Views - Last Post: 20 February 2008 - 08:10 AM #1 how to create a graph with adjacency list Posted 19 February 2008 - 09:25 PM Replies To: how to create a graph with adjacency list #2 Re: how to create a graph with adjacency list Posted 19 February 2008 - 10:31: how to create a graph with adjacency list Posted 20 February 2008 - 08:10 AM here's the code that I have so far: I am just worried that this won't work, any suggestions?? public class Graph { private ArrayList<Integer> vertices; private LinkedList<Integer>[] edges; private int numVertices = 0; public Graph(int numVertices) { this.numVertices = numVertices; vertices = new ArrayList<Integer>(); edges = (LinkedList<Integer>[]) new LinkedList[numVertices]; for (int i = 0; i < numVertices; i++) { vertices.add(i); edges[i] = new LinkedList<Integer>(); } } public void addEdge(int source, int destination) { int i = vertices.indexOf(source); int j = vertices.indexOf(destination); if (i != -1 || j != -1) { edges[i].addFirst(destination); edges[j].addFirst(source); } } I am just worried that this won't work, any suggestions?? Was This Post Helpful? 1 Page 1 of 1
http://www.dreamincode.net/forums/topic/43783-how-to-create-a-graph-with-adjacency-list/
CC-MAIN-2017-30
refinedweb
292
60.85
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards The header <boost/checked_delete.hpp> defines two function templates, checked_delete and checked_array_delete, and two class templates, checked_deleter and checked_array_deleter.. A particularly troublesome case is when a smart pointer's destructor, such as boost::scoped_ptr<T>::~scoped_ptr, is instantiated with an incomplete type. This can often lead to silent, hard to track failures. The supplied function and class templates can be used to prevent these problems, as they require a complete type, and cause a compilation error otherwise. namespace boost { template<class T> void checked_delete(T * p); template<class T> void checked_array_delete(T * p); template<class T> struct checked_deleter; template<class T> struct checked_array_deleter; } Tmust be a complete type. The expression delete pmust be well-formed. delete p; Tmust be a complete type. The expression delete [] pmust be well-formed. delete [] p; template<class T> struct checked_deleter { typedef void result_type; typedef T * argument_type; void operator()(T * p) const; }; Tmust be a complete type. The expression delete pmust be well-formed. delete p; template<class T> struct checked_array_deleter { typedef void result_type; typedef T * argument_type; void operator()(T * p) const; }; Tmust be a complete type. The expression delete [] pmust be well-formed. delete [] p; The function templates checked_delete and checked_array_delete were originally part of <boost/utility.hpp>, and the documentation acknowledged Beman Dawes, Dave Abrahams, Vladimir Prus, Rainer Deyke, John Maddock, and others as contributors.
http://www.boost.org/doc/libs/1_58_0/libs/core/doc/html/core/checked_delete.html
CC-MAIN-2015-22
refinedweb
245
57.37
In this article, you will see how to draw line plots using Python’s seaborn library. The seaborn library allows you to visualize data with the help of different types of plots such as scatter plot, box plot, bar plot, count plot, heat maps, line plots, etc. This article will be focusing on line plots. A line plot is used to plot relationships between two numeric variables. The line plot shows how values on the y-axis in a 2-dimensional graph are affected by an increase or decrease in the values on the x-axis. Table of Contents - Installing Seaborn And Importing Required Libraries - Plotting Line Plots Using NumPy Arrays - Plotting Line Plots Using Pandas - Removing the Confidence Interval from a Line Plot - Plotting Multiple Line Plots using Pandas - Changing Colour of Line Plots - Plotting Dashed Line Plots - Adding Markers to Line Plots Installing Seaborn And Importing Required Libraries To install Python’s seaborn library, open your command terminal and run this command: pip install seaborn It is pertinent to mentions that the seaborn library is built on top of Python’s Matplotlib library. Therefore, you must have installed matplotlib before you can work with seaborn. The following script imports the required libraries: import seaborn as sns import matplotlib.pyplot as plt import numpy as np %matplotlib inline sns.set_style("darkgrid") The following script increases the default plot size to 10 inches wide and 8 inches high. plt.rcParams["figure.figsize"] = [10, 8] Plotting Line Plots Using NumPy Arrays You can plot a line plot by passing two numpy arrays containing values for variables that you want to plot on the x and y axes. Here is an example: x = np.arange(-20,21) print(x) y = np.array(x * x) print(y) sns.lineplot(x=x,y=y) The script above creates two numpy arrays: x and y. You can change the names of the arrays if you want. The x array contains integers from -20 to 20. The y array contains squares of all the items in the x array. Next, the lineplot() method of the sns (the alias for the seaborn library) is called and the x array is passed to the x attribute, while the y array is passed to the y attribute of the lineplot() method. Here is the output for the above script: [-20 -19 -18 -17 -16 -15 -14 -13 -12 -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] [400 361 324 289 256 225 196 169 144 121 100 81 64 49 36 25 16 9 4 1 0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400] The output line plot shows a typical square function. Plotting Line Plots Using Pandas You can also draw seaborn line plots using Pandas dataframes. To plot a seaborn line plot using Pandas dataframe, the names of the dataframe columns that you want to plot on the x-axis and y-axis are passed to the x and y attributes of the lineplot() function, respectively. In addition, you need to pass the dataframe name to the data attribute of the lineplot() function. The seaborn library comes with built-in datasets that you can import into a Pandas dataframe. The following script imports the “flights” dataset into the “flight_data” dataframe and prints the first five rows of the dataset. flight_data = sns.load_dataset("flights") flight_data.head() The output below shows that the dataset contains three columns: year, month, and passengers. Let’s plot a seaborn line plot which displays the relationship between years and number of passengers. sns.lineplot(x='year',y='passengers', data = flight_data) The output shows that from the year 1949 to 1960, the number of passengers who traveled by air increased almost linearly. The shaded region around the line plot is the confidence interval. The next section shows how you can remove the confidence interval from a line plot. Removing the Confidence Interval from a Seaborn Line Plot To remove the confidence interval, you need to pass “None” as the value for the ci attribute of the lineplot() function as shown in the following script: plt.rcParams["figure.figsize"] = [10, 8] sns.lineplot(x='year',y='passengers', data = flight_data, ci = None) The output below shows that the confidence interval has now been removed. Plotting Multiple Line Plots using Pandas If you are using a Pandas dataframe, you can plot multiple line plots, one each for every unique value in a categorical column. For instance, if you want to plot 12 line plots, one for each month of the year, you need to pass the name of the column i.e. “month” as a parameter to the hue attribute of the lineplot() function. Look at the following script for reference. sns.lineplot(x='year',y='passengers', hue = 'month', data = flight_data, ci = None) In the output below, you can see 12 line plots, one for each month from the years 1949 to 1960. The output shows that overall the most number of people travel in the months of June, July, and August which is understandable since this is the vacation period. Changing Color of Line Plots You can also change the colors of your line plots. To do so, you need to pass a palette value to the palette attribute of the lineplot() function. The following script sets the value of the palette attribute to bright. sns.lineplot(x='year',y='passengers', hue = 'month', data = flight_data, ci = None, palette = "bright") In the output below, you can see that the colors have been updated. To see the complete list of color palette options, check the official documentation. Plotting Dashed Line Plots In addition to changing colors, you can also plot dashed line plots to differentiate multiple line plots. To do so, you need to pass the categorical column containing values for multiple line plots. It is important to mention that the style attribute doesn’t work if you have more than 6 categories, in your Pandas column. Therefore, for the sake of demonstration, the following script removes the records for the months of January to June. flight_data6 = flight_data[(flight_data['month']!= "January") & (flight_data['month']!= "February") & (flight_data['month']!= "March") & (flight_data['month']!= "April") & (flight_data['month']!= "May") & (flight_data['month']!= "June")] Next, to pot dashed line plot, pass “month” as a value for the style attribute as shown below. sns.lineplot(x='year',y='passengers', hue = 'month', data = flight_data6, ci = None, palette = "bright", style = 'month') The output below shows a dashed line plot. The legends for different types of dashes is also shown at the top-left corner of the plot. Adding Markers to Line Plots Finally, to enhance the visibility of your line plots, you can also add markers to your line plots. You need to pass the marker symbol to the marker attribute of the lineplot() function. The following script plot multiple line plots with circular markers. sns.lineplot(x='year',y='passengers', hue = 'month', data = flight_data6, ci = None, palette = "bright", marker = "o") In the output below, you can see that markers have been added at each point where x and y axes intersect.
https://pdf.co/blog/introduction-to-seaborn-line-plots
CC-MAIN-2021-49
refinedweb
1,210
67.38
User lying behind a user control is aggregation rather than inheritance. If you need only an enhanced or stripped-down version of an existing control, you are better off writing a new class that inherits from it. A Web control has varying degrees of complexity. You can write a simple Web control that outputs HTML code, but you can also write a more complex composite control that dynamically creates and lays out a tree of child controls. You can also devise templated controls, which let you specify user-defined ASP.NET templates so that you can populate parts of the user interface. All ASP.NET controls can be extended through inheritance. Depending on how complex you want the new control to be, this extension can be extremely simple. Earlier in this chapter, I built a user control made up of a label and a text box that you could independently program. To make it fully programmable, I exposed both the label and the text box as public properties of the Label type and TextBox type, respectively. If you want your users to have total control of the child elements, this approach works just fine. When you want to implement finer control on the properties of the constituent label and text box controls, you must understand that user controls have a structural limitation: constituent elements are aggregated in the all-encompassing container control. Constituent controls can either be exposed as-is as a property on the parent control or be completely hidden by the parent control. In the latter case, a child control could expose one or more of its properties through proxy properties defined on the parent control, as shown in Figure 5-9. If you expose constituent elements of the user control, you cannot filter the child control’s programmability any longer on a control-by-control basis. By contrast, if you don’t expose child controls, you have to explicitly define any element of the interface as a distinct property. Basically, you’re faced with rebuilding a brand new programming interface rather than extending or enhancing the existing one. Let’s rewrite the LabelBox control as a custom control, that is, a class that inherits from TextBox. public class LabelBox : TextBox { public LabelBox() {} public LabelBox(String strLabel, String strText) { Label = strLabel; Text = strText; } public String Label = ""; public String LabelCssClass = ""; public Boolean LabelOnTop = false; The preceding code snippet shows the two constructors of the new class, one of which takes no argument. The LabelBox control has three properties: the text, the CSS style of the label, and a Boolean flag denoting whether the label has to be rendered above the control or next to it. A private instance of the Label class represents the label, which is created and configured in the override of the CreateChildControls method. private Label m_lbl; protected override void CreateChildControls() { m_lbl = new Label(); if (LabelOnTop) m_lbl.Text = Label + "<br>"; else m_lbl.Text = Label + " "; m_lbl.Font.Bold = true; m_lbl.CssClass = LabelCssClass; // Set standard styles for the text box this.Font.Name = "verdana"; this.Font.Size = FontUnit.XSmall; this.Style["border"] = "solid 1px black"; } The control renders itself through the Render method. In the implementation of this method, the new control outputs any HTML code that forms its user interface. In our example, you should render the label control and call the Render method of the base class. protected override void Render(HtmlTextWriter writer) { m_lbl.RenderControl(writer); base.Render(writer); } A custom control is a compiled element that you register with pages by using assemblies. You use the @ Register directive and are required to specify the assembly name (without an extension) and the namespace. You don’t have to indicate a tag name because this defaults to the class name. You indicate instead the namespace prefix: <%@ Register TagPrefix="expo" Namespace="BWSLib" Assembly="BWSLib.Controls.LabelBox" %> The control’s declaration in the body of an ASP.NET page looks nearly identical to the declaration of a standard control or user control: <expo:LabelBox Because the LabelBox control derives from TextBox, you can use any of the base properties in addition to your custom ones. Figure 5-10 shows the LabelBox control in action in a sample page. The full source code for the LabelBox.cs and TestLabelBox.aspx applications is available on the companion CD. In the previous chapters, I illustrated a variety of the DataGrid control’s features, including pagination, sorting, templates, and in-place editing. When you are writing code and the DataGrid control is involved, in most situations you need to handle the events raised by the control. You do this by writing C# or Visual Basic code to flesh the <script> section out (the so-called code noodle that I mentioned earlier in this chapter when discussing ASP.NET lasagne code). As you know, the objective of the DataGrid control is to show and sort pages of records that are formatted in columns of data. In spite of this declared intention, you still have to write several lines of code. Why? Because we are talking for the most part about Web controls. A Web control is a software component that springs to life when the ASP.NET page is processed on the Web server by a special module, the HTTP runtime. A Web control generates output composed of HTML tags and elements. The user’s client interacts with the HTML output of the Web controls. When an event occurs on the client (for example, a new grid page is requested), the page is posted back to the server. Next, the HTTP runtime restores the previously saved state, gives Web controls a chance to make this state consistent with their further expectations, and resumes server-side processing. When the DataGrid control is the Web control in question, a number of issue arise: For performance reasons, the grid’s data source is not persisted across multiple requests of the same page. No information about the current sort expression is stored. No information about the columns that actually form the grid is stored. As a result, to have a grid that moves through pages and sorts by columns, you have to write code every time you do the following: Retrieve and bind the data source. Sort the data source. Set the new page index. In most cases, the code you need is boilerplate code, but you do still have to write it or, even worse, cut and paste it from existing pages. When you need to consider reusability, the natural question is, why not embed all or part of this code you’ve written into a new, wrapped-up, ad-hoc DataGrid control? The answer is, why not? No architectural or design issues prevent you from implementing an enhanced grid, but you should be aware of some critical decisions you must make. One of these decisions is particularly important: how you retrieve, and possibly sort, the data. As discussed in Chapter 2, scalability is a serious issue for Web applications, so you’ll need to answer the retrieval and sorting question based on the specific features of your application. You might want to cache the data source in Cache or Session rather than using XML files. You might also decide that, all in all, you can afford reloading data page by page by using super-optimized stored procedures. If your requirements are to display the most current data, you can even employ ADO server cursors to provide an up-to-date view of data at any time. You can see that there is more than one right way to store and retrieve data. What works for desktop applications does not work as well over the Web. That’s why the DataGrid Web control has been designed with such a flexible programming interface. All the aforementioned considerations do not necessarily lead to the conclusion that code-behind is the only way to lighten the code burden for an ASP.NET page based on the DataGrid control. However, to write an enhanced, ready-to-use version of the DataGrid control, you must decide how to solve the three issues just mentioned: paging, sorting, and data retrieval. To handle paging, in most cases, you don’t need to do much more than set the new page index in the PageIndexChanged event, and you can hard-code this behavior in the new control. As for sorting, a common practice is to sort data using a DataView class, which ultimately provides the grid’s contents. You can hard-code in the grid’s internal code the foundation for auto-reverse and safe multifield sorting. With respect to data retrieval in a new control, I believe that this is an application-specific choice. In light of this, the best you can do is fire an event that alerts the code to refresh the grid’s view. Paging and sorting will be provided automatically but you are still responsible for binding the code to data. Such a control (named PowerGrid in our example) can have a few extra properties and require only the following code for the output shown in Figure 5-11: <expo:PowerGrid <Columns> <asp:BoundColumn <asp:BoundColumn <asp:BoundColumn </Columns> </expo:PowerGrid> Wrapping up the DataGrid control in a new, more sophisticated control allows you to embed your graphical preferences—colors, fonts, borders, styles, or whatever—in the control’s code. The PowerGrid class inherits from DataGrid and sets a lot of visual attributes in the class constructor. It also hooks up a few events, as the next bit of code shows: ItemCreated += new DataGridItemEventHandler(OnItemCreated); SortCommand += new DataGridSortCommandEventHandler(OnSortCommand); PageIndexChanged += new DataGridPageChangedEventHandler(OnPageChanged); The ItemCreated handler adjusts the header to reflect the current sort expression and updates the pager bar. SortCommand is responsible for preparing the sort expression, taking into account directions and fields. PageIndexChanged cares about the index of the new page. Both selecting a new page and sorting by a new field cause the grid to retrieve the data and refresh the view. The code for refreshing the view, however, is left to the calling page and is not implemented in the control’s code. Whenever the grid needs to rebind and refresh the view, it fires a custom event: UpdateView. The following code declares the UpdateView event. In our example, you can use the EventArgs class, because all the information the calling page needs is available through the DataGrid control. public event EventHandler UpdateView; protected virtual void OnUpdateView(EventArgs e) { if (UpdateView != null) UpdateView(this, e); } The UpdateView event is fired to rebind and refresh the content of the page. The following code shows how this is done for the PageIndexChanged event: void OnPageIndexChanged(Object sender, DataGridPageChangedEventArgs e) { CurrentPageIndex = e.NewPageIndex; // Fire page-level event: UpdateView OnUpdateView(EventArgs.Empty); } The event is handled by the client, just as any other event would be. <expo:PowerGrid The signature of the handler is standard. public void UpdateView(Object sender, EventArgs e) { UpdateDataView(); } Unless the data you retrieve from the source is already sorted, the UpdateView handler needs to know about which sort expression to use. Unfortunately, the DataGrid control does not provide this information. In Chapter 2, when I discussed the sorting capabilities of the DataGrid control, I remarked how tricky auto-reverse sorting can be. In our example, you need to compare the current sort expression with the new sort expression and just reverse the order if the expressions match. I used the ViewState collection of the DataGrid control to persistently store this information. The PowerGrid control accesses this ViewState slot by using a read-only property named SortExpression: public String SortExpression { get { return (String) ViewState["CurrentSortExpression"]; } } The following code looks like some code you have already seen in previous chapters that refresh grid views. The key functionality here is the retrieval of the sort expression via a custom property of the PowerGrid control. The full source code for the PowerGrid.cs, TestPowerGrid.cs, and TestPowerGrid.aspx applications is available on the companion CD. private void UpdateDataView() { DataSet ds = (DataSet) Session["MyDataSet"]; DataView dv = ds.Tables["MyTable"].DefaultView; dv.Sort = grid.SortExpression; grid.DataSource = dv; grid.DataBind(); } The PowerGrid control takes advantage of the reusability features of ASP.NET to provide you with a powerful and specialized control that uses minimal code. By combining custom controls with code-behind techniques, you can have lightly coded ASP.NET pages that separate code and layout and offer a good level of reusability.
http://etutorials.org/Programming/Web+Solutions+based+on+ASP.NET+and+ADO.NET/Part+II+Smart+and+Effective+Data+Access+and+Reporting/Code+Reusability+in+ASP.NET/Writing+Custom+Controls/
crawl-001
refinedweb
2,072
53.81
Using objects that implement IDisposable The common language runtime's garbage collector reclaims the memory used by managed objects, but types that use unmanaged resources implement the IDisposable interface to allow the memory allocated to these unmanaged resources to be reclaimed. When you finish using an object that implements IDisposable, you should call the object's IDisposable.Dispose implementation. You can do this in one of two ways: With the C# usingstatement or the Visual Basic Usingstatement. By implementing a try/finallyblock. The using statement The using statement in C# and the Using statement in Visual Basic simplify the code that you must write to create and clean up; using System.IO; public class Example { public static void Main() { Char[] buffer = new Char[50]; using (StreamReader s = new StreamReader("File1.txt")) { int charsRead = 0; while (s.Peek() != -1) { charsRead = s.Read(buffer, 0, buffer.Length); // // Process characters read. // } } } } Imports System.IO Module Example Public Sub Main() Dim buffer(49) As Char Using s As New StreamReader("File1.txt") Dim charsRead As Integer Do While s.Peek() <> -1 charsRead = s.Read(buffer, 0, buffer.Length) ' ' Process characters read. ' Loop End Using End Sub End Module Note that; using System.IO; public class Example { public static void Main() { Char[] buffer = new Char[50]; { StreamReader s = new StreamReader("File1.txt"); try { int charsRead = 0; while (s.Peek() != -1) { charsRead = s.Read(buffer, 0, buffer.Length); // // Process characters read. // } } finally { if (s != null) ((IDisposable)s).Dispose(); } } } } Imports System.IO Module Example Public Sub Main() Dim buffer(49) As Char '' Dim s As New StreamReader("File1.txt") With s As New StreamReader("File1.txt") Try Dim charsRead As Integer Do While s.Peek() <> -1 charsRead = s.Read(buffer, 0, buffer.Length) ' ' Process characters read. ' Loop Finally If s IsNot Nothing Then DirectCast(s, IDisposable).Dispose() End Try End With End Sub End Module(. This may be your personal coding style, or you might want to do this for one of the following reasons: To include a catchblock to handle any exceptions thrown in the tryblock. Otherwise, any exceptions thrown by the usingstatement are unhandled, as are any exceptions thrown within the usingblock if a try/catchblock isn't present..(); } } } Imports System.Globalization Imports System.IO Module Example Public Sub Main() Dim sr As StreamReader = Nothing Try sr = New StreamReader("file1.txt") Dim contents As String = sr.ReadToEnd() Console.WriteLine("The file has {0} text elements.", New StringInfo(contents).LengthInTextElements) sr IsNot Nothing Then sr.Dispose() End Try End Sub End Module You can follow this basic pattern if you choose to implement or must implement a try/finally block, because your programming language doesn't support a using statement but does allow direct calls to the Dispose method. See also Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/using-objects
CC-MAIN-2019-09
refinedweb
484
61.22
Learn Java in a day Learn Java in a day  ...; Variables In this section, you will learn about Java variables..., "Learn java in a day" tutorial will definitely create your Variables in Java Variable in Java In this section you will learn about variables in java. First... values persist. In Java, all variable must be declared first before... identifier = value; This is the basic syntax of declaring variable in java. You Variables in Java 0.0d In this section, you will learn about Java variables. A variable... variables) Java Primitive Data Types Data Type Description... when a program executes. The Java contains the following types of variables Day for the given Date in Java Day for the given Date in Java How can i get the day for the user input data ? Hello Friend, You can use the following code: import..."); String day=f.format(date); System.out.println(day instance variables - Java Beginners instance variables instance variables Rose India Online/Onsite HTML Training Course can be written and edited on any type of computer. Rose India offers a six-day... Rose India Online/Onsite HTML Training Course Welcome to Rose India Online/Onsite HTML Training Course. As you learn learn i need info about where i type the java's applet and awt programs,and how to compile and run them.please give me answer java protected variables java protected variables Can we inherit the Java Protected variable..? of course you can. But I think what you mean is "override"? Is tha so? There are some restriction Get first day of week Get first day of week In this section, we will learn how to get the first day of ..._package>java FirstDayOfWeek Day of week: 7 Sunday Environment variables Environment variables How to set Environment Variables in JAVA 6 n Tomcat 6 CLASSPATH, JAVA_HOME, PATH, & CATALINA variables plzzz plzz help me Variables 0.0d In this section, you will learn about Java variables. A variable... variables) Java Primitive Data Types Data Type Description... when a program executes. The Java contains the following types of variables to learn java to learn java I am b.com graduate. Can l able to learn java platform without knowing any basics software language. Learn Java from the following link: Java Tutorials Here you will get several java tutorials Learn java Hi, I am absolute beginner in Java programming Language. Can anyone tell me how I can learn: a) Basics of Java b) Advance Java c) Java frameworks and anything which is important. Thanks Variables Help Please! Java Variables Help Please! Hi, I just started with java and i need help with my school project, this is it so far: import java.util.*; public class ProjectCS { public static void main(String[] args) { Learn Online Java Training Learn Online Java Training It would be safe to say that the advancements... their online class. Learn online Java programming training ensures that students... into the online java notes on Rose India without any delay, a facility Java get Next Day Java get Next Day In this section, you will study how to get the next day in java...() provide the string of days of week. To get the current day, we have used  Core Java Jobs at Rose India Core Java Jobs at Rose India This Core Java Job is unique among software industry as we will be providing the change to learn advance along Java assignment of variables - JSP-Servlet Java assignment of variables .... var strName="Tarunkanti Kar"; .... ... I want to access the script variable in java variable please give the code Marvellous chance to learn java from the Java Experts Marvellous chance to learn java from the Java Experts A foundation course on Java technology which... for Software Development on Java Platform. Learn to implement interface variables - Java Interview Questions interface variables why interface variables are final? explain me with good program example?? i knw why the variable is static but,i dont knw why it is final by default? thanks in advance.. Hi Friend, The final take variables in text file and run in java take variables in text file and run in java I have a text file which have variables 17 10 23 39 13 33 How to take "17"in java code Static final variables - Java Beginners Learn Java in 24 hours find it easy to understand the example. To learn Java in 24 hours, you must clear...Learning Java programming language is very simple especially for developers and programmers having knowledge of C, C++. They can have a basic idea for Java Find the Day of the Week Find the Day of the Week This example finds the specified date of an year and a day... time zone, locale and current time. The fields used: DAY_OF_WEEK Give me some java Forum like Rose India.net Give me some java Forum like Rose India.net Friends... Please suggest some forum like RoseIndia.net where we can ask question like here. Thanks How to learn programming free? , Here are the tutorials: Beginners Java Tutorial Learn Java in a day Master Java...How to learn programming free? Is there any tutorial for learning Java absolutely free? Thanks Hi, There are many tutorials Jobs at Rose India Java Jobs at Rose India Currently we are looking for Java professionals to deploy.... Senior Java Developer Rose India is looking javascript where to declare variables variables Javascript variable always be declared inside script tag... variables in Javascript JavaScript where to declare variables To declare... = Java; age = 7; document.write(name); document.write("<br>"); document.write Assignment of variables - JSP-Servlet Assignment of variables How to assign a javascript variable value to a java string variable. .... var strName="Tarunkanti Kar"; .... ... I want to access the script variable in java variable please give how to initialise variables in jsp? how to initialise variables in jsp? 1. what is the problem in the folloing jsp <%@ page import="net.viralpatel.struts.helloworld.form.LoginForm" %> <%@ page language="java" contentType="text/html Jsp Scope Variables - Java Interview Questions application scope variables in JSP?Am not understanding where which scope.....please, explain me clearly with good java code of each variable..Thanks in advance. Hi Friend, JSP Scope Variables: Page scope-It makes Senior Java Developer Jobs at Rose India Senior Java Developer Jobs at Rose India Rose India is looking for Senior Java Developers to immediately start on Java projects. The senior Java assignment of variables - JSP-Servlet to a java string variable. .... var strName="Tarunkanti Kar"; .... ... I want to access the script variable in java variable please give... (javaScript) to the server (java), you have to submit a form, or use a the name php list all server variables PHP list All server variables is present in the $_SERVER environment variables. Using the for each loop all keys and values can be accessed Example...:\XEClient\bin;C:\XEClient\bin;C:\WINDOWS\System32;c:\Program Files\java Setting environment variables Setting environment variables Hi, I have a java web application running on jboss AS. 1)How do I set the environment variable in jboss? 2)How do I get the environment variable from jboss How do I set environment variables from Java? How do I set environment variables from Java? How do I set environment variables from Java Learn Java - Learn Java Quickly Learn Java - Learn Java Quickly Java is an object oriented programming language... and universities. The most important feature of Java is that it can run Jobs at Rose India wish to learn as per your convenience. Rose India provides customized and cost-effective onsite Java, Struts, Hibernate, Ajax and other related software... Jobs at Rose India   Getting Previous, Current and Next Day Date Getting Previous, Current and Next Day Date In this section, you will learn how to get previous, current and next date in java. The java util package provides Learn Java for beginners Beginners can learn Java online though the complete Java courses provided at Roseindia for free. This Java course covers the whole of Java programming... a programmer from basic to advance level of Java. New programmers prefer to learn Java Passing java variables from JSP to Servlet - return null values Passing java variables from JSP to Servlet - return null values I want to pass some variables from a JSP page to a servlet. These variables are from... to get java variables from JSP tp servlet? If there is some error in my code Rose India C, Turbo C++, Borland C++, Visual C++, Visual Basic, Java. Java Certification Training Online have strong skills in Java. Rose India has the technology experts who train... challenges they will face in Java once they enter the Corporate IT field. Rose...Java Certification Training Online So, why should we go for a online How to learn Java with no programming experience Tutorial, how to learn Java provides you the opportunity to you about.... Apart from these, topic How to learn Java with no programming experience... program. Certainly, we hope the topic how to learn java will definitely create Swing Jobs at Rose India Swing Jobs at Rose India  ... skills and solid software development experience in Java, using Swing components. Job Description for Java Swing developers: You will be designing JDBC Training, Learn JDBC yourself programming language of java then fist learn our following two tutorials: Learn Java in a Day and Master... JDBC Connection Pooling Accessing Database using Java and JDBC Learn how Servlet tutorials: Learn Servlet Easily - Index page of Servlet tutorials at Rose India website. Learn Servlet Life...Servlet is a Java programming language class, which helps to improve server... applications, without the performance limitations of CGI programs. Moreover, Java Jsp Scope Variables - JSP-Interview Questions Jsp Scope Variables what is the importance of page,session,request n application scope variables in JSP?Am not understanding where which scope.....please, explain me clearly with good java code of each variable..Thanks basic java - Java Beginners basic java oops concept in java ? Hi Friend, Please visit the following links: Thanks Java Java what are the functionality of Static variables ? Why Static variables are used in a class? Are Static variables and class variables same jsp scope variables - JSP-Interview Questions application scope variables in JSP?Am not understanding where which scope.....please, explain me clearly with good java code of each variable..Thanks in advance. Hi Friend, Scope variables are basically used for sharing Java Tutorial and install Java on your windows machine Go through our Learn Java in a Day tutorial. Then learn our Master Java In A Week tutorial... to learn java programming language from scratch then this the best place to start How to learn Quartz? How to learn Quartz? Hi, I have to make a program in Java is scheduling account processing Job after an interval of 20 minutes. Tell me the tutorial. Thanks Hi, Please check the tutorial Hello World Quartz Java Tutorial will learn about what is Java, Download Java, Java environment set up... identifiers, access specifiers, Variables in Java, Java literals, Java operators... of Java component like, class, methods, variables, etc. There are two types Java Training - Corporate Java Training Java Training - Corporate Java Training Learn through Java Training: Java is one of the most... Java technology. Course Content day wise: Day 1 Getting java java how can use sleep in java which book learn of java language
http://www.roseindia.net/tutorialhelp/comment/51140
CC-MAIN-2014-52
refinedweb
1,897
56.55
Microapps, are miniature web-based applications exposed through a REST interface. The idea is that you build your application by wiring together REST calls to servers that implement things like tagging, authentication, full-text search, whatever you need, and putting your application-specific code on top. There are a couple of advantages here. One of this biggest ones is that you don’t necessarily have to write, maintain, or even host the server that provides your functionality. Partitioning out functionality like tagging and user login makes it easier to keep the cruft and code mixing to a minimum. Microapps are portable and reusable across projects, and due to being accessed through HTTP they are even portable across frameworks and even languages as well. Scaling is easier and potentially cheaper, as you can identify and scale only the bottlenecks. You can even provide optional features whose availability is independent of your own programs. One of TurboGears more useful features is that applications built with it are pretty easy to distribute and install. By default you are a few variable assignments and a single console command away from putting the whole mess up on PyPi for people to download and use. The installation end of things is about as simple, with niceties like automatic dependency resolution, optional installations, console and even gui script creation, basically more features for the ‘download-and-install’ process than you thought were even possible. This makes it easier in a lot of cases to use an existing library than to build your own. Microapps function in about the same way, except they make the installation, configuration, hosting, and maintenance processes entirely optional. The only drawback is that you have to muck around with all this integration work parsing urls, encapsulating data, sending it in whatever format you are looking for. Yuck, who wants to go through all of that just to let someone else do all the implementation and hosting? Ok, well, most people, but it is still a pain. But wait a second, we already have an easy-to-use installation system in TurboGears through easy_install. We already have a request-dispatching system built in to CherryPy, and if you are writing anything with TurboGears you already know how that works. Why not make a CherryPy controller that acts as a proxy for your microapp? If you do it right your microapp can be hosted locally or across the world without your program being any the wiser. To show how this whole idea works, lets look at a sample app. We’ll work with a tiny little application that does nothing but serve up random quotes from a database. Since following along is fun, go ahead and download the tgquotes project from PyPi. (see, I told you distributing TurboGears apps was easy) It comes with two controller classes QuoteController and QuoteProxyController. Both classes have the exact same signature, (an index method and a default method that returns the number of quotes given as its first argument) one just happens to do remote lookups on a specified server and return the data it finds there. Here is how you would serve up random quotes if you are hosting the quote server locally: Your controller will look something like this: import tgquotes class Root(controllers.RootController): quote = tgquotes.controllers.QuoteController() I know, standard fare here. Who wants to host their own quotes-server-and-database anyways? Besides, the TG community really got behind this quote server thing and has a much better database set up on theirs. Well, no problem there, lets just switch to use their server. Here’s how: Here’s your controller now: import tgquotes class Root(controllers.RootController): quote = tgquotes.controllers.QuoteProxyController('') Not too shabby, eh? Now whenever you access ‘quote’ on your root controller (either internally or externally) you will get the exact same data and interface, except the data is pulled from instead of the controllers that are local to your site. And when example.com’s quote database becomes wildly popular they get to deal with the scaling issues, you just keep grabbing the data. Of course, this approach isn’t entirely without downsides. Since you are gathering data remotely there is always a chance that the server will go down. The requests in tgquotes are being made in-thread, so a slow server impacts you as well. If you plan on doing something actually important (like user logins) you will probably want to get a secure connection. It also turns proxied controller method calls into a direct impact on your server’s bandwidth bill. That said, in the right situations this can really be a win. Want to build a cross-site login system? Simple, make the login its own application on its own server and have each system look there for user authentication details. Need to take one of those sites out of the cross-site login and work locally? Don’t rewrite it from scratch, host a local copy and enjoy having all of your method signatures work in exactly the same way.
http://www.turbogears.org/1.0/docs/TurboGearsAndMicroApps.html
CC-MAIN-2015-22
refinedweb
845
62.38
This article is aimed towards demonstrating various features of the .NET framework to build a very simple tool to analyze various activities on a web site. This tool will ultimately provide complete web log analysis. But for starters we have focused mainly on reporting the various kinds of browsers that are being used by clients to visit a web site. This tool demonstrates the use of following namespaces and classes. System.Diagnostics System.Web System.Xml System.Data System.Data.SqlClient System.Drawing.Imaging System.IO There are some other namespaces used to accomplish the goal, but those are the ones that are pretty much used commonly in all the .NET application, like System and System.Diagnostics. System System.Diagnostics When a client makes an HHTP request to a web server, the browser information is sent in the header of the request. The Page class exposes this information through the Request property. This property returns a HttpRequest object. This class exposes the browser capabilities through a number of properties like UserHostAddress, Browser, etc. For more information check the documentation for the HttpRequest class. Page Request HttpRequest UserHostAddress Browser HttpRequest When a client connects to the web site, we pass on the HttpRequest object to the WebLogManager class. This class extracts the browser information and packages it in a XML document. This document is sent to the data access layer. The WebLogDBManager class implements the data access to SQL Server. The AddInfoToDatabase method of this class accepts the XML document containing the browser information and puts a the record into SQL Server table. We could have directly passed the HttpRequest object to the data access layer but that would defeat the purpose of making the data access layer independent of source of information. WebLogManager WebLogDBManager AddInfoToDatabase Although HttpRequest provides most of the information about the browser's capabilities, there still is some information that has to be generated based on some preliminary information. For example, HttpRequest does not tell anything about client side capabilities e.g. if the browser supports getElementById or the all property. This information has to be found based on the browser name and version. By defining a schema for the XML document we can pass all kinds of information to the data layer to save in the database. And the most important point is that the HttpRequest object cannot be remoted. If the data access layer is running on a remote machine then HttpRequest can't be marshaled across boundaries. HttpRequest getElementById all A lot of this information is already available in the web log of IIS. However, saving the information into SQL Server sets you free from any changes in the web log file formats and the tools used to analyze them. .NET/GDI+ has provides some very powerful APIs that have made displaying the statistical information in graphics forms like bar graphs, graphs, pie-charts, etc. very easy. This makes use of the classes defined in System.Drawing namespace to convert the data stored in SQL Server into nice looking Pie Chart. Graphics class exposes a lot of methods to draw various graphics objects like rectangles, curves, lines, pie charts etc. We have made extensive use of these methods to draw the pie charts and render on the client's browser. The technique is pretty simple. Like our good old Win32 applications, we need a device context to draw. The Graphics object attaches itself to a device context for this purpose. The Image class provides this device context. First we created a Bitmap object and then attached this device context to Graphics object. // Create the bitmap object to draw our chart. chartBmp = new Bitmap(100, 100); //Create the Graphics object. chartGraphics = Graphics.FromImage(chartBmp); The FillPie method is used to draw the Pie Chart. When the drawing of chart is done, we save the bitmap to a file in JPEG format. This is accomplished by calling the Save method from the Image class. FillPie Save Image //Save the graphics as Jpeg file.<o:p> chartBmp.Save(jpgFile, ImageFormat.Jpeg); The second parameter to this method is the graphics file format. You can specify a number of file formats. Look at the documentation for this method to check what all formats are supported. After the graphics file has been created in the server, now comes the final task of showing it on the client's browser. We have made use of ASP .NET server side controls for this purpose. We have a put a asp:Panel control on the page. This control is equivalent to "div" tag. Initially this panel is set to hidden mode. When user clicks on "Run" button, the server side processing starts and after the chart graphics file has been added, an asp:Image control is added to this hidden panel. After that a table showing legends for the information is added by creating an asp:Table control and adding it to the hidden panel. After all the information controls have been added to the panel, the visible state of the panel is toggled to make it visible. asp:Panel asp:Image asp:Table ASP .NET runs in the ASPNET account, which is created as a local account on installation and belongs to the User Group on the machine. This account does not permissions to write in /inetpub folder. Therefore when you try to create the JPEG file in the virtual folder of your web site, the creation of FileStream Access permission to xxx.jpg denied. To fix this problem make sure that you provide write permissions to ASPNET account for the folder where you want to save the JPEG file. Since the information for analysis changes very rapidly make sure that you set the content expiration to a very small time or preferably to immediate. These ways the server will not server the cached pages to the client requests. And also tell the client browsers not to cache the page. Response.Cache.SetExpires(DateTime.Now.AddSeconds(1)); Response.Cache.SetNoServerCaching(); We have included the script file to create BrowserCapLogs table in the SQL Server. Just create a WebLogs database on SQL Server 2000. And use the script. Also you may have to change the connection string in WebLogDbManager class to use the right credentials for accessing your database. We have used localhost and sa account with no password. BrowserCapLogs WebLogDbManager For any comments or suggestions, feel free to contact us at softomatix@pardesiservices.
https://www.codeproject.com/Articles/1774/Web-Log-Analysis-Tool-Using-ASP-Net-and-Csharp
CC-MAIN-2017-34
refinedweb
1,072
56.35
Hello everyone. I'll soon need to implement bandwidth control at my organization, and so I finally gave an in-depth look at delay pools. While they'll be sufficient at the beginning, they could be vastly improved with (I hope) a not-too-vast effort. Here is a modest proposal on how they could be improved. I won't start coding right away on this, but it might be my next project. Notice, there might be gross inaccuracies in the presented facts since I haven't looked at the code in-depth, just a cursory glance. SPECS ===== 1) There should be two types of pools: "simple" delay pools and "class" delay pools. The former should be a single pool of a given depth, fill rate and initial fill. The latter should be a group of pools that depend on some requests' attribute, such as the source IP, destination IP or authenticated username. All pools in the class should have the same parameters. When a request comes in, the defining parameter is matched against already-defined pools in the class. If a pool is found, the request is subject to it. If not, a new "simple" pool is defined and initialized and the request is made subject to it. 2) Pool definition and request assignment should be separate. A pool should be identified by an unique name. 3) Pools are assigned to requests by ACL matches. Since inaccuracies are acceptable in bandwidth management, fast acl matches can be used thus avoiding to impact request processing flow 4) A request can be impacted by many different pools. If so, the bandwitdh it gets assigned is the least of all the pools it must obey. The bandwidth it uses is subtracted by all the pools it is subject to. 5) Bandwidth management need not be very strict. Granted bandwidth packets could be so arranged: pool <= 0: grant 0 and wait pool > 0: grant a minimum value (i.e. 1 Kb) for network efficiency reasons, and cause pool to go sub-zero pool > minimal packet: grant whatever available. CONFIGURATION ============= Configuration format and parameters which match the specs could be: -delay_pool pool_name depth fill_rate initial_fill defines a new pool and assigns parameters to it -delay_pool_class pool_name by_attribute depth fill_rate initial_fill defines a new pool class. The namespace is shared by the simple and class delay pools by_attribute can be one of "by_src", "by_user", "by_dst", "by_dstdomain", "by_dstregexp" etc. Attributes can initially be a subset of the acl types, and expand as development proceeds. -delay_pool_assign pool_name acl [acl ...] Assigns a request matching ALL the mentioned ACLs to a given delay pool. All conditions are evaluated for each request enable assigning a request to multiple pools. IMPLEMENTATION ============== simple delay pools could come for free or almost. Only caveat is that they should be refcounted to correctly handle pools undefined on reconfiguration. class delay pools are quite more complex, but should be rather modular. Optimization could lead to lots of differences in implementation of the locator algorithms for the different pool defining attributes. Those pools should obviously also be refcounted, and pose two extra issues: garbage collection and reconfiguration. A pool in a class can be garbage-collected when it's completely filled and it's not referenced by any active connection. Reconfiguration could be handled by moving all active pools to a "temporary" and undifferentiated storage area where they'll be garbage-collected when their refcount goes to 0. The request flow path should be altered to match acls and to determine what pools it must be subject to. A request should contain a list of pointers to pools it's subject to and take care to properly refcount them. When sending data it should scan the pools list twice: one to discover what is the bandwidth it has available, and another to adjourn the pools' contents. Ideas? Thoughts? Am I just on too much wine? -- /kinkie (kinkie [at] tiscalinet [dot] it) Random fortune, unrelated from the message above: I met a wonderful new man. He's fictional, but you can't have everything. -- Cecelia, "The Purple Rose of Cairo"Received on Sun Apr 14 2002 - 17:12:41 MDT This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:15:10 MST
http://www.squid-cache.org/mail-archive/squid-dev/200204/0293.html
CC-MAIN-2018-51
refinedweb
711
63.39
Hallo mein Freund, Jetzt ist es Zeit für ein weiteres tutorial, lasst uns beginnen! If you are following this website then you should know that recently I am exploring py.checkio.org which is supposed to be a website which allows us to create game with python program and I am on the way to get into the game creation stage after solving some basic python questions on this site, but starting from today our plan has changed a little bit, I am going to start a brand new python project while continue to discover that above site at the same time, so you can expect more daily articles from this site soon on both the previous topic as well as the new topic, I might create a few more python related topics at the same time in the future as well. The project which we are going to create together is the video editing/streaming application project written in python with the help of the ffmpeg multimedia framework. I am going to create the project in windows os but if you are the mac or Linux user then you can just slightly modify the program code to suit your need, no problem at all. FFmpeg is a complete, cross-platform solution tool uses to record, convert and stream audio and video. We are not going to use this tool directly on the command prompt instead we will use tkinter to create the video editing application user interface which wraps around this FFmpeg tool which can then be used to perform various video editing/streaming processes. Before we start you need to download and install FFmpeg on your computer first, you can search google for the keyword ‘FFmpeg’ and then download and install the tool on your computer through below search box. We can now create a simple tkinter’s interface with a button which when we click on it will open up a file dialog box where we can then select a video to edit, in this example we will change the size of the video from it’s previous size of 1280 x 720 pixels to 1920 x 1080 pixels, the aspect ratio of the original video will be kept and the program will create a new video based on that new video size. The entire program is as followed. from tkinter import * from tkinter import filedialog import os import subprocess win = Tk() # Create instance win.title("Manipulate Video") # Add a title win.resizable(0, 0) # Disable resizing the GUI win.configure(background='black') # change background color # Create a label aLabel = Label(win, text="Click on below button to select a video", anchor="center", padx=13, pady=10, relief=RAISED,) aLabel.grid(column=0, row=0, sticky=W+E) aLabel.configure(foreground="white") aLabel.configure(background="black") aLabel.configure(wraplength=160) # Open a video file def openVideo(): fullfilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Video file", "*.mp4; *.avi ")]) # select a video file from the hard drive if(fullfilename != ''): dir_path = os.path.dirname(os.path.realpath(fullfilename)) os.chdir(dir_path) f = '1920.mp4' # the new output file format subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=1920:-1', f]) # resize the video with ffmpeg action_vid = Button(win, text="Open Video", command=openVideo, padx=2) action_vid.grid(column=0, row=1, sticky=E+W) action_vid.configure(background='black') action_vid.configure(foreground='white') win.mainloop() Because the FFmpeg tool runs on command prompt we will need to use the python’s subprocess module to pass in the FFmpeg’s command arguments as a list to the call method of the subprocess module. Basically what the below line does is simply to resize the original video then saves it as a new video. subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=1920:-1', f]) # resize the video with ffmpeg The time spends to resize the video will depend on the file size of the original video. Friend, have you downloaded the application which we have created together previously? If not here is the link. At the moment the application does not offer many selections of the video size for us to change, in the next article, we will include the selection box on this same application so we can then select any video’s size which we prefer. (If you have not yet subscribed to this site then click on the red bell button to subscribe).
https://www.cebuscripts.com/2019/01/07/codingdirectional-resize-the-video-with-python-program/
CC-MAIN-2019-13
refinedweb
734
56.89
//************************************** // Name: Abstract Method in Java Demonstration // Description:A very simple program that I wrote using Java to demonstrate how to use Abstract Method. I am using Netbeans in writing this simple program as my IDE. I am currently accepting programming work, it project, school programming projects , thesis and capstone projects, IT consulting work and web development work kindly contact me in the following email address for further details. //************************************** AbstractExample.java package oop_java; /** * NetBeans IDE 8.2 * @author Mr. Jake R. Pomperada * March 30, 2018 Friday * Bacolod City, Negros Occidental */ abstract class Books{ //create an abstract class abstract void read(); //create an abstract method } class eBooks extends Books{ //extend the parent class void read() { //override the abstract method System.out.println("Abstract Method in Java Demonstration"); System.out.println("\n"); System.out.println("Overrides the abstract method !!!"); System.out.println("I am just reading my favorite book."); System.out.println("\n"); } } public class AbstractExample { public static void main(String[] args){ eBooks MyBook =new eBooks(); MyBook.read(); } } Other 71 linesky@sapo.pt on 5/8 By Jake R. Pomperada on 2/19 (Screen Shot) By Jake R. Pomperada on 2/12 By linesky@sapo.pt on 1/29 By Jake R. Pomperada on 12/20 By honeydatax@sapo .pt on 11/21 By honeydatax@sapo .pt on 11/13 By honeydatax@sapo .pt on 11/12 By honeydatax@sapo .pt on 10/30.
http://planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=7218&lngWId=2
CC-MAIN-2019-35
refinedweb
231
53.17
Y-Not-CTF - SmS Secret Secure Server - Crypto We’re given a ssh username, server ip and public key using ECDSA, along with a _very secure RNG_ python script used to generate the ECDSA key. Exploiting a weakness in the RNG, we can enumerate all possible keys and find the private key to log on the server. Description Here is a very secure PRNG used to generate a secret ECDSA key, you’ll never find it. And we were given a SSH command to log as bob on some local server, as well as two files: id_ecdsa.pub and RNG.py Details Points: 600 Category: crypto Validations: 2 Solution The RNG script is really simple: def genECDSAPriv(x): #To seed with 128 bits of /dev/random) #To increase security, throw away first 10000 numbers for j in range(10000): print pow(g,j,p) x = pow(g,x,p) for i in range(keyLength*8): x = pow(g,x,p) if x > ths: ret += 2**i return ret As you can see, the 10000 first exponentiations are just thrown away, which makes it relatively slow to run. This RNG is actually based on the “Blum-Micali PRNG” and relies on the difficulty to solve the discrete logarithm problem in sufficiently big cyclic groups. The id_eddsa.pub file is simply an SSH key in OpenSSH format: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPZc7m3goxEkZjlzoa0f7dxod7vUT+NzSMMeyLl2YNLVvuNJ7WUel8VPkK3Q8hMLFMsKrIUCWJNHN5Lg3/edo1c= bob@mastercrypto So, we know that we are dealing with an ECDSA keypair on the nist’s curve P256. This curve has yet to be broken by cryptographers and is considered to provide a security of roughly 128 bits. So we won’t try to crack the public key, but if we could find a flaw in the PRNG used, then we could regenerate the private key used to produce this public key. Now, let’s get back to that random number generators and let’s run it a few time: import random import RNG test = random.getrandbits(128) print RNG.genECDSAPriv(test) And… Wow, this is slow: 7 seconds to compute the “random” value 52771737243107955452457115236761733307198355296235460844025885616021236394942! Let’s see what it generates if we run it a few times: import random import RNG for _ in range(10): test = random.getrandbits(128) print RNG.genECDSAPriv(test) and after an excruciating wait, we obtain: 56693337563003437446218818861732426020291386135230851162098750444697716348746 61755780926568637559237858671300217521743991821674967487710711470887002474632 115792089237316195423570985008687907853269984665640564039457584007913129639935 113386675126006874892437637723464852040582772270461702324197500889395432697493 42140956620106037719007025061362410290117084990469006220870867405983591508719 33803171228513919274316948727372377942283792080283425228211081800947874839530 77305463456367066925437428445119014850274586342778776595296254111629978560855 0 86242713400159816434894901935210166936780685400435707600778167226305422994341 77305463456367066925437428445119014850274586342778776595296254111629978560855 Mhhhh, this looks really bad: there is a 0 value, which makes no sense and even worse, there is twice the same value 77305463456367066925437428445119014850274586342778776595296254111629978560855! The probability of obtaining either is theoretically around , already negligible, but to have both that’s inconceivable. So we’ve confirmed that this RNG script is seriously broken, but to which extent? What could go wrong with Blum-Micali PRNG? Well, obviously, if one were to chose a generator value which is not a primitive root modulo , then it wouldn’t be a generator of the whole cyclic group , but would instead only generate a small subgroup . This can be empirically tested by simply trying to generate the first 10000 elements of the group with the following script: elements={1} for i in range(10000): x = pow(g,i,p) elements.add(x) print "There are %d elements in this set." % len(elements) with open("subgroup.txt","w") as f: for x in elements: f.write("%d\n" % x) Which return us a nice little: “There are 673 elements in this set.” when run! So, we’re effectively working in a small subgroup! Let’s simply then run our RNG on all possible elements of this subgroup, thus obtaining all possible RNG’s output values: def genRNG(x):) for i in range(keyLength*8): x = pow(g,x,p) if x > ths: ret += 2**i return ret values={0} with open("subgroup.txt", "r") as f: for x in f.readlines(): values.add( genRNG(int(x.strip(),10)) ) print "There are %d different values in this set." % len(values) with open("allrng.txt","w") as f: for x in values: f.write("%d\n" % x) And we thus obtain 318 different values that could have been outputted by the RNG script when generating the ECDSA private key, so let us try them all! Now the hardest part begins, we must generate private keys given a secret integer and compare their public counterpart with the OpenSSH public key we were given at first! Let’s use Cryptography.io to do it, after digging through their online documentation, we end up with a script doing it all for us: from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes, serialization from cryptography.hazmat.primitives.asymmetric import ec, utils curve = ec.SECP256R1() algo = ec.ECDSA(hashes.SHA256()) # we read the file as being an OpenSSH key and return it as a public key: def readPubKey(filename): with open(filename, 'r') as f: data = f.read() return serialization.load_ssh_public_key(data,default_backend()) def testInt(inp): try: privateKey = ec.derive_private_key( inp, pubnum.curve,default_backend()) if pubnum.public_numbers()==privateKey.public_key().public_numbers(): return True, privateKey else: return False, None except: return False, None pubnum=readPubKey("id_ecdsa.pub") with open("allrng.txt","r") as f: for number in f.readlines(): ok, priv = testInt(int(number.strip(),10)) if ok: print 'Success, the secret int is:', number data = priv.private_bytes(encoding=serialization.Encoding.PEM, format=serialization.PrivateFormat.TraditionalOpenSSL, encryption_algorithm=serialization.NoEncryption()) with open("id_ecdsa.priv","w") as f2: f2.write(data) print "Written to id_ecdsa.priv:","\n",data break And we get the result: Success, the secret int is: 74797630232915057348943966868030142897776888372961994633834332904430502239733 Written to id_ecdsa.priv: -----BEGIN EC PRIVATE KEY----- MHcCAQEEIKVd9V0q76rpV31XSrvqulXfVdKu+q6Vd9V0q76rpV31oAoGCCqGSM49 AwEHoUQDQgAE9lzubeCjESRmOXOhrR/t3Gh3u9RP43NIwx7IuXZg0tW+40ntZR6X xU+QrdDyEwsUywqshQJYk0c3kuDf952jVw== -----END EC PRIVATE KEY----- Along with a file “id_ecdsa.priv”, which we can use to authenticate as Bob on the SSH server we were given at the beginning. Challenges resources are available in the resources folder
https://duksctf.github.io/2017/11/17/YNOT17-SmS-Secret-Server.html
CC-MAIN-2021-25
refinedweb
971
56.55
in reply to Re^18: can't import using exporterin thread can't import using exporter] Does it work in your 5.8? As you've written it, of course it doesn't work. but that's what I get for cuttin down.... can't type straight. ;-) ok so export var and not it's contents... I get warning, but it appears to work. Ok... but that's what I have in the larger program. But that's not the error it puts out... > less testme.pl #!/usr/bin/perl -w use strict; package _Debug;{ our $Filename2Fields = 1; our $HaltOnError = 2; our $DEBUG_OPS = 3; our @EXPORT=qw( Debug $DEBUG_OPS $Filename2Fields $Halt_on_Error); use parent 'Exporter'; sub Debug($$) { my ($what, $str)=@_; if ($what & $DEBUG_OPS) { print STDERR $str; } } } package Transcode_plug;{ import _Debug; sub get_fieldAR_from_filename($) { my $file=$_[0]; Debug($Filename2Fields,"get_fieldAR_from_filename($file)\n"); } } Transcode_plug::get_fieldAR_from_filename('dummy'); # vim: ts=2 sw=2 Ishtar:/Audio/scripts> testme.pl Global symbol "$Filename2Fields" requires explicit package name at ./t +estme.pl line 16. Execution of ./testme.pl aborted due to compilation errors. [download] But if I turn off strict and turn off warnings, it runs just fine. *cough*. How is this not a bug in perl 5.14? Crying wolf is bad, we learned that in grade.
http://www.perlmonks.org/index.pl?node_id=960611
CC-MAIN-2016-40
refinedweb
211
67.86
Hi, It seems there are more and more people with this issue. We had (and still have) a similar issue but with the 1024 limit of prefixes per namespace and it seems 9.4 won’t solve it. I’m afraid it is not only a “ pathological Apache application” case. (for example Apache Axis client), but nowadays it is easily to get even 1024 clients (client apps) which uses the same web service. It is true, that client should use default prefix as soon as possible to decrease payload size at least, but it is difficult to teach clients. What is more, this is another XML bomb. Assuming that a bad guy knows that the server app is Java and uses XSL he is almost sure that it is Saxon based app, so it can try to generate a request with 1024 prefixes for the same namespace and try disabling the system with even one single request. There are several ways to try avoiding that but all of them are partial: - Have several JVMs - Have a pool of TransformerFactories They doesn’t solve issue with “xml bomb with 1025 prefixes for one namespace”. Smd proposes to write a preprocessor which normalize the prefixes…. I though it is a complete solution, but it is not. There is even net.sf.saxon.om.PrefixNormalizer in the Saxon distribution, but the code is not complete (it doesn’t handle attributes with prefixes). I wonder why it is not completed, but after writing a complete one I realized that it doesn’t solve the problem completely. Even assuming that prefixes for all namespaces are unique it doesn’t solve the problem with element/attribute values. If the element/attribute value is a QName, it is a problem. For example: <n0:a xmlns: <n1:b xmlns:n1:someLocalName</n1:b> </n0:a> After normalization looks like: <n0:a xmlns: <n0:b>n1:someLocalName</n0:b> </n0:a> When XML Schema for the xml says that the b element is “string” – both xmls are valid. If the b element is described as “QName” – the normalized xml is not valid. An approach to change the values is not a solution, because the value which looks like a QName may be a string which shouldn’t be changed. The reason of this mail is to prove that: - It must be fixed, because it is dangerous - The only solution is to fix the issue in Saxon by: a) Making NamePool non global. Currently there is one NamePool per Saxon Configuration, maybe there should be as many NamePools as Controllers. b) Rewrite NamePool so that it uses “long” for addresses or consider Map representation instead of matrix. -- Regards, Mateusz Nowakowski From: Michael Kay [mailto:mike@saxonica.com] Sent: Tuesday, August 09, 2011 10:37 AM To: Scott Robey Cc: saxon-help@lists.sourceforge.net Subject: Re: [saxon] Saxon HE ArrayIndexOutOfBoundsException: -32768 in NamePool.allocateCodeForPrefix(NamePool.java:483) As it happens the limit on the number of namespace prefixes has gone in the 9.4 development branch, though there's still a limit of 32K namespace URIs and a limit of 1K prefixes per URI. It would be nice to have limits that are sufficiently high that no-one will ever reach them, but I'm afraid I've decided several times over the years that redesigning the name pool for the benefit of this one rogue application really can't be justified. I vaguely recall writing an XMLFilter on one occasion to normalize the namespace prefixes, but I can't lay my hands on it now. Using a static TransformerFactory itself seems a poor design choice. Sorry for the inconvenience! Michael Kay Saxonica------------------------------------------------------------------------------ Get a FREE DOWNLOAD! and learn more about uberSVN rich system, user administration capabilities and model configuration. Take the hassle out of deploying and managing Subversion and the tools developers use with it.------------------------------------------------------------------------------ Get a FREE DOWNLOAD! and learn more about uberSVN rich system, user administration capabilities and model configuration. Take the hassle out of deploying and managing Subversion and the tools developers use with it. saxon-help mailing list archived at saxon-help@lists.sourceforge.net saxon-help mailing list archived at saxon-help@lists.sourceforge.net
https://sourceforge.net/p/saxon/mailman/attachment/4E4FDC1C.5090907@saxonica.com/1/
CC-MAIN-2016-40
refinedweb
701
63.19
Creating Reusable Content in ASP.NET - Techniques for Creating Reusable Content - Building a ComboBox User Control - Using the ComboBox Control - Populating the ComboBox Control - Summary In This Chapter Techniques for Creating Reusable Content Building a ComboBox User Control Using the ComboBox Control Populating the ComboBox Control BEST PRACTICE: Editing the Connection String Summary Although the general public's view of computer programmers as a breed apart might be less than complimentary, we are really no different from any other people when it comes to having a hatred of dull, repetitive work. When writing code, experienced programmers are constantly on the lookout for ways to encapsulate chunks that are reusable and save the effort of having to write the same code repeatedly. Subroutines and functions are obvious examples of ways to do this within a single application; components, DLLs, and .NET assemblies provide the same kind of opportunities across different applications. However, when building Web pages and Web-based interfaces for your applications, it can be difficult to choose the obvious or the most efficient approach for creating reusable content. Traditional techniques have been to read from disk-based template files and to use disk-based include files that rely on the server-side include feature of most Web server systems. Of course, the use of external code in the form of COM or COM+ components, and in ASP.NET, the use of .NET assemblies, is also prevalent in Web pages. However, the complexity of the plumbing between COM/COM+ components and the host application has never really been an ideal approach when working with Web pages that have extremely short transitory lifetimes on the server. These components work much better when instantiated within an executable application where they have a longer lifetime. In ASP.NET, the ideal solution from a component point of view is to use native .NET managed code assemblies. These are, of course, the building blocks of ASP.NET itself, and they provide the classes that implement all the ASP.NET controls we use in our pages. However, the .NET Framework provides several techniques that are extremely useful and efficient and that can provide high levels of reuse for interface declarations and runtime code. Techniques for Creating Reusable Content Before delving too deeply into any of the specific techniques for creating reusable content, we'll briefly summarize those that are commonly used within ASP.NET Web applications: Server-side include files ASP.NET user controls Custom master page and templating techniques ASP.NET server controls built as .NET assemblies Using COM or COM+ components via COM Interop Server-Side Include Files Many people shun the use of server-side includes in ASP.NET, preferring to take advantage of one of the newer and flashier techniques that are now available (such as user controls, server controls, and custom templating methods). However, server-side includes are just as useful in ASP.NET as they are in "classic" ASP. They are also more efficient than in ASP because ASP.NET pages are compiled into an assembly the first time they are referenced, and this assembly is then cached and reused automatically until the source changes. As long as none of the files on which an assembly is dependent change (this applies to things like other assemblies and user controls as well as to server-side include files), the page will not be recompiled. This means that the include process will be required only the first time the ASP.NET page is referenced, and it will not run again until recompilation is required. The content of the include file becomes just a part of the assembly. Of course, the same include file is likely to be used in more than one page. Any change to that file will therefore cause all the assemblies that depend on it to be recompiled the next time they are referenced. This makes include files extremely useful for items of text or declarative HTML that are reused on many pages but that change rarely. An example is a page footer containing the Webmaster's contact details and your copyright statement. Using Server-Side Include Files to Insert Code Functions Remember that you aren't limited to just using text and HTML in a server-side include file. You can place client-side and server-side code into it and, in fact, you can put in it any content that you can use in an ASP.NET page. This means you can, for example, place just code routines into a server-side include file and then call those functions and subroutines from other code in the main hosting page, or you can even call them directly from control events. However, you can only include files that are located within the same virtual application as the hosting page. Including Dynamic Text Files in an ASP.NET Page Another area where server-side include files are useful is where you have some dynamically generated text or HTML content that you want to include in a Web page. One particular example we use ourselves is to remotely monitor the output generated by a custom application that executes on the Web server. It generates a disk-based log file as it runs and allows the name and location of the log file to be specified. We place the log file in a folder that is configured as a virtual Web application root and then insert it into an empty ASP.NET page by using a server-side include statement (see Listing 5.1). Listing 5.1Including a Log File in an ASP.NET Page <%@Page Language="VB" %> <html> <body> <pre> <!-- #include file="myappruntime.log" --> </pre> </body> </html> Downsides of the Server-Side Include Technique Although server-side includes are useful, there are at least a couple issues to be aware of with them. The first is one that has long annoyed users of classic ASP. The filename and path of the include file cannot be accessed or changed dynamically as the page executes. This is because the #include directive is processed before ASP.NET gets to see the page. You can't decide, for example, which file to include at runtime. However, you can change the content of the section of the page that is generated from a server-side include file at runtime by including ASP.NET control declarations within the file and setting the properties of these controls at runtime. For example, if the include file contains the code shown in Listing 5.2, you can make the Webmaster's email address visible or hide it by setting the Visible property of the Panel control at runtime, as shown in Listing 5.3. Listing 5.2Server-Side Include Files Containing ASP.NET Server Controls ©2004 Yoursite.com - no content reproduction without permission <asp:Panel <a href="mailto:webmaster@yoursite.com">webmaster@yoursite.com</a> </asp:Panel> Listing 5.3Setting Properties of Controls in a Server-Side Include File at Runtime <!-- #include Sub Page_Load() If (some condition) Then WebmasterPanel.Visible = True Else WebmasterPanel.Visible = False End If End Sub </script> When Is an Include File Actually Included? Listings 5.2 and 5.3 prove that the include file is inserted into the page before ASP.NET gets to see it. The code in Listing 5.3 should produce a compile error and report that it can't find the control with ID WebmasterPanel because the declaration of this control is not in the page. However, by the time ASP.NET gets to compile the page, the include file has already been inserted into it. Designer Support for Server-Side Include Files The second issue with using server-side include files is that they are rarely supported in the tools that are available to help build pages and sites. This doesn't mean that you can't use them, but it does mean that you're unlikely to get WYSIWYG performance from the tool. However, this may not be important for things like footers or other minor sections of output. ASP.NET User Controls The server-side include approach we just discussed is useful and works well with ASP.NET. But there are other ways to build reusable content, and these techniques often overcome the limitations of server-side include files and also offer a better development model as a whole. The simplest, and yet extremely powerful, approach introduced with ASP.NET is the concept of user controls. Whereas server-side include files are effectively just chunks of content that get inserted into the page before it is processed by ASP.NET, user controls are control objects in their own right. The System.Web.UI.UserControl class that is used to implement all user controls is descended from the same base class (System.Web.UI.Control) as all the server controls in ASP.NET. This means that a user control is instantiated by ASP.NET and becomes part of the control tree for the page. It also means that it can implement and expose properties that can be accessed by other controls and by code written within the hosting page. And, because it is part of the control tree, any other server controls that it contains can also be accessed in code within the hosting page, as well as by code within the user control itself (see Figure 5.1). Figure 5.1 Reusing user controls in multiple ASP.NET pages. Registering and Inserting a User Control A user control is written as a separate file that must have an .ascx file extension. It is then registered with any page that needs to use it, via the Register directive. The Register directive specifies the tag (element) prefix and name that will identify the user control within the page, and this prefix and name are then used to instantiate the user control at the required position within the declarative content of the page, as shown in Listing 5.4. Listing 5.4Registering a User Control and Inserting It into a Page <%@Page ... </body> You can see in Listing 5.4 how similar the technique for using a user control is to using the standard server controls that are provided with ASP.NET. All the properties of the System.Web.UI.Control class are available (for example, id, EnableViewState, Visible) and can be set using attributes or at runtime in your code. The id property is set to "cboTest1" in Listing 5.4. You can set the values of properties that are specific to this user control in exactly the same way. For example, Listing 5.4 shows the value of the IsDropDownCombo property being set to False. And any Public methods that the user control exposes can be executed from code in the hosting page, just as with a normal server control. Figure 5.2 shows a page that hosts the ComboBox user control you'll develop later in this chapter. Figure 5.2 A ComboBox control implemented as a user control. Running the ComboBox Control Example Online If you want to try out this control, go to the sample pages for this book. You can also run it online on our own server, at. The Contents of a User Control As with server-side includes, you can place almost any content in a user control. It can be just declarative HTML or client-side code and text, or it can include ASP.NET server controls, server-side code, and even other user controls. Nesting User Controls Note that you can't insert an instance of the same user control into itself. The nested user control would then insert another instance of itself again, ad infinitum, creating a circular reference. The compiler would detect this situation and generate an error. If you need to nest user controls, you must create a hosting instance that references a different file that is identical in content except that it does not contain the reference to the nested control. Oftentimes you need to insert the same user control more than once into a page, in the same way that you use server controls. Of course, this isn't obligatory, but it does mean that you need to bear in mind some obvious limitations to the content user controls include if you are to use them more than once. There are two things you should generally not include in a user control: The opening and closing <html>, <title>, or <body> elements. These should be placed in the hosting page so that they occur only once. Server-side form controls (for example, <form> elements that contain the runat="server" attribute). There can be only one server-side form on an ASP.NET page (except when you're using the MobilePage class to create pages suited to mobile devices). A common scenario is to use a user control that generates no user interface (no visible output) but exposes code functions or subroutines that you want to be able to reuse in several pages. As long as these routines are marked as Public, they will be available to code running in the hosting pagewhich can reference them through the ID that is assigned to the user control. Listing 5.5 shows how you can access a method of a user control (which in this case just returns a value) and how you can set and read property values. Later in this chapter, you'll see in more detail how properties and methods are declared within a user control. Listing 5.5Accessing Properties or Methods of a User Control ' call the ShowMembers method and get back a String Dim sSyntax As String = cboTest1.ShowMembers() ' set the width and number of rows of the control cboTest1.Width = 200 cboTest1.Rows = 10 ' read the current text value of the control Dim sValue As String = cboTest1.Text User Controls and Output Caching One extremely good reason for taking advantage of user controls (and, in fact, perhaps one of the prime reasons for their existence) is that they can be configured differently from the hosting page as far as the page-level directives are concerned. In an ASP.NET page, you can add a range of attributes to the Page directive and use other directives, such as OutputCache, to specify how the page should behave. This includes things like whether debugging and tracing are enabled, whether viewstate is supported, and how output caching should be carried out for the page. The simplest output cache declaration specifies the number of seconds for which the output generated by ASP.NET for the page should be cached and reused, and it specifies which parameters sent to the page can differ to force a new copy to be generated. When you use an asterisk (*) for the VaryByParams attribute, a different copy of the page will be cached for each varying value sent in the Request collections (Form and QueryString): <%@OutputCache Duration="300" VaryByParam="*" %> Output caching provides a huge performance benefit when the content generated by the page is the same for most clients or when there are only a limited number of different versions of the page (in other words, when the values sent in the Form and QueryString collections fall into a reasonably small subset). When there are many different cached versions, the process tends to be self-defeating. Managing Caching Individually for User Controls User controls allow you to divide a page into sections and manage output caching individually for each section. This means that you can cache the output for sections that change rarely (or for which there are few different versions) for longer periods, while caching other sections for shorter periods or not at all. The OutputCache directive can be declared in a user control, just as it can in a normal ASP.NET page, but it affects only the output generated by the user control. There is also one extra feature supported by the OutputCache directive when used in a user control: the Shared attribute. User controls are designed to be instantiated within more than one ASP.NET page, and yet it's reasonable to suppose that the output they generate could be the same in many cases (regardless of the page that uses them). When the OutputCache directive in a user control includes the attribute Shared="True", the same cached output is used for all the pages that host this user control. This saves memory and processing when the output required is the same for all the pages that use the control. The Downsides of User Controls Although user controls provide a great development environment for reusable content, they also have a couple of downsides that you must consider. The first and most obvious of these is that they are specific to an ASP.NET application. Unlike the standard ASP.NET server controls, which can be used in any ASP.NET application on a server, user controls can only be instantiated in pages that reside in the same Web application (the root folder of the virtual application, as defined in Internet Services Manager, or a subfolder of this application that is not also defined as a virtual application). In most cases, this is not a real problem. User controls tend to be specific to an application. For example, if you implement a footer section for all your pages as a user control, it probably makes sense for it to be used only within that application. However, some user controls (such as the ComboBox control shown earlier in this chapter) may be useful in many different applications. In this case, you will have to maintain multiple copies of the same user controlone for each application that requires it. Furthermore, many people still tend to see user controls as being the "poor man's solution" for building controls, as in the ComboBox example earlier in this chapter. There are good reasons for this: One is that you can't expose events from a user control in the same way you can from a server control that is defined as a class and compiled into an assembly. We'll look at this topic in Chapter 8, "Building Adaptive Server Controls." Finally, of course, you can't hide your code in a user control in quite the same way as you can by compiling a server control into an assembly. Like an ASP.NET page, the source of a user control is just a text file that must be present in the Web site folders. It's unlikely that you could build up your own software megacorporation just by selling user controls. Custom Master Page and Templating Techniques One common use of both server-side include files and user controls is to insert some common section of content into a page, perhaps to create the page header, the footer, or a navigation menu. There is, however, a technique that effectively tackles this issue from the opposite direction: You can create a master page or template for the site and base all the pages on this master page or template. All the content in the master page or template then appears on every page, and each individual page only has to implement the content sections that are specific to that page. The master page approach tends to encompass the concept of the individual pages being dynamically generated each time from the master page, with the individual content sections being inserted into it (see Figure 5.3). However, bear in mind that ASP.NET pages are compiled on first hit and then cached, so the process happens only the first time the page is referenced and when the source of the page (the master page itself, or the individual content sections) changes. Figure 5.3 Generating ASP.NET pages from a master page. A template, on the other hand, usually conjures up a vision of a single page from which the individual content pages are generated in their entiretyrather like some kind of merge process (see Figure 5.4). In fact, using master pages and using templates are generically very similar, and both produce compiled pages that are cached for use in subsequent requests. Figure 5.4 Generating ASP.NET pages from a template. Chapter 9, "Master Pages, Templates, and Page Subclassing," looks at master pages and page templates; you'll see more discussion there of the different techniques you can use and the various ways you can code pages to provide the most efficient and extensible solutions. ASP.NET Server Controls Built As .NET Assemblies The next step up the ladder of complexity versus flexibility is to create reusable content as a native .NET server control. The controls you create using this technique are functionally equivalent, in terms of performance and usability, to the standard server controls provided with ASP.NET. The controls provided in the box with ASP.NET are written in C#, and they're compiled into assemblies. The ASP.NET Web Forms controls (those prefixed with asp:) are all implemented within the assembly named System.Web.dll, which is stored in your %windir%\Microsoft.NET\Framework\[version]\ folder. Subsequent chapters show how easy it is to create your own server controls and then use them in Web pages just as you would the standard ASP.NET controls. Figure 5.5 shows the SpinBox control that is created in Chapter 8, with three instances inserted into the page and various styles applied to them. Figure 5.5 A SpinBox control implemented as a .NET server control. Server controls provide a few important advantages over user controls and most other reusable content methods. They encapsulate the code and logic, making it harder for others to steal any intellectual property they contain. Although server controls can still be disassembled to view the Microsoft Intermediate Language (MSIL) code they contain, most users are unlikely to be able to see how they work. You can also use obfuscation techniques (as built into Visual Studio) to make it much more difficult for even experienced users to discover the working of a control. Second, user controls can expose events that you can handle in the hosting page, exactly as the standard ASP.NET Web Forms controls do. For example, the SpinBox control exposes an event named ValueChanged, which can be handled by assigning an event handler to the OnValueChanged attribute of the control, as shown in Listing 5.6. Listing 5.6Handling the ValueChanged Event of the SpinBox Control <ahh:StandardSpinBox ... ... Sub SpinValueChanged(sender As Object, e As EventArgs) ' display message when value of control has changed lblResult.Text &= "Detected ValueChanged event for control " _ & sender.ID & ". New value is " _ & sender.Value.ToString() End Sub Third, server controls can be installed into the global application cache (GAC) so that they are available to all applications on the machine and not restricted to a single application, as are user controls and server-side include files. The following section looks at this particular topic in more detail. Local and Machinewide Assembly Installation In many cases, when you build custom controls as assemblies, you'll probably want to use them only within the ASP.NET application for which they were designed. As long as the assembly resides in the bin folder of the application, it will be available to any ASP.NET page (or Web service or other resource) that references it. All you need to do is add to the page an appropriate Register directive that specifies the tag prefix for elements that will declare instances of the control, the namespace in the assembly within which the control is declared, and the assembly filename, without the .dll extension: <%@ Register TagPrefix="ahh" Namespace="Stonebroom" Assembly="std-spinbox" %> You can then add an instance of the control to the page, using the following: <ahh:SpinBox However, as just mentioned, you can make a control or an assembly available machinewide by installing it in the GAC. For a control to be available to all the applications on the machine, three major requirements must be met: There must be a way for a control to be uniquely identifiable among all other controls, aside from its name. Because the assemblies that implement controls can be installed anywhere on the machine, the filename of the assembly is not sufficient to uniquely identify it. There must be a way to specify the version of the control so that new versions can be installed for applications that require them, while the existing version can remain in use for other applications. The .NET Framework requires that assemblies must be digitally signed using public key encryption techniques to protect the assemblies from malicious interference with the code. You can meet all three of these requirements by applying a strong name to an assembly. You create a strong name by using a utility named sn.exe to generate a public encryption key pair, and then you add attributes to the assembly before it is compiled to attach this key pair to the assembly and specify the version, the culture, and optionally other information. After the assembly has been compiled, you can add it to the GAC by using the gacutil.exe utility, the .NET Framework Configuration Wizard, or Windows Installer. Finally, ASP.NET pages that use the control must include a Register directive that specifies the assembly name, version, culture, and public key. For example, this is how you would register the version of the SpinBox control that is inserted into the GAC (and which has the name GACSpinBox): <%@Register TagPrefix="ahh" Namespace="Stonebroom" Assembly="GACSpinBox,Version=1.0.0.0,Culture=neutral, PublicKeyToken=92b16615bf088252" %>PublicKeyToken=92b16615bf088252" %> A Note About the Assembly Attribute Important: The text string specified for the Assembly attribute of the Register directive must all be on one line and not broken as it is here due to the limitation of the page width. In Chapter 8 you'll build the SpinBox server control you've seen in this chapter. At that point, you'll walk through the process, step-by-step, of making a server control globally available across applications. The Downside of ASP.NET Server Controls The only real limitation with building server controls is that you really have to know at least the basics of how your chosen language supports and implements features such as inheritance. You also need to understand the event sequence and the life cycle of controls. However, to quote that oft-used saying, "it's not rocket science." You can quickly pick up the knowledge you require. Using COM or COM+ Components via COM Interop Using components is a great way to provide encapsulated and reusable content, as you've seen in the preceding sections of this chapter. So far this chapter has talked about various types of components (using the word in the broadest sense) that are all fully compatible with ASP.NET. However, you may have COM or COM+ components that you are already using in a classic ASP application, or you might want to use COM components that are part of Windows or an application you have already installed in an ASP.NET application. To use COM or COM+ components within the .NET Framework, you can create a wrapper that exposes the interface in a format that allows managed code to access it. You effectively create a .NET manifest that describes the component and that acts as a connector between the component and the .NET runtime environment. Each property, method, and event is mapped through the wrapper, and you can then use the component in the same way you would use a fully managed code (.NET) assembly. The overall process is referred to as COM Interop, and it provides a path to move to .NET without having to rewrite all the business logic and custom components required in an existing or new application immediately, although you should consider this to be a temporary measure and aim to build native components as part of the process when and where possible. Performance Issues with COM Interop Using wrapped COM components affects the performance of your pages. The extra marshaling of values across the managed/unmanaged boundary with each property setting and method call is less efficient than with a native managed code component. The actual performance degradation generally depends on the number of calls you have to make when using the component; for example, a component that requires you to set a dozen property values and then call a method is likely to degrade performance more than one that lets you make a single method call with a dozen parameters. The actual marshaled size of the parameters or values you pass to properties and methods also has some effect on the performance. Creating a .NET Wrapper for a COM or COM+ Component If you are building an application by using Visual Studio .NET, you can create a type library wrapper by simply adding to your project a reference to the component. You right-click the References entry in the Solution Explorer window and select Add Reference. In the Add Reference dialog that appears, you go to the COM tab and select the component or library you want to use. Alternatively, you can use the Type Library Import utility provided with the .NET Framework. The utility tlbimp.exe is installed by default in the Program Files\Microsoft.NET\SDK\[version]\Bin folder. To use it, you specify the COM component DLL name and add any options you want to control specific features of the wrapper that is created. You can find a full list of these options in the locally installed .NET SDK at ms-help://MS.NETFrameworkSDKv1.1/cptools/html/cpgrftypelibraryimportertlbimpexe.htm or by searching for tlbimp in the index. Using the tlbimp Utility As an example of how to use the Type Library Import utility provided with the .NET Framework, let's look at an example of how to create a wrapper for a fictional custom COM component. The DLL is named stnxsltr.dll, and it implements a class named XslTransform within the namespace Stonebroom. To create the wrapper, you would copy the DLL to a temporary folder and navigate to this folder in a command window. The following command runs the tlbimp utility for version 1.1 of the Framework and generates the type library wrapper as a .NET assembly with the .dll file extension: "C:\Program Files\Microsoft.NET\SDK\v1.1\Bin\tlbimp" stnxsltr.dll Notice in Figure 5.6 that the name of the new DLL is the name of the namespace declared within the component, not the filename of the original component DLL. This is required to allow ASP.NET to find the type library when it is imported into a page. Figure 5.6 Executing the tlbimp utility to generate a wrapper for a COM component. Now you would copy the new wrapper DLL into the bin folder of an application and use the component in ASP.NET pages just as you would a native .NET component. You'd use an Import directive to import the type library wrapper, and then instantiate the component by using the classname. You could use the full namespace.classname syntax when instantiating the component, but this is not actually required. Because the namespace has been imported, you could instantiate the component by using just the classname (see Listing 5.7). Listing 5.7Using a Custom XslTransform COM Component in ASP.NET <%@Import Sub Page_Load() Dim oXml As New XslTransform Dim sStatus As String Dim sXMLFile As String = "/data/xml/myfile.xml" Dim sXSLFile As String = "/data/xsl/myfile.xsl" Dim sOutFile As String = "/results/myfile.html" Dim blnWorked As Boolean = oXml.TransformXML(sXMLFile, _ sXSLFile, sOutFile, sStatus) lblResult.Text = sStatus End Sub Coping with Classname Collisions The fictional custom component described here has the same classname, XslTransform, as a native .NET class within the .NET Framework class library. However, you do not import the System.Xml.Xsl namespace (within which the .NET Framework component lives) into the page, so there is no collision of classnames. If there were, you would get a compilation error such as "XslTransform is ambiguous, imported from the namespaces or types System.Xml.Xsl, Stonebroom." In that case, you would use the full namespace.classname syntax to identify which class you require (for example, Dim oXml As New Stonebroom.XslTransform). ASP Compatibility for Apartment-Threaded COM Components When you're using COM or COM+ components, one issue to be aware of is that the threading model used in ASP.NET is not directly compatible with components that are single threaded or apartment threaded. Single-threaded components are not suitable for use in ASP or ASP.NET anyway, so this factor should not be an issue. However, components built with Visual Basic 5 and 6 are usually apartment threaded (via the single-threaded apartment [STA] model) and work fine with only minor performance degradation in classic ASP. Until the arrival of the .NET Framework, which makes creating components in any managed code language easy, Visual Basic was quite a popular environment for building business components and server controls. To overcome any issues with running apartment-threaded components in ASP.NET, you should always add the attribute ASPCompat="True" to the Page directive. This forces ASP.NET to adopt a threading model that matches the requirements of Visual Basic apartment-threaded components. It also allows components to access the intrinsic ASP objects, such as ObjectContext, and the OnStartPage method. There is some performance degradation, but it is not usually significant except in highly stressed Web applications and Web sites. However, if you add the ASPCompat="True" attribute to a page that creates instances of apartment-threaded components before the request is scheduled, you will encounter much more significant performance degradation. You should always create instances of any apartment-threaded components you need in a Page event such as Page_Load or Page_Init.
https://www.informit.com/articles/article.aspx?p=173411
CC-MAIN-2021-10
refinedweb
5,643
53.51
We run our own path, but I still say I have been pretty lucky to see a lot of architectures in the last 15 + years, some rather average, through some pretty amazing architectures. The ones I count as amazing ones, you probably interface with as a user in your daily life as apps on your phone. As I pen this post it’s 2022, we know change is constant, are you keeping up to speed? I used to write VbScript for almost 10 years, I was not keeping up! Cloud natives, digital natives, what ever you call them, this new generation of builders, who know not what a data-center is, or how to install an operating system have a different approach on how they make technology decisions. You can teach tech to almost any organisation. I am new to Microsoft, I have just got my Azure expert certification amongst the others I hold. Does it mean I really am expert? Culture, The Secret Sauce? The thing is, whats unique in this space of cloud natives is their culture, It really is what is business differentiating. For many who know me, you may know my history (Startup / SEEK / Amazon / Microsoft) but what you may fail to know, I worked for the third largest food manufacturer (JR Simplot) in the world, for a short 13 months stint. Why only 13 months? It was a great role. Great title, an ability to influence global architecture, factories around the world. Why did I leave, it was culture. Heading global architecture but unable to install the latest version of of an SDK (Software Development Kit) because the architecture function was bound to a SOE (Standard Operating Environment) and I had no local admin rights. I can virtually see your faces and tears. That speaks volumes to why LPAR’s, ISeries and DB2 still exist there. Conways law, like Moores law, & Brooks law it holds true to this date. Digital natives, they don’t know all the cards, but what they do know is how to experiment. They experiment and learn and do this in a method that is unlike traditional organisations. They do it fast, but it doesn’t mean they experiment dangerously. They have systems and mechanism in place and are structurally organisaed so there is blast radius in place, and they can be very targeted in their experiments and of course they measure. Remember, you can’t improve what you can’t measure. So, we are talking 2-way doors. - A 1 way door, almost impossible to reverse, once you make one of these decisions its really hard to go back. - A 2 way door, it means they can be quickly changed, like choosing instance types, leveraging pytorch or tensorflow. While these decisions might feel momentous, with a little time and effort, often a lot less than you think they can be reversed. These organisations all use this principal, they will make change aren’t afraid to roll things back. They’ve built cultures and processes around rapid innovation. They use models like challenger vs champion, where they will send a fraction of traffic to a challenger to see how it performs and in some cases it becomes the champion. Most importantly they ship faster to learn faster Lastly they know their stuff, I will talk more about this later, but there is a good reason why Netflix pay their senior engineers all > 500k USD. There are no free passes here, most conversations are 400 level API / SDK, so what value are you bringing to the conversation? So what is so special about cloud native culture. Lets sumarise it in to 3 points - Empowered decentralized teams - Fail fast - They know their stuff But it’s the first point I want to talk to you about. It is that empowerment and decentralized function. Classic Account Structures != Not conducive to speed I copied and cropped this image from the Microsoft Architecture center. Why? This guidance is centered around a centralized governance of IT. That classic pattern of, Development, Test and Production subscriptions. This assumes your governance is very centralized. This is an assumption that may work in the Enterprise, but you know what they say about assumptions? It doesn’t bode well these style of organisations. We spoke about moving fast, this is a good start but we need to expand on this architecture Let me frame it like this. This picture should make it clearer, especially the snapping alligators / crocodiles. Centralised operations, are not conducive to speed, full stop. They bring other benefits, but speed with an ivory tower is not one of them. But Shane, by having centralized operations you can build economies of scale, that one large K8 (AKS/EKS) cluster all throw us your containers, we will run them, that single name space for your ServiceBus/SQS? True I would say, however scaling to meet the demands of the business will mean this style of architecture, gaurantees the operations function will in effect become a choke point, more so you are increasing business risk, all your eggs are now in one basket, that production subscription, that single Event Hubs/Kinesis namespace, those same resource limits. It drives a hero mentality which we want to avoid. These soldiers, the operations people in this picture lets be frank, often have little idea why a specific microservice API error rate is increasing and conversely the developers are really at arms length away from reality and seeing how their software fails. This style of command and control doesn’t work with cloud natives and is not a core attribute to this style of customers. My advice to you all here is. If you are trying to run faster and are experiencing outages, technology may not be the solution, have you looked at their culture. It can be difficult to approach, so use tact. I am going to say, everything fails all the time. No mater, whose cloud. Broadly speaking Complex systems, microservie architectures contain changing mixtures of failures latent within them. How can we address the speed to market constraints, whilst reducing business risk where change is embraced and is a non-event? Empowering Development Teams I need you to use your imagination here, because my ambition to draw you an amazing picture, is let down by my Visio skills. I want you to take this and picture, to which I have scaled out hugely 8 subscriptions here. No imagine what this would look like with 1000+ subscriptions. What do you notice about this picture? Gone is the castle, the soldiers. Each team is accountable for their environment. You build it, you run it (that includes supporting) and in most places, guess what. You pay for it. Do you know what the cost to serve is, well in this topology you do, no hiding costs. What I am trying to convey here is , in these organisations there may be 50 to 1000 different service teams. Each team will have many subscriptions, I have one per team in this diagram here, but there could be 5 to maybe 20+. For example you might have 5 dev subscriptions, staging, QA, stress and volume, production. When they finish building their code, they don’t throw it over the fence, the run it them self. What was the production subscription today, may be nuked tomorrow. Layer 7 routing is key here. Layer 7 routing, imagine the ingress will resolve to a CDN with an origin pointing to a L7 load balancer. Their will then be routing rules, path based routing, /search goes to this subscription, /tax goes to this subscription, /dogs to another. Notice the red in this picture? Deano the dinosaur is sick of cat pics taking over the internet and wants some dog pictures. Each team, this self empowered team will then as part of their CI/CD process will update the configuration to point to the path to the right subscription. What I want to emphasise here is, each team, squad has complete control of how and when they deploy and release. Yes there is probably some guard rails, but typically, the language, the release process is controlled by the team. One team with Python, another DotnetCore, one team with Go. Does it really matter if all you are exposing is in most cases an API endpoint? That API is your contract. Does it all need to be one language. Of course not, hence the polyglot eco-system The true production environment may includes hundreds linked / peered vnets to create a psudeo production environment. This is empowerment, this decentralisation. All of these self empowered teams in effect are running their own small business. There are inefficiencies sure. For example, each squad will be running their own infra in most cases. but it is also reducing risk and guess what, you have bought yourself a ton of speed. Tight coupling, be gone. Hello independence. This is culture here ladies and gentlemen, this is what separates the traditional enterprise from this style of customer. There is for the better part here no central governance, I say better part as there is none, but yes there are going to be guardrails in the form of an ingress / egress hub, some common infrastructure but by large the architecture function will define this If I had to codify this, this is what I would say When IT is no longer blocker, that is where the magic happens. Many of these companies will have ‘company in a box’, the premise being. As a developer I want a full end-to-end site, with sanitized production date to run a hypothesis, and I want it in 15 minutes, and guess what, most of these labels can do this. If the idea is a flop, they fail, and fail fast and it may have cost them a few cups of coffee in cloud usage. Culture Eats All You can read a lot about culture, fare more than what I have spoken to you about today. Today is my lived experience in dealing with this special but ever-growing style of organisation. I want to close this section with a pictures, its from Spotify. Think about your workplace, which quadrant are you playing in? How are you leading? Culture eats strategy for breakfast, technology for lunch and everything else for dinner You may not be working for a cloud native but it doesn’t mean you cant adopt some of their culture to advance your business and at the same time yourself as a thought leader. How can you help your organisation shift towards the top right and do you need alter your behaviors? Is this part of your career development plan and the way you add value to your organisation. If it is not I would encourage you think about what you can do in this space. I know this was a long post, so thanks for sticking it out. – Shane 2 thoughts on “Tear Down Your Castle! – A Story Of Culture & Cloud Natives.” Nicely done. I think where I have my fledgling team now is a hybrid of your proposal. We still work within that central dev/test/prod model (sometimes just dev/prod) but we own every solution from design to build to CICD and monitoring and finally support. We build it and own it. One size does not fit all, and culture can play a big role in any model. Thanks for the comment James.
https://automation.baldacchino.net/?p=1056
CC-MAIN-2022-33
refinedweb
1,915
71.55
Difference between revisions of "Control Systems Library for Python" Revision as of 06:30, 29 May: - 23 May 09: really basic functionality working (Bode and Nyquist plots of transfer functions) - 24 May 09: looking around for open source control computations - slicot looks like a candidate - 28 May 09: confirmed that linking to SLICOT works: - Option 1: build off of the 'signal.lti' object structure. This has the advantage of being compatible with existing 'signal' package functions. The signal.lti class currently represents systems simultaneously in transfer function, state space and "pzk" forms. Not may functions written for it yet, though. - Option 2: emulate the MATLAB systems structure. This uses separate classes for transfer functions, state space, "pzk" and frequency response data (frd) forms. There is also a state space format for delay systems, which seems quite useful. - Option 3: new structure that allows nonlinear computations and other general objects -) - Install ipython - interactive python interface Small snippet of code for testing if everything is installed from scipy import * from matlibplot import * a = zeros(1000) a[:100]=1 b = fft(a) plot(abs(b)) show() # Not needed if you use ipython -pylab python-control Not yet available.
https://murray.cds.caltech.edu/index.php?title=Control_Systems_Library_for_Python&diff=prev&oldid=9413
CC-MAIN-2022-33
refinedweb
195
50.57
I’ve just gone through chapter 12 of the ‘Python for Everybody’ course which is about networking. I’m a little confused about the difference between what was covered in lesson E as compared to F This is the basic data structure learnt in E (if I make it look consistent with F): import urllib.request, urllib.parse, urllib.error url = input('Enter url: ') html = urllib.request.urlopen(url) for line in html: print(line.decode().strip()) I understand that this “opens” the html code of the webpage and removes any white-space, then displays it. We then have lesson F: import urllib.request, urllib.parse, urllib.error from bs4 import BeautifulSoup url = input('Enter url: ') html = urllib.request.urlopen(url).read() soup = BeautifulSoup(html, 'html.parser') tags = soup('a') for tag in tags: print(tag.get('href', None)) Aside from retrieving the tags at the end, wouldn’t this do the same thing as in E if I got rid of that bit and wrote print(soup) ? I guess what I’m asking is what’s special about BeautifulSoup…
https://forum.freecodecamp.org/t/lesson-e-vs-f-in-python-for-everybody-networking/421812
CC-MAIN-2022-21
refinedweb
180
66.23
I'm busy learning Silverlight & WCF. I've created a WCF service that I can consume happily in a .NET 4.0 website. So I thought I'd have a go at consuming it in a silverlight app. I'm getting a weird error... I use svcutil to create my class (called in this case service1.cs) and my config file. I put the config output into ServiceReferences.ClientConfig and add the service1.cs class into my silverlight project. I can reference everything fine, so I build the project. I then get Error 2 The type or namespace name 'ExtensionDataObject' does not exist in the namespace 'System.Runtime.Serialization' (are you missing an assembly reference?) I've checked my references and System.Runtime.Serialization is there, but its version 2.0.50727. This is despite the silverlight project being .NET 4.0. The WCF project references version 4.0.30319. So I know what the error is, but I have no idea why its occuring or the proper way to fix it... Any one got any ideas? Rob View Touch isn't just a form of mouse input in Silverlight: Sometimes what's required are controls specialized and optimized for touch. Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/38096-wcf-silverlight.aspx
CC-MAIN-2017-30
refinedweb
216
69.99
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). On 18/01/2016 at 14:14, xxxxxxxx wrote: Hello all, I have a plugin that acts on a scene according to the arguments given in the command line, but I notice that trying to call c4d.CallButton(bakeTag, c4d.BAKETEXTURE_BAKE) causes it to hang and, well, think about it until I end the process. Is there a way to simulate c4d.CallButton() with -nogui, or is that a -no go? Thanks! Edit: This might be a really dumb question, I realize ^^ 2nd: Thought it might be a threading issue, so I went: if c4d.threading.GeIsMainThread() : c4d.CallButton(bakeTag, c4d.BAKETEXTURE_BAKE) in desperation, but On 19/01/2016 at 01:27, xxxxxxxx wrote: Hello, can you share some more information? How exactly do you execute you code in a nogui environment? Do you react to some PluginMessage? Do you handle a document that you load yourself? Best wishes, Sebastian On 19/01/2016 at 07:38, xxxxxxxx wrote: Hey Sebastian, Yeah, just as you say: def PluginMessage(id, data) : if id == c4d.C4DPL_COMMANDLINEARGS: doc = c4d.documents.LoadDocument("C:/path/file.c4d", c4d.SCENEFILTER_OBJECTS, None) #c4d.documents.SetActiveDocument(doc) <-- Silly test from earlier frame = GetFrame(sys.argv) #These two bakeTag = FindTag(doc) #defined elsewhere if bakeTag == None: return False #Button settings here c4d.CallButton(bakeTag, c4d.BAKETEXTURE_BAKE) Thanks for your time (again) + I guess I could add my command (windows) : "C:\pathToC4D\cinema 4d.exe" -frameNum 10 -nogui "c:\pathToFile\file.c4d" On 19/01/2016 at 09:57, xxxxxxxx wrote: CallButton() will just do that: it will "press" that button. It won't wait for any result. So after CallButton() your script ends. The document seems to exist only in the scope of this function so it is freed. Baking the texture is handled in a background thread. So when you press the button, this process is started and this will take a while. But as said, you document will be deleted in the meantime. So it is no surprise that is will crash. The baking system relies on a working message and event system which is not present in a non-gui environment. So it might be impossible to make this work properly. Is there a specific reason why this has to run in a non-gui Cinema? On 19/01/2016 at 10:10, xxxxxxxx wrote: Ahh, gotchya, thanks for that. We're trying to run this with deadline across our render farm, which, naturally, is just a bunch of blades with only the command-line version of c4d. We have, ehh, many shots with many frames that need this baking, and potentially will have many more, so this is our current bottleneck by quite a bit (on one machine it takes 1-2 hours, thanks to the size of the image and length of the scenes). I'll do a bit more experimenting before I give up, but either way thanks again! On 19/01/2016 at 10:58, xxxxxxxx wrote: Well, that didn't last long; I think you're right... Maybe there's an elaborate way to calculate the texture with C so as to eliminate the need for the Bake Texture tag and hence CallButton(), buuut I think I'll throw in the towel, for now. Someone, here at work, found a sort of workaround via Alembic and Fusion that's doing pretty well. On 20/01/2016 at 01:52, xxxxxxxx wrote: just to make this clear: it is possible to create a scripted solution if you can use a "normal" Cinema 4D installation. It is just not possible to do this with a non-gui version. On 20/01/2016 at 06:37, xxxxxxxx wrote: We've had similar problems. It would be great if the Bake Texture functionality was exposed in python. Then we could make our own Cmd Bake Texture plugins. That way we could check for certain conditions with a pre-job script in Deadline and submit a baking job first and make the actual job dependent on the first. When baking is finished, the initial job would start. I've been fiddling with several hacky solutions (including trying to learn c++ to code the plugin myself) but with no luck so far -b
https://plugincafe.maxon.net/topic/9309/12411_press-button-command-line-c4d
CC-MAIN-2022-40
refinedweb
749
73.17
Java Assignment Problem Im having an error on my java assignment i have most of the program right but I cant figure out how to fix the errors i have. any help would be appreciated. The errors im getting are in the Dialog File. errors: error: variable sum might not have been initialized return sum; ^ error: unreachable statement System.exit(0); // Terminate ^\ error: missing return statement } ^ 3 errors 1st file. I believe this one is correct and i dont have any errors in it. Fibonacci.java ------------------------- This is the second file. This is the one im getting errors for. FibonacciJDialog.java There are a couple of problems for the second one: 1) Check the return type of your main method against a book. 2) Main methods don't typically return a value 3) When the main method ends, the program ends. So you don't need a System.exit call there anyway. [OCA 8 book] [OCP 8 book] [Blog] [JavaRanch FAQ] [How To Ask Questions The Smart Way] [Book Promos] Other Certs: SCEA Part 1, Part 2 & 3, Core Spring 3, TOGAF part 1 and part 2 Step 1: You want to call the 'Fib' method on the Fibonacci object. How would you do that? (Note, you already know how to call methods on objects, because you're already doing that in your code). Step 2: Assign the result of the method call to the variable 'sum'. You also already know how to do that (that's also already in your code!). taylor Lynch wrote:I have no way to compile this right now. But would this be what your talking about? No, this is not valid Java. Remove the "System.out.println()" that's around the call to box.Fib, and remove the "int". Also, you want line 17, which shows the result, under the call to box.Fib, and you'll want to show the value of 'sum' there. "The variable message" means nothing to us. Please cut-and-paste the full error message you get, as well as your current code. Without both of those all anyone here can do is guess what you have, and guess at a solution. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Update: I was able to get your code to run. It does exactly what you told it do to, assuming this is still your current code: What do you think happens inside your Fib() method?> There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors sorry - i was updating my reply when you posted this. See my previous post.sorry - i was updating my reply when you posted this. See my previous post.taylor Lynch wrote:When i run it like this without the return sum; statement, it compiles just fine. But when i run the program it stops right after you input a number for the variable n. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors I am assuming you pare passing in a value - something like 7. So, this says "while index is less than n...do nothing". You have set index to 1, and n is what you pass in - so it is something like 7. so..."while 1 is less than 7, do nothing". your code is stuck in an infinite loop. You need to write the code so that something HAPPENS on each iteration, and you eventually break out. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors taylor Lynch wrote:I just dont understand what it means for the code. Also i cant change anything from the Fibonacci.java file it was supplied by the Professor and cant be changed. I don't know how to help you then. IF - and I want to emphasize the IF - you can't change the Fibonacci class, and what I posted is indeed the real and actual code, then you are screwed. the method Fib(int n) defined in that class is an infinite loop. If you call it (and you do on line 17 of your FibonacciJDialog class), your code goes into an infinite loop. It will sit there, doing basically nothing until you kill the JVM. So either a) You misunderstood your professor b) This is NOT the real code c) You need a miracle of some kind. Those are the only options I see. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors The Fibonacci sequence consists of the numbers 1,1,2,3,5,8,13 ...... in which each number (except the first 2) is the sum of the two preceding numbers. Write a class which that you will call Fibonacci class with a method fib(N) that prints the corresponding Fibonacci value. Remember to use a long as a value for the returned fibonacci result because the number could be very large. Design and write a Dialog box for Input/Ouput that allows the users to input a number like 5 and the program will produce an output of 8, which is the fibonacci of 5 because the fib(5)=8. Remember to call the program: FibonacciJDialog.java. (Limit the number N to the value 40 maximum). Write a class which that you will call Fibonacci class with a method fib(N) That means you need to write the class. It does NOT mean that you can't edit or change it. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors . So...I really think you need to read this page. You seem to be writing and changing your code here and there, without understanding what you need to do. I would recommend you do do this...Forget about the dialog box, forget about your FibonacciJDialog class. let's focus on your Fibonacci class. I would put a main() method in it, so you can concentrate on nothing but it. I would suggest you throw away the entire Fibonacci code, and start over. Before you write a SINGLE line of code, you need to spend some time THINKING about what your code should do. If I asked YOU to give me the value of the 6th fibonacci number, how would YOU compute it? Explain it in ENGLISH, not in java. Explain it in clear steps that a 10yr old could follow. Then, when you feel confident you know how to do it... Write a Fibonacci class with a main method that prints "I'm in main". Once that works (and I literally mean compile and TEST it to be sure it wors), write a "void Fib()" method that does nothing but prints "I'm in Fib". Change your main to call it. Compile and test to make sure that works. Then change your Fib() method to take an int parameter, and print out what that parameter was...something like "Fib was passed the value 7". Once that works, change the method to return an int (for now, it may only return what it was passed in). Print out the value it returns in your main, compile and test. Keep going like this. You should only ever add 2-3 lines of code AT MOST before you re-compile and test. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors I have tried it, and you don’t get overflow for fib(40). I obviously had forgotten what happened previously. Fred is right. That code cannot be what you were provided with. You have misunderstood something and changed something before you showed us it. taylor Lynch wrote:The Fibonacci code that i've been using is copied and pasted directly from the site nothing has been changed except the things i changed in my previous post. This was your previous post: taylor Lynch wrote:I realize taht now. I changed the code in my previous post but as i said im still having a problem with the output being wrong. Your post prior to that was a quote of your assignement. The post prior to THAT says Ok i changed the index to 40 and it ran the program. So at this point... I have no idea WHAT your code looks like. I have no idea WHAT the issue you are having is. Your job, when you post here, is to make it EASY for me to help you. Post your current code. Post the EXACT and COMPLETE error message (if any). Post what output you GET. Post what output you EXPECT. Follow the advice you have been given (or at least, ACKNOWLEDGE you've seen it and why you don't think it will solve your problem). I've given you the best advice I can (in my post at 14:20:25). If you aren't going to follow that advice, there's not much more I can do. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Heres the new code there are no errors when it compiles the problem lies in when I run the program it doesn't provide the correct answer. This tells me that there is something wrong in the coding that needs to be changed. Try the following code !!! class Fibonacci { void fib(int n) { int in1=0;int in2=1; System.out.println("the fibonacci is"); System.out.println(in1); System.out.println(in2); int sum=0;//initial value int index=1; while (index<n) { sum=in1+in2; System.out.println(sum); in1=in2; in2=sum; index++; } } } public class Fib { public static void main(String ar[]) { Fibonacci fb=new Fibonacci(); fb.fib(10); } }
https://coderanch.com/t/599625/java/java/Java-Assignment
CC-MAIN-2016-36
refinedweb
1,653
74.39
I know that the default port used by the F7 is port 4370, I'm trying to connect to it by creating a socket IP + Port, since I know both. This is my code: - Code: Select all import socket import sys TCP_IP = '192.168.1.160' TCP_PORT = 4370 errormsg = 'No connection created' succmsg = 'Connection created' try: soc = socket.socket() soc.connect((TCP_IP, TCP_PORT)) except socket.error: print (errormsg) input("Press ENTER to exit") sys.exit() print (succmsg) while True: msg = soc.recv(1024) msg = str(msg) print (msg) with open('BiometricID.txt','a') as f: f.write(msg) f.write('\n') soc.close() input("Press ENTER to exit") I thought that code could work but it throws an error "10061", that says that the machine actively refused the connection. It is quite possible that I have other mistakes, but first I want to connect to the device, after that I'll go on with whatever could fail too. If you need more information just ask and I'll try to provide it. Thank you very much!
http://www.python-forum.org/viewtopic.php?f=17&t=11261
CC-MAIN-2014-42
refinedweb
176
75.5
Welcome to part 4 of machine learning for forex and stock trading and analysis. Now that we've seen our data, let's talk about our goals. Generally, with machine learning, everything boils down to actually: Machine Classification. With quant analysis, generally the first few things you are taught are "patterns" Head and shoulders, teacup, and whatever else. So what is the theory behind these patterns? The idea is that the prices of stocks or various forex ratios are a direct reflection of the psychology of the people trading, ie: the traders (either people or computers) are making decisions, based on a bunch of variables. The theory is that, when those same variables present themselves again, we get a repeat of actions that creates a similar "pattern," then the outcome as well is likely to be similar, because the variables are almost the same. So what we're going to do here is 1. Create a machine learned batch of what will end up being millions of patterns with their results, which can be used to predict future outcomes. 2. Test this. You may have learned a few simple patterns, but everyone knows these. What if you could know every pattern in history? Pretty hard for you to remember them all, but not too hard for a computer. Our entire system is really built on the inference of pattern recognition, so if patterns change due to new data, that's really built in and is done by programming that was done before results. This allows backtesting to actually serve a very truthful and accurate purpose. If a machine learned live algo passes the back test, it is highly likely to continue performing well in the future, not because it passed a back test, but because our hypothesis and entire model passed the backtest... unlike finding the best algo at the time and backtesting for great results. With that, what we will do is take a range of data in succession, and create a pattern with it. How we're going to do this is going to be with % change. We want to have the data normalized as best we can, so it can be used no matter what the price was. We're just going to use a succession of % change for it. To start, we'll do forward percent change, from starting point. This means, the longer the pattern, the more likely the END is to be less similar, but the actual direction of the pattern will be more similar. This can be useful, since some patterns might take more time to react than others, and we want the build up to be most accurate, but we might actually prefer the end to be more accurate in the future, so we could do reverse percent change. We can also do a point-to-point percent change as well. Trust me, when it comes to variables, we're gonna be very busy. Now what that means is first we just need to store a bunch of patterns, in their percent change format. Then, what we'll do to compare patterns is how similar the % changes are. def percentChange(startPoint,currentPoint): return ((currentPoint-startPoint)/startPoint)*100.00
https://pythonprogramming.net/percent-change-python/?completed=/forex-algo-pattern-rec-basics/
CC-MAIN-2021-39
refinedweb
536
69.62
Message Passing and Security Considerations in Chrome ExtensionsNick Mooney June 19th, 2019 (Last Updated: June 19th, 2019) 00. Introduction In the process of writing a Chrome extension you may find yourself needing to communicate between different components, such as scripts running on webpages and the long-running extension backend. The Chrome Extension developer documentation gives a great breakdown of the message passing tools you have available. This article is written to give you a little insight into why you would use a particular method, some tips and tricks for common use cases, and pitfalls to avoid. 01. Components of a Chrome Extension There are two main components of a Chrome extension: content scripts, which run in the context of webpages you want to interact with or modify, and background scripts, which are long-running services that can maintain global state. Simple extensions only need to rely on a content script, but extensions requiring storage or longer-term / more complex processing may benefit from a background script. Occasionally, you will also need to inject a script into a webpage to break out of the “isolated world” that content scripts run in. We will cover that more later, but I will refer to these scripts as “injected scripts.” 02. If you want X, do Y Messaging between content scripts and the backend Use chrome.runtime.sendMessage in your content script and chrome.runtime.onMessage.addListener in your backend script. Each message is a one-off, with a single optional response that can be sent via a callback. If you want to open a long-lived channel (perhaps because you want on-demand messaging from the backend to the content script), use chrome.runtime.connect instead. You can even open multiple named channels to send different types of data. Messaging and Injected Scripts (with DOM access) Chrome extension content scripts run in an “isolated world.” Here’s what Google has to say: "Content scripts live in an isolated world, allowing a content script to makes changes to its JavaScript environment without conflicting with the page or additional content scripts." … "Isolated worlds do not allow for content scripts, the extension, and the web page to access any variables or functions created by the others. This also gives content scripts the ability to enable functionality that should not be accessible to the web page." This is a great feature, but can be restrictive at times. If you’re looking to override certain functionality (like hijacking JavaScript APIs provided by the browser), the isolated world will not let you do this. You will need to programatically load a plain old <script> element – but now you have a problem: the script you’ve loaded is no longer running in the “isolated world,” and therefore has no access to the Chrome message passing APIs. The documentation has a solution: use window.postMessage to communicate between code running on the page and the content script, and optionally use the content script to proxy messages back to the backend. The content script might look something like this: // Listen for any messages window.addEventListener("message", (evt) => { // Validate the message origin if (evt.origin === window.origin) { const message = evt.data // Check the type to avoid noise from other scripts if (messageIsCorrectType(message)) { // If you need to know the origin of the message in the backend, // set it from within the content script -- do not allow it to be // controlled via the DOM message.origin = window.location.origin chrome.runtime.sendMessage(message, (response) => { // The callback for this message will call `window.postMessage` window.postMessage(response, message.origin) }) } } }) and you would inject a script directly into the DOM by putting something like this in your content script: const newScript = document.createElement("script"); newScript.src = chrome.runtime.getURL(PATH_TO_SCRIPT_FILE); (document.head || document.documentElement).appendChild(newScript); A Word of Caution Note that your injected script runs in exactly the same context as the DOM, so code served natively on a web page can communicate with your extension in the same way. Using injected scripts this way pulls you out of the “isolated world,” and you cannot trust that messages received this way didn’t come from other malicious JavaScript code on the page. Origin and source validation are a necessity, but even still, messages could come from JavaScript that was inserted into a page via XSS, via another malicious Chrome extension, etc. Place as little trust in messages from the DOM as possible. 03. Common Pitfalls Permissions vs. Content Script Directives An extension’s manifest.json defines a set of permissions available to the extension as part of the permissions directive. There are capabilities such as tabs, background etc that authorize the use of APIs, as well as match patterns that allow access to specific hosts. When you put a host in permissions, your extension is authorized certain functionality on those URLs. For example, if your extension has the permissions cookies and, your extension will be able to access the cookies on. This is entirely separate from the host matches in the content_scripts directive! If you have a permissions section that looks like this: "permissions": [ "*://*.mywebsite.com/*" ], and a content_scripts directive that looks like this: "content_scripts": [ { "js": [ "content_script.js" ], "matches": [ "https://*/*/mypage.html" ], "run_at": "document_start" } ], It is important to understand that your content script will run on all domains, not just mywebsite.com. It is important to understand that these sets of permissions do not interact with each other in order to avoid unintentionally exposing content script functionality to more domains than you expect. Origin Validation If you are using an injected script that needs to communicate with a content script via window.postMessage, it is up to you to validate the origin of that message. Mathias Karlsson’s “The pitfalls of postMessage” details the issues that can occur without proper origin validation. message.origin must be validated before treating the message as having originated from a legitimate sender. Without this validation, messages can be sent from an attacker origin to the receiver window in a malicious manner. Rendering Controls in the DOM If your Chrome extension provides functionality by rendering buttons, links, etc. on the webpage, it’s important to remember that these buttons can also be “clicked” by the webpage itself. Buttons rendered in the target webpage should not immediately provoke undesired behavior. In addition, input provided to text areas that are rendered on a page by an extension is readable by other JavaScript running on that domain. 1Password X handles this quite well – when you navigate to a site that you might like to log in to, you are prompted to hit a keyboard shortcut and provide your password to the extension via the extension popup, which is entirely separate from the DOM of the page you’re viewing. Only when you authenticate to the popup – out of band of the webpage you’re viewing – are you able to perform “privileged” actions like filling in passwords. Relying on Variables Defined in the Content Script While content scripts do run in the “Isolated World” mentioned earlier, this isolated world has access to the contents of the DOM, and should treat the contents of the DOM as untrusted. Tavis Ormandy gives a clear example in his writeup of the LastPass vulnerability reported in 2017: var trusted = false document.body.addEventListener("click", function() { if (trusted) { eval(window.location.hash.substr(1)) } }); This above function is secure because trusted is explicitly defined by the content script. This trusted variable exists in the isolated world of the content script and cannot be overridden by the DOM. document.body.addEventListener("click", function() { if (typeof trusted != "undefined") { eval(window.location.hash.substr(1)) } }); This function is insecure. The trusted variable is undefined by default, but a malicious webpage can execute the following: el = document.createElement("exploit") el.setAttribute("id", "trusted"); document.body.appendChild(el); and now the eval line will run. This is because DOM element IDs automatically become properties of window, and window is within the default namespace of global JavaScript variables. 04. Summary Chrome extensions often provide access to powerful functionality from webpage contexts. The messaging APIs are great, but it is important not to blindly pass messages from the DOM into the privileged context of extension messaging. Treat the DOM as untrusted, and perform validation within the “isolated world” of content scripts before passing messages to the backend.
https://duo.com/labs/tech-notes/message-passing-and-security-considerations-in-chrome-extensions
CC-MAIN-2022-40
refinedweb
1,391
52.49
Type: Posts; User: Pinky98 I have realised these ads are too annoying. I'm sick and tired of it. And as so I have finally decided to leave codeguru. You would be better off not using the VB loop. These are inherently slow. Better would be to use a data bound control, which are much faster. But if you are displaying that many options to the... Do you mean you have a control array on your form? if so then you use the load function to add controls to that form. depends how you define a wonder of the modren world. I would only consider things that have ACTUALLY been built in the mordern world to be eligable... Surely?? yeah. had that often. from what I can work out, it sounds like some one at jupiter has screwed up. usually itterative is faster. for lines use the : richtextbox.GetLineFromChar method. what do you mean by that?? Mmmm, yeah that may work too. cool. no prob Yes. I can't remember exactly where/what to look for but think it is under the System.Authentication namespace. Completely different... the only thing similar is the syntax. But is where it ends. Different programming paradym completely. To program C++, I would probably recomend getting a SAMS book. They're... if you use the file system object to check which type of drive each one is. Yes normal file retrieval. Well, thats a hard one. just about any of the simple things you do will be easily cracked... pls can you show your code where you acquire the data from the database You need to make the text box "multiline" property true. If you want MORE formatting (i.e. font, size, bold, italics etc. etc.) to be preserved, then use the rich text box. Sure, glad it helped. Mmmm, multiple verses. I see this starting to get hard... but first thing to do is test if a dash occurs in the verse if so, then seperate the verse into a lower and... Another great way to get ascii key values... use the VB constants i.e. vbKeyPageDown, vbKeyPageUp, vbKeyF1 etc etc etc no extiminator... the guy asked his girlfriend to marry him, she said no (or something similar / worse)... then he had this ring.. walking along broken heart it reminded him of her, so he "ditched"... Eh?? a recursive set of what? define what you mean by a "recursive infinite" set, an non-recursive subset and recursive set an infinite recursive subset. I love C++. Think that kind of power is just great. But that said, I am loving C# too. The time to market is just awesome. There are people writing VB.NET in commercial environment. But they are... what are you using on the back end? Are the pics embedded? are they in a DB? What causes the change? what is association between pic and id? I "love a challenge" as much as the next person, but... that is not something I would try in VB. sounds more like a C++ or C# project. what do you mean pack a variant into a safe array? VBs arrays are safe arrays, so you can just put the variant in there and pass that as required. But... VBs variant is a COM variant, and so you... Well, at the moment you are using the .ActiveDocuemnt property to get the current doc. I would firstly store that in a variable. Secondly, get any other docs using the . Docuements property. What do you mean any ideas? Any ideas about what?
http://forums.codeguru.com/search.php?s=8caf6c66ace0682d0a59bbd0c819b95d&searchid=8139549
CC-MAIN-2015-48
refinedweb
594
88.02
It looks like you're new here. If you want to get involved, click one of these buttons! When running reduce reads, the algorithm will find regions of low variation in the genome and compress them together. To represent this compressed region, we use a synthetic read that carries all the information necessary to downstream tools to perform likelihood calculations over the reduced data. They are called Synthetic because they are not read by a sequencer, these reads are automatically generated by the GATK and can be extremely long. In a synthetic read, each base will represent the consensus base for that genomic location. Each base will have it's consensus quality score represented in the equivalent offset in the quality score string.. The mapping quality of a synthetic read is a value representative of the mapping qualities of all the reads that contributed to it. This is an average of the root mean square of the mapping quality of all reads that contributed to the bases of the synthetic read. It is represented in the mapping quality score field of the SAM format. where n is the number of reads and x_i is the mapping quality of each read. A synthetic read may come with up to two extra tags representing its original alignment information. Due to many filters in ReduceReads, reads are hard-clipped to the are of interest. These hard-clips are always represented in the cigar string with the H element and the length of the clipping in genomic coordinates. Sometimes hard clipping will make it impossible to retrieve what was the original alignment start / end of a read. In those cases, the read will contain extra tags with integer values representing their original alignment start or end. Here are the two integer tags: OP -- original alignment start OE -- original alignment end For all other reads, where this can still be obtained through the cigar string (i.e. using getAlignmentStart() or getUnclippedStart()), these tags are not created. the RR tag is a tag that holds the observed depth (after filters) of every base that contributed to a reduce read. That means all bases that passed the mapping and base quality filters, and had the same observation as the one in the reduced read. The RR tag carries an array of bytes and for increased compression, it works like this: the first number represents the depth of the first base in the reduced read. all subsequent numbers will represent the offset depth from the first base. Therefore, to calculate the depth of base "i" using the RR array, one must use : RR[0] + RR[i] but make sure i > 0. Here is the code we use to return the depth of the i'th base: return (i==0) ? firstCount : (byte) Math.min(firstCount + offsetCount, Byte.MAX_VALUE);() Mauricio Carneiro, PhD. Mauricio Carneiro, PhD. Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT... Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT Ok thanks for the quick help, much appreciated. I think I'll just take the time hit and not use RR for either.
http://gatkforums.broadinstitute.org/gatk/discussion/2058/reducereads-format-specifications
CC-MAIN-2016-50
refinedweb
528
63.09