2.8 Mitigation Strategies
Because errors in string manipulation have long been recognized as a leading source of buffer overflows in C and C++, a number of mitigation strategies have been devised. These include mitigation strategies designed to prevent buffer overflows from occurring and strategies designed to detect buffer overflows and securely recover without allowing the failure to be exploited.
Rather than completely relying on a given mitigation strategy, it is often advantageous to follow a defense-in-depth strategy of combining multiple strategies. A common approach is to consistently apply a secure approach to implementing strings (a prevention strategy), and back it up with one or more run-time detection and recovery schemes.
Prevention
Prevention strategies can be further categorized as static or dynamic based on how they allocate space.
Statically allocated buffers assume a fixed size, meaning that once the buffer has been filled it is impossible to add data. Examples include the standard C strncpy() and strncat() and OpenBSD's strlcpy() and strlcat(). Because the static approach discards excess data, there is always a chance that actual program data will be lost. Consequently, the resulting string must be fully validated [Wheeler 04].
Dynamically allocated buffers dynamically resize as additional memory is required. Dynamic approaches scale better and do not discard excess data. The major disadvantage is that if inputs are not limited, they can exhaust memory on a machine and consequently be used in denial-of-service attacks.
Input Validation. Buffer overflows are often the result of unbounded string or memory copies. Buffer overflows can be prevented by ensuring that input data does not exceed the size of the smallest buffer in which it is stored. Figure 2–29 shows an example of a simple function that performs input validation.
Any data that arrives at a program interface across a security boundary requires validation. Examples of such data include argv, environment, sockets, pipes, files, signals, shared memory, and devices.
1. int myfunc(const char *arg) {
2.   char buff[100];
3.   if (strlen(arg) >= sizeof(buff)) {
4.     abort();
5.   }
6. }
Figure 2–29. Input validation
Input validation works for all classes of buffer exploits but requires that a developer correctly identify and validate all of the external inputs that might result in buffer overflows. Because this process is error prone, it is usually prudent to combine this avoidance strategy with others (for example, replacing suspect functions).
fgets() and gets_s(). If there was ever a hard and fast rule in secure programming in C and C++, it is this: never use gets(). The gets() function has been used extensively in the examples of vulnerable programs in this chapter. Figure 2–30 shows how all three functions are used.
The fgets() function is defined in C99 and has similar behavior to gets(). The fgets() function accepts two additional arguments: the number of characters to read and an input stream. By specifying stdin as the stream, fgets() can be used to simulate the behavior of gets(), as shown in lines 6–10 of Figure 2–30. Unlike gets(), the fgets() function retains the newline character, meaning that the function cannot be used as a direct replacement for gets().
1. #define BUFFSIZE 8
2. int _tmain(int argc, _TCHAR* argv[]){
3.   char buff[BUFFSIZE]; // insecure use of gets()
4.   gets(buff);
5.   printf("gets: %s.\n", buff);
6.   if (fgets(buff, BUFFSIZE, stdin) == NULL) {
7.     printf("read error.\n");
8.     abort();
9.   }
10.   printf("fgets: %s.\n", buff);
11.   if (gets_s(buff, BUFFSIZE) == NULL) {
12.     printf("invalid input.\n");
13.     abort();
14.   }
15.   printf("gets_s: %s.\n", buff);
16.   return 0;
17. }
Figure 2–30. Use of gets() versus fgets() versus gets_s()
When using fgets() it is possible to read a partial line. It is possible, however, to determine when the user input is truncated because the input buffer will not contain a newline character.
The fgets() function reads at most one less than the number of characters specified from the stream into an array. No additional characters are read after a newline character or after end-of-file. A null character is written immediately after the last character read into the array. The C99 standard does not define how fgets() behaves if the number of characters to read is specified as zero or if the pointer to the character array to be written to is a NULL.
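A minimal sketch of this pattern (read_line is a hypothetical helper, not a standard function), combining fgets() with truncation detection and removal of the retained newline:

```cpp
#include <stdio.h>
#include <string.h>

/* Reads one line into buf. Strips the trailing newline.
 * Returns 1 for a complete line; 0 on truncation, end-of-file,
 * or a read error. */
int read_line(char *buf, size_t size, FILE *stream) {
  if (fgets(buf, (int)size, stream) == NULL) {
    return 0;                 /* EOF or read error */
  }
  char *nl = strchr(buf, '\n');
  if (nl == NULL) {
    return 0;                 /* no newline: input was truncated */
  }
  *nl = '\0';                 /* emulate gets(): drop the newline */
  return 1;
}
```

A caller that treats a 0 return as an error avoids silently processing a truncated line.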
The gets_s() function is defined by ISO/IEC TR 24731 [ISO/IEC 05] to provide a compatible version of gets() that is less prone to buffer overflow. This function is closer to a direct replacement for the gets() function than fgets() in that it only reads from the stream pointed to by stdin. The gets_s() function, however, accepts an additional argument of rsize_t that specifies the maximum number of characters to input. An error condition occurs if this argument is equal to zero or greater than RSIZE_MAX or if the pointer to the destination character array is null. If an error condition occurs, no input is performed and the character array is not modified. Otherwise, the function reads, at most, one less than the number of characters specified, and a null character is written immediately after the last character read into the array. Lines 11–15 of Figure 2–30 show how gets_s() can be used in a program.
The gets_s() function returns a pointer to the character array if successful. A NULL pointer is returned if the function arguments are invalid, an end-of-file is encountered and no characters have been read into the array, or if a read error occurs during the operation.
The gets_s() function only succeeds if it reads a complete line (that is, it reads a newline character). If a complete line cannot be read, the function returns NULL, sets the buffer to the null string, and clears the input stream to the next newline character.
The fgets() and gets_s() functions can still result in a buffer overflow if the specified number of characters to input exceeds the length of the destination buffer.
memcpy_s() and memmove_s(). The memcpy_s() and memmove_s() functions defined in ISO/IEC TR 24731 are similar to the corresponding less-secure memcpy() and memmove() functions but provide some additional safeguards. The secure versions of these functions add an additional argument that specifies the maximum size of the destination. The memcpy_s() and memmove_s() functions return zero if successful. A nonzero value is returned if either the source or destination pointer is NULL, if the specified number of characters to copy/move is greater than the maximum size of the destination buffer, or if the number of characters to copy/move or the maximum size of the destination buffer is greater than RSIZE_MAX.
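That validation order can be sketched portably for systems without TR 24731 (checked_memcpy and MY_RSIZE_MAX are illustrative names; the real memcpy_s() additionally zeroes the destination on some constraint violations, which this simplified version omits):

```cpp
#include <string.h>
#include <stdint.h>

#define MY_RSIZE_MAX (SIZE_MAX >> 1)  /* stand-in for TR 24731's RSIZE_MAX */

/* Returns 0 on success, nonzero on any constraint violation.
 * Unlike the real memcpy_s(), dest is left untouched on error. */
int checked_memcpy(void *dest, size_t destmax, const void *src, size_t n) {
  if (dest == NULL || src == NULL) return 1;
  if (destmax > MY_RSIZE_MAX || n > MY_RSIZE_MAX) return 1;
  if (n > destmax) return 1;          /* would overflow the destination */
  memcpy(dest, src, n);
  return 0;
}
```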
strcpy() and strcat().
strcpy_s() and strcat_s(). The strcpy_s() and strcat_s() functions are defined in ISO/IEC TR 24731 as close replacements for strcpy() and strcat(). These functions take an extra argument of type rsize_t that specifies the maximum length of the destination buffer.
The strcpy_s() function is similar to strcpy() when there are no constraint violations. If either the source or destination pointer is NULL, or if the maximum length of the destination buffer is equal to zero, greater than RSIZE_MAX, or less than or equal to the length of the source string, the destination string is set to the null string and the function returns a nonzero value.
The strcat_s() function appends the characters of the source string, up to and including the null character, to the end of the destination string. The initial character from the source string overwrites the null character at the end of the destination string.
The strcat_s() function returns zero on success. However, the destination string is set to the null string and a nonzero value is returned if either the source or destination pointers are NULL or if the maximum length of the destination buffer is equal to zero or greater than RSIZE_MAX. The strcat_s() function will also fail if the destination string is already full or if there is not enough room to fully append the source string.
The strcpy_s() and strcat_s() functions can still result in a buffer overflow if the maximum length of the destination buffer is incorrectly specified.
strncpy() and strncat(). The standard C library includes functions that are designed to prevent buffer overflows, particularly strncpy() and strncat(). These universally available functions take a static allocation approach and discard data that doesn't fit into the buffer.
The strncpy() library function performs a similar function to strcpy() but allows a maximum size to be specified:
strncpy(dest, source, dest_size - 1);
dest[dest_size - 1] = '\0';
The strcat() function concatenates a string to the end of a buffer. Like strcpy(), strcat() has a more secure version, strncat(). Functions like strncpy() and strncat() restrict the number of bytes written and are generally more secure, but they are not foolproof. The following is an actual code example resulting from a simplistic transformation of existing code:
strncpy(record, user, MAX_STRING_LEN - 1);
strncat(record, cpw, MAX_STRING_LEN - 1);
The problem is that the last argument to strncat() should not be the total buffer length; it should be the space remaining after the call to strncpy(). Both functions require that you specify the remaining space and not the total size of the buffer. Because the remaining space changes every time data is added or removed, programmers must track or constantly recompute the remaining space. These processes are error prone and can lead to vulnerabilities. The following call correctly calculates the remaining space when concatenating a string using strncat():
strncat(dest, source, dest_size - strlen(dest) - 1);
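Putting both calls together, the corrected transformation might look like this (make_record is a hypothetical wrapper around the pattern above; the buffer size is deliberately small so that truncation is visible):

```cpp
#include <string.h>

enum { MAX_STRING_LEN = 8 };  /* deliberately small to show truncation */

/* The third argument to strncat() is the space remaining in the
 * buffer, not its total size. */
void make_record(char record[], const char *user, const char *cpw) {
  strncpy(record, user, MAX_STRING_LEN - 1);
  record[MAX_STRING_LEN - 1] = '\0';  /* strncpy() may not null terminate */
  strncat(record, cpw, MAX_STRING_LEN - strlen(record) - 1);
}
```

The result is truncated rather than overflowed: with an 8-byte buffer, "alice" plus ":secret" yields "alice:s".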
Another problem with strncpy() and strncat() is that neither function provides a status code or reports when the resulting string is truncated. Both functions return a pointer to the destination buffer, requiring significant effort by the programmer to determine whether the resulting string was truncated.
The strncpy() function doesn't null terminate the destination string if the source string is at least as long as the destination. As a result, the destination string must be null terminated after calling strncpy().
There's also a performance problem with strncpy() in that it fills the entire destination buffer with null bytes after the source data has been exhausted. Although there is no good reason for this behavior, many programs now depend on it and as a result it is difficult to change.
strncpy_s() and strncat_s(). ISO/IEC TR 24731 specifies the strncpy_s() and strncat_s() functions as close replacements for strncpy() and strncat().
The strncpy_s() function copies not more than a specified number of successive characters (characters that follow a null character are not copied) from a source string to a destination character array. If no null character was copied, then the last character of the destination character array is set to a null character.
The strncpy_s() function returns zero to indicate success. If the input arguments are invalid, strncpy_s() returns a nonzero value and sets the destination string to the null string. Input validation fails if either the source or destination pointers are NULL or if the maximum size of the destination string is zero or greater than RSIZE_MAX. The input is also considered invalid when the specified number of characters to be copied exceeds RSIZE_MAX.
A strncpy_s() operation can actually succeed when the number of characters specified to be copied exceeds the maximum length of the destination string as long as the actual source string is shorter than the maximum length of the destination string. If the number of characters to copy is greater than or equal to the maximum size of the destination string and the source string is longer than the destination buffer, the operation will fail.
Users of these functions are less likely to introduce a security flaw because the size of the destination buffer and the maximum number of characters to copy must be specified. The strncpy_s() function also ensures null termination of the destination string. For example, the first call to strncpy_s() on line 5 of the sample program shown in Figure 2–31 assigns the value zero to r1 and the sequence hello\0 to dst1. The second call on line 6 assigns a non-zero value to r2 and the sequence \0 to dst2. The third call on line 7 assigns the value zero to r3 and the sequence good\0 to dst3. If strncpy() had been used instead of strncpy_s(), a buffer overflow would have occurred during the execution of line 6.
1. char src1[100] = "hello";
2. char src2[7] = {'g','o','o','d','b','y','e'};
3. char dst1[6], dst2[5], dst3[5];
4. int r1, r2, r3;
5. r1 = strncpy_s(dst1, 6, src1, 100);
6. r2 = strncpy_s(dst2, 5, src2, 7);
7. r3 = strncpy_s(dst3, 5, src2, 4);
Figure 2–31. Sample use of strncpy_s() function
The strncat_s() function appends not more than a specified number of successive characters (characters that follow a null character are not copied) from a source string to a destination character array. The initial character from the source string overwrites the null character at the end of the destination array. If no null character was copied from the source string, then a null character is written at the end of the appended string.
The strncat_s() function fails and returns a nonzero value if either the source or destination pointers are NULL, or if the maximum length of the destination buffer is equal to zero or greater than RSIZE_MAX. The function also fails when the destination string is already full or if there is not enough room to fully append the source string.
The strncpy_s() and strncat_s() functions are still capable of overflowing a buffer if the maximum length of the destination buffer and number of characters to copy are incorrectly specified.
strlen(). The strlen() function is not particularly flawed, but its operations can be subverted because of the weaknesses of the underlying string representation. The strlen() function accepts a pointer to a character array and returns the number of characters that precede the terminating null character. If the character array is not properly null-terminated, the strlen() function may return an erroneously large number that could result in a vulnerability.
One solution is to ensure that a string is null terminated before passing it to strlen() by inserting a null character in the last byte of the array. Another solution is to use the strnlen() function. In addition to a character pointer, the strnlen() function accepts a maximum size. If the string is longer than the maximum size specified, the maximum size is returned rather than the actual size of the string. The strnlen() function is available in GCC and in the beta release of Visual Studio 2005. ISO/IEC TR 24731 defines a strnlen_s() function that has similar behavior.
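For platforms that provide neither strnlen() nor strnlen_s(), the semantics are small enough to sketch directly (my_strnlen is an illustrative name):

```cpp
#include <stddef.h>

/* Never examines more than maxlen characters, so an unterminated
 * array cannot cause a read past the specified bound. */
size_t my_strnlen(const char *s, size_t maxlen) {
  size_t n = 0;
  while (n < maxlen && s[n] != '\0') {
    n++;
  }
  return n;
}
```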
Strsafe.h. Microsoft provides a set of safer string-handling functions for the C programming language called Strsafe.h. These functions are intended to replace their built-in C/C++ counterparts, as well as any legacy Microsoft-specific functions. Figure 2–32 shows an example program that performs a secure string copy on line 5 and a secure string concatenation on line 10.
It is also important to remember that the Strsafe functions, such as StringCchCopy() and StringCchCat(), do not have the same semantics as the strncpy_s() and strncat_s() functions discussed earlier in this chapter. When strncat_s() detects an error, it sets the destination string to a null string while StringCchCat() fills the destination with as much data as possible, and then null-terminates the string.
strlcpy() and strlcat(). The strlcpy() and strlcat() functions copy and concatenate strings in a less error-prone manner than the corresponding C99 functions. These functions' prototypes are as follows:

size_t strlcpy(char *dst, const char *src, size_t size);
size_t strlcat(char *dst, const char *src, size_t size);
1. #include <Strsafe.h>
2. int main(int argc, char *argv[]) {
3.   char MyString[128];
4.   HRESULT Res;
5.   Res = StringCbCopy(MyString, sizeof(MyString), "Program Name is ");
6.   if (Res != S_OK) {
7.     printf("StringCbCopy Failed: %s\n", MyString);
8.     exit(-1);
9.   }
10.   Res = StringCbCat(MyString, sizeof(MyString), argv[0]);
11.   if (Res != S_OK) {
12.     printf("StringCbCat Failed: %s\n", MyString);
13.     exit(-1);
14.   }
15.   printf("%s\n", MyString);
16.   return 0;
17. }
Figure 2–32. Microsoft Strsafe example
To help prevent writing outside the bounds of the array, the strlcpy() and strlcat() functions accept the full size of the destination string as a size parameter. In most cases, this value is easily computed at compile time using the sizeof() operator.
Both functions guarantee that the destination string is null terminated for all nonzero-length buffers. Both functions return the total length of the string they tried to create. To detect truncation, the programmer needs to verify that the return value is less than the size parameter. If the resulting string is truncated, the programmer now has the length needed to allocate a larger buffer and retry [Miller 99].
Unfortunately, strlcpy() and strlcat() are not universally available in the standard libraries of UNIX systems. Both functions are defined in string.h for many UNIX variants, including Solaris, but not for GNU/Linux. Because these are relatively small functions, however, you can easily include them in your own program's source whenever the underlying system doesn't provide them. It is still possible (however unlikely) that the incorrect use of these functions will result in a buffer overflow if the specified buffer size is longer than the actual buffer length.
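A sketch of such a drop-in definition, following the documented strlcpy() semantics (my_strlcpy is a stand-in name to avoid colliding with a system-provided version):

```cpp
#include <string.h>

/* Copies at most size - 1 bytes, always null terminates when
 * size > 0, and returns strlen(src) so that a return value
 * >= size signals truncation. */
size_t my_strlcpy(char *dst, const char *src, size_t size) {
  size_t srclen = strlen(src);
  if (size > 0) {
    size_t n = (srclen < size - 1) ? srclen : size - 1;
    memcpy(dst, src, n);
    dst[n] = '\0';
  }
  return srclen;
}
```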
C++ std::string. Section 2.2 described a common programming flaw using the C++ extraction operator operator>> to read input from cin into a character array. Although setting the field width eliminates the buffer overflow vulnerability, it does not address the issue of truncation. Also, unexpected program behavior could result when the maximum field width is reached and the remainder of characters in the input stream are consumed by the next call to the extraction operator.
C++ programmers have the option of using the standard std::string class defined in ISO/IEC 14882 [ISO/IEC 98]. The std::string class is the char instantiation of the std::basic_string template class, and it uses a dynamic approach to strings in that memory is allocated as required—meaning that in all cases, size() <= capacity(). The std::string class is convenient because the language supports the class directly. Also, many existing libraries already use this class, which simplifies integration.
Figure 2–33 shows another solution to extracting characters from cin into a string, using std::string instead of a character array. This program is simple, elegant, handles buffer overflows and string truncation, and behaves in a predictable fashion. What more could you possibly want?
The std::string class generally protects against buffer overflow, but there are still situations in which programming errors can lead to buffer overflows. Although many std::string operations throw an out_of_range exception when an operation references memory outside the bounds of the string, the subscript operator [] does not perform bounds checking [Viega 03].
1. #include <iostream>
2. #include <string>
3. using namespace std;
4. int main() {
5.   string str;
6.   cin >> str;
7.   cout << "str 1: " << str << endl;
8. }
Figure 2–33. Extracting characters from cin into an std::string object
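The bounds-checking behavior mentioned above can be made explicit with the at() member function (safe_read is an illustrative helper, not a standard API):

```cpp
#include <stdexcept>
#include <string>

/* at() performs the bounds checking that the subscript operator []
 * omits: out-of-range access throws std::out_of_range instead of
 * silently reading bad memory. */
bool safe_read(const std::string &s, std::string::size_type i, char &out) {
  try {
    out = s.at(i);   // throws std::out_of_range when i >= s.size()
    return true;
  } catch (const std::out_of_range &) {
    return false;
  }
}
```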
Another problem occurs when converting std::string objects to C-style strings. If you use string::c_str() to do the conversion, you get a properly null-terminated C-style string. However, if you use string::data(), which writes the string directly into an array (returning a pointer to the array), you get a buffer that is not null terminated. The only difference between c_str() and data() is that c_str() adds a trailing null byte.
Finally, many existing C++ programs and libraries have their own string classes. To use these libraries, you may have to use these string types or constantly convert back and forth. Such libraries are of varying quality when it comes to security. It is generally best to use the standard library (when possible) or to understand the semantics of the selected library. Generally speaking, libraries should be evaluated based on how easy or complex they are to use, the type of errors that can be made, how easy these errors are to make, and what the potential consequences may be.
SafeStr. The SafeStr library provides a portable, safer alternative to C-style strings. SafeStr strings are dynamically resized and store accounting information (for example, the actual and allocated length) in memory directly preceding the memory referenced by the string pointer. This is similar to the approach used by dynamic memory managers described in Chapter 4. SafeStr also tracks whether a string is trusted; Figure 2–34 shows how a function can abort when its argument is not marked as trusted.
Error handling in SafeStr is performed using XXL, a library that provides both exception handling and asset management for C and C++.
The sample program shown in Figure 2–35 uses the SafeStr library to allocate two strings and copies one string to the other. The use of XXL provides a convenient mechanism for error checking.
1. int safer_system(safestr_t cmd) {
2.   if (!safestr_istrusted(cmd)) {
3.     printf("Untrusted data in safer_system!\n");
4.     abort();
5.   }
6.   return system((char *)cmd);
7. }

Figure 2–34. Trusted and untrusted data in SafeStr

1. #include <stdio.h>
2. #include "safestr.h"
3. #include "xxl.h"
4. int main(int argc, char *argv[]) {
5.   safestr_t str1;
6.   safestr_t str2;
7.   XXL_TRY_BEGIN {
8.     str1 = safestr_alloc(12, 0);
9.     str2 = safestr_create(
10.       "hello, world\n", 0);
11.     safestr_copy(&str1, str2);
12.     safestr_printf(str1);
13.     safestr_printf(str2);
14.   }
15.   XXL_CATCH (SAFESTR_ERROR_OUT_OF_MEMORY) {
16.     /* handle exception */
17.   }
18.   XXL_EXCEPT {
19.     /* handle exception */
20.   }
21.   XXL_TRY_END;
22.   return 0;
23. }

Figure 2–35. "Hello World" using SafeStr and XXL

Vstr. The Vstr string library is optimized for input/output using readv()/writev() [Antill 04]. For example, you can readv() data to the end of the string and writev() data from the beginning of the string without allocating or moving memory. Vstr also works with data containing zero bytes.

Figure 2–36 shows a simple example of a program that uses Vstr to print out "Hello World." The library is initialized on line 8 of this example. The call to the vstr_dup_cstr_buf() function on line 10 creates a vstr from a C-style string literal. The string is then output by the call to the vstr_sc_write_fd() function on line 13, which writes the contents of the s1 vstr to STDOUT. Lines 17 and 18 of the example clean up the string and the library. Because Vstr stores a string internally as a list of nodes, performance does not degrade as strings get larger. Adding, substituting, or moving data anywhere in the string can be optimized to an O(1) algorithm.
1. #define VSTR_COMPILE_INCLUDE 1
2. #include <vstr.h>
3. #include <errno.h>
4. #include <err.h>
5. #include <unistd.h>
6. int main(void) {
7.   Vstr_base *s1 = NULL;
8.   if (!vstr_init())
9.     err(EXIT_FAILURE, "init");
10.   if (!(s1 = vstr_dup_cstr_buf(NULL, "HelloWorld\n")))
11.     err(EXIT_FAILURE, "Create string");
12.   while (s1->len)
13.     if (!vstr_sc_write_fd(s1, 1, s1->len, STDOUT_FILENO, NULL)) {
14.       if ((errno != EAGAIN) && (errno != EINTR))
15.         err(EXIT_FAILURE, "write");
16.     }
17.   vstr_free_base(s1);
18.   vstr_exit();
19.   exit(EXIT_SUCCESS);
20. }

Figure 2–36. "Hello World" using Vstr

String Streams

The GNU library allows you to define streams that do not correspond to open files. One such type of stream takes input from or writes output to a string. These streams are used by the GNU library to implement the sprintf() and sscanf() functions. You can also create a string stream explicitly, using the fmemopen() and open_memstream() functions. These functions allow you to perform I/O to a string or memory buffer. Both functions are declared in stdio.h as follows:
FILE *fmemopen(void *buf, size_t size, const char *opentype);
FILE *open_memstream(char **ptr, size_t *sizeloc);
The fmemopen() function opens a stream that allows you to read from or write to a specified buffer. The open_memstream() function opens a stream for writing to a buffer. The buffer is allocated dynamically and grown as necessary. When the stream is closed with fclose() or flushed with fflush(), the locations ptr and sizeloc are updated to contain the pointer to the buffer and its size. These values remain valid only as long as no further output on the stream takes place. If you perform additional output, you must flush the stream again to store new values before you use them again. A null character is written at the end of the buffer. This null character is not included in the size value stored at sizeloc.

1. #include <stdio.h>
2. int main(void) {
3.   char *bp;
4.   size_t size;
5.   FILE *stream;
6.   stream = open_memstream(&bp, &size);
7.   fprintf(stream, "hello");
8.   fflush(stream);
9.   printf("buf = '%s', size = %d\n", bp, size);
10.   fprintf(stream, ", world");
11.   fclose(stream);
12.   printf("buf = '%s', size = %d\n", bp, size);
13.   return 0;
14. }

Figure 2–37. Using open_memstream() to write to memory.
Figure 2–37 shows a sample program that opens a stream to write to memory on line 6. The string "hello" is written to the stream on line 7, and the stream flushed on line 8. The call to fflush() updates buf and size so that the printf() function on line 9 outputs:
buf = 'hello', size = 5
After the string ", world" is written to the stream on line 10 the stream is closed (line 11). Closing the stream also updates buf and size so that the printf() function on line 12 outputs:
buf = 'hello, world', size = 12
The size is the cumulative (total) size of the buffer. The open_memstream() function provides a safer mechanism for writing to memory because it uses a dynamic approach that allocates memory as required. The downside is that the user must manage the resulting buffer's memory (which could lead to some of the common memory management errors described in Chapter 4).
Detection and Recovery
Detection and recovery mitigation strategies generally make changes to the runtime. Visual C++ provides native runtime checks to catch common runtime errors such as stack pointer corruption and overruns of local arrays.
Depending on how they are implemented, nonexecutable stacks can affect performance. Nonexecutable stacks can also break programs that execute code in the stack segment, including Linux signal delivery and GCC trampolines.

Stackgap. The Stackgap mitigation introduces a randomly sized gap of space when the stack is allocated, making it more difficult for an attacker to locate a return address value on the stack while costing no more than one page of real memory. This mitigation can be relatively easy to add to an operating system. Figure 2–38 shows the change to the Linux kernel required to implement Stackgap.
Although Stackgap may make it more difficult to exploit a vulnerability, it does not prevent exploits if an attacker can use relative, rather than absolute, values. Section 6.4 describes stack randomization in more detail and also demonstrates how it can be thwarted.
Figure 2–38. Linux kernel modification to support stackgap

Figure 2–39. SSP safe frame structure
A random number is generated for the guard value during application initialization, preventing discovery by a nonprivileged user.
SSP also provides a safer stack structure, as shown in Figure 2–39. A related mitigation, developed by Chiueh and colleagues, implements a return address verification scheme similar to that used in StackGuard but does not require recompilation of source code, which allows it to be used with existing binaries.
Hey, I'm working on a new piece where a random number generator is connected to sound files of a lady I know screaming the numbers. So when the generator hits 1 it plays her screaming 1 and so on. I'm having trouble because I want the code to be in "void draw", however the audio files play over each other when I place it into that section.
I imagine that I need a piece of code to tell the next audio file to wait until the end of the last before it plays but I am an absolute beginner at coding, so I have no idea how to write it.
I have attached what I have done so far for reference, there is only one audio file in so far as I felt it was best to start simply and then add more audio files as I go.
import ddf.minim.*;

Minim house;
AudioPlayer player1;
int christine;

void setup() {
  size(100, 100);
}

void draw() {
  house = new Minim(this);
  player1 = house.loadFile("u.wav");
  christine = (int) random(10);
  println("christine is " + christine);
  player1.play();
  player1.isPlaying();
}
Answers
.isPlaying() tells you when it's over
your goal is to wait that long and then make the new random number and start a new song
you can fill an array in setup() and use it in draw()
it's better to post code here as text (and not as an image)
there is a sticky post here that explain how it's done
Hey @Chrisir,
Thanks for that, I tried my best to edit the code so that it was posted properly, I don't think I got it quite right though.
As for the .isPlaying(), where should it tell me that it is over? I thought that it should return with a true or a false. So my idea was to use a conditional, if(), to say that if it was playing then it should pause the random number. However, as I soon found out, the if() only responds to integers. Sorry for being so dense, but do you reckon you could help a little more?
AudioPlayer[] players = new AudioPlayer[2];
int idx;
Minim m = new Minim(this);
players[0] = m.loadFile("u.wav");
players[1] = m.loadFile("v.wav");
players[0].play();
if (!players[idx].isPlaying()) players[idx = (idx + 1) % players.length].play();
That's incorrect!

if () expects 1 boolean as the result of the check expression: :-B

And we use the logical NOT operator ! in order to negate the evaluated expression: ~O)
Ok thanks, I think that should set me on my way! Just one last question, if I want the program to loop then should I place a loop() into the draw? Will it then run and restart once it has played all the files?
OK, I have tried what you said but it comes up with the error, "players cannot be resolved to a variable". Any ideas? Have attached the code.
no.........
draw() runs on and on anyway......
with gotoloops solution he is starting off with the first song again when the last song is over....
this is because he used % here:
this is the same as
Variable players needs to be "global". So both setup() & draw() can "see" it.
OK thank you. Now when I play the code, the sound files play after one another, but are not connected to a random number generator and do not loop. I need it so that they are both connected to the random number generator (as they were in my first attached code), play sequentially and will loop. How would this be achieved? I tried to add an if() to say that if the random number generator hit 1 then it should play players[1] but that did not work.
Problem is I don't understand what the values from the random() mean.
For example, what should happen if we got a 5? What's the value range?
Also how many times another value is picked? Or is it once only within setup()?
Should a 5 be returned, then the soundfile of her screaming 'five' would play. Would you mind explaining what you mean by value range? The values should be picked continuously from playing the code, which makes me feel like it should be in draw().
The sequence of values available to pick up. For example, from 0 to 9 we got 10 diff. values.
By that I suppose there are more than 2 sound files for your sketch, right?
That's what I'm having problem to understand:
Hello,
Basically I got stuck on the last part of this exercise (Step 13) which was to create a method for Energy Transfer between each of my Droids. I have 2 Droids (Codey and Nups) and I transferred 10% of “Nups” to “Codey”. I got the energy transferring part alright from what it seemed like to me but for some reason, it kept printing the message from
//toString METHOD public String toString(){ return "Hello! I'm the Droid " + name; }
with it which I didn’t want it to. I tried some ways that I knew of to fix it but it didn’t work. Please help, I’m still a beginner at coding and only started a week ago technically.
My Entire Code:
public class Droid{ String name; int batteryLevel; int health; public Droid(String droidName){ name = droidName; batteryLevel = 100; health = 100; } //toString METHOD public String toString(){ return "Hello! I'm the Droid " + name; } //performTask METHOD public void performTask(String task){ System.out.println(name + " has just " + task + "."); batteryLevel = batteryLevel - 10; health = health - 5; this.energyReport(); this.healthReport(); } //energyReport METHOD public void energyReport(){ System.out.println(name + " has " + batteryLevel + "% Battery left."); } //healthReport METHOD public void healthReport(){ System.out.println(name + " has " + health + "HP left!"); } //energyTransfer METHOD public void energyTransfer(Droid otherDroid, int amount){ this.batteryLevel = this.batteryLevel - amount; System.out.println(otherDroid + " has transferred " + amount + "% of its battery to " + name); System.out.println(otherDroid + " is currently at " + this.batteryLevel + "%"); System.out.println(name + " is now at " + (batteryLevel + amount) + "% battery."); } // Main METHOD public static void main(String[] args){ Droid one = new Droid("Codey"); Droid two = new Droid("Nups"); one.performTask("choked"); two.performTask("swallowed"); one.energyTransfer(two, 10); } } | https://discuss.codecademy.com/t/build-a-droid-step-13-energy-transfer/499172 | CC-MAIN-2020-29 | refinedweb | 277 | 59.4 |
Vue component for GitHub ribbons
vue-ribbon
vue-ribbon is a Vue Single File Component implementing GitHub ribbons. It comes with a set of properties making the component customizable for your needs.
Properties
If you need to customize the ribbon look and feel, you can use the following optional properties.
The color of the text is automatically detected by the component: for background color with a luma greater than 128 the text is white, otherwise black.
See how it looks on this demo!
Installation
You can install vue-ribbon using npm:
npm install --save vue-ribbon
Alternatively, you can import
vue-ribbon via
<script> tag in the browser directly, avoiding the NPM installation:
<script src=""></script> <script src=""></script>
Usage
Once installed, it is easy to use it.
Importing the component
First, you need to import
vue-ribbon in your files. You can do that in different ways. For example, it can be imported into a build process for use in full-fledged Vue applications:
import Ribbon from 'vue-ribbon'; export default { components: { Ribbon, }, // rest of the component }
Using the component
Once imported, you can use your component as follows:
<Ribbon/> | https://vuejsexamples.com/vue-component-for-github-ribbons/ | CC-MAIN-2019-13 | refinedweb | 191 | 51.07 |
Erik Auerswald
Skolelinux.de Wiki:
Ubuntu Launchpad:
My homepage at the Unix-AG Kaiserslautern:
Sourceforge:
Alioth:
Ohloh:
Links for Me
Alioth Usage: ?Alioth Alioth/FAQ Alioth/SSH Alioth/Svn
Firmware for Debian installation: Firmware, DebianInstaller/NetbootFirmware, TAR, netinst ISO
Automated installation and configuration: DebianInstaller/Preseed, Debian Installer Appendix B, partman-auto-recipe.txt, some further HOWTOs, with RAID and LVM, debugging hints
Debian Security Bug Tracker: (search for security bugs by CVE, package name, or Debian bug number)
see also: Security Info, Securing Debian Manual, Debian security FAQ
Debian Linux Kernel Handbook Chapter 9 - Reporting and handling bugs
Debian based LAN (servers, workstations, etc.): DebianLAN
Debian Packaging How-To, Debian Wiki page on Packaging
apt Problems
apt-get update often fails to download all files correctly. It then leaves incomplete files in the /var/lib/apt/lists/partial/ directory. Those files will not be refreshed, and apt-get update will exit with a download error if it encounters a partial list in this directory. This may stop updates from working, because the system will never learn about new versions added to the accidentally blocked lists.
Using apt-get clean to remove temporary and/or cached files does not help. The only way to fix this is to manually delete the incomplete lists:
rm -v /var/lib/apt/lists/partial/*
If you wondered, this bug has been reported. Many times. And some more. They seem to be drowned by the immense list of open bugs in apt.
The error message looks as follows:
W: Failed to fetch MD5Sum mismatch E: Some index files failed to download, they have been ignored, or old ones used instead.
This can result in the error message
WARNING: The following packages cannot be authenticated!
I have seen this on many versions of Debian and Ubuntu.
NVIDIA Graphics Card
Back when I bought my current PC, one intended use was playing Doom 3. At that time, an NVIDIA graphics card was the only sensible choice for gaming under GNU/Linux, and I bought a GeForce 6600 GT. The proprietary driver was the only choice with Open GL support. I used to install it manually from the NVIDIA installer to use the latest version.
Later on I did not play that often and changed from using Fedora to Debian/Sid. The non-free NVIDIA driver quite often broke with the many kernel and X updates, so I changed to the free nv driver, which worked fine for 2D graphics. Later the nouveau project appeared, I am currently (October 2012) using this driver. It provides basic Open GL support, but is still not usable for gaming.
With the upcoming Steam for Linux I'll try the proprietary NVIDIA drivers again. Hopefully, DKMS will take care of recompiling the kernel module automatically. Let's see how well NVIDIA keeps up with X server development nowadays...
Update 2013-07-18: Until now DKMS worked fine, but the proprietary NVIDIA driver in Debian does not (yet) work with kernel 3.10. Switching back to nouveau.
Anyway, my next PC will have Intel graphics, because Intel provides free drivers with good quality, performance, and features. IMHO, the AMD GPU driver's quality is not up to par (neither free nor non-free) and NVIDIA does not provide free drivers or even documentation.
Installed the current proprietary drivers today (2012-10-26):
sudo aptitude install linux-headers-686-pae
sudo aptitude install nvidia-kernel-dkms
- Reboot after installation is suggested to remove nouveau module from kernel. This works, but don't reboot yet.
- A message is displayed that nvidia driver needs to be enabled manually in xorg.conf.
cat > xorg.conf <<EOF Section "Module" Load "glx" EndSection Section "Device" Identifier "Video Card" Driver "nvidia" Option "UseEDIDDPI" "false" Option "DPI" "96 x 96" EndSection EOF sudo mv -i xorg.conf /etc/X11/
- The two options regarding DPI are necessary for me because the monitor seems to return (very) wrong DPI info. I don't remember this problem from before switching from nvidia to nv (and then nouveau), though.
Rebooting now changes from nouveau to nvidia X driver.
The non-free NVIDIA driver still works and provides better performance than the free driver. It is more work to use the non-free driver than just keeping nouveau. The nice framebuffer console provided by nouveau is gone.
Removed proprietary drivers today (2013-07-18):
sudo mv -i /etc/x11/xorg.conf /etc/x11/xorg.conf.nvidia
sudo aptitude purge nvidia-alternative nvidia-driver nvidia-glx nvidia-kernel-common nvidia-kernel-dkms nvidia-settings nvidia-support nvidia-vdpau-driver:i386 libxvmcnvidia1 glx-alternative-nvidia libgl1-nvidia-glx:i386 xserver-xorg-video-nvidia
# save your work before rebooting sudo reboot
I am not using Steam on my Debian GNU/Linux box anyway.
Steam
With the open Steam beta under way, I wanted to try it out on my Debian/Unstable box. The Steam .deb package for Ubuntu 12.04 does not install cleanly on Debian/Sid, but solutions can be found:
Steam Community thread "Debian - CONCLUSIONS"
I used (modified versions of) the scripts debian_install.sh and debian_steam.sh
Steam for Linux Beta on 64 bit Debian Testing (Wheezy)
2013-01-28: added steam_latest.deb repackaging from Updated Procedures for Installing Steam for Linux Beta on Debian GNU/Linux (Testing/Wheezy)
I looked at the three scripts from the first link, decided to manually implement the steps from debian_install.sh and roll my own version of debian_steam.sh to start Steam.
Decide to install special dependencies and startscript in ${HOME}/games/Steam and ${HOME}/games/Steam/lib, respectively
Install libc6 version Steam depends on from Ubuntu repository
mkdir -pv "${HOME}/games/Steam/lib" cd /tmp wget dpkg -x libc6_2.15-0ubuntu10.2_i386.deb /tmp/libc/ mv /tmp/libc/lib/i386-linux-gnu/* "${HOME}/games/Steam/lib" rm -rfv /tmp/libc6_2.15-0ubuntu10.2_i386.deb /tmp/libc
Install jockey-common and python-xkit from Ubuntu repository
wget sudo dpkg -i jockey-common_0.9.7-0ubuntu7_all.deb python-xkit_0.4.2.3build1_all.deb rm jockey-common_0.9.7-0ubuntu7_all.deb python-xkit_0.4.2.3build1_all.deb
Repackage and Install steam_latest.deb
wget mkdir -v deb-package && cd deb-package ar -x ../steam_latest.deb rm debian-binary tar xvf data.tar.gz && rm data.tar.gz mkdir DEBIAN && cd DEBIAN tar xvf ../control.tar.gz && rm control.tar.gz sed -i 's/^Version:.*$/&-ea/;s/ multiarch-support[^,]*,//;s/ libc6[^,]*,//;s/ libpulse0[^,]*,//' control cd ../.. fakeroot dpkg-deb -b deb-package steam_latest-ea.deb rm -rfv deb-package sudo dpkg -i steam_latest-ea.deb rm -v steam_latest{,-ea}.deb
- Remove multiarch-support Dependency from Steam -- DOES NOT HELP
#sed -i /multiarch-support/d ${HOME}/.local/share/Steam/steamdeps.txt
- Create start script for Steam
cat > "${HOME}/games/Steam/steam.sh" <<EOF #! /usr/bin/env sh export LD_LIBRARY_PATH="/home/auerswald/games/Steam/lib:$LD_LIBRARY_PATH" export LC_ALL=C exec /usr/bin/steam "$@" EOF chmod -v +x "${HOME}/games/Steam/steam.sh"
If needed, install dependencies asked for during installation of steam_latest.deb. Everything needed was already installed on my box, except libjpeg-turbo, which is not strictly necessary.
- Run the start script
~/games/Steam/steam.sh
When asked for the root password to install some missing dependency, use [CTRL]+C and then [RETURN] to skip.
Play a game, e.g. Counter-Strike.
Problems with Counter-Strike
I really like Counter-Strike and just had to try the Linux version. With my non-Ubuntu box (Debian/Sid, no desktop environment, ALSA sound) there are a few problems:
No sound in Counter-Strike (possibly because I don't have PulseAudio installed, as I am using ALSA on my Debian box).
Widescreen resolutions above 1280x800 don't work for me in Counter-Strike. They result in a crash that is most easily fixed by removing hl.conf (the next start of Counter-Strike (or Half-Live) will recreate it with default settings):
rm -v ${HOME}/.local/share/Steam/SteamApps/common/Half-Life/hl.conf
Valve concentrates on one distribution and its default install, which I can understand. Nevertheless, I would like ALSA support, e.g. by using OpenAL. Other commercial games, e.g. from id Software and Epic Games, did work without PulseAudio. It might not even have existed back then.
My User Pages: Alioth Debian Launchpad Skolelinux.de Sourceforge Unix-AG | https://wiki.debian.org/ErikAuerswald | CC-MAIN-2019-18 | refinedweb | 1,382 | 57.87 |
Last Instructable we discussed a bit about what the linefollow.ino program is doing. I presented a Python script that will allow us to analyze the bitwise if statements to see how the value that the read_Optical() method returns gets converted to the values of 0, 1, 2 or 3.
Now we are going to put all of this together and see what is really happening on the robot. Sometimes when looking over a program that someone else developed, it is sort of hard to visualize what the program is doing. So one of the things I like to do is actually watch what the program is doing while it is executing. In our below example we are going to take a look at what data the sensors are sending to our lineFollow.ino program through the read_Optical() method.
Well there are several ways of doing this. Some development environments have sophisticated debuggers and hardware debuggers that allow you to watch the program execute on the microprocessor. But if you do not have these tools there is an easier way accomplish this. Most micro-controllers, including the Arduino, include built-in hardware that allows you to send communications to the outside world using a serial port. In fact that is how the Arduino Uno communicates with the Motor driver/sensor board on the Robot.
There is another micro-controller that is on the motor driver/sensor board that is responsible for controlling the motors and capturing the signals from the sensors on the robot. The linefollow.ino program running on the Arduino Uno uses the serial port to send commands to the motor driver/sensor board, to control the motors and requests the outputs from the sensors. In fact there are jumper pins that you must make sure are in place that connect the Serial port to the motor driver/sensor board. As the instructions indicate you must disconnect these jumpers when you want to upload a program to the Arduino. So in this case how can we listen in on the serial port from our own personal computer when the Arduino and motor driver/sensor board is using the serial port all of the time?
The Arduino library comes with some methods that provide a software based serial port. Instead of using the build in hardware based serial port, we can pick any available two pins on the Arduino Uno and use these pins to to act just like a hardware based serial port to also communicate to the outside world. In order for our lineFollow.ino program to send messages to our Personal Computer we need a special USB cable that has an adapter built in. This cable is called an FTDI USB cable: (just google for this cable, many online vendors sell these cables for around $15.00 – $20.00)
My Blog is located at
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson.
Step 1: Modify LineFollow.ino Program
Looking at the front of the robot, these are on the right lower set of header pins.
We are going to make pin 4 the receive pin and pin 5 the transmit pin:
Here is our code modifications:
#include <SoftwareSerial>
#include "MakeItRobotics.h" //include library
#define rxPin 4
#define txPin 5
MakeItRobotics line_following; //declare object
SoftwareSerial mySerial = SoftwareSerial(rxPin, txPin);
Place this code in the declaration/include section discussed in the last blog post:
We tell our lineFollow.ino program to include the SoftwareSerial.h header file. This gives us the resources we need for Serial Port methods. Then we define two variables rxPin and txPin to have values of 4 and 5 Then we create a new object called mySerial with the following statement:SoftwareSerial mySerial = SoftwareSerial(rxPin, txPin);
Lets move on to the setup() function. Add the following lines of code to the bottom of the setup() function (inside of the braces {})
pinMode(rxPin, INPUT);
pinMode(txPin, OUTPUT);
// set the data rate for the SoftwareSerial port
mySerial.begin(9600);
The pinMode() functions setup the rxPin as Input and txPin as output.We then setup the mySerial object to use 9600 baud as the speed that we want to communicate over the software serial port. Now we just have to add one line to our loop() function:
sensor_in=line_following.read_optical();
mySerial.println(sensor_in, HEX);
Right below the sensor_in=line_following.read_Optical(); line add the:mySerial.println(sensor_in, HEX); command
Compile and upload your program to your Robot.
Step 2: Wire Up the FTDI USB Cable
There one more physical connection we need to make.
Grab your new FTDI USB cable and focus on the black connector opposite the USB connector.You will see some female pins sockets and some colored wires behind these connectors. We only need to make two connections. Get yourself some male/male jumper wires, if you do not already have some.
Notice the Orange and Yellow wires on the FTDI USB cable. Plug a jumper wire into the orange and yellow sockets on the FTDI USB cable.
Now the tricky part. You have to switch the transmit and receive wires between the connector and the header pins on your motor driver/sensor board. That is the receive pin on the motor driver/sensor board goes to the transmit pin on the USB connector. And the transmit ping on the motor driver/sensor board goes to the receive pin on the USB connector. To make it easy, The orange wire goes to pin 4 and the yellow wire goes to pin 5.
Step 3: Write Down Sensor Data
I have placed my robot on a set of blocks to get the wheels up off of the ground so I can side some paper under the sensors.
Also notice I have photocopied a small section of the black line from the circle. We are going to use this line to measure what the sensors are reading in our tests.
Notice that the right wheel has been removed. The paper is resting on the wheel to get the line close to the sensors. Center the black line between the sensors as we want to get a reading of white on both sensors first.
When running these tests you do not need to turn on the batteries, so the wheels are not going to turn. But the sensors are going to function just fine.
Open your Arduino IDE if it is not opened already. Plug in just the usb cable that you use for programming your robot to the USB port on the Arduino. If you are running Windows,
Open your device manager utility, Control Panel->Systems And Security->Device Manager. Expand the Ports (COM & LPT) icon. Note which COM Port your Arduino is connected to. Write this port down. Now connect the FTDI USB cable to your personal computer.
Open the Device Manager and write down the COM Port of your FTDI USB cable. Open the Tools->Port menu option on your Arduino IDE and select the proper port for your FTDI USB cable.
On my computer is was COM port 3. Now Select the Tools->Serial Monitor Menu option and select a Baud rate of 9600 in the lower right hand corner. If you did everything correct you should see the output in the below image.
Now move the paper so the black line is under the right sensor, write down your measurements and then do the same for the left Sensor.
After taking all of your measurements you should have a total of 6 sensor readings:
Left and Right Measurements are taken looking at the front of the robot.
Center = FF and 100
Right = FF and 1FF
Left = 0 and 100
These number are in hexadecimal, Read my tutorial and the links that it points to. Next Instructble will be to take our sensor readings and run them through our ReadOptical.py program.
2 Discussions
5 years ago on Introduction
Nice, this is really helpful!
Reply 5 years ago on Introduction
Thanks, Yes this was one of my favorites to write. | https://www.instructables.com/id/Makeit-Robotics-Starter-Kit-Capturing-Sensor-Data/ | CC-MAIN-2019-39 | refinedweb | 1,352 | 63.19 |
MATLAB and Simulink resources for Arduino, LEGO, and Raspberry PiLearn more
Opportunities for recent engineering grads.Apply Today
The question is in the title. Is is about the C matrix library interface.
To explain in more detail, suppose that I create a 1 by 1 cell using mxCreateCellArray(), then create a numeric matrix using mxCreateNumericArray() and set it as the only element of the cell. Now will calling mxDestroyArray() on the cell destroy the numeric array as well, in one go? Or do I need to call it separately for the elements of the cell, then just the cell? I am hoping for the latter, as this is more reasonable for complex manipulations.
The documentation is ambiguous on this point.
Yes, mxDestroyArray() recursively de-allocates elements of structs and cells. Otherwise you could observe a memory leak.
[EDITED]
The documentation of mxDestroyArray explains clearly (e.g. in R2009a):
mxDestroyArray not only deallocates the memory occupied by the mxArray's characteristics fields [...], but also deallocates all the mxArray's associated data arrays, such as [...], fields of structure arrays, and cells of cell arrays.
And a small C-mex test function (call it test_mxDestroy.c and compile it):
#include "mex.h" void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) { mxArray *C; C = mxCreateCellMatrix(1, 1); mxSetCell(C, 0, mxCreateDoubleMatrix(1, 1000000, mxREAL)); mxDestroyArray(C); }
If the contents of the created cell is not freed implicitly, 8MB memory would be leaked in each call. Now inspect the operating systems memory manager while running:
for k = 1:1e6, test_mxDestroy; end
You will see, that the memory is not exhausted.
As James has explained already, your example does not crash accidentally only. When you try to use mat after mxDestroyArray(cell), you will encounter a crash soon. ATTENTION: Crashing the Matlab session can destroy data. So keep care, and even better keep a backup.
Thanks for the link Jan. Anyway, I have the answer now and sorry about the confusion ... I also posted a short answer and a link back here to StackOverflow.
@Jan: Technical side note ... for a normal return from a mex routine, shared data copies of the plhs[] variables are what are actually returned, not the plhs[] variables themselves. Then everything on the garbage collection list is destroyed, including the plhs[] variables which are in fact on the garbage collection list (except for persistent variables such as prhs[] or using mexMakeArrayPersistent). For an error return, the only difference is that shared data copies of the plhs[] variables are not made, only the garbage collection (including the plhs[] variables) takes place.
2 Comments
Direct link to this comment:
Note: cross posted on StackOverflow.
Direct link to this comment:
@Syabolcs: Thanks for mentioning the cross-posting. This is a good example for others. +1
You can omit the memset(), because the memory is initialized already. | http://www.mathworks.com/matlabcentral/answers/63726 | CC-MAIN-2014-15 | refinedweb | 474 | 57.37 |
SWF::NeedsRecompile - Tests if any SWF or FLA file dependencies have changed
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
use SWF::NeedsRecompile qw(check_files); foreach my $file (check_files(<*.swf>)) { print "SWF needs recompilation: $file\n"; }
This module parses .fla and .as files and determines dependencies recursively, via import and #include statements. It then compares the timestamps of all of the dependencies against the timestamp of the .swf file. If any dependency is newer than the .swf, that file needs to be recompiled.
This module only works in its entirety on Mac OS X, and for Flash MX 2004. Help wanted: extend it to Windows (add appropriate search paths at the top of the .pm file) and extend it to the Flash 8 author when that is available.
This module only reports whether or not the .swf is up to date. It would be useful to know whether it is out of date because of the .fla file or any .as files. In the latter case, the open source MTASC () application could perform the recompilation.
This module likely only works with ASCII file names. The heuristic used to parse the binary .fla files discards the upper Unicode byte of any file names.
If there are
import statements with wildcards in any .as files, then all files in the specified directory are considered dependencies, even if only a subset are actually used.
Direct access to class methods are not detected. So, if you Actionscript does something like
com.example.Foo.doSomething() then com/example/Foo.as is not detected as a dependency. The workaround is to add an import; in this example it would be
import com.example.Foo;
Examine a list of .swf and/or .fla files and return the file names of the ones that need to be recompiled.
Performance note: Information is cached across files, so it's faster to call this function once with a bunch of files than a bunch of times with one file each invocation.
Returns a list of Classpath directories specified globally in Flash.
Returns the file name of the Flash preferences XML file.
Returns the path where Flash stores all of its class prototypes.
Changes the verbosity of the whole module. Defaults to false. Set to a number higher than 1 to get very verbose output.
The
SWFCOMPILE_VERBOSITY environment variable sets this at module load time.
The default is
0 (silent), but we recommend setting verbosity to
1, which emits error messages. Setting to
2 also emits debugging messages.
Returns the current verbosity number.
This module tries to ignore dependencies specified inside comments like these:
/* #include "foo.as" */ // var inst = new Some.Class();
but for reasons I don't understand, searching for the latter style of comments inside binary
.fla files can cause a (seemingly) infinite loop. So, as a hack we DO NOT ignore
//... style comments in Actionscript that is embedded inside of
.fla files. This can lead to spurious errors. Perhaps this is a problem with Regexp::Common::comment or just that some
.fla files have too few line endings?
Flash stores source code and include paths inside of the
.fla binary as (I think) UCS2 strings. This code converts those strings to ASCII by simply stripping all of the
\0 characters. This is REALLY BAD, but it works fine for pure-ASCII path names.
This code works great on Mac OS X. The typical paths for the Flash configuration directory are provided for that platform.
This code will still work marginally under Windows, but for full support I need to know the path to the preferences file and the configuration directory. I need those locations for Macromedia classes and default include paths.
Module::Build::Flash uses this module.
Chris Dolan
This module was originally developed by me at Clotho Advanced Media Inc. Now I maintain it in my spare time. | http://search.cpan.org/dist/SWF-NeedsRecompile/lib/SWF/NeedsRecompile.pm | CC-MAIN-2015-40 | refinedweb | 653 | 68.97 |
Django Sessions – How to Create, Use and Delete Sessions
After learning about Django Cookies in the previous article, we will move on to understand Django sessions.
As we have seen in the previous tutorial, HTTP is a stateless protocol: every request is new to the server, and each one is treated as if the user were visiting the site for the first time. This poses some problems; for example, you cannot implement user login and authentication features. Cookies were introduced to solve these problems.
Now, let’s revise the concept of cookies in brief and check some of its drawbacks.
What are Cookies?
Cookies are small text files stored and maintained by the browser. They contain some information about the user, and every time a request is made to the same server, the cookies are sent along so that the server can detect that the user has visited the site before or is a logged-in user.

Cookies have their drawbacks too, and they often become a path for hackers and malicious websites to damage the target site.
Drawbacks of Cookies
Since cookies are stored locally, the browser gives the user control to accept or decline them. Many websites also show users a prompt about this.

Cookies are plain text files, and cookies that are not sent over HTTPS can easily be intercepted by attackers. It can therefore be dangerous for both the site and the user to store essential data in cookies and send it back and forth in plain text.
These are some of the more common problems that web developers were facing regarding cookies.
What are Sessions?
After observing these problems of cookies, the web-developers came with a new and more secure concept, Sessions.
The session is a semi-permanent and two-way communication between the server and the browser.
Let’s understand this technical definition in detail. Here, semi means that the session will exist until the user logs out or closes the browser. The two-way communication means that every time the browser/client makes a request, the server receives the request along with cookies containing specific parameters and a unique Session ID, which the server generates to identify the user. The Session ID doesn’t change during a particular session, but the website generates a new one every time a new session starts.

Generally, the important session cookies containing these Session IDs are deleted when the session ends. This has no effect, however, on cookies that have a fixed expiry time.
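The mechanism is simple enough to sketch in plain Python. The sketch below is illustrative only, not Django's actual implementation: an in-memory dict stands in for the server-side session store, and only the random, opaque Session ID ever reaches the browser.

```python
import secrets

# Server-side store mapping an opaque session ID to the user's data.
# A real server (e.g. Django) would keep this in a database.
SESSION_STORE = {}

def create_session(data):
    """Generate a random Session ID and store the data under it."""
    session_id = secrets.token_hex(16)  # only this ID is sent to the browser
    SESSION_STORE[session_id] = data
    return session_id

def get_session(session_id):
    """Look up the data for a Session ID sent back by the browser."""
    return SESSION_STORE.get(session_id, {})

sid = create_session({"name": "username"})
print(get_session(sid)["name"])   # username
print(get_session("bogus-id"))    # {} -- unknown IDs carry no data
```

Because the browser only holds the ID, the sensitive values never travel in the cookie as plain text; stealing the cookie yields only an opaque token, which the server can invalidate at any time.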
Making and generating sessions securely can be a hefty task, and now we will look at Django’s implementation of the same.
Django Sessions
Django considers the importance of sessions over the website and therefore provides you with middleware and inbuilt app which will help you generate these session IDs without much hassle.
django.contrib.sessions is an application which works on top of middleware.SessionMiddleware and is convenient to work with.

The middleware.SessionMiddleware is responsible for generating your unique Session IDs. You will also require the django.contrib.sessions application if you want to store your sessions in the database.
When we migrate the application, we can see the django_session table in the database.
The django.contrib.sessions application is present in the list of INSTALLED_APPS in the settings.py file.
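Session behavior, including where the data is stored and how long the session cookie lives, is controlled by a few settings in settings.py. The values below are Django's defaults, shown for reference; you only need to add them if you want different behavior:

```python
# settings.py -- session-related settings (values shown are Django's defaults)

# Backend that stores session data; the database backend uses the django_session table
SESSION_ENGINE = 'django.contrib.sessions.backends.db'

# Lifetime of the session cookie, in seconds (two weeks)
SESSION_COOKIE_AGE = 1209600

# If True, the session cookie is discarded when the user closes the browser
SESSION_EXPIRE_AT_BROWSER_CLOSE = False
```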
For working on sessions, you will need to check whether your browser supports cookies or not.
Well, of course, you can go to the settings and quickly check that, but let's make some view functions and URLs to understand the concepts in a better way. Ultimately, you will only have to go to the browser settings if something goes wrong.
Do you know? – How to create view for Django project
Checking Cookies on Browser
For adding view functions in our views.py file, enter:
Code:
from django.http import HttpResponse

def cookie_session(request):
    request.session.set_test_cookie()
    return HttpResponse("<h1>dataflair</h1>")

def cookie_delete(request):
    if request.session.test_cookie_worked():
        request.session.delete_test_cookie()
        response = HttpResponse("dataflair<br> cookie created")
    else:
        response = HttpResponse("Dataflair <br> Your browser does not accept cookies")
    return response
Now, we will understand the methods used in the above code:
set_test_cookie()
We use this method to create a test cookie when your browser requests or makes a request for this webpage.
test_cookie_worked()
This method returns True if the browser has accepted the test cookie, and False otherwise.
delete_test_cookie()
This method deletes the test cookie.
Now, add the urls to urlpatterns.
Code:
path('testcookie/', cookie_session),
path('deletecookie/', cookie_delete),
After running this code:
1. search for testcookie/
2. search for deletecookie/
If you get the "cookie created" message, then your browser accepts cookies; otherwise, the page will report that your browser does not accept cookies.

If you get the latter result, enable your browser to accept cookies; otherwise, you won't be able to implement this tutorial.
Creating & Accessing Django Sessions
Django allows you to easily create session variables and manipulate them accordingly.
The request object in Django has a session attribute, which creates, accesses and edits the session variables. This attribute acts like a dictionary, i.e., you can define the session names as keys and their values as values.
Step 1. We will start by editing our views.py file. Add this section of code.
Code:
from django.http import HttpResponse
from django.shortcuts import redirect

def create_session(request):
    request.session['name'] = 'username'
    request.session['password'] = 'password123'
    return HttpResponse("<h1>dataflair<br> the session is set</h1>")

def access_session(request):
    response = "<h1>Welcome to Sessions of dataflair</h1><br>"
    if request.session.get('name'):
        response += "Name : {0} <br>".format(request.session.get('name'))
    if request.session.get('password'):
        response += "Password : {0} <br>".format(request.session.get('password'))
        return HttpResponse(response)
    else:
        return redirect('create/')
Step 2. Add the urls in urlpatterns list.
Code:
path('create/', create_session), path('access', access_session),
Step 3. Now run this code:
1. search for access/
2. search for access/ again
In the database, you will see the session key along with its encoded session data.
Understanding the Code:
The code itself is basic Python, but the session attribute hides a fair amount of machinery behind it.
When you request the access/ URL without first visiting create/, you are redirected to create/ automatically if no session cookie was generated previously.
Now, when the create/ URL is sent, the SessionsMiddleware runs and generates a unique SessionID which is stored locally on the browser as a cookie for the user.
This cookie is then sent to the server with every request. The sessions application matches the SessionID against the one in the database, which also stores the values and variables we created in the create_session() view function.
The request object has a session attribute and when the server runs that code, the session middleware, and sessions application automatically works together.
request.session acts like a Python dictionary, so you can store values under meaningful key names.
In the access_session() function, we have used get() alongside request.session and passed the value of the key.
Thus, you can access the sessions easily.
There are many methods associated with the session attribute of the HttpRequest object.
Deleting Django Sessions
After completing the sessions work, you can delete them quite easily.
Just include this view function inside views.py file.
Code:
def delete_session(request):
    try:
        del request.session['name']
        del request.session['password']
    except KeyError:
        pass
    return HttpResponse("<h1>dataflair<br>Session Data cleared</h1>")
This code also follows pure python; we just used some exception handling.
To delete a session or any particular key of that session, we can use del.
del request.session['key_name']
Now, to run this code add this to the urlpatterns:
path('delete/', delete_session),
Don't worry if your cookie isn't deleted: this method only deletes the session data stored in the Django database, not the session ID or the cookie itself.
To delete the session data and its cookie completely, use the flush() method.
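To see the difference between the two operations, here is a plain-Python model — a stand-in dictionary, not Django's real session machinery:

```python
# A plain-Python model of Django's behaviour -- NOT Django itself: the
# dict below stands in for the session table, the string for the cookie.
session_store = {"abc123": {"name": "username", "password": "password123"}}
browser_cookie = "abc123"   # session ID held by the browser

def delete_keys(session_id, *keys):
    """Like `del request.session[key]`: removes data, keeps the session row."""
    for key in keys:
        session_store[session_id].pop(key, None)

def flush(session_id):
    """Like request.session.flush(): drops the whole row and the old ID."""
    session_store.pop(session_id, None)
    return None   # the client would be issued a fresh, empty session

delete_keys(browser_cookie, "name", "password")
assert session_store == {"abc123": {}}   # row survives, data gone
browser_cookie = flush(browser_cookie)
assert session_store == {} and browser_cookie is None
```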
Configuring Session Engine
There are several ways to configure the session engine, and it is an important decision: the storage option you choose affects speed and response time.
Django lets you choose between these options but doesn’t limit your choices. If you want another storage method, you can find it on djangopackages.org
1. By Database-backed Sessions
We have used this method until now, as we have django.contrib.sessions application inside our INSTALLED_APPS list in settings.py file.
This is the standard implementation and covers many security loopholes. It also provides lots of features that work directly with view functions and models in the Django framework.
2. By Cached Sessions
If you want to improve the performance of the website, you can use cache-based sessions. There are several practices which you can perform here, like setting up multiple caches.
For this purpose, Django provides you with more session engines:
django.contrib.sessions.backends.cache
The above engine stores session data directly in the cache.
django.contrib.sessions.backends.cached_db
This engine, on the other hand, is used when you want persistent sessions data with your server using write-through cache. It means that everything written to the cache will also be written to the database.
Both methods improve your performance drastically, but the first one is faster than the second one.
3. By File-based Sessions
When you want to store your sessions data in a file-based system, then you can use this engine that Django provides.
django.contrib.sessions.backends.file
You will also need to check whether your web server has permission to read/ write the directory in which you are keeping your sessions file.
4. By Cookie-based Sessions
We don't recommend this one, since sessions were invented precisely to avoid storing data in cookies, but developers still use it when it suits their needs.
django.contrib.sessions.backends.signed_cookies
This stores your session data with the help of Django's cryptographic signing tools and the secret key.
It poses all the problems of cookies, the major one being that session cookies are signed but not encrypted; therefore they can be read by the client.
Also, it has performance issues as the cookie size does affect the request size.
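For reference, a hypothetical settings.py fragment selecting an engine might look like this (the backend module paths are Django's own; the cache configuration and file path are assumptions):

```python
# Hypothetical settings.py fragment -- pick ONE engine.
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
    }
}

# File-based alternative (the web server must be able to read/write the path):
# SESSION_ENGINE = "django.contrib.sessions.backends.file"
# SESSION_FILE_PATH = "/tmp/django_sessions"

# Cookie-based alternative (signed, but readable by the client):
# SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
```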
Summary
Django provides some very intuitive ways to implement sessions and lets you work with them easily.
In this tutorial, we covered these topics:
- Drawbacks of cookies
- Concept of sessions
- How does Django implement sessions
- Using sessions in views
- Configuring sessions
Django has provided lots of options to make the web project as dynamic and rapidly developed as you want.
We will cover the topics of caching and other related concepts in future articles as they are different topics altogether.
Any suggestions or queries? We will be glad to hear from you in the comment section.
Consider a simple, but complete, XSL-FO instance hellofo.fo for an A4 page report shown in Example 2-8.
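A minimal A4 "hello world" instance consistent with the description below might look like this (a sketch, not the book's exact Example 2-8; element positions only approximate the cited line numbers):

```xml
<?xml version="1.0" encoding="utf-8"?>
<root xmlns="http://www.w3.org/1999/XSL/Format">
  <layout-master-set>
    <simple-page-master master-name="frame"
                        margin-top="1cm" margin-bottom="1cm"
                        margin-left="2cm" margin-right="2cm"
                        page-height="29.7cm" page-width="21cm">
      <region-body region-name="frame-body" margin="1cm"/>
    </simple-page-master>
  </layout-master-set>
  <page-sequence master-reference="frame">
    <title>Hello world example</title>
    <flow flow-name="frame-body">
      <block>Hello XSL-FO!</block>
    </flow>
  </page-sequence>
</root>
```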
Note that all examples in this book illustrate instances of the XSL-FO vocabulary. How an instance is created is not material to the semantics of the vocabulary. The instance could have been hand-authored in a simple text or XML editor, or created as the result of an XSLT transformation from another XML vocabulary, or generated from any other application.
We can see the declaration on line 2 of the default namespace being the XSL-FO namespace, so all un-prefixed element names refer to element types in the XSL-FO vocabulary. There are no prefixed element-type names used by any of the elements, thus the entire content is written in XSL-FO.
The document model for XSL-FO dictates that the page geometries must be summarized in layout-master-set on lines 4 through 13, followed by the content to be paginated in a sequence of pages in page-sequence on lines 14 through 19. This instance conforms to these requirements and conveys our formatting intent to the formatter. The formatter needs to know the geometry of the pages being created and the content belonging on those pages.
Think of the parallel where we learned that the document model for HTML requires the metadata in the head element and the displayable content in the body element. Both elements are required in the document model, the first to contain the mandatory title of the page and the second to contain the rendered information.
Once we have learned the vocabulary for HTML, we can create a page knowing where the required components belong in the document. The same is true for XSL-FO, in that we learn what information is required where and we express what we need in the constructs the formatter expects.
In this simple example the dimensions of A4 pages are given in the portrait orientation on line 8. Margins are specified on lines 6 and 7 to constrain the main body of the page within the page boundaries. That body region, described on lines 10 and 11, itself has margins to constrain its content, and is named so that it can be referenced from within a sequence of pages.
The sequence of pages specified on line 14 in this example refers to the only geometry available and specifies on line 16 that the flow of paginated content is targeted to the body region on each page. The sequence is also titled on line 15, which is used by rendering agents choosing to expose the title in some application-dependent fashion (e.g. in a window's title bar).
Consider two conforming XSL-FO processors that processed the simple hellofo.fo example as shown in Figure 2-7, one interactively through a GUI window interface, and the other producing a final-form representation of the page:
Antenna House XSL Formatter (an interactive XSL-FO rendering tool),
Adobe Acrobat (a Portable Document Format (PDF) display tool),
PDF created by RenderX (a batch XSL-FO rendering tool).
Note how the two renderings are not identical. If an XSL-FO instance is insufficient in describing the entire formatting intent, the rendering application may engage certain property values of its own choosing. Page fidelity is not guaranteed if the instance does not express the entire formatting intent. Even within the expressiveness of the XSL-FO semantics, there are some decisions still left up to the formatting tool.
This is not different from two web browsers with different user settings for the display font. A simple web page that does not use CSS stylesheets for font settings relies on the browser's tool options for the displayed font choice. The intent of the web page may be to render "a paragraph," but if two users have different defaults for the font choice, there is no fidelity in the web page between the two renditions if the formatting intent is absent.
Consider a page of content from some instructor-led training material that contains a mixture of a table, a list, a proportionally-spaced paragraph, and monospaced paragraphs shown in Figure 2-8.
Example 2-9 lists the first constructs in the flow that will produce the result shown in Figure 2-8.
Note how the attribute value specified on line 2 is an expression, not a hard value. There is an expression language in XSL-FO; this example shows a calculation involving the page width and twice the margins, less the width of the other two columns. A simpler alternative would be "100% - 2in", since percentages at this point are calculated relative to the width of the region and not the width of the page.
The horizontal rule below the title information needs to be block-oriented in that it needs to break the flow of information and be separate from the surrounding information. To achieve this effect with the inline-oriented leader construct, note how, on lines 16 and 17, the leader is placed inside of a block. Note also how the line height of the block is adjusted in order to get the desired spacing around the leader.
The block on lines 18 and 19 lays out a simple paragraph.
Lines 20 through 28 lay out a list construct, where the labels and bodies of list items are synchronized and laid out adjacent to each other in the flow of information. This side-by-side effect cannot be achieved with simple paragraphs; it could be achieved to some extent with borderless tables, but the use of list objects gives fine control over the layout nuances of a list construct.
The list block itself has properties on lines 20 and 21 governing all members of the list, including the provisional distance between the start edges of the list item label and the list item body, and the provisional label separation. These provisional values are useful constructs in XSL-FO in that they allow us to specify contingent behavior for the XSL-FO processor to accommodate the start indent of the list label while maintaining the distance between the end of the label and the start of the body.
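A hedged sketch of such a list construct (the property values are illustrative, not those of the book's Example 2-9):

```xml
<list-block provisional-distance-between-starts="1cm"
            provisional-label-separation="3mm">
  <list-item>
    <list-item-label end-indent="label-end()">
      <block>&#x2022;</block>
    </list-item-label>
    <list-item-body start-indent="body-start()">
      <block>The label at the left and this body are laid out side by side.</block>
    </list-item-body>
  </list-item>
</list-block>
```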
Note:
Remember one of the design goals of XML declared that "terseness is of minimal importance" (they probably could not have found a terser way of saying that)? Note that the attribute name specifying the first of these provisional property values is 35 characters long. It is not uncommon to need to use lengthy element and attribute names, and XSL-FO instances always seem to me to be so very verbose to read, though I admit they certainly are not ambiguous.
Note how, on lines 23 and 25, functions can be used in attribute values. XSL-FO defines a library of functions that can be invoked in the expression language. The label-end() and body-start() functions engage the appropriate use of one of the two provisional list construct properties based on the start indent of the item's label.
Line 29 begins the block containing the listing of markup. To ensure a verbatim rendering of edited text, line 30 specifies that all linefeeds in the block of content must be preserved, and white-space characters are not to be collapsed. A font size of half the inherited value is achieved through the use of the function "inherited-property-value(font-size)" in an expression, though there are two alternate ways of specifying the same relative value: "50%" and ".5em". Using any of these expressions would produce the same result.
The nesting of the hierarchy of the formatting objects is shown in Figure 2-9. | https://flylib.com/books/en/1.247.1.15/1/ | CC-MAIN-2019-13 | refinedweb | 1,285 | 57.3 |
Hello everyone, today I am going to share a complete project: a DS1307 Arduino based digital clock in Proteus ISIS. In this project, I have designed a digital clock using an Arduino UNO and a DS1307 RTC module. First of all, if you haven't done so already, you should install the Arduino Library for Proteus, which lets you easily simulate Arduino boards in Proteus. Along with the Arduino Library you will also need to install the DS1307 Library for Proteus, which I shared in my previous post, as we are going to use the DS1307 RTC module to design our digital clock.
So, now I hope that you have installed both these libraries successfully and are ready to design this DS1307 Arduino based Digital Clock. I have given the simulation and code for download below, but as I always advise, don't just download the files. Instead, design your own simulation and try to write your own code; that way you will learn more from it. So, let's get started with the DS1307 Arduino based Digital Clock in Proteus ISIS:
DS1307 Arduino based Digital Clock in Proteus
- You can download the complete Proteus Simulation along with Arduino Code by clicking the below button.
- You will also need DS1307 Library for Arduino, which is also available in this package.
- Now, let’s get started with designing of this DS1307 Arduino based Digital Clock.
- So, first of all, design a circuit in Proteus as shown in below figure:
- You can see in the above figure that I have used Arduino UNO along with RTC module, LCD and the four buttons.
- These four buttons will be used to change the year,date etc as mentioned on each of them.
- Now here’s the code for DS1307 Arduino based Digital Clock.
#include <LiquidCrystal.h>
#include <DS1307.h>
#include <Wire.h>

LiquidCrystal lcd(13, 12, 11, 10, 9, 8);
int clock[7];

void setup() {
  for (int i = 3; i < 8; i++) {
    pinMode(i, INPUT);
  }
  lcd.begin(20, 2);
  DS1307.begin();
  // year, month, day, weekday, hours, minutes, seconds
  DS1307.setDate(16, 4, 7, 0, 17, 50, 04);
}

void loop() {
  DS1307.getDate(clock);
  lcd.setCursor(0, 1);
  lcd.print("Time: ");
  Print(clock[4]);
  lcd.print(":");
  Print(clock[5]);
  lcd.print(":");
  Print(clock[6]);
  lcd.setCursor(0, 0);
  lcd.print("Date: ");
  Print(clock[1]);
  lcd.print("/");
  Print(clock[2]);
  lcd.print("/");
  lcd.print("20");
  Print(clock[0]);
  if (digitalRead(7)) {                 // minutes button
    clock[5]++;
    if (clock[5] > 59) clock[5] = 0;
    DS1307.setDate(clock[0], clock[1], clock[2], 0, clock[4], clock[5], clock[6]);
    while (digitalRead(7));             // wait for button release
  }
  if (digitalRead(6)) {                 // hours button
    clock[4]++;
    if (clock[4] > 23) clock[4] = 0;
    DS1307.setDate(clock[0], clock[1], clock[2], 0, clock[4], clock[5], clock[6]);
    while (digitalRead(6));
  }
  if (digitalRead(5)) {                 // day button
    clock[2]++;
    if (clock[2] > 31) clock[2] = 1;
    DS1307.setDate(clock[0], clock[1], clock[2], 0, clock[4], clock[5], clock[6]);
    while (digitalRead(5));
  }
  if (digitalRead(4)) {                 // month button
    clock[1]++;
    if (clock[1] > 12) clock[1] = 1;
    DS1307.setDate(clock[0], clock[1], clock[2], 0, clock[4], clock[5], clock[6]);
    while (digitalRead(4));
  }
  if (digitalRead(3)) {                 // year button
    clock[0]++;
    if (clock[0] > 99) clock[0] = 0;
    DS1307.setDate(clock[0], clock[1], clock[2], 0, clock[4], clock[5], clock[6]);
    while (digitalRead(3));
  }
  delay(100);
}

void Print(int number) {
  lcd.print(number / 10);
  lcd.print(number % 10);
}
- Now get your hex file from Arduino software and then upload it in your Proteus software.
- Now run your simulation and if everything goes fine, then it will look like something as shown in below figure:
- Now you can see today's date on the LCD, and the same is shown on the small DS1307 clock pop-up.
- The time will start ticking, and the buttons can be used to change the minutes, hours, etc.
- You will get a better demonstration of this project in the video below.
So, that's all for today. I hope this DS1307 Arduino based Digital Clock project will help you in some way. See you in the next post.
12 Comments
tnx Nasir..this is a v.cool work you’ve shown
pls, can I get documents (.rar file) for same project, buh dis time with 4digits 7segment display. adjustments for mins and hours only
thanks!
Yeah we can provide it to you. Please add me on Skype and we will discuss it in detail.
Thanks.
Hello,
I have a problem with clocks frequency, the shown Time is ticking slower than PC colock. How can i repair it?
the coded above is showing error “unable to compile for uno ”
plz help
and what does this statement do “‘while(digitalRead(7))'”
what is going on in this block of code and how
if(digitalRead(7)){
clock[5]++;
if(clock[5]>59) clock[5]=0;
DS1307.setDate(clock[0],clock[1],clock[2],0,clock[4],clock[5],clock[6]);
while(digitalRead(7));
}
plz don’t consider my previous query
Are you solved that error ,if you solved it send your code
can you send me the hex file because in laptop the program not compling
Hi,
You should have a look at How to get Hex File from Arduino software (). It’s quite easy.
Thanks.
Give DS1307 library for arduino
Download the project files. DS1307 Library for Arduino is included in it.
Code doesn’t compile;Your code is wrong
Hi,
It’s not that good technique to blame others, instead of trying a little hard. 😛
Please download the files, everything’s there and also watch the video as I have explained everything in it.
Thanks. | https://www.theengineeringprojects.com/2016/04/ds1307-arduino-based-digital-clock-proteus.html | CC-MAIN-2019-13 | refinedweb | 942 | 66.13 |
iSequenceOperation Struct Reference
A sequence operation. More...
#include <ivaria/sequence.h>
Detailed Description
A sequence operation.
This is effectively a callback to the application.
Main creators of instances implementing this interface:
- Application using the sequence manager.
Main users of this interface:
Definition at line 39 of file sequence.h.
Member Function Documentation
This routine is responsible for forcibly cleaning up all references to sequences.
It is called when the sequence manager is destructed.
Do the operation.
The dt parameter is the difference between the time the sequence manager is at now and the time this operation was supposed to happen. If this is 0 then the operation is called at the right time. If this is 1000 (for example) then the operation was called one second late. This latency can happen because the sequence manager only kicks in every frame and frame rate can be low.
The documentation for this struct was generated from the following file:
- ivaria/sequence.h
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4.1/structiSequenceOperation.html | CC-MAIN-2015-06 | refinedweb | 172 | 51.95 |
HI I was wondering if I could get some help here.
I have a java program that connects to an Access database. This is just a small simple program. The database table has a primary key and it is the first field with numbers in it going from 1 and up for each record. I would like to be able to goto the last record in the table and find out what the number is in the primary key. I used the last(); method and the program compiles fine but on runtime it gives me an error message -> Result set type is TYPE_FORWARD_ONLY 0 null
Can anyone help.
Here is my code:
import java.sql.*;

public class test {
    public static void main(String[] arguments) {
        String data = "jdbc:odbc:tester1";
        try {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            Connection conn = DriverManager.getConnection(data, "", "");
            Statement st = conn.createStatement();
            ResultSet rec = st.executeQuery("SELECT * FROM Table1");
            rec.last();
            System.out.println(rec.getInt(1));
            st.close();
        } catch (SQLException s) {
            System.out.println("SQL Error: " + s.toString() + " "
                + s.getErrorCode() + " " + s.getSQLState());
        } catch (Exception e) {
            System.out.println(" Error: " + e.toString() + e.getMessage());
        }
    }
}
The problem is that you don't have a "scrollable" cursor which you need for last. By default most cursors are TYPE_FORWARD_ONLY.
You can request a scrollable cursor by using a different method for creating your statement.
Statement statement = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,ResultSet.CONCUR_READ_ONLY);
ResultSet rs = statement.executeQuery(YOUR_SQL_GOES_HERE);
Consult the java.sql. API for more information on this.
If you want the highest value, then you could use something like:
"select max(colName) \"MAXPRIMARY\" from aTable"
if that works with MS Access....
eschew obfuscation
Originally Posted by sjalle
If you want the highest value, then you could use something like:
"select max(colName) \"MAXPRIMARY\" from aTable"
if that works with MS Access....
Oh silly me. I only read the original problem and not what the poster was actually trying to do.
Just to confirm. Access supports the MAX function.
So "SELECT MAX(primarykeycolumnname) FROM Table1" will work.
Porting from IRIX to Linux
This section reflects what we did in March of 2000. See the Afterward section for more current information.
A) Red Hat 6.1
The KDE Workstation installation option provides most of the packages you need.
B) Metrolink Motif
We experimented with Lesstif. While it works well for Performer and lets us compile our other tools, it didn't handle updating some of our scrolling windows correctly. The developer's console (a tool for tracking messages) has a scrolling window that should update with message traffic even without mouse interaction. The Lesstif version didn't, but the Motif version did.
C) Mesa Graphics Library
We used the Mesa Graphics Library as an OpenGL look alike. Make sure that you follow their directions for making the widget set after you install Mesa.
D) Viper V770 Graphics Card.
We used the Diamond Viper V770 graphics card and the hardware accelerated Mesa drivers and libraries. At that time (January 2000) the GeForce 256 card was supported under Linux, but there were no hardware accelerated OpenGL drivers for it. More recently, XFree86 version 4.0 is out and while hardware-accelerated OpenGL is available for the GeForce card, setting it up can be tricky.
E) XFree86 Drivers for Hardware Acceleration
We used the XFree86 SVGA server and the XFree86-rivaGL-3.3.3.1-49riva drivers in January. OpenGL Linux drivers for the GeForce card are available from nVidia (see Resources).
F) SGI's Performer Package
Downloaded and installed per their instructions (see Resources web pages).
G) ACRT Version 1.06 (our simulator)
See Specifies for details about architecture and operation.
SGIs generally offer a cushier environment than Linux. They're also working on porting more of IRIX's tools and features to Linux. For those accustomed to the SGI environment and new to Linux, here are some of the corresponding tools that we used when porting:
A) Xdiff -> Mgdiff
Do a Web search for this one. My recollection is that it requires Motif, and Lesstif won't work.
B) dbx -> gdb
Your basic command-line debugger.
C) cvd -> xxgdb
SGI Casevision debugger has a nice graphical interface to a whole suite of tools. While there is no Linux equivalent for Casevision, the xxgdb front end for gdb provides a similar debugging environment. There is also a tool called “DDD” (the data display debugger) available that we didn't use because we're already familiar with xxgdb.
D) gr_osview -> xosview
SGI's system monitoring and status tool, gr_osview, encompasses more than xosview, but xosview is similar in the ability to see processor and memory status, system load, etc.
E) editors, shells, etc.
Emacs, vi, edit, tcsh, bash and the like are all pretty much what you'd expect (I imagine there are some differences in the dark corners of the shells, but our system didn't use enough of them to run into the differences).
F) xwsh -> xterm
I can only say “equivalent” here with the proviso that the xwsh “-hold” option (which causes the window to persist even after the command running in it with “-e” has ended) is not available for xterm.
G) PCNFS -> SAMBA
The original system shared files over NFS (requiring an add-on package for NT). In this work we shared files between Linux and IRIX over NFS and used SAMBA to share with the NT systems.
H) Lint
The lint debugging tool is bundled as part of IRIX but is not bundled with Linux, though commercial versions of lint are available for Linux. There is also a package called LCLint that seemed like overkill for our work (porting legacy code). For our purposes, we used gcc and its options for better debugging and error detection:
gcc -ansi -Wall -Wstrict-prototypes -Wmissing-prototypes \
    -fsyntax-only -pedantic -O2
Larch C Lint is available for Linux and does have better debugging than this, if you want to go the extra mile (see Resources).
I) ICS' Builder Xcessory (BX PRO)
The C-code generated by the IRIX version 4 (and probably newer versions) of ICS's BX GUI builder compiles on Linux. Code generated by older versions needs to be tweaked. There is also a Linux version of BX available (see Resources).
J) Virtual Prototypes' VAPS
VAPS is a control panel building tool. There is no Linux version yet but VAPS does have an NT version. So my group ported IRIX VAPS panels to NT, but that is a separate story.
A) Scripts and Shell Environment
It looks like Linux (or the tcsh shell) resets the processes limits when you “rsh” to another host. This kept me from getting core files. To get around this, I added a few lines to my .tcshrc file:
limit coredumpsize 1000000   # kbytes
source ~/environment
setenv PATH ${PATH}:.
The environment file referenced here is a collection of environment variables for the simulator that is set at shell start up. [Due to the length of Listing 1 it could not be printed here, but is available for downloading at our FTP site:.]
Using the results of uname, we can transparently point environment variables at the architecture-specific directories for binaries. Scripts, Makefiles and programs can then use those environment variables without each having to check the architecture type themselves. When we added Linux into the mix, we had to add some additional variables:
* $ARSI_SERIAL_PORT_ONE, $ARSI_SERIAL_PORT_TWO
The names of the serial port devices differ between IRIX and Linux. IRIX /dev/ttyd[N] roughly corresponds to Linux /dev/ttyS[N-1],
* $ARSI_CTDB_FILE_SUFFIX — the suffix of a binary data file that differs between big-endian (SGI MIPS) and little-endian (Intel X86) systems.
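The uname-based dispatch described above can be sketched in Bourne shell; the values below are illustrative assumptions, not the project's real settings:

```shell
#!/bin/sh
# Pick architecture-specific settings once, so scripts and Makefiles
# can use the variables without re-checking the platform themselves.
case "`uname`" in
  IRIX*)
    ARSI_SERIAL_PORT_ONE=/dev/ttyd1
    ARSI_CTDB_FILE_SUFFIX=.big      # big-endian data files
    ;;
  Linux)
    ARSI_SERIAL_PORT_ONE=/dev/ttyS0
    ARSI_CTDB_FILE_SUFFIX=.little   # little-endian data files
    ;;
  *)
    echo "unsupported platform" >&2
    ;;
esac
export ARSI_SERIAL_PORT_ONE ARSI_CTDB_FILE_SUFFIX
```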
B) Makefiles
Makefiles are the part of the system that got beat up the most with this port. The IRIX Makefiles are moved from the directory with the source code to the IRIX64 subdirectory, and VPATH is added to point back up to them.
VPATH (“view path”) in a Makefile is like the -I directives that tell the compiler where to look for include files. VPATH tells make where to look for source files—in our case, up one level. Another consequence of moving the Makefiles is that you probably have to change directories before some of the regular shell commands in Makefile (cp, lint, etc.) that operate on the source files will work.
We recommend using SGI's smake for the IRIX Makefiles that need to use VPATH. We'd started using GNU Make hoping to integrate it with NT. However MS Studio uses Nmake instead, so that payoff never occurred. Smake understands VPATH and the common IRIX Makefile macros. The only anomaly we can report is one that occured with some of the clean options. If the make clean option is set up like this:
clean:
	-rm -f *.o $(TARGET) *.a *~*
and rm can't find anything to remove, smake will sometimes (incorrectly) exit with an error about “exit badly formed number”. We don't know why it does that, but adding an explicit “exit” statement fixes the problem:
clean:
	-rm -f *.o $(TARGET) *.a *~* ; exit 0

IRIX has a handy feature that lets you specify an alternate make program to run. The name of the program is put on the first line of the Makefile following a “#!” (much like the way script files name their interpreters). So making the first line of the IRIX Makefile:
#! smake

will cause the default make program to invoke smake to process the Makefile.
Dependencies are also tracked differently between IRIX Makefiles and GNU Makefiles. The IRIX Makefiles generate a single Make.depend file that lists all the dependencies for all the source files. In the version of GNU make that ships with Red Hat 6.1, dependencies are kept in separate files for each source file. For instance, if you have a file ground_motion.c, there will be a corresponding dependency file ground_motion.d. These can be automatically generated with a .d.c rule. (See the GNU Make documentation for details and how you can combine the .d extentions into a Make.depend file.)
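A %.d pattern rule along the lines of the GNU Make manual's sketch can generate those per-source files (the flags and sed expression may need adapting to your compiler):

```make
%.d: %.c
	@set -e; $(CC) -MM $(CPPFLAGS) $< \
	  | sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' > $@

# Pull every generated .d file into the Makefile:
include $(SRCS:.c=.d)
```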
Compiler options are also different between the IRIX and GNU compilers. Here's the changes we made:
For IRIX:    Use GNU:    Purpose:
---------    --------    --------
-fullwarn    -Wall       Extra warnings, error checks
-MDupdate    -MD         Update dependencies
-xansi       -ansi       Support ANSI C
Using the -g option with the compiler for debugging is serious now. The IRIX debugger was still able to give you information even if the -g option wasn't specified. For GNU, you really need to include the -g. On the Linux side, -D_BSD_SOURCE may be needed if you're doing something that uses BSD functions (like strncasecmp).
The permitted ordering of compiler options is different. IRIX seems to like the libraries to be last, gcc doesn't seem to care.
One of the handiest Makefile macros, $$(@F), is only partially available under GNU make. Make allows you to use $(@F) to extract the file name of a target. IRIX make and smake allow you to use $$(@F) on the dependency line of the Makefile. GNU make allows this only in the action clause, but it does allow you to use the pattern-matching macros to get the same effect (compare the IRIX and Linux Makefiles in Listings 2 and 3, which can be seen at).
There are some additional directories in Linux for libraries, and they include files that may need to be added to your Makefile:
X libs in /usr/X11R6/lib
X headers in /usr/X11R6/include
OpenGL in /usr/include/GL, /usr/X11R6/GL
Make sure that CFLAGS has -c if you have separate compile and link steps. Otherwise GNU defaults to trying to do it in one step. If you get “storage size not known” error when compiling, try removing the -ansi specifier from the command line.
You can use the same Makefiles for IRIX and Linux, use xmkmf to generate architecture-specific Makefiles in both IRIX and Linux.
Let's turn our attention from Makefiles to C source code. As a general rule, the SGI compilers are more tolerant than the GNU compilers. Expect your code to have to be closer to the standard to pass the GNU compilers.
The following are items we modified from the IRIX code when porting to Linux.
1. Bstring.h, bcopy and bzero are not POSIX. We replaced them with their POSIX counterparts:
bstring.h -> string.h
bcopy(a, b, nbytes) -> memcpy(b, a, nbytes)
bzero(a, nbytes) -> memset(a, 0, nbytes)
GNU does have these in string.h instead of bstring.h, so this is not strictly necessary (though if you don't, you'll need to conditionally include "bstring.h"). Note that IRIX keeps "select" in bstring.h, while Linux puts it in unistd.h.
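As a sketch of the replacements above, the helpers below (hypothetical names, not from the original code) show the one trap in this conversion: bcopy takes its source first, while memcpy takes its destination first.

```c
#include <string.h>

/* bcopy(src, dst, n) -> memcpy(dst, src, n): the argument order flips,
   which is the classic mistake when doing this port. */
static void port_bcopy(const void *src, void *dst, size_t n)
{
    memcpy(dst, src, n);   /* POSIX/ISO C replacement for bcopy */
}

/* bzero(p, n) -> memset(p, 0, n) */
static void port_bzero(void *p, size_t n)
{
    memset(p, 0, n);       /* zero n bytes starting at p */
}
```

In practice you would replace the calls in place rather than wrap them, but wrappers like these make a mechanical search-and-replace safer.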
2. Here are some things that generate warnings. Since the GNU compiler is more vocal than the IRIX one, we fixed these:
Main should return int.
Watch for uninitialized vars.
Parenthesize defensively.
Watch for format specs in printf/scanf whose args don't match the var types; scanf using the wrong type can get you in trouble.
sprintf(astring, "") -> astring[0] = 0;
2D array initializers need braces to be ANSI: int a[2][2] = { {1, 2}, {3, 4} };
3. There is no “recalloc” in POSIX:
Replace recalloc with realloc and use memset to zero out the additional memory.
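The recalloc replacement can be sketched as below. The helper name and the old_size parameter are assumptions: POSIX realloc does not remember the old allocation size, so the caller has to track it in order to zero only the newly grown tail.

```c
#include <stdlib.h>
#include <string.h>

/* Grow a buffer and zero the new bytes, approximating recalloc(p, n).
   old_size must be tracked by the caller. */
static void *port_recalloc(void *p, size_t old_size, size_t new_size)
{
    void *q = realloc(p, new_size);
    if (q != NULL && new_size > old_size)
        memset((char *)q + old_size, 0, new_size - old_size);  /* zero the tail */
    return q;
}
```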
gettimeofday is not POSIX, but SGI, Linux and NT all accept it, with a timezone as the second argument.
sginap (hundredths) -> usleep (hundredths * 10000).
The documentation for some other UNIX systems says they may modify timeval in the future; with Linux, the future is now.
Don't use the same timeval struct on successive calls to “select” without refreshing it.
Parseargs was an argument-parsing library last maintained by Brad Appleton in 1991. It supports multiple platforms and tries to figure out what's available on each system (sort of a primitive precursor of the GNU configure utility).
9. ulocks.h is commented out. Possibly another IrisGL holdover?
10. Code involving sproc (SGI's lightweight process model) is migrated to POSIX threads (pthreads):
Use -D_REENTRANT on Linux compile.
Add -lpthread to the link step.
Use XInitThreads() if threads are used in an X application.
Get the patches for gdb from the LinuxThreads page.
Be aware that the LinuxThreads library uses SIGUSR1 and SIGUSR2 for its own purposes. If your application uses these signals, you might have to look at some other mechanism. In the worst case, use the Linux clone function.
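A minimal sketch of the sproc-to-pthreads shape of this migration: where sproc() started a share-group process running a function, pthread_create() starts a thread running a start routine with a single void* argument. The worker here is a toy stand-in for the real shared-memory work, and joining replaces the wait() that sproc callers used.

```c
#include <pthread.h>

/* Toy start routine: doubles the integer it is handed. */
static void *worker(void *arg)
{
    *(int *)arg *= 2;
    return NULL;
}

/* Create a thread, let it run, and wait for it to finish. */
static int run_worker(int *value)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, value) != 0)
        return -1;
    return pthread_join(tid, NULL);  /* 0 on success */
}
```

Remember the build-side items above: compile with -D_REENTRANT and link with -lpthread.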
11. Serial port handling used for grips and touch screens migrates to the POSIX interface.
See man termios for overview.
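As a sketch of the termios setup such a device typically needs, the function below configures raw mode at 9600 baud, 8N1. The helper name, speed, and flags are assumptions for illustration — check your touch screen's documentation — and cfmakeraw is a BSD/glibc extension rather than strict POSIX. In real use you would open the port (e.g. with O_RDWR | O_NOCTTY) and apply the settings with tcsetattr.

```c
#include <string.h>
#include <termios.h>

/* Fill in a termios struct for raw 9600 8N1 serial I/O. */
static void serial_raw_9600(struct termios *t)
{
    memset(t, 0, sizeof *t);
    cfmakeraw(t);                       /* no echo, no line editing */
    cfsetispeed(t, B9600);              /* input speed */
    cfsetospeed(t, B9600);              /* output speed */
    t->c_cflag &= ~(tcflag_t)(PARENB | CSTOPB);  /* no parity, 1 stop bit */
    t->c_cflag |= CS8 | CLOCAL | CREAD;          /* 8 data bits, enable rx */
}
```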
There is no high-speed malloc lib on Linux (that I'm aware of), so lmalloc is dropped.
fabs is POSIX; fsqrtf is not. We suggest simply using sqrt. The same goes for fmodf.
Use -D_BSD_SOURCE to get at M_PI, etc.
15. fsin -> sin in Linux.
16. fcos -> cos in Linux.
18. fceil -> ceil and do the math in double.
18. Socket level differences in your header files:
ioctl is in unistd.h on IRIX; Linux has it in sys/ioctl.h.
SIOCGIFCONF in IRIX sys/sockio.h.
19. fcntl FNONBLK --> POSIX O_NONBLOCK for both IRIX and Linux.
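The FNONBLK-to-O_NONBLOCK change looks like this in code — fetch the current flags and OR in O_NONBLOCK, which works on both systems (the helper name is ours):

```c
#include <fcntl.h>
#include <unistd.h>

/* Put a descriptor into non-blocking mode using the POSIX flag. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);  /* read current status flags */
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}
```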
20. sigsend -> kill for both IRIX and Linux.
21. There is no sysconf shell command in Linux.
Our spatial database tries to obtain RAM size to know how much memory it can afford to use.
In Linux, use the contents of /proc/meminfo instead.
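A sketch of the /proc/meminfo approach: the parsing is split into a pure function (a hypothetical helper, not from the original program) so it is easy to test; in the real program you would fopen("/proc/meminfo") and feed each line through it until the MemTotal line is found.

```c
#include <stdio.h>
#include <string.h>

/* Parse a line of the form "MemTotal:  16323412 kB".
   Returns the size in kB, or -1 if this is not the MemTotal line. */
static long mem_total_kb(const char *line)
{
    long kb = -1;
    if (strncmp(line, "MemTotal:", 9) == 0)
        sscanf(line + 9, " %ld", &kb);
    return kb;
}
```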
22. Non-POSIX IRIX flock_t vs. Linux struct flock:
Use “struct flock” on Linux.
Autogrow is an optional part of the standard (IRIX implements it, but Linux does not).
Fill the file with zeros to the right size before mmap-ing.
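One way to do that sizing step, sketched below: ftruncate extends the file with zero bytes before mapping, standing in for IRIX's autogrow. The helper name is ours, and error handling is minimal for brevity.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create/size a file and map it read-write. Returns MAP_FAILED on error. */
static void *map_file_rw(const char *path, size_t size)
{
    void *p = MAP_FAILED;
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return MAP_FAILED;
    if (ftruncate(fd, (off_t)size) == 0)        /* extend with zeros */
        p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  /* the mapping stays valid after the fd is closed */
    return p;
}
```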
25. POSIX signal handling is different from the BSD signal handling we were using.
Move to POSIX; man sigaction for details.
26. IRIX oserror -> POSIX errno.
27. There is no sysmp call for Linux multiprocessing control.
28. There are no Performer arenas in 2.3 (Mongoose)
All of Performer runs in one process.
Arena pointers will be NULL.
pfMalloc and company allocate off the stack.
29. xtouchmouse (a touch-screen driver that turns touch-screen data into X mouse events) works, but can do no button presses on the KDE root window, just on regular windows.
Unmatched button presses and releases on the root window cause the window manager to hang. Tip o' the hat to John Mellby for solving that one.
We chose not to use the built-in touch-screen driver support so that we could interoperate with the IRIX systems at that level. Once, when we integrated Future Scout with FBCB2 running on an SCO PC with no compiler, we were able to send the SCO's display to a monitor and plug that monitor's touch-screen into an IRIX system. The touch-screen inputs to xtouchmouse generated X events that were sent to the SCO system allowing us to interact with it.
If you still have IrisGL code, try SGI's toogl program to help with the conversion. You might have to convert to Motif if you want to suppress the window borders.
In order to use glwMDrawingAreaWidgetClass (instead of glwDrawingAreaWidgetClass), you need to include a library that defines __GLX_MOTIF to make sure the Motif libraries are included, or just use glwDrawingAreaWidgetClass if no Motif functionality is required. The "M" indicates Motif, and the widget has the additional functionality of Motif's XmPrimitive widget class.
In XtPopup, you cannot use NULL for second param. Use XtGrabNone instead. This parameter specifies the way in which X events should be constrained.
Use the X Toolkit Intrinsics method for setting up a window instead of the X Lib method in order to create a borderless window. The borderless window needs to use resources to let the window manager know that it doesn't need to manage the border. The resources are not accessible with the X Lib method.
Remove IrisGL device.h.
Use -lMesaGLw instead of -lGLw for library in the Makefile. This library has the OpenGL draw area widgets.
Note that there seems to be a bug in the current (as of February 2000) alpha TNT2 drivers that keeps glColorMask from working. If you're working on this now, I'd suggest switching to the newer drivers for XFree86 version 4.
#ifdef __linux__
   Linux specific stuff
#else
   IRIX specific stuff
#endif /* __linux__ */

The bulk of the data our simulator dealt with wasn't a problem. Most of the configuration files are ASCII and need only minor tweaks (like environment variables) to be simultaneously usable on IRIX, Linux and NT. In the case of Mongoose, the loaders did the byte swapping for me, so it didn't matter if I was on a big-endian SGI or a little-endian Intel box—the same visual models worked.
There were a few cases where endianness became an issue. We use the Compact Terrain Database (CTDB) files produced by the US Army Topographic Engineering Center. The database files we used were binary. Fortunately they came in big and little-endian versions with different suffixes. We could tell the system which to use by storing that suffix in an environment variable that's evaluated at runtime. The S1000 libraries are not implemented in Linux, just because they are larger and more complex than time allowed. Our Spatial Database has a binary representation of the shapes of different vehicles, so that needed to be converted. Finally, we use the BG Systems Cereal boxes to read our crewstation controls (grips). The Cereal box reads analog voltages from the crewstation controls and transmits its data over a serial line to the host computer. Since the data was assembled byte-wise into floats a little bit of their library needed to be modified. BG Systems was very helpful in that effort.
Once we had everything compiled, there were some surprises at runtime. A runtime error from the GNU compiler helped us find a case where we were accessing memory after it had been freed. IRIX will sometimes let you access memory even after you have freed it (see the IRIX mallopt function for different settings of their memory allocator). There is a mallopt function in Linux, though the man pages don't list it. You'll have to use the GNU “info” pages for that. Another feature of the GNU C library useful for tracking memory errors is mcheck. It's also detailed in the info pages.
The second thing is that the serial port device files have different names between IRIX and Linux:
IRIX          Linux
----          -----
/dev/ttyd1    /dev/ttyS0    serial port one
/dev/ttyd2    /dev/ttyS1    serial port two
...           ...
The grip calibration programs decide based on architecture, and look for the value of $ARSI_GRIP_CALIBRATE_PORT as an optional override.
There was a case where we had different results from local variables in a function. We had a string next to an array of longs. The code contained an error where the string was overflowed. On the IRIX systems, the code worked fine; on the Linux systems, it did not. On further investigation, we found that the string was terminating against bytes in the array of longs. On the big-endian systems, those bytes happened to be zeros (making the system appear to work properly), while on the little-endian systems, they were nonzero, causing the string to appear to be corrupted. (Jim King gets the credit for finding that one.)
Finally, there is an error in the Flight loader that comes with Mongoose 2.3. MultiGen-Paradigm says it will be fixed in the next release. (A tip o' the hat to Karen Davidson for tracking that one down.)
where to get python coding tutorial for beginners link PDF
reading a tutorial on writing Python code Link to PDF file
asked 2010-09-18 00:58:20 -0500
updated 2010-09-21 05:01:44 -0500
Hello everybody!
I am new to the site and to java. I am in week 2 of my java class now and still not sure if I am enjoying it. Unfortunately I would say it is my weakest link in the IT field so I picked this class on my own. So far the minor problems have proven to be frustrating.
So I think I have the majority of this program correct and cannot figure out why I keep getting the error "cannot find symbol variable PayrollProgram1" in line 18. I have fixed the other errors that I was having so far but do not see where this error is coming into play. I have it spelled the same throughout my code and do not understand what else could be wrong with it. I am curious as to everybody's input and thank you in advance!
import java.util.Scanner; // load scanner

public class PayrollProgram1 // set public class
{
    public static void main( String[] args ) // main method begins
    {
        Scanner input = new Scanner( System.in ); // create scanner to get input

        int hoursWorked; // hours worked
        int hourlyPay; // hourly pay
        int weeklyPay; // weekly pay

        System.out.print( "Enter Employee's Name." ); // enter employee's name
        String empName = input.nextLine().
        PayrollProgram1.setEmpName ( empName );

        System.out.print( "Enter hours worked." ); // enter hours worked
        hoursWorked = input.nextInt();

        weeklyPay = hoursWorked * hourlyPay; // multiply hourlyPay by hoursWorked for weeklyPay

        System.out.printf( "Employee Name: %s, Weekly Pay is $%d%n", empName, weeklyPay ); // print final line
    }
}
A common requirement in web applications is displaying lists of data. Or tables with headers and scrolls. You have probably done it hundreds of times.
But what if you need to show thousands of rows at the same time?
And what if techniques like pagination or infinite scrolling are not an option (or maybe there are but you still have to show a lot of information)?
In this article, I’ll show you how to use react-virtualized to display a large amount of data efficiently.
First, you’ll see the problems with rendering a huge data set.
Then, you’ll learn how React Virtualized solves those problems and how to efficiently render the list of the first example using the List and Autosizer components.
You’ll also learn about two other helpful components. CellMeasurer, to dynamically measure the width and height of the rows, and ScrollSync, to synchronize scrolling between two or more virtualized components.
You can find the complete source code of the examples used here in this GitHub repository.
The problem
Let’s start by creating a React app:
npx create-react-app virtualization
This app is going to show a list of one thousand comments. Something like this:
The placeholder text will be generated with the library lorem-ipsum, so cd into your app directory and install it:
cd virtualization
npm install --save lorem-ipsum
Now in src/App.js, import loremIpsum:
import loremIpsum from 'lorem-ipsum';
And let’s create an array of one thousand elements in the following way:
const rowCount = 1000;

class App extends Component {
  constructor() {
    super();
    this.list = Array(rowCount).fill().map((val, idx) => {
      return {
        id: idx,
        name: 'John Doe',
        image: '',
        text: loremIpsum({
          count: 1,
          units: 'sentences',
          sentenceLowerBound: 4,
          sentenceUpperBound: 8
        })
      };
    });
  }
  //...
}
The above code will generate an array of one thousand objects with the properties:
- id
- name
- image
- And a sentence of between four and eight words
This way, the render() method can use the array like this:
render() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} />
        <h1 className="App-title">Welcome to React</h1>
      </header>
      <div className="list">
        {this.list.map(this.renderRow)}
      </div>
    </div>
  );
}
Using the method renderRow() to create the layout of each row:
renderRow(item) {
  return (
    <div key={item.id}>
      <div className="image">
        <img src={item.image} />
      </div>
      <div className="content">
        <div>{item.name}</div>
        <div>{item.text}</div>
      </div>
    </div>
  );
}
Now, if you add some CSS styles to src/App.css:
.list {
  padding: 10px;
}

.row {
  border-bottom: 1px solid #ebeced;
  text-align: left;
  margin: 5px 0;
  display: flex;
  align-items: center;
}

.image {
  margin-right: 10px;
}

.content {
  padding: 10px;
}
And run the app with npm start, you should see something like this:
You can inspect the page using the Elements panel of your browser’s developer tools.
It shouldn’t be a surprise to find one thousand
div nodes in the DOM:
So many elements in the DOM can cause two problems:
- Slow initial rendering
- Laggy scrolling
However, if you scroll through the list, you may not notice any lagging. I didn’t. After all, the app isn’t rendering something complex.
But if you’re using Chrome, follow these steps to do a quick test:
- Open the Developer tools panel.
- Press Command+Shift+P (Mac) or Control+Shift+P (Windows, Linux) to open the Command Menu.
- Start typing Rendering in the Command Menu and select Show Rendering.
- In the Rendering tab, enable FPS Meter.
- Scroll through the list one more time.
In my case, the frames went from 60 to around 38 frames per second:
That’s not good.
In less powerful devices or with more complex layouts, this could freeze the UI or even crash the browser.
So how can we display these one thousand rows in an efficient way?
One way is by using a library like react-virtualized, which uses a technique called virtual rendering.
How does react-virtualized work?
The main concept behind virtual rendering is rendering only what is visible.
There are one thousand comments in the app, but it only shows around ten at any moment (the ones that fit on the screen), until you scroll to show more.
So it makes sense to load only the elements that are visible and unload them when they are not by replacing them with new ones.
React-virtualized implements virtual rendering with a set of components that basically work in the following way:
- They calculate which items are visible inside the area where the list is displayed (the viewport).
- They use a container (div) with relative positioning to absolutely position the child elements inside of it by controlling its top, left, width and height style properties.
There are five main components:
- Grid. It renders tabular data along the vertical and horizontal axes.
- List. It renders a list of elements using a Grid component internally.
- Table. It renders a table with a fixed header and vertically scrollable body content. It also uses a Grid component internally.
- Masonry. It renders dynamically-sized, user-positioned cells with vertical scrolling support.
- Collection. It renders arbitrarily positioned and overlapping data.
These components extend from React.PureComponent, which means that when comparing objects, it only compares their references, to increase performance. You can read more about this here.
On the other hand, react-virtualized also includes some HOC components:
- ArrowKeyStepper. It decorates another component so it can respond to arrow-key events.
- AutoSizer. It automatically adjusts the width and height of another component.
- CellMeasurer. It automatically measures a cell’s contents by temporarily rendering it in a way that is not visible to the user.
- ColumnSizer. It calculates column-widths for Grid cells.
- InfiniteLoader. It manages the fetching of data as a user scrolls a List, Table, or Grid.
- MultiGrid. It decorates a Grid component to add fixed columns and/or rows.
- ScrollSync. It synchronizes scrolling between two or more components.
- WindowScroller. It enables a Table or List component to be scrolled based on the window’s scroll positions.
Now let’s see how to use the List component to virtualize the one thousand comments example.
Virtualizing a list
First, in src/App.js, import the List component from react-virtualized:
import { List } from "react-virtualized";
Now instead of rendering the list in this way:
<div className="list">
  {this.list.map(this.renderRow)}
</div>
Let’s use the
List component to render the list in a virtualized way:
const listHeight = 600;
const rowHeight = 50;
const rowWidth = 800;

//...

<div className="list">
  <List
    width={rowWidth}
    height={listHeight}
    rowHeight={rowHeight}
    rowRenderer={this.renderRow}
    rowCount={this.list.length}
  />
</div>
Notice two things.
First, the List component requires you to specify the width and height of the list. It also needs the height of the rows so it can calculate which rows are going to be visible.

The rowHeight property takes either a fixed row height or a function that returns the height of a row given its index.

Second, the component needs the number of rows (the list length) and a function to render each row. It doesn't take the list directly.

For this reason, the implementation of the renderRow method needs to change.

This method won't receive an object of the list as an argument anymore. Instead, the List component will pass it an object with the following properties:
- index. The index of the row.
- isScrolling. Indicates if the List is currently being scrolled.
- isVisible. Indicates if the row is visible on the list.
- key. A unique key for the row.
- parent. A reference to the parent List component.
- style. The style object to be applied to the row to position it.

Now the renderRow method will look like this:
renderRow({ index, key, style }) {
  return (
    <div key={key} style={style}>
      <div className="image">
        <img src={this.list[index].image} />
      </div>
      <div className="content">
        <div>{this.list[index].name}</div>
        <div>{this.list[index].text}</div>
      </div>
    </div>
  );
}
Note how the index property is used to access the element of the list that corresponds to the row that is being rendered.
If you run the app, you’ll see something like this:
In my case, eight and a half rows are visible.
If we look at the elements of the page in the developer tools tab, you’ll see that now the rows are placed inside two additional div elements:
The outer div element (the one with the CSS classes ReactVirtualized__Grid ReactVirtualized__List) has the width and height specified in the component (800px and 600px, respectively), has a relative position, and the value auto for overflow (to add scrollbars).

The inner div element (the one with the CSS class ReactVirtualized__Grid__innerScrollContainer) has a max-width of 800px but a height of 50000px, the result of multiplying the number of rows (1000) by the height of each row (50). It also has a relative position but a hidden value for overflow.

All the rows are children of this div element, and this time, there are not one thousand elements.
However, there are not eight or nine elements either; there are about ten more.

That's because the List component renders additional elements to reduce the chance of flickering due to fast scrolling.
The number of additional elements is controlled with the property overscanRowCount. For example, if I set 3 as the value of this property:
<List
  width={rowWidth}
  height={listHeight}
  rowHeight={rowHeight}
  rowRenderer={this.renderRow}
  rowCount={this.list.length}
  overscanRowCount={3}
/>
The number of elements I’ll see in the Elements tab will be around twelve.
Anyway, if you repeat the frame rate test, this time you’ll see a constant rate of 59/60 fps:
Also, take a look at how the elements and their top style are updated dynamically:

The downside is that you have to specify the width and height of the list as well as the height of the row.

Luckily, you can use the AutoSizer and CellMeasurer components to solve this.

Let's start with AutoSizer.
Autoresizing a virtualized list
Components like AutoSizer use a pattern named function as child components.
As the name implies, instead of passing a component as a child:
<AutoSizer>
  <List ... />
</AutoSizer>
You have to pass a function. In this case, one that receives the calculated width and height:
<AutoSizer>
  {({ width, height }) => { }}
</AutoSizer>
This way, the function will return the List component configured with the width and height:
<AutoSizer>
  {({ width, height }) => {
    return <List
      width={width}
      height={height}
      rowHeight={rowHeight}
      rowRenderer={this.renderRow}
      rowCount={this.list.length}
      overscanRowCount={3}
    />
  }}
</AutoSizer>
The AutoSizer component will fill all of the available space of its parent, so if you want to fill all the space after the header, in src/App.css, you can add the following line to the list class:

.list {
  ...
  height: calc(100vh - 210px);
}

The vh unit corresponds to the height of the viewport (the browser window size), so 100vh is equivalent to 100% of the height of the viewport. 210px are subtracted because of the size of the header (200px) and the padding that the list class adds (10px).
Import the component if you haven’t already:
import { List, AutoSizer } from "react-virtualized";
And when you run the app, you should see something like this:
If you resize the window, the list height should adjust automatically:
Calculating the height of a row automatically
The app generates a short sentence that fits in one line, but if you change the settings of the lorem-ipsum generator to something like this:
this.list = Array(rowCount).fill().map((val, idx) => {
  return {
    //...
    text: loremIpsum({
      count: 2,
      units: 'sentences',
      sentenceLowerBound: 10,
      sentenceUpperBound: 100
    })
  };
});
Everything becomes a mess:
That’s because the height of each cell has a fixed value of 50. If you want to have dynamic height, you have to use the
CellMeasurer component.
This component works in conjunction with CellMeasurerCache, which stores the measurements to avoid recalculate them all the time.
To use these components, first import them:
import { List, AutoSizer, CellMeasurer, CellMeasurerCache } from "react-virtualized";
Next, in the constructor, create an instance of CellMeasurerCache:
class App extends Component {
  constructor() {
    ...
    this.cache = new CellMeasurerCache({
      fixedWidth: true,
      defaultHeight: 100
    });
  }
  ...
}
Since the width of the rows doesn't need to be calculated, the fixedWidth property is set to true.

Unlike AutoSizer, CellMeasurer doesn't take a function as a child, but the component you want to measure, so modify the method renderRow to use it in this way:
renderRow({ index, key, style, parent }) {
  return (
    <CellMeasurer
      key={key}
      cache={this.cache}
      parent={parent}
      columnIndex={0}
      rowIndex={index}
    >
      <div style={style}>
        <div className="image">
          <img src={this.list[index].image} />
        </div>
        <div className="content">
          <div>{this.list[index].name}</div>
          <div>{this.list[index].text}</div>
        </div>
      </div>
    </CellMeasurer>
  );
}
Notice the following about CellMeasurer:

- This component is the one that is going to take the key to differentiate the elements.
- It takes the cache configured before.
- It takes the parent component (List) where it's going to be rendered, so you also need this parameter.

Finally, you only need to modify the List component so it uses the cache and gets its height from that cache:
<AutoSizer>
  {({ width, height }) => {
    return <List
      width={width}
      height={height}
      deferredMeasurementCache={this.cache}
      rowHeight={this.cache.rowHeight}
      rowRenderer={this.renderRow}
      rowCount={this.list.length}
      overscanRowCount={3}
    />
  }}
</AutoSizer>
Now, when you run the app, everything should look fine:
Syncing scrolling between two lists
Another useful component is ScrollSync.
For this example, you’ll need to return to the previous configuration that returns one short sentence:
text: loremIpsum({
  count: 1,
  units: 'sentences',
  sentenceLowerBound: 4,
  sentenceUpperBound: 8
})
The reason is that you cannot share a CellMeasurer cache between two components, so you cannot have dynamic heights for the two lists I’m going to show next like in the previous example. At least not in an easy way.
If you want to have dynamic heights for something similar to the example of this section, it’s better to use the MultiGrid component.
Moving on, import ScrollSync:
import { List, AutoSizer, ScrollSync } from "react-virtualized";
And in the render method, wrap the div element with the list class in a ScrollSync component like this:
<ScrollSync>
  {({ onScroll, scrollTop, scrollLeft }) => (
    <div className="list">
      <AutoSizer>
        {({ width, height }) => {
          return (
            <List
              width={width}
              height={height}
              rowHeight={rowHeight}
              onScroll={onScroll}
              rowRenderer={this.renderRow}
              rowCount={this.list.length}
              overscanRowCount={3}
            />
          )
        }}
      </AutoSizer>
    </div>
  )}
</ScrollSync>
ScrollSync also takes a function as a child to pass some parameters. Perhaps the ones that you’ll use most of the time are:
- onScroll. A function that will trigger updates to the scroll parameters to update the other components, so it should be passed to at least one of the child components.
- scrollTop. The current scroll-top offset, updated by the onScroll function.
- scrollLeft. The current scroll-left offset, updated by the onScroll function.

If you put a span element to display the scrollTop and scrollLeft parameters:
...
<div className="list">
  <span>{scrollTop} - {scrollLeft}</span>
  <AutoSizer>
    ...
  </AutoSizer>
</div>
And run the app, you should see how the scrollTop parameter is updated as you scroll the list:

As the list doesn't have a horizontal scroll, the scrollLeft parameter doesn't have a value.
Now, for this example, you’ll add another list that will show the ID of each comment and its scroll will be synchronized to the other list.
So let’s start by adding another
render function for this new list:
renderColumn({ index, key, style }) {
  return (
    <div key={key} style={style}>
      <div className="content">
        <div>{this.list[index].id}</div>
      </div>
    </div>
  );
}
Next, in the AutoSizer component, disable the width calculation:
<AutoSizer disableWidth>
  {({ height }) => {
    ...
  }}
</AutoSizer>
You don’t need it anymore because you’ll set a fixed width to both lists and use absolute position to place them next to each other.
Something like this:
<div className="list">
  <AutoSizer disableWidth>
    {({ height }) => {
      return (
        <div>
          <div style={{ position: 'absolute', top: 0, left: 0 }}>
            <List
              className="leftSide"
              width={50}
              height={height}
              rowHeight={rowHeight}
              scrollTop={scrollTop}
              rowRenderer={this.renderColumn}
              rowCount={this.list.length}
              overscanRowCount={3}
            />
          </div>
          <div style={{ position: 'absolute', top: 0, left: 50 }}>
            <List
              width={800}
              height={height}
              rowHeight={rowHeight}
              onScroll={onScroll}
              rowRenderer={this.renderRow}
              rowCount={this.list.length}
              overscanRowCount={3}
            />
          </div>
        </div>
      )
    }}
  </AutoSizer>
</div>
Notice that the scrollTop parameter is passed to the first list so its scroll can be controlled automatically, and the onScroll function is passed to the other list to update the scrollTop value.

The leftSide class of the first list just hides the scrollbars (because you won't be needing them):
.leftSide {
  overflow: hidden !important;
}
Finally, if you run the app and scroll the right-side list, you’ll see how the other list is also scrolled:
Conclusion
This article, I hope, showed you how to use react-virtualized to render a large list in an efficient way. It only covered the basics, but with this foundation, you should be able to use other components like Grid and Collection.
Of course, there are other libraries built for the same purpose, but react-virtualized has a lot of functionality and it’s well maintained. Plus, there is a Gitter chat and a StackOverflow tag for asking questions.
Remember that you can find all the examples in this GitHub repository.
Created
06-15-2018
02:04 PM
Hi,
I wanted to write an HDFS file compare in Scala, in a functional style. To start with, I have written some code (googled, to handle file closing and catching exceptions) to read a single file. I have gotten as far as successfully reading the first line, but the code does not loop to read the next lines. Any help please? I do not want to use Spark.
import java.io.{BufferedReader, FileInputStream, InputStreamReader}
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FSDataInputStream, FileSystem, Path}
import scala.util.{Failure, Success, Try}

object DRCompareHDFSFiles {
  def main(args: Array[String]): Unit = {
    val hdfs = FileSystem.get(new Configuration())
    val path1 = new Path(args(0))
    val path2 = new Path(args(1))
    readHDFSFile(hdfs, path1, path2)
  }

  // Accept a parameter which implements a close method
  def using[A <: { def close(): Unit }, B](resource: A)(f: A => B): B =
    try {
      f(resource)
    } finally {
      resource.close()
    }

  def readHDFSFile(hdfs: FileSystem, path1: Path, path2: Path): Option[Stream[(String, String)]] = {
    Try(using(new BufferedReader(new InputStreamReader(hdfs.open(path1))))(readFileStream))
  } match {
    case Success(result) => {
      // I am expecting a collection of strings but get only a single string
    }
    case Failure(ex) => {
      println(s"Could not read file $path1, detail ${ex.getClass.getName}:${ex.getMessage}")
      None
    }
  }

  def readFileStream(br: BufferedReader) = {
    for {
      line <- Try(br.readLine())
      if (line != null)
    } yield line
  }
}
Created
06-17-2018
06:47 PM
I got it working. I used Streams. Thanks | https://community.cloudera.com/t5/Support-Questions/scala-Read-files-to-compare-the-lines/m-p/229958 | CC-MAIN-2021-21 | refinedweb | 240 | 50.73 |
I have a Window that contains a ListView. I use the same Window for several different searches so after I run a search I add a GridView and columns to the Listview in code, then I set the datasource to the ObservableCollection created by the search:
GridViewColumn FCPortColumn = new GridViewColumn();
FCPortColumn.Header = "FC Port";
FCPortColumn.DisplayMemberBinding = new Binding("FCPort");
FCPortColumn.Width = Double.NaN;
FCPortColumn = (GridViewColumn)XamlReader.Load(xmlReader);

GridView grid = new GridView();
grid.Columns.Add(FCPortColumn);
ResultsListView.Add(grid);
This code is supposed to replace the XAML on the column. I am adding a property called GridViewSort so I can sort the column. I found the GridViewSort here:
My problem is I get an exception when I run the XmlReader.Create() because it doesn't know what GridViewSort is. The default namespace for the GridviewColumn is pointing back to MS. Is there a way to add to the namespace? I can use the GridViewSort property on a pre-defined column in the XAML.cs file for the Window, but I wanted to be able to add the column in code-behind and modify the XAML there.
Is there another way to do what I want to do? Maybe I pre-define all the columns I might need and then remove what I don't need, or hide them?
// get current XAML
string savedxaml = XamlWriter.Save(FCPortColumn);

// create new string
int index2 = savedxaml.IndexOf("\" ");
string insert = savedxaml.Insert(index2 + "\" ".Length,
    String.Format(@" GridViewSort.PropertyName=""{0}"" ", "FCPort"));

// now replace the XAML on the column
StringReader stringReader = new StringReader(insert);
XmlReader xmlReader = XmlReader.Create(stringReader);
High quality rubber expansion joint/connection API/DIN/GB standard
US $10.0-10.0 / Set | Buy Now
100 Sets (Min. Order)
Supply stainless steel metal bellows expansion joint with tie rods
US $30-100 / Unit
1 Unit (Min. Order)
Modular Metallic Expansion Joints
US $1-100 / Meter
1 Meter (Min. Order)
Rubber expansion joint
20 Pieces (Min. Order)
universal expansion joint
US $3-4 / Set
10 Sets (Min. Order)
electrical expansion joints with size in dn50-dn1500
US $0.098-42 / Piece
100 Pieces (Min. Order)
cast iron/carbon steel pn10/pn16/class150 electrical expansion joints good quality
US $19.0-19.0 / Piece | Buy Now
5 Pieces (Min. Order)
EPDM NBR Electrical Rubber Bridge Expansion Joints
US $7.03-7.03 / Kilogram | Buy Now
1 Kilogram (Min. Order)
High Quality Electrical Screw Expansion Joints/bellow compensator
US $4-15 / Set
1 Set (Min. Order)
Electrical Reinforced bellows Expansion joint Floor Expansion joints
US $6.9-48.5 / Piece
100 Pieces (Min. Order)
Electrical expansion joints with size in DN50-DN1500
US $30-100 / Unit
1 Unit (Min. Order)
Electric Cable Steam Expansion Small Ball Joints
US $0.15-0.15 / Piece | Buy Now
100 Pieces (Min. Order)
Electrical Expansion Joints
US $1-1000 / Piece
1 Piece (Min. Order)
Easy to use dependable performance best service electrical expansion joints
US $1-100 / Piece
1 Piece (Min. Order)
customized electrical expansion joints
US $3.5-500.5 / Piece
1 Piece (Min. Order)
New products on china market electrical expansion joints from online shopping alibaba
US $10-80 / Piece
5 Pieces (Min. Order)
custom 2 inch electrical expansion joints
US $0.5-50 / Piece
50 Pieces (Min. Order)
China supplier electrical expansion joints
US $1-15 / Piece
50 Pieces (Min. Order)
import used auto parts electrical expansion joints
US $30-100 / Piece
10 Pieces (Min. Order)
Metal Stainless Steel Electrical Expansion Joints
US $1000-10000 / Set
10 Sets (Min. Order)
Free sample electrical expansion joints for lean pipe made in china
US $0.35-0.65 / Piece
1000 Pieces (Min. Order)
China Factory Wholesale DN32-DN1600 Electrical Expansion Joints
US $15-5234 / Piece
1 Piece (Min. Order)
Professional electrical expansion joints with size in dn50-dn1500 with low price
US $1-1000 / Piece
1 Piece (Min. Order)
rubber joints resistant/rubber expansion joint/electrical expansion joints
US $75-128 / Set
2 Sets (Min. Order)
Hot selling stainless steel expansion joints and metal bellows made in China
US $10-200 / Set
2 Sets (Min. Order)
Expansion joint /flexible joint /Corrugated Compensator
US $15-40 / Piece
1 Piece (Min. Order)
10% discount flanged rubber expansion joints
US $0.1-0.5 / Piece
1 Piece (Min. Order)
expansion bellows joint
US $3-80 / Piece
1 Piece (Min. Order)
PTFE teflon expansion joints
US $0.1-10 / Piece
1 Piece (Min. Order)
customized metal pipe joints
US $1-40 / Piece
1000 Pieces (Min. Order)
China factory electrical transition joints used in alumina electrolysis cells
US $9.9-10.0 / Kilogram
1 Kilogram (Min. Order)
Brass Expansion joints
100 Pieces (Min. Order)
expansion joint for water power plant
US $9000-90000 / Unit
1 Set (Min. Order)
Non metallic fabric expansion joints Widely used in industrial thermal ductwork
US $1-3000 / Piece
1 Piece (Min. Order)
Rotary Joint for continuous casting manufacturer made in China
US $5-1200 / Set
1 Set (Min. Order)
2011 hot sell expansion joints manufacturers
US $1-199.8 / Set
1 Set (Min. Order)
expansion joint
US $0.1-5.0 / Piece
5000 Pieces (Min. Order)
Lage Size Rubber Expansion Joints for Electrical Power Station
US $2.5-652.9 / Set
2 Sets (Min. Order)
stainless steel expansion joints
US $20-100 / Meter
10 Meters (Min. Order)
Expansion joint for bus bar
10 Pieces (Min. Order)
Okay, this is going to be a rather lengthy question, but here goes...
I'm trying to work with the cellRenderer from the JList class, and it doesn't appear to be working.
First let me explain my application. I'm programming a real estate project customization application which stores the details of buildings that are under development. Currently, I'm working on the interface for customizing the units and levels for a particular building. I've got a JScrollPane and a JList to be viewed in the JScrollPane, and in the JList I'd like to render a vertically aligned list of CustomizeLevelPanel objects which I programmed to inherit from JPanel. These CustomizeLevelPanel objects are essentially interfaces for customizing the details of each level in the building (such as level number, types of units, quantity of each type, etc). They don't seem to be rendering properly though.
Here's some (not-quite-verbatim) code snippets of my work:
...
CustomizeLevelPanel CLP[] = new CustomizeLevelPanel[number_of_levels];
for (int i = 0; i < CLP.length; i++) {
    CLP[i] = new CustomizeLevelPanel(i);
    CLP[i].setBounds(0, 0, CLP_WIDTH, CLP_HEIGHT); // assuming JList will set the X and Y values automatically
}
JList L = new JList(CLP);
L.setRenderer(new CustomizeLevelPanelRenderer());
CustomizeLevelsScrollPane.getViewport().setView(L);
...
public class CustomizeLevelPanelRenderer implements ListCellRenderer {
    public Component getListCellRendererComponent(JList list, Object value, int index,
                                                  boolean isSelected, boolean cellHasFocus) {
        return (CustomizeLevelPanel)value;
    }
}
But when I run my program, nothing shows up in the JScrollPane. Actually, that's not true: a thin horizontal line spanning the width of the JScrollPane with the default background color of the JPanel shows up. This line seems to get thicker the more levels I add. So it makes sense to assume that the CustomizeLevelPanels are being added but rendered only as a horizontal lines.
Does anyone see the problem?
I don't quite understand what you're trying to do with the ListCellRenderer. A JList typically aligns things properly by default; I'm not sure what you want it to do differently.
Sorry,
It's not the vertical alignment that's the problem. It's that the CustomizeLevelPanels are being rendered as horizontal lines instead of 2D panels. I have a full arrangement of JComboBoxes, JLabels, and a JButton all layed out inside the CustomizeLevelPanel, but none of it shows up.
Sounds like the panels have no vertical size. There's a method to set the minimum size of a panel, I believe.
Hi,
I am not sure, but I think setRenderer is applicable to JComboBox. For JList we can use setCellRenderer. So, you can go ahead and make the change and reply to me.
Maduraguy.
Okay, that setPreferredSize() method worked to render them in a JList. Now I've got a new problem which I should have anticipated from the beginning:
Using the CellRenderer works to display the CustomizeLevelsPanels but it seems as though they are only rendered as images. I don't seem to be able to click on any of their components like the comboboxes, spinners, and buttons. I'm assuming this is probably because the cellRenderer fulfills one function, and one function only: to RENDER them. Therefore, they only come through as pretty pictures, not real Swing components.
Anyway, I've abandoned this approach and I'm pursuing a different approach which I've been toying with on the side all the while: I'm placing the CustomizeLevelPanels onto a parent panel I call CustomizeLevelsSuperPanel. This super panel is then added to the viewport of the JScrollPane. The problem with this approach, however, is that the JScrollPane seems not to realize when the viewport contents (i.e. the CustomizeLevelsSuperPanel) are too big for the viewport. Therefore, it doesn't accommodate by adjusting the attributes of the JScrollBars. I can set the scroll bar policies to AS_NEEDED but this seems to have no effect. I can set them to ALWAYS but that only seems to create scroll bars whose knobs take up the entire range (i.e. they have no room to move) and thus cannot be scrolled. I even tried setting the min, max, value, and extent fields in the scrollbar's BoundedRangeModel to no avail.
There's got to be a way around this. How can I tell the JScrollPane how big the contents of the viewport is, and therefore how wide the scrolling range should be?
The Pillow library provides a handy tool for this sort of thing that is called ImageChops. If you don’t already have Pillow, you should go install it now so you can follow along with this short tutorial.
Comparing Two Images
The first thing we need to do is find two images that are slightly different. You can create your own by using burst mode on your camera and taking a bunch of photos of animals as they move, preferably while using a tripod. Or you can take an existing photo and just add some kind of overlay, such as text. I’m going to go with the latter method. Here is my original photo of Multnomah Falls in Oregon:
And here’s the modified version where I added some text to identify the location of the photo:
Now let’s use ImageChops to find the difference for us!
from PIL import Image, ImageChops


def compare_images(path_one, path_two, diff_save_location):
    """
    Compares two images and saves a diff image if there is a difference

    @param: path_one: The path to the first image
    @param: path_two: The path to the second image
    @param: diff_save_location: Where to save the diff image
    """
    image_one = Image.open(path_one)
    image_two = Image.open(path_two)

    diff = ImageChops.difference(image_one, image_two)

    if diff.getbbox():
        diff.save(diff_save_location)


if __name__ == '__main__':
    compare_images('/path/to/multnomah_falls.jpg',
                   '/path/to/multnomah_falls_text.jpg',
                   '/path/to/diff.jpg')
Here we have a simple function that we can use to find differences in images. All you need to do is pass it three paths! The first two paths are for the images that we want to compare. The last path is where to save the diff image, if we find a diff. For this example, we should definitely find a diff and we did. Here’s what I got when I ran this code:
Wrapping Up
The Pillow package has many amazing features for working with images. I enjoy photography so it’s fun to be able to take photos and then use my favorite programming language to help me do neat things with the results. You should give this a try as well and read the Pillow documentation to see what else you can do with this clever package!
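Conceptually, `ImageChops.difference` is just a per-pixel absolute difference, and `getbbox()` is truthy only when some pixel differs. A dependency-free sketch of that idea, using plain lists of grayscale pixel values in place of images (helper names are mine, for illustration only):

```python
# Sketch of what ImageChops.difference computes: the per-pixel absolute
# difference of two equal-sized images. Here "images" are flat lists of
# grayscale pixel values (0-255), purely for illustration.

def pixel_diff(image_one, image_two):
    """Return the per-pixel absolute difference of two pixel lists."""
    if len(image_one) != len(image_two):
        raise ValueError("images must be the same size")
    return [abs(a - b) for a, b in zip(image_one, image_two)]

def has_difference(diff):
    """Analogous to diff.getbbox(): truthy only if any pixel differs."""
    return any(diff)

base = [10, 20, 30, 40]
modified = [10, 20, 35, 40]  # one pixel changed, like adding text
diff = pixel_diff(base, modified)
print(diff)                   # [0, 0, 5, 0]
print(has_difference(diff))   # True
```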
Related Reading
- Python Pillow official website
- Pillow documentation
- An Intro to the Python Imaging Library / Pillow | http://www.blog.pythonlibrary.org/2016/10/11/how-to-create-a-diff-of-an-image-in-python/ | CC-MAIN-2020-10 | refinedweb | 380 | 62.78 |
The MVVM Pattern — The Practice
Let’s continue our journey to learn the MVVM pattern, applied to Universal Windows app development. As a sample project, we’re going to create a very simple Universal Windows app.
Let’s continue our journey to learn the MVVM pattern, applied to Universal Windows app development. After we’ve learned the basic concepts in the previous post, it’s time to start writing some code. As already mentioned in the previous post, we’re going to leverage MVVM Light as a toolkit to help us implement the pattern. Since it’s the most flexible and simple to use, it will be easier to understand and apply the basic concepts we’ve learned so far.
The Project
The goal of the MVVM pattern is to help developers organize their code in a better way. As such, the first step is to define the structure of the project so that, also from a logical point of view, it follows the principles of the pattern. Consequently, the first thing to do is usually to create a set of folders in which to place our classes and files, such as:
- Models — where to store all our basic entities
- ViewModels — where to store the classes that will connect the Model with the View
- Views — where to store the Views, which are the XAML pages in that case of a Universal Windows app
In a typical MVVM project, however, you will end up having more folders: one for the assets, one for the services, one for the helper classes, etc.
The second step is to install our chosen library in the project to help us implement the pattern. In our case, we chose MVVM Light, so we can leverage NuGet to install it. We will find two different versions:
- The complete package, which, in addition to the libraries, adds some documentation and a series of default classes (like a ViewModel, a ViewModelLocator, etc.)
- Just the libraries, which adds only the required DLLs.
For the moment, my suggestion is to use the second package. This way, we can do everything from scratch, giving us the chance to better learn the basic concepts. In the future, you’re free to use the first package if you want to save some time.
As a sample project, we’re going to create a very simple Universal Windows app that it’s likely you would be able to develop in no time using code behind: a Hello World app with a TextBox, where you can insert your name, and a Button that, when pressed, will display a hello message followed by your name.
Linking the View and ViewModel
The first step is to identify the three components of our application: Model, View, and ViewModel.
- The model isn’t necessary in this application, since we don’t need to manipulate any entity.
- The view will consist of a single page, which will show the user the TextBox where to insert his/her name and the Button to show the message.
- The ViewModel will be a class, which will handle the interaction in the view. It will retrieve the name filled in by the user and compose the hello message, displaying it on the page.
Let’s start to add the various components. Create a Views folder and inside it add the only page of our application. As a default behavior, when you create a new Universal Windows app, the template will create a default page for you, called MainPage.xaml. You can move it to the Views folder or delete it and create a new one in Solution Explorer by right clicking on the folder and choosing Add –> New item –> Blank page.
Now let’s create the ViewModel that will be connected to this page. As explained in the previous post, the ViewModel is just a simple class: create a ViewModels folder, right click on it, and choose Add –> New Item –> Class.
Now, we need to connect the View and the ViewModel by leveraging the DataContext property. In the previous post, in fact, we've learned that the ViewModel class is set as DataContext of the page; this way, all the controls in the XAML page will be able to access all the properties and commands declared in the ViewModel. There are various ways to achieve our goal, so let's take a look at them.
Declare the ViewModels as Resources
Let’s say that you have created a ViewModel called MainViewModel. You’ll be able to declare it as a global resource in the App.xaml file this way:
<Application
    x:Class="MVVMSample.MVVMLight.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:viewModel="using:MVVMSample.MVVMLight.ViewModel">

    <Application.Resources>
        <viewModel:MainViewModel x:Key="MainViewModel" />
    </Application.Resources>
</Application>
The first step is to declare, as an attribute of the Application class, the namespace which contains the ViewModel (in this sample, it’s MVVMSample.MVVMLight.ViewModel). Then, in the Resources collection of the Application class, we declare a new resource with the type being MainViewModel and associate it to a key with the same name. Now, we can use this key and the StaticResource keyword to connect the DataContext of the page to the resource, like in the following sample:
<Page
    x:Class="MVVMSample.MVVMLight.Views.MainPage"
    DataContext="{StaticResource MainViewModel}">

    <!-- your page content -->
</Page>
The ViewModelLocator Approach
Another frequently used approach is to leverage a class called ViewModelLocator, which has the responsibility of dispatching the ViewModels to the various pages. Instead of registering all the ViewModels as global resources of the application, like in the previous approach, we register just the ViewModelLocator. All the ViewModels will be exposed, as properties, by the locator, which will be leveraged by the DataContext property of the page.
This is a sample definition of a ViewModelLocator class:
public class ViewModelLocator
{
    public ViewModelLocator()
    {
    }

    public MainViewModel Main
    {
        get { return new MainViewModel(); }
    }
}
Or, as an alternative, you can simplify the code by leveraging one of the new C# 6.0 features:
public class ViewModelLocator
{
    public ViewModelLocator()
    {
    }

    public MainViewModel Main => new MainViewModel();
}
After you’ve added the ViewModelLocator class as global resource, you’ll be able to use it to connect the Main property to the DataContext property of the page, like in the following sample:
<Page
    x:Class="MVVMSample.MVVMLight.Views.MainPage"
    DataContext="{Binding Source={StaticResource ViewModelLocator}, Path=Main}">

    <!-- your page content -->
</Page>
The syntax is very similar to the one we’ve seen with the first approach; the main difference is that, since the ViewModelLocator class can expose multiple properties, we need to specify with the Path attribute which one we want to use.
The ViewModelLocator approach adds a new class to maintain, but it gives you the flexibility to handle the ViewModel's creation in case we need to pass some parameters to the class' constructor. In the next post, when we introduce the dependency injection concept, it will be easier for you to understand the advantages of using the ViewModelLocator approach.
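The locator is just an object, registered once, that hands out ViewModels on demand through one property per ViewModel. The dispatching idea in a minimal Python sketch (all names here are mine, purely for illustration):

```python
# The locator idea in miniature: one object, registered once, that hands
# out ViewModels on demand. Names are illustrative only.

class MainViewModel:
    pass

class ViewModelLocator:
    @property
    def main(self):
        # a new instance per request, mirroring the C# sample above;
        # caching a single instance would also be a valid design choice
        return MainViewModel()

locator = ViewModelLocator()      # in XAML this would be a global resource
page_data_context = locator.main  # what DataContext binds to
print(type(page_data_context).__name__)  # MainViewModel
```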
Let’s Set Up the ViewModel
No matter which approach we decided to use in the previous step, now we have a View (the page that will show the form to the user) connected to a ViewModel (the class that will handle the user’s interactions).
Let’s start to populate the ViewModel and define the properties that we need to reach our goal. In the previous post we’ve learned that ViewModels need to leverage the INotifyPropertyChanged interface; otherwise, every time we change the value of a property in the ViewModel, the View won’t be able to detect it and the user won’t see any change in the user interface.
To make it easier to implement this interface, MVVM Light offers a base class which we can leverage in our ViewModels using inheritance, like in the following sample:
public class MainViewModel : ViewModelBase { }
This class gives us access to a method called Set(), which we can use when we define our properties to dispatch the notifications to the user interface when the value changes.
private string _name;

public string Name
{
    get { return _name; }
    set { Set(ref _name, value); }
}
The code is very similar to the one we’ve seen in the previous post when we introduced the concept of the INotifyPropertyChanged interface. The only difference is that, thanks to the Set() method, we can achieve two goals at the same time: storing the value in the Name property and dispatching a notification to the binding channel that the value has changed.
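The "store and notify" idea behind Set() is not C#-specific. Here is a minimal Python sketch of the same mechanism, where subscribers are told which property changed only when the value actually changes (all class and method names are mine, not part of any real framework):

```python
# Minimal sketch of the "set and notify" idea behind MVVM Light's Set():
# store the new value and tell subscribers which property changed.
# All names here are illustrative.

class ObservableObject:
    def __init__(self):
        self._handlers = []

    def property_changed(self, handler):
        self._handlers.append(handler)

    def set_property(self, name, value):
        backing = "_" + name
        if getattr(self, backing, None) != value:
            setattr(self, backing, value)
            for handler in self._handlers:
                handler(name)  # like raising PropertyChanged

class MainViewModel(ObservableObject):
    @property
    def name(self):
        return getattr(self, "_name", "")

    @name.setter
    def name(self, value):
        self.set_property("name", value)

vm = MainViewModel()
changes = []
vm.property_changed(changes.append)
vm.name = "Matteo"
print(changes)  # ['name']
print(vm.name)  # Matteo
```

Note that setting the same value twice raises only one notification, which mirrors the usual binding optimization.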
Now that we have learned how to create properties, let’s create another one to store the message that we will display to the user after he has pressed the button.
private string _message;

public string Message
{
    get { return _message; }
    set { Set(ref _message, value); }
}
In the end, we need to handle the interaction with the user. When he presses the Button in the view, we have to display a hello message. From a logical point of view, this means:
- Retrieving the value of the Name property.
- Leveraging the string interpolation APIs to prepare the message (something like “Hello Matteo”).
- Assign the result to the Message property.
In the previous post we’ve learned that, in the MVVM pattern, you use commands to handle the user interactions in a ViewModel. As such, we’re going to use another one of the classes offered by MVVM Light, which is RelayCommand. Thanks to this class, instead of having to create a new class that implements the ICommand interface for each command, we can just declare a new one with the following code:
private RelayCommand _sayHello;

public RelayCommand SayHello
{
    get
    {
        if (_sayHello == null)
        {
            _sayHello = new RelayCommand(() =>
            {
                Message = $"Hello {Name}";
            });
        }
        return _sayHello;
    }
}
When you create a new RelayCommand object you have to pass, as a parameter, an Action, which defines the code that you want to execute when the command is invoked. The previous sample declares an Action using an anonymous method. It means that, instead of defining a new method in the class with a specific name, we define it inline in the property definition, without assigning a name.
When the command is invoked, we use the new C# 6.0 feature to perform string interpolation to get the value of the property Name and add the prefix “Hello”. The result is stored into the Message property.
Create the View
Now that the ViewModel is ready, we can move on and create the View. The previous step should have helped you to understand one of the biggest benefits of the MVVM pattern; we've been able to define the ViewModel to handle the logic and the user interaction without writing a single line of code in the XAML file. With the code behind approach, it would have been impossible; for example, if we wanted to retrieve the name filled in the TextBox control, we should have first added the control in the page and assigned it a name using the x:Name property, so that we would have been able to access it from code behind. Or, if we wanted to define the method to execute when the button is pressed, we would have needed to add the Button control in the page and subscribe to the Click event.
From a user interface point of view, the XAML we need to write for our application is more or less the same we would have created for a code behind app. The UI, in fact, doesn't have any connection with the logic, so the way we create the layout doesn't change when we use the MVVM pattern. The main difference is that, in an MVVM app, we will use binding a lot more, since it's the way we can connect the controls with the properties in the ViewModel. Here is what our View looks like:
<Page
    x:Class="MVVMSample.MVVMLight.Views.MainPage">

    <Grid>
        <StackPanel Margin="12, 30, 0, 0">
            <StackPanel Orientation="Horizontal" Margin="0, 0, 0, 30">
                <TextBox Text="{Binding Path=Name, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"
                         Width="300" Margin="0, 0, 20, 0" />
                <Button Command="{Binding Path=SayHello}" Content="Click me!" />
            </StackPanel>
            <TextBlock Text="{Binding Path=Message}" Style="{StaticResource HeaderTextBlockStyle}" />
        </StackPanel>
    </Grid>
</Page>
We have added three controls to the page:
- A TextBox, where the user can fill in his/her name. We have connected it to the Name property of the ViewModel. There are two features to highlight:
- We have set the Mode attribute to TwoWay. It's required because, for this scenario, it isn't enough that the control displays the updated value when the property in the ViewModel changes; we also need the opposite: when the user fills some text in the control, we need to store its value in the ViewModel.
- We have set the UpdateSourceTrigger attribute to PropertyChanged. This way, we make sure that, every time the text changes (which means every time the user adds or removes a char in the TextBox), the Name property will automatically update to store the changed value. Without this attribute, the value of the property would be updated only when the TextBox control loses the focus.
- A Button, that the user will click to see the hello message. We have connected it to the SayHello command in the ViewModel.
- A TextBlock, where the user will see the hello message. We have connected it to the Message property of the ViewModel.
If we have done everything properly and launch the app, we should see the application behaving as we described at the beginning of this post: after filling in your name and pressing the button, you will see the hello message.
Let’s Improve the User Interaction
Our application has a flaw: if the user presses the button without writing anything in the TextBox, he would see just the “Hello” prefix. We want to avoid this behavior by disabling the button if the TextBox control is empty. To achieve our goal we can leverage one of the features offered by the ICommand interface: we can define when a command should be enabled or not. For this scenario, the RelayCommand class allows a second parameter during the initialization, a function that returns a boolean value. Let’s see the sample:
private RelayCommand _sayHello;

public RelayCommand SayHello
{
    get
    {
        if (_sayHello == null)
        {
            _sayHello = new RelayCommand(() =>
            {
                Message = $"Hello {Name}";
            }, () => !string.IsNullOrEmpty(Name));
        }
        return _sayHello;
    }
}
We've changed the initialization of the SayHello command to add, as a second parameter, a function that returns a boolean value. Specifically, we check if the Name property is null or empty (in this case, we return false, otherwise true). Thanks to this change, now the Button control connected to this command will be automatically disabled (also from a visual point of view) if the Name property is empty. However, there's a catch: if we try the application as it is, we would notice that the Button will be disabled by default when the app starts. It's the expected behavior, since when the app is launched the TextBox control is empty by default. However, if we start writing something in the TextBox, the Button will continue to be disabled. The reason is that the ViewModel isn't able to automatically determine when the execution of the SayHello command needs to be evaluated again. We need to do it manually every time we do something that may change the value of the boolean function; in our case, it happens when we change the value of the Name property, so we need to change the property definition like in the following sample:
private string _name;

public string Name
{
    get { return _name; }
    set
    {
        Set(ref _name, value);
        SayHello.RaiseCanExecuteChanged();
    }
}
In addition to using the Set() method to store the property value and send the notification to the user interface, we invoke the RaiseCanExecuteChanged() method of the SayHello command. This way, the boolean condition will be evaluated again. If the Name property contains some text, then the command will be enabled; otherwise, if it should become empty again, the command will be disabled.
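The can-execute mechanics described above are easy to model outside of C# too. A minimal Python sketch of a RelayCommand-like object (all names are illustrative; this is not MVVM Light's actual API):

```python
# Sketch of the RelayCommand idea: an action plus an optional can_execute
# predicate, and observers the UI can register to re-query the predicate
# when raise_can_execute_changed() is called.

class RelayCommand:
    def __init__(self, execute, can_execute=lambda: True):
        self._execute = execute
        self._can_execute = can_execute
        self._observers = []  # UI elements watching the enabled state

    def can_execute_changed(self, observer):
        self._observers.append(observer)

    def raise_can_execute_changed(self):
        for observer in self._observers:
            observer(self.can_execute())  # e.g. button.enabled = value

    def can_execute(self):
        return self._can_execute()

    def execute(self):
        if self.can_execute():
            self._execute()

name = ""
messages = []
say_hello = RelayCommand(lambda: messages.append(f"Hello {name}"),
                         lambda: bool(name))

states = []
say_hello.can_execute_changed(states.append)

say_hello.execute()              # blocked: name is empty
name = "Matteo"
say_hello.raise_can_execute_changed()
say_hello.execute()
print(messages)  # ['Hello Matteo']
print(states)    # [True]
```

Just like in the C# sample, nothing re-evaluates the predicate automatically: the code mutating `name` must call `raise_can_execute_changed()` itself.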
In the Next Post
In this post, we finally started to apply the concepts we learned in the first post with a real project and began to write our first application based on the MVVM pattern. In the next posts, we’ll see some advanced scenarios, like dependency injection, messages, or how to handle secondary events. Meanwhile, you can play with the sample app we’ve created in this post, also published on my GitHub repository. Together with the sample created in this post, you will also find the same sample created using Caliburn Micro as MVVM framework instead of MVVM Light.
Happy coding!
Published at DZone with permission of Matteo Pagani, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Getting Started with Windows Phone Development

Published at Jul 22 2011, 04:56 PM by pavely | 0 comment(s)

I must admit I was reluctant to get into Windows Phone development too deeply because I had no actual device running the Windows Phone OS. An emulator, no matter how good, cannot replace the actual device experience, and for some applications, such as games, it is simply inadequate. Well, the excuses are over. I got a Windows Phone device (the Samsung Omnia 7) a few days ago. It's time to take WP7 development more seriously (but not too seriously, as it's fun...).

Instead of going with the traditional "hello world", I'll go for something a little more ambitious: a kind of guessing game, with the following rules:

- the program selects a 4 digit number, with all digits different from one another
- the player tries to guess the 4 digit number
- for each guess, the program responds with a set of full or empty circles. A full circle indicates a correct digit in the correct place (but it doesn't indicate which one). An empty circle indicates a correct digit that's out of place
- this goes on until the player figures it out or the time runs out

Let's get started.

Getting the Tools

The first thing to do is download the Windows Phone 7 SDK, in the Beta 2 version of "Mango" at the time of this writing. This installs project templates for Visual Studio, the WP7 emulator and some other useful libraries. You're naturally going to need some version of Visual Studio 2010 (the Express version works as well).

Creating a New Project

After installation, run VS 2010 and create a new project for a Silverlight Windows Phone application. There are various templates to get started, but they're pretty similar, each one adding something beyond the first template. For this project, I'll use the basic "Windows Phone Application" template, call it GuessNumber and click OK.
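The scoring rules above map to a small amount of logic. As a language-neutral illustration before diving into the implementation (Python here, with names of my own choosing; the blog's own C# version appears below as EvalGuess):

```python
# A standalone model of the scoring rules (names are mine, for
# illustration): count full circles (right digit, right place) and
# empty circles (right digit, wrong place). Digits are assumed unique,
# as in the game.

def eval_guess(secret, guess):
    assert len(secret) == len(guess)
    in_pos = sum(1 for s, g in zip(secret, guess) if s == g)
    shared = len(set(secret) & set(guess))  # right digits, any position
    return in_pos, shared - in_pos

print(eval_guess("1234", "1243"))  # (2, 2)
print(eval_guess("1234", "5678"))  # (0, 0)
print(eval_guess("1234", "1234"))  # (4, 0)
```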
Windows Phone supports two pretty distinct ways of programming (although they may also be combined): one using Silverlight and the other using XNA. Both of these technologies are not new, meaning possibly a less steep learning curve. If you know Silverlight or WPF, then you're off to a good start with WP7 application development. For more graphical games, XNA is the better choice. And again, if you know XNA for Windows or the Xbox 360, you're mostly ready to tackle WP7. For a good tutorial on basic XNA development, you can check out the tutorial series I did a few months back (here are the first and last posts in the series).

Coding the Game

We get a standard WP7 page deriving from PhoneApplicationPage (somewhat similar to the standard Silverlight Page class). Inside there is some standard XAML that we need to customize. This is the basic screen I want to get:

The main area holds an ItemsControl wrapped in a ScrollViewer that's going to show the guesses and their results as they come in during play. The "New Game" button starts a new game (obviously), "Restart" resets the clock but keeps the same secret number (a kind of cheat). "Quit Now" forfeits the game and shows the secret combination. Here's an example of a game running inside the emulator:

A class named GuessNumberGame takes care of the game logic: generating a secret number and evaluating guesses. Here's the way a new number is generated:

private void GenerateSecret() {
    var ints = Enumerable.Range(0, 10).ToArray();
    var rnd = new Random();
    for(int i = 0; i < 20; i++)
        Helpers.Swap(ref ints[rnd.Next(10)], ref ints[rnd.Next(10)]);
    _secret = string.Join(string.Empty,
        ints.Take(NumLength).Select(n => n.ToString()).ToArray());
}

The general strategy is creating a range of numbers 0-9, then shuffling them and finally using the first NumLength digits (currently 4, but this can be changed to create an easier or more difficult game).
To evaluate a guess, the EvalGuess method is used: public bool EvalGuess(string guess, out int inpos, out int justin) { inpos = justin = 0; if(guess == _secret) { inpos = NumLength; return true; } Debug.Assert(guess.Length == _secret.Length); justin = guess.Intersect(_secret).Count(); for(int i = 0; i < _secret.Length; i++) if(_secret[i] == guess[i]) { justin--; inpos++; } return false; } The method returns true on a correct guess. inpos returns the number of digits in correct positions and justin returns the number of digits in incorrect positions. The MainPage class holds a game instance and handles the UI. I have not created any special MVVM style views or view models in this application as it’s too simple to bother. Some data binding does take place, however. The ItemsControl control is built like so: <ItemsControl Grid. <ItemsControl.ItemTemplate> <DataTemplate> <local:GuessItemControl </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> There is a data template based around a simple GuessItem class serving as the DataContext of a user control that shows a single row consisting of a guess and its evaluation. The ItemsSource property is bound to a Guesses property, but since no source is indicated, it uses the closest DataContext. In our case, it’s simply set to the MainPage itself (in the constructor). Guesses is an ObservableCollection of GuessItem instances. Here’s how a GuessItem looks like: public class GuessItem { public string Guess { get; set; } public int PosIncorrect { get; set; } public int PosCorrect { get; set; } } The GuessItemControl class is a user control that builds a line using a combination of code and markup: <Grid x: <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition /> </Grid.ColumnDefinitions> <TextBlock Text="{Binding Guess}" HorizontalAlignment="Center"/> <ItemsControl Grid. 
<ItemsControl.ItemsPanel> <ItemsPanelTemplate> <StackPanel Orientation="Horizontal" /> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> </ItemsControl> </Grid> public GuessItemControl() { InitializeComponent(); Loaded += delegate { GuessItem data = DataContext as GuessItem; Debug.Assert(data != null); for(int i = 0; i < data.PosCorrect; i++) _images.Items.Add(new Ellipse { Width = 20, Height = 20, Fill = _fillBrush, Stroke = _strokeBrush, StrokeThickness = 2, Margin = new Thickness(2) }); for(int i = 0; i < data.PosIncorrect; i++) _images.Items.Add(new Ellipse { Width = 20, Height = 20, Stroke = _strokeBrush, StrokeThickness = 2, Margin = new Thickness(2) }); }; } The circles are built dynamically using code. This may not be the most “elegant” way of doing it, but it works and suffices for our purposes here. Evaluating a guess is handled by the Click event handler of the “Guess” button: private void OnInputGuess(object sender, RoutedEventArgs e) { if(_guess.Text.Length != _game.NumLength) { _guess.Focus(); return; } int posCorrect, posIncorrect; bool win = _game.EvalGuess(_guess.Text, out posCorrect, out posIncorrect); _guesses.Add(new GuessItem { Guess = _guess.Text, PosCorrect = posCorrect, PosIncorrect = posIncorrect }); _guess.Text = string.Empty; if(win) GameOver(true); } The game’s EvalGuess method is consulted, and then a GuessItem object is constructed based on the results and added to the ObservableCollection that is bound to the ItemsControl, so that the results appear immediately. Running & Debugging the Application The installed tools allow selecting the target device that we want to deploy to. This is the emulator by default, but we can switch to the actual device (now that I have it…). Debugging is pretty straightforward. You can set breakpoints, inspect variables, etc. whether you’re running on the emulator or the actual device. If running on the device, your app is added to the all applications list and you can run it independently like any other. 
If you want to put your creation to the marketplace, you’ll need to tackle a few more details, but that’s for another post. Other Points of Interest The TextBox used is just a textbox, but on a phone device the SIP keyboard appears. It’s possible to indicate to WP7 which keyboard style you prefer. In this case, we just need digits, so using the “default” keyboard is inconvenient, causing the player to type way too much and be annoyed. Here’s how we can choose a different set: <TextBox Width="150" x: <TextBox.InputScope> <InputScope> <InputScopeName NameValue="TelephoneNumber" /> </InputScope> </TextBox.InputScope> </TextBox> The countdown is handled by a DispatcherTimer object, firing at 1 second intervals. It’s activated when a new game starts: private void StartNewGame() { _game = new GuessNumberGame(4); _gameTime = MaximumGameTime; _guesses.Clear(); _timer.Start(); IsGameRunning = true; } It’s configured in the constructor of MainPage. First, some fields: DispatcherTimer _timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) }; int _gameTime; Next, handling the Tick event: _timer.Tick += (s, e) => { if(--_gameTime == 0) { GameOver(false); } else Caption = string.Format("{0}:{1:D2} Remain", _gameTime / 60, _gameTime % 60); }; Here’s a link to the entire Solution. GuessNumber.zip תגים:.NET, C#, Silverlight, DEV, XAML, Games, Windows Phone | http://blogs.microsoft.co.il/blogs/pavely/archive/2011/07/22/getting-started-with-windows-phone-development.aspx | crawl-003 | refinedweb | 1,430 | 55.64 |
Stack Overflow [Angular] folks often ask:
- "Why doesn't my View show the data?"
- "I can see it in the request, and the data is returned but my component doesn't show it".
- "Why doesn't my data show up?"
Angular Render Events
Angular Rendering runs as soon as the component is displayed. This means any data that requires a Promise or Subscription misses the render event! Angular renders before the data has returned.
Solutions
-Use *ngIf to hide display until data arrives.
-Use ChangeDetector's detectChanges method after the data arrives. Like this:
<app-myComponent // if *ngIf is missing, // the view renders without // waiting for person data * </app-myComponent>
Solution
Don't show View until data is ready!
import { Component, OnInit, ChangeDetectorRef} from "@angular/core"; export class myComponent implements OnInit, //default state is not to show at start. show = false; constructor(private cdf: ChangeDetectorRef); ngOnInit(){ this.getData(); } // a promise example getData() { this.getPersons().then((persons) => { //sets the data sometime later this.persons = persons; // data is ready, show view! this.show = true; // tell Angular to re-render this.cdf.detectChanges(); }); } // a subscription example getDataSubscription(){ this.getPersons.subscribe(result=>{ this.person = result; // data is ready show view. this.show = true; this.cdf.detectChanges(); })
There are other ways but this is a good start.
Top comments (0) | https://dev.to/jwp/angular-why-doesn-t-my-data-show-up-4efm | CC-MAIN-2022-40 | refinedweb | 215 | 53.37 |
Carsten Ziegeler wrote:
> Hi,
>
> I would like to release the 2.2 version of the serializers block
> (consisting of two modules).
>
> Is anything preventing us from releasing this?
No I don't think so. After rereading the original thread about this
block (see) I just
wonder if the mentioned Xalan bugs haven't been fixed in the meantime.
The second question that came to mind was whether we should make the
serializers-charset module an OSGi bundle. And we should change the
namespace to org.apache.cocoon.serializers.* for both modules.
> What's the procedure to release the block? describes the technical details.
You should be able to use as
parent POM (but I'm not sure about this because usually we use the
cocoon-blocks-modules as parent POM).
We also need to test that the serializers-impl Avalon sitemap components
work with Cocoon 2.2 or even better, migrate them to Spring (but I don't
consider the migration being absolutely necessary).
After releasing it I can help out with updating the documentation -
maybe you can add a few paragraphs to.
--
Reinhard Pötz Managing Director, {Indoqa} GmbH
Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member reinhard@apache.org
________________________________________________________________________ | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200909.mbox/%3C4AC0BCFA.7050109@apache.org%3E | CC-MAIN-2016-50 | refinedweb | 204 | 58.79 |
Once upon a time, color was one of the biggest nightmares a Web designer could face. Not all computers are created equal, especially when it comes to color. On your high-end professional machine, you design a brilliant Web page with bold colors, deep drop shadows, anti-aliased text, and 3D buttons. But on the machine across the hall, it looks like a grainy color photo that's been left out in the sun too long (Figure 13.11 and 13.12).
The problem was that some computers displayed millions of colors, while others displayed only a few thousand or (gasp) a few hundred or less. Although few of these older machines are still in use, there's an increasing number of portable devices such as PDAs and mobile phones that do have color restrictions. So knowing the number of colors the person viewing your site (Figure 13.13) can actually see might be useful (Figures 13.14 and 13.15).
To detect the number of colors:
screen.colorDepth
The number of colors that the visitor's screen can currently display is in the screen's color-depth object (Code 13.4). Using this code will return a color-bit depth value as shown in Table 13.1.
[View full width]
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" " xhtml1-strict.dtd"> <html xmlns=""> <head> <meta http- <title>CSS, DHTML & Ajax | Finding the Number of Colors</title> <style type="text/css" media="screen"> body { font: 1em Georgia, "Times New Roman", times, serif; color: #000; background-color: #ccc; margin: 8px; } </style> </head> <body> <script language="JavaScript" type="text/javascript"> function numColors() { return (screen.colorDepth); } document.write('Your screen is currently using ' + numColors() + 'bit color.'); </script> </body> </html>
COLOR-BIT DEPTH
NUMBER OF COLORS
4
16
8
256
65,536
32
16.7 million | https://flylib.com/books/en/3.292.1.123/1/ | CC-MAIN-2021-17 | refinedweb | 304 | 65.62 |
#include <mod_sim_craft_proxy.hh>
List of all members.
The reason for a proxy is for switching 3D models. A 3D model is defined by a Graph, but a Graph's state changes as an Object is transformed. So, each model/Graph must transformed in unison. A proxy object is used to orchestrate a set of Crafts, only one of which is visible at a time.
Reset.
Cycle to next model.
Animation. Shimmering textures for alien Craft is done by mod_sim::Craft::Animate().
Matrix.
Translate craft.
Rotate craft.
Rotate wings.
Rotate glove vanes.
Rotate tailplanes.
Rotate landing gears.
Reset rotation.
Throttle.
Auto-step (forward/backward animation).
Auto-rotate.
Physics pass-thrus.
Enable ChasePlane.
Sync matrixs of target and ChasePlane.
Rotate ChasePlane.
Translate ChasePlane.
Reset ChasePlane. | http://www.palomino3d.org/pal/doxygen_v2/classmod__sim_1_1ProxyCraft.html | crawl-003 | refinedweb | 123 | 60.92 |
I’ve been working on a tile-engine lately and I’ve hit a bit of a snag when doing object to object collision. My moving objects are defined as cylinders, but I’m having trouble with the two dimensional circle part of it, so consider this a circle to circle problem. I can calculate the exact time and point of the collision, that’s not the problem, but currently the physics in my engine are, that an object can move in a dimension if it isn’t blocked in that dimension. That means that if I would diagonally jump into a horizontal wall, only the y-dimension would be blocked, so I stop moving in the y dimension, but keep moving in the x and z dimensions. If that doesn’t make sense, try the link to the current build at the end of this post.
This is the current code I’m using.
private function movingObjektBlock(Red:RigidBody, Blue:RigidBody, LINE:Line):Number { var RR:Number = Red.radius+Blue.radius; var dX:Number = Red.x-Blue.x; var dY:Number = Red.y-Blue.y; // var Vx:Number = Red.Vl.x-Blue.Vl.x; var Vy:Number = Red.Vl.y-Blue.Vl.y; // // var a = Vx*Vx+Vy*Vy; var b = 2*dX*Vx+2*dY*Vy; var c = dX*dX+dY*dY-(RR*RR); // var Ta = (-b+Math.sqrt(b*b-(4*a*c)))/(2*a); var Tb = (-b-Math.sqrt(b*b-(4*a*c)))/(2*a); // var T1:Number var T2:Number if (Ta<Tb) { T1 = Ta T2 = Tb } else { T1 = Tb T2 = Ta } // var z1:Number = LINE.getZforT(T1) var z2:Number = LINE.getZforT(T2) var T:Number = -1 if ((z1 > Blue.z && z1 < Blue.z + Blue.height) || (z1 + Red.height > Blue.z && z1 + Red.height < Blue.z + Blue.height) || (z1 < Blue.z && z1 + Red.height > Blue.z + Blue.height)) { //Collision T = T1 } else if ((z2 > Blue.z && z2 < Blue.z + Blue.height) || (z2 + Red.height > Blue.z && z2 + Red.height < Blue.z + Blue.height) || (z2 < Blue.z && z2 + Red.height > Blue.z + Blue.height)) { //Collision T = T2 } // // if (T>0 && T<1) { return (T); } else { return (-1); } }
The problem with just one time is, that I wouldn’t know the time for the individual dimensions. What I was thinking of, is moving the circles to the state of collision. Draw a line from centre to centre and get the left and right normal of that line. This would be the virtual dimension that the other circle isn’t blocking. If I project the remaining movement vector on that line, would that be the solution to my problem?
I’m a bit fried at the moment so any other suggestions are welcome.
Arrows to move, space to jump
c = fire
x = incendary grenade
z = frag grenade | https://forum.kirupa.com/t/moving-circle-vs-moving-circle/289715 | CC-MAIN-2022-27 | refinedweb | 471 | 77.53 |
Java Hint Parameters for Immutable Class Constructors
Constructing an immutable class in Java is simple if the number of parameters is small. However, when the parameters increase in number, it can be hard to tell the meaning of the arguments passed to the constructor. Named parameters, as exists in other languages would help, but don't exist in Java. The builder methodology is a solution, but requires a shadow class or non final instance fields.
Hint Parameters provide hints to the caller (or reader) of a constructor, about the names or fluent meaning of the parameters that follow the hint. They may hint at one, two or more parameters at a time. To do this they use an argument naming syntax which separates sections of the hint with a "_" character.
Hints should be used in a strict order which stand out when entering or reading the constructor call. To ensure this, the first section starts with the hint number and then indicates a contract of whether the parameters are required, defaulted, or optional using the letters "r", "d" and "o".
The implementation uses static java enums with lowercase (or CamelCase) values to ensure readability. A simple example:
package myth; import static com.google.common.base.Preconditions.checkNotNull; public final class Dragon { private final Integer height, width, rating; private final String name; public static enum Hint {_1rr_height_width, _2do_rating_name} public Dragon(Hint h1, Integer height, Integer width, Hint h2, Integer rating, String name) { this.height = checkNotNull(height, "Height is required"); this.width = checkNotNull(width, "Width is required"); this.rating = (rating == null) ? 1 : rating; this.name = name; } // other methods }
Another class calling the hinted constructor:
package myth; import static myth.Dragon.Hint.*; public class DragonRun { public static void main(String [] args) { Dragon fierceDragon = new Dragon( _1rr_height_width, 1, 2, _2do_rating_name, null, "Fierce Dragon" ); } }
The null value is used as a placeholder. When the contract indicates a parameter has a default via "d", the default value will be used in place of the null. When the parameter is optional via "o", null will be used.
If all parameters of the constructor are required, the "r" indicators can be dropped.
Notes:
Using a static java enum type to implement the Hint Parameters has the advantage of simplicity and code clarity. It has a disadvantage in that the parameters could be used out of order. The numbering convention makes this unlikely, but if necessary, the order could be checked at run time. Another possible implementation would be to use separate static hint classes which would ensure type safety at the expense of simplicity.
Hint Parameters could also be used in static or instance methods calls as well as mutable class constructors.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Jarek Krochmalski replied on Tue, 2013/04/02 - 9:48am | http://java.dzone.com/articles/java-hint-parameters-immutable | CC-MAIN-2014-35 | refinedweb | 477 | 54.83 |
On clean reboots wipe the dmesg output. For all other reboots
try to keep as much as possible (no change there).
Index: sys/kern/kern_shutdown.c
===================================================================
RCS file: sys/kern/kern_shutdown.c,v
retrieving revision 1.1.1.4
retrieving revision 1.2
diff -u -r1.1.1.4 -r1.2
--- sys/kern/kern_shutdown.c 2002/08/28 08:58:58 1.1.1.4
+++ sys/kern/kern_shutdown.c 2004/03/02 13:51:36 1.2
@@ -61,6 +61,7 @@
#include <sys/conf.h>
#include <sys/sysproto.h>
#include <sys/cons.h>
+#include <sys/msgbuf.h>
#include <machine/pcb.h>
#include <machine/clock.h>
@@ -332,6 +333,10 @@
printf("\n");
printf("The operating system has halted.\n");
printf("Please press any key to reboot.\n\n");
+ if (msgbufp != (struct msgbuf *) 0) {
+ /* fix dmesg buffer on soft reboot */
+ msgbufp->msg_magic = 0;
+ }
switch (cngetc()) {
case -1: /* No console, just die */
cpu_halt();
@@ -385,6 +390,10 @@
{
printf("Rebooting...\n");
DELAY(1000000); /* wait 1 sec for printf's to complete and be read */
+ if (msgbufp != (struct msgbuf *) 0) {
+ /* fix dmesg buffer on soft reboot */
+ msgbufp->msg_magic = 0;
+ }
/* cpu_boot(howto); */ /* doesn't do anything at the moment */
cpu_reset();
/* NOTREACHED */ /* assuming reset worked */
Rene,
I found your patch hocking in GNATS for quite some time. I'm sorry your
patch hasn't committed yet.
I'm wondering if you're able to bring your patch up to date against a
recent RELENG_7 or CURRENT and send us the updated patch? Also I think
that functionality should be enabled by a sysctl.
Thanks for your work!
State Changed
From-To: open->feedback
Note that submitter has been asked for feedback.
State Changed
From-To: feedback->suspended
Mark suspended awaiting more recent patches.
For bugs matching the following conditions:
- Status == In Progress
- Assignee == "bugs@FreeBSD.org"
- Last Modified Year <= 2017
Do
- Set Status to "Open" | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=68317 | CC-MAIN-2019-47 | refinedweb | 308 | 71.51 |
Note: examples are coded in Python 2.x, but the basic point of the post applies to all versions of Python.
There’s a Python gotcha that bites everybody as they learn Python. In fact, I think it was Tim Peters who suggested that every programmer gets caught by it exactly two times. It is call the mutable defaults trap. Programmers are usually bit by the mutable defaults trap when coding class methods, but I’d like to begin with explaining it in functions, and then move on to talk about class methods.
Mutable defaults for function arguments
The gotcha occurs when you are coding default values for the arguments to a function or a method. Here is an example for a function named foobar:
def foobar(arg_string = "abc", arg_list = []): ...
Here’s what most beginning Python programmers believe will happen when foobar is called without any arguments:
A new string object containing “abc” will be created and bound to the “arg_string” variable name. A new, empty list object will be created and bound to the “arg_list” variable name. In short, if the arguments are omitted by the caller, the foobar will always get “abc” and [] in its arguments.
This, however, is not what will happen. Here’s why.
The objects that provide the default values are not created at the time that foobar is called. They are created at the time that the statement that defines the function is executed. (See the discussion at Default arguments in Python: two easy blunders: “Expressions in default arguments are calculated when the function is defined, not when it’s called.”)
If foobar, for example, is contained in a module named foo_module, then the statement that defines foobar will probably be executed at the time when foo_module is imported.
When the def statement that creates foobar is executed:
- A new function object is created, bound to the name foobar, and stored in the namespace of foo_module.
- Within the foobar function object, for each argument with a default value, an object is created to hold the default object. In the case of foobar, a string object containing “abc” is created as the default for the arg_string argument, and an empty list object is created as the default for the arg_list argument.
After that, whenever foobar is called without arguments, arg_string will be bound to the default string object, and arg_list will be bound to the default list object. In such a case, arg_string will always be “abc”, but arg_list may or may not be an empty list. Here’s why.
There is a crucial difference between a string object and a list object. A string object is immutable, whereas a list object is mutable. That means that the default for arg_string can never be changed, but the default for arg_list can be changed.
Let’s see how the default for arg_list can be changed. Here is a program. It invokes foobar four times. Each time that foobar is invoked it displays the values of the arguments that it receives, then adds something to each of the arguments.
def foobar(arg_string="abc", arg_list = []): print arg_string, arg_list arg_string = arg_string + "xyz" arg_list.append("F") for i in range(4): foobar()
The output of this program is:
abc [] abc ['F'] abc ['F', 'F'] abc ['F', 'F', 'F']
As you can see, the first time through, the argument have exactly the default that we expect. On the second and all subsequent passes, the arg_string value remains unchanged — just what we would expect from an immutable object. The line
arg_string = arg_string + "xyz"
creates a new object — the string “abcxyz” — and binds the name “arg_string” to that new object, but it doesn’t change the default object for the arg_string argument.
But the case is quite different with arg_list, whose value is a list — a mutable object. On each pass, we append a member to the list, and the list grows. On the fourth invocation of foobar — that is, after three earlier invocations — arg_list contains three members.
The Solution
This behavior is not a wart in the Python language. It really is a feature, not a bug. There are times when you really do want to use mutable default arguments. One thing they can do (for example) is retain a list of results from previous invocations, something that might be very handy.
But for most programmers — especially beginning Pythonistas — this behavior is a gotcha. So for most cases we adopt the following rules.
- Never use a mutable object — that is: a list, a dictionary, or a class instance — as the default value of an argument.
- Ignore rule 1 only if you really, really, REALLY know what you’re doing.
So… we plan always to follow rule #1. Now, the question is how to do it… how to code foobar in order to get the behavior that we want.
Fortunately, the solution is straightforward. The mutable objects used as defaults are replaced by None, and then the arguments are tested for None.
def foobar(arg_string="abc", arg_list = None): if arg_list is None: arg_list = [] ...
Another solution that you will sometimes see is this:
def foobar(arg_string="abc", arg_list=None): arg_list = arg_list or [] ...
This solution, however, is not equivalent to the first, and should be avoided. See Learning Python p. 123 for a discussion of the differences. Thanks to Lloyd Kvam for pointing this out to me.
And of course, in some situations the best solution is simply not to supply a default for the argument.
Mutable defaults for method arguments
Now let’s look at how the mutable arguments gotcha presents itself when a class method is given a mutable default for one of its arguments. Here is a complete program.
# (1) define a class for company employees class Employee: def __init__ (self, arg_name, arg_dependents=[]): # an employee has two attributes: a name, and a list of his dependents self.name = arg_name self.dependents = arg_dependents def addDependent(self, arg_name): # an employee can add a dependent by getting married or having a baby self.dependents.append(arg_name) def show(self): print print "My name is.......: ", self.name print "My dependents are: ", str(self.dependents) #--------------------------------------------------- # main routine -- hire employees for the company #--------------------------------------------------- # (2) hire a married employee, with dependents joe = Employee("Joe Smith", ["Sarah Smith", "Suzy Smith"]) # (3) hire a couple of unmarried employess, without dependents mike = Employee("Michael Nesmith") barb = Employee("Barbara Bush") # (4) mike gets married and acquires a dependent mike.addDependent("Nancy Nesmith") # (5) now have our employees tell us about themselves joe.show() mike.show() barb.show()
Let’s look at what happens when this program is run.
- First, the code that defines the Employee class is run.
- Then we hire Joe. Joe has two dependents, so that fact is recorded at the time that the joe object is created.
- Next we hire Mike and Barb.
- Then Mike acquires a dependent.
- Finally, the last three statements of the program ask each employee to tell us about himself.
Here is the result.
My name is.......: Joe Smith My dependents are: ['Sarah Smith', 'Suzy Smith'] My name is.......: Michael Nesmith My dependents are: ['Nancy Nesmith'] My name is.......: Barbara Bush My dependents are: ['Nancy Nesmith']
Joe is just fine. But somehow, when Mike acquired Nancy as his dependent, Barb also acquired Nancy as a dependent. This of course is wrong. And we’re now in a position to understand what is causing the program to behave this way.
When the code that defines the Employee class is run, objects for the class definition, the method definitions, and the default values for each argument are created. The constructor has an argument arg_dependents whose default value is an empty list, so an empty list object is created and attached to the __init__ method as the default value for arg_dependents.
When we hire Joe, he already has a list of dependents, which is passed in to the Employee constructor — so the arg_dependents attribute does not use the default empty list object.
Next we hire Mike and Barb. Since they have no dependents, the default value for arg_dependents is used. Remember — this is the empty list object that was created when the code that defined the Employee class was run. So in both cases, the empty list is bound to the arg_dependents argument, and then — again in both cases — it is bound to the self.dependents attribute. The result is that after Mike and Barb are hired, the self.dependents attribute of both Mike and Barb point to the same object — the default empty list object.
When Michael gets married, and Nancy Nesmith is added to his self.dependents list, Barb also acquires Nancy as a dependent, because Barb’s self.dependents variable name is bound to the same list object as Mike’s self.dependents variable name.
So this is what happens when mutuable objects are used as defaults for arguments in class methods. If the defaults are used when the method is called, different class instances end up sharing references to the same object.
And that is why you should never, never, NEVER use a list or a dictionary as a default value for an argument to a class method. Unless, of course, you really, really, REALLY know what you’re doing. | http://pythonconquerstheuniverse.wordpress.com/category/python-gotchas/ | CC-MAIN-2013-48 | refinedweb | 1,535 | 64.1 |
Feature #6298
Proc#+
Description
=begin
Maybe there is another way to do this, and if so please enlighten me.
I have a case where collection of blocks need to be handled as if a single block, e.g.
class BlockCollection
def initialize(procs)
@procs = procs
end
def to_proc
procs = @procs
Proc.new{ |a| procs.each{ |p| p.call(*a) } }
end
end
The issue with this is with #to_proc. It's not going to do the right thing if a BlockCollection instance is passed to #instance_eval b/c it would not actually be evaluating each internal block via #instance_eval.
But if we change it to:
def to_proc Proc.new{ |*a| procs.each{ |p| instance_exec(*a, &p) } } end
It would do the right thing with #instance_eval, but it would no longer do the right thing for #call, b/c would it evaluate in the context of BlockCollection instance instead of where the blocks weer defined.
So, unless there is some way to do this that I do not see, to handle this Ruby would have to provide some means for it. To this end Proc#+ is a possible candidate which could truly combine two procs into one.
=end
Related issues
History
#1
[ruby-core:44374]
Updated by Yusuke Endoh almost 4 years ago
- Status changed from Open to Rejected
Hello,
I think you have valid concern. AFAIK, there is no way to do this.
But #5007 (Proc#call_under) is apparently a more general solution
for this issue.
You will be able to write BlockCollection with Proc#call_under:
def to_proc
Proc.new{ |*a| procs.each{ |p| p.call_under(self, *a) } }
end
So, let's discuss the feature in that thread.
--
Yusuke Endoh mame@tsg.ne.jp
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/6298 | CC-MAIN-2016-07 | refinedweb | 288 | 75.61 |
!
@alfredogzz If an object has axis system A, and you want it to get axis system B, but keep the points, you need to multiply each point with A (getting the point out of the A system into the global system), and then multiply it again with the inverse of B (getting the point from the global system to the B system):
import c4d
from c4d import gui
def main():
# Assuming: Currently selected object is a point object;
# next object is a null defining the target coordinates
mat = op.GetMg()
targetMat = op.GetNext().GetMg()
points = op.GetAllPoints()
for i,p in enumerate(points):
points[i] = ~targetMat * mat * p
op.SetAllPoints(points)
op.SetMg(targetMat)
op.Message(c4d.MSG_UPDATE)
c4d.EventAdd()
# Execute main()
if __name__=='__main__':
main()
Note that the multiplication sequence needs to be read from the right to the left.
Learn more about Python for C4D scripting:
Hi @alfredogzz,
welcome to the Plugin Cafe forum and the Cinema 4D development community! Please have a look at our Forum Guidelines as we have to ask all users to flag their questions with tags. I have added a Python tag (assuming that is the programming environment you are after) to your posting for you, but you have still to add an OS and Cinema 4D version tag yourself. Please also note that providing full solutions is outside of the scope of support (see Forum Guidelines), so it would be best if you could post whatever code you already have.
About your problem, let's assume you have an object Obj which has has the "incorrect" c4d.Matrix, i.e., transform, M_old and you want to move that transform to the known transform M_new. Let's also assume that all our transforms are in global space.
Obj
c4d.Matrix
M_old
M_new
What you basically have to do is:
M_dif
O
Here are some other postings, topics and examples (the first one contains code for pretty much what you want to do) that might be helpful for you regarding this topic:
"Move" the transform of a point object
Setting coordinates for selected faces (Python - Beginner)
Mirroring with Matching or Different Axes
If you import your geometry from Rhino, you might also have to deal with baked normals, i.e., normal tags. Please take a look at this thread for that scenario.
It is also noteworthy that Rhino's coordinate system is right-handed with Z being up, while Cinema 4D's system is left-handed with Y being up. Handedness of coordinate systems has been discussed in Mirroring with Matching or Different Axes, for example.
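As a hedged illustration of that last point (the exact convention depends on the importer, so treat this as one common mapping rather than the definitive one): converting a point from a right-handed Z-up system such as Rhino's to a left-handed Y-up system such as Cinema 4D's can be done by swapping the Y and Z components, which changes the up axis and flips handedness in one step.

```python
# Sketch: map a right-handed Z-up coordinate (x, y, z) to a
# left-handed Y-up coordinate by swapping the last two components.
# A particular importer or exporter may use a different convention.

def rh_zup_to_lh_yup(p):
    x, y, z = p
    return (x, z, y)

def lh_yup_to_rh_zup(p):
    # the swap is its own inverse
    x, y, z = p
    return (x, z, y)

pt = (1.0, 2.0, 3.0)               # Rhino-style point: Z is up
converted = rh_zup_to_lh_yup(pt)   # (1.0, 3.0, 2.0): Y is up
assert lh_yup_to_rh_zup(converted) == pt
```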
Cheers,
Ferdinand | https://plugincafe.maxon.net/topic/13142/how-to-move-axis-to-desired-matrix-without-affecting-object-in-python/1?lang=en-US | CC-MAIN-2022-27 | refinedweb | 435 | 55.47 |
getnotb
Language: en
Version: 58922 (Mandriva - 22/10/07)
Section: 2 (System calls)
NAME
getnodeid, getnodetype, getorigin, getncomp, getnotb, getnall, getntype - Get information on LAM nodes.
SYNOPSIS
#include <net.h>

int getnodeid ();
int getnodetype ();
int getorigin ();
int getnall ();
int getncomp ();
int getnjones ();
int getnotb ();
int getntype (int nodetype, int typemask);
FORTRAN SYNOPSIS
integer function IGNDID ()
integer function IGNDTP ()
integer function IGORGN ()
integer function IGNALL ()
integer function IGNCMP ()
integer function IGNJON ()
integer function IGNOTB ()
integer function IGNTP (nodetype, typemask)
integer nodetype, typemask
DESCRIPTION
These functions return node information obtained from the local route daemon, a modular component of the LAM daemon, regarding the currently running LAM network. A node is defined by its identifier, an arbitrary 32 bit value, and its type, a combination of flags describing the capabilities of the node. These flags (see CONSTANTS(5) and/or <net.h>) are:
- NT_ITB
- node running LAM natively
- NT_CAST
- node multicast, a group of nodes
- NT_WASTE
- node not part of main computing group
- NT_DISK
- node has a disk
- NT_TUBE
- node has a video display unit
- NT_JONES
- node is a neighbour of the local node
- NT_BOOT
- node is booted by the local node
getnodeid() returns the local node identifier. getnodetype() returns the local node type. getorigin() returns the origin node identifier, from which LAM was booted.
getncomp() returns the number of nodes marked for the "main" computation. A typical application will use most (maybe all) of the nodes in a parallel machine to compute portions of decomposed data. Programmers frequently need to know the number of these "compute" nodes. Other nodes may be present in the multicomputer to handle peripherals or sequential portions of an application.
getnotb() returns the total number of OTB nodes. getnall() returns the total number of nodes in the system. getnjones() returns the caller's number of neighbour (directly connected) nodes.
getntype() is a general function that is used to determine the number of nodes whose node types have certain bits set to certain values. This is a flexible tool which allows the user to obtain very specific information about the capabilities of nodes in the system.
Type Inquiries
You may need more detailed information on the number and types of nodes in the system than provided by the convenience functions. You may, for example, want to know the number of computing nodes with attached video displays. The getntype() function is used for this purpose.
Node types are interpreted as bit fields, and each node type has a different bit set. A bitmask having all bits set, NT_ALL, is also available. Note that NT_ALL does not include NT_JONES nor NT_BOOT since these node characteristics are not inherent to the nodes, but depend on the node from which the query is made. The node types are thus:
Nodetype    Value    Bitmask
NT_ITB          1    ...00000001
NT_CAST         2    ...00000010
NT_WASTE        4    ...00000100
NT_DISK         8    ...00001000
NT_TUBE        16    ...00010000
NT_ALL         31    ...00011111
NT_JONES       32    ...00100000
NT_BOOT        64    ...01000000
The typemask argument to getntype() is used to specify which bits are of interest. These are set to 1, all others are set to 0. The nodetype argument is used to specify what values these bits should take. getntype() then compares the relevant bits (as specified by typemask) in the node type of each node in the system, to see if they have the appropriate value (as specified by nodetype).
To learn the number of nodes that have video capabilities, the bits of interest are NT_WASTE and NT_TUBE, thus typemask is 20. NT_WASTE must be 0 and NT_TUBE must be 1, which combined gives nodetype as 16. The complete call to getntype() is:
nnodes = getntype(NT_TUBE, NT_TUBE | NT_WASTE);
To learn the number of compute nodes that have an attached video display, but no other capabilities, all bits must be considered and all bits except NT_TUBE must be clear (0). The complete function call is:
nnodes = getntype(NT_TUBE, NT_ALL);
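The selection rule behind these calls can be sketched as a standalone model (in Python here, using the bit values from the table above; this does not call into the LAM library, and the node list is purely hypothetical): a node matches when its type, masked by typemask, equals nodetype masked the same way.

```python
# Standalone model of getntype()'s matching rule, using the constants
# from the Nodetype table above.

NT_ITB, NT_CAST, NT_WASTE, NT_DISK, NT_TUBE = 1, 2, 4, 8, 16
NT_ALL = 31

def getntype(nodes, nodetype, typemask):
    # count nodes whose masked type bits equal the requested values
    return sum(1 for t in nodes if (t & typemask) == (nodetype & typemask))

# hypothetical multicomputer: two compute nodes with displays,
# one waste node with a display, one plain compute node with a disk
nodes = [NT_ITB | NT_TUBE, NT_ITB | NT_TUBE, NT_WASTE | NT_TUBE, NT_ITB | NT_DISK]

# compute nodes with video: NT_TUBE must be set, NT_WASTE clear,
# so typemask is 20 and nodetype is 16, exactly as described above
print(getntype(nodes, NT_TUBE, NT_TUBE | NT_WASTE))
```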
C RETURN VALUE
getnodeid() and getorigin() both return node identifiers. NOTNODEID (defined in <net.h>) is returned if an error occurred. Likewise, getnodetype() returns a valid node type or NOTNODETYPE in the case of error. The return values upon successful completion for the remaining functions are described above; they return -1 if an error occurs. In all cases of error, the global error variable errno is set to indicate the error.
FORTRAN RETURN VALUE
IGNDID() and IGORGN() both return node identifiers. NOTNODEID (see CONSTANTS(5)) is returned if an error occurred. Likewise, IGNDTP() returns a valid node type or NOTNODETYPE in the case of error. The return values upon successful completion for the remaining functions are described above; they return -1 if an error occurs.
SEE ALSO
getroute(2), CONSTANTS(5)
We are separating the digital modulation-specific blocks from gnuradio-core into their own top-level directory gr-digital. This creates a new gnuradio module in Python called digital that can be accessed as:
from gnuradio import digital
Many blocks have been moved from gnuradio-core and others have been removed completely as they were obsolete or duplicated elsewhere.
All blocks that use a second-order control loop to track a phase and frequency are being replaced to inherit from the gri_control_loop parent class. This class takes care of setting the gains given a loop bandwidth and damping factor. By default, it sets the damping factor for a critically damped system, so only the loop bandwidth is required. The constructor looks like:
gri_control_loop(float loop_bw, float max_freq, float min_freq);
All blocks that have this structure used to set the internal alpha and beta gains individually. Now, the constructors for their blocks have been replaced to accept the loop bandwidth instead of the two gains. All values, the loop bandwidth, damping factor, alpha, beta, current phase, and current frequency can all be retrieved (get_) and set (set_).
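A rough Python sketch of such a second-order loop follows. The gain expressions are modeled on gri_control_loop's gain update (treat them as an assumption rather than the library's verbatim code): given a loop bandwidth and damping factor, derive the proportional (alpha) and integral (beta) gains, then track a constant frequency offset.

```python
import math

# Sketch of a second-order phase/frequency tracking loop.

def loop_gains(loop_bw, damping=math.sqrt(2) / 2):
    # gain expressions modeled on gri_control_loop's update;
    # damping defaults to the critically damped value
    denom = 1.0 + 2.0 * damping * loop_bw + loop_bw * loop_bw
    alpha = 4.0 * damping * loop_bw / denom
    beta = 4.0 * loop_bw * loop_bw / denom
    return alpha, beta

def track(true_freq, n, loop_bw=0.1):
    alpha, beta = loop_gains(loop_bw)
    phase = freq = true_phase = 0.0
    for _ in range(n):
        true_phase += true_freq
        error = true_phase - phase        # phase detector (no wrapping here)
        freq += beta * error              # integral branch: frequency estimate
        phase += freq + alpha * error     # NCO advance plus proportional branch
    return freq

est = track(true_freq=0.01, n=500)
print(est)  # the frequency estimate settles near 0.01
```

Being a type-II loop, it tracks a constant frequency offset with zero steady-state error, which is why only the bandwidth needs tuning in the common case.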
Completed in bold.
Due Thursday, October 17, 11:59pm
import requests
import pandas as pd
from io import StringIO

movie_txt = requests.get('').text
movie_file = StringIO(movie_txt)  # treat a string like a file
movies = pd.read_csv(movie_file, delimiter='\t')

# print the first row
movies[['id', 'title', 'imdbID', 'year']].irow(0)

critics = critics[~critics.quote.isnull()]
critics = critics[critics.fresh != 'none']
critics = critics[critics.quote.str.len() > 0]
A quick sanity check that everything looks ok at this point
assert set(critics.columns) == set('critic fresh imdb publication '
                                   'quote review_date rtid title'.split())
assert len(critics) > 10000
2.1 How many reviews, critics, and movies are in this dataset?
#your code here
2.2 What does the distribution of number of reviews per reviewer look like? Make a histogram
#Your code here
2.3 List the 5 critics with the most reviews, along with the publication they write for
#Your code here
2.4 Of the critics with > 100 reviews, plot the distribution of average "freshness" rating per critic
#Your code here
Your Comment Here
Interpret these numbers in a few sentences. We care about calibration because it tells us whether we can trust the probabilities computed by a model. If we can trust model probabilities, we can make better decisions using them (for example, we can calculate how much we should bet or invest in a given prediction).
Your Answer Here

# as a side note, this function is builtin to the newest version of sklearn. We could just write
# sklearn.cross_validation.cross_val_score(clf, x, y, scorer=log_likelihood).
print "alpha: %f" % best_alpha print "min_df: %f" % best_min_df
3.9 Discuss the various ways in which Cross-Validation has affected the model. Is the new model more or less accurate? Is overfitting better or worse? Is the model more or less calibrated?
Your Answer Here
4.2
One of the best sources for inspiration when trying to improve a model is to look at examples where the model performs poorly.
Find 5 fresh and rotten reviews where your model performs particularly poorly. Print each review.
#Your code here
Your answer here
4.4 If this was your final project, what are 3 things you would try in order to build a more effective review classifier? What other exploratory or explanatory visualizations do you think might be helpful?
Your answer here
Restart and run your notebook one last time, to make sure the output from each cell is up to date. To submit your homework, create a folder named lastname_firstinitial_hw3 and place your solutions in the folder. Double check that the file is still called HW3.ipynb, and that it contains your code. | https://nbviewer.ipython.org/github/cs109/content/blob/master/HW3.ipynb | CC-MAIN-2022-27 | refinedweb | 432 | 69.07 |
jim 98/04/07 05:36:27
Modified: . STATUS
Log:
Why even bother... remove my votes and comments
Revision Changes Path
1.272 +4 -14 apache-1.3/STATUS
Index: STATUS
===================================================================
RCS file: /export/home/cvs/apache-1.3/STATUS,v
retrieving revision 1.271
retrieving revision 1.272
diff -u -r1.271 -r1.272
--- STATUS 1998/04/07 06:59:03 1.271
+++ STATUS 1998/04/07 12:36:26 1.272
@@ -299,16 +299,16 @@
* What prefixes to use for the renaming:
- Apache provided general functions (e.g., ap_cpystrn)
- ap_xxx: +1: Ken, Brian, Ralf, Martin, Paul, Roy, Jim, Randy
+ ap_xxx: +1: Ken, Brian, Ralf, Martin, Paul, Roy, Randy
- Public API functions (e.g., palloc)
ap_xxx: +1: Ralf, Roy, Dean, Randy, Martin, Brian
- apapi_xxx: +1: Ken, Paul, Jim
+ apapi_xxx: +1: Ken, Paul
- Private functions which we can't make static (because of
cross-object usage) but should be (e.g., new_connection)
ap_xxx: +1: Roy, Dean, Randy, Martin, Brian
- apx_xxx: +1: Ralf, Jim
+ apx_xxx: +1: Ralf
appri_xxx: +1: Paul, Ken
- Public API module structure variables (e.g.,
@@ -316,7 +316,7 @@
mod_so, etc and have to be exported:
..._module:+1: Roy [status quo], Dean
ap_xxx: +1:
- apm_xxx: +1: Ralf, Jim
+ apm_xxx: +1: Ralf
Notes:
- Ralf: My opinion for my decisions are the following ones:
@@ -386,16 +386,6 @@
- Randy: I agree with Dean 100%. The work created to
keep this straight far outweighs any gain this
could give.
-
- - Jim: We should make some sort of logical effort to
- keep things straight and organized. This does nothing
- to indicate in the code what is API, and what
- isn't. In my mind, we should use prefixed not JUST
- to prevent namespace collisions, but also to
- "define" the type function. The very fact that we
- _have_ the above different "types" of functions
- indicates to me that we should have some logical
- namespace for them.
- Ralf: I agree with Jim that although the short ap_
prefix is good for API functions, it shouldn't be | http://mail-archives.apache.org/mod_mbox/httpd-cvs/199804.mbox/%3C19980407123627.6921.qmail@hyperreal.org%3E | CC-MAIN-2015-22 | refinedweb | 332 | 75.1 |
why3ml man page
why3ml — generate verification conditions from Why3ML programs
Synopsis
why3ml [Options] [[FILE|-] [-T <theory> [-G <goal>] ... ] ... ]
Description
This tool is an additional layer on top of the Why3 library for generating verification conditions from Why3ML programs. It accepts the same command line options as why3, but also accepts files with extension .mlw as input files containing Why3ML modules. Modules are turned into theories containing verification conditions as goals, and then why3ml behaves exactly as why3 for the remainder of the process. Note that files with extension .mlw can also be loaded in why3ide.
Options
- -T, --theory <theory>
Select theory in the input file or in the library.
- -G, --goal <goal>
Select goal in the last selected theory.
- -C, --config <file>
Read configuration from file.
- -L, --library <dir>
Add dir to the library search path.
- -P, --prover <prover>
Prove or print (with -o) the selected goals using prover.
- -F, --format <format>
Select the input format (default: "why").
- -t, --timelimit <sec>
Set the prover's time limit in seconds, where 0 means no limit (default: 10).
- -m, --memlimit <MiB>
Set the prover's memory limit in megabytes (default: no limit).
- -a, --apply-transform <transformation>
Apply a transformation to every task.
- -M, --meta <metaname>=<string>
Add a string meta to every task.
- -D, --driver <file>
Specify a prover's driver (conflicts with -P).
- -o, --output <dir>
Print the selected goals to separate files in dir.
- --realize
Realize selected theories from the library.
- --bisect
Reduce the set of needed axioms which prove a goal, and output the resulting task.
- --print-theory
Print selected theories.
- --print-namespace
Print namespaces of selected theories.
- --list-transforms
List known transformations.
- --list-printers
List known printers.
- --list-provers
List known provers.
- --list-formats
List known input formats.
- --list-metas
List known metas.
- --list-debug-flags
List known debug flags.
- --parse-only
Stop after parsing (same as --debug parse_only).
- --type-only
Stop after type checking (same as --debug type_only).
- --debug-all
Set all debug flags except parse_only and type_only.
- --debug <flag>
Set a debug flag.
- --print-libdir
Print location of binary components (plugins, etc.).
- --print-datadir
Print location of non-binary data (theories, modules, etc.).
- --version
Print version information.
- -help, --help
Show a list of options.
See Also
why3(1), why3-cpulimit(1), why3bench(1), why3config(1), why3doc(1), why3ide(1), why3realize(1), why3replayer(1)
Referenced By
why3(1), why3bench(1), why3config(1), why3-cpulimit(1), why3doc(1), why3ide(1), why3realize(1), why3replayer(1). | https://www.mankier.com/1/why3ml | CC-MAIN-2019-13 | refinedweb | 406 | 53.17 |
Ken Raeburn <address@hidden> writes:

> On Jul 9, 2008, at 12:55, Kjetil S. Matheussen wrote:
>> On Wed, 9 Jul 2008, Greg Troxel wrote:
>>> Does C guarantee that pointers fit in unsigned long?
>> I don't know. But in practice: Yes.
>
> According to various sources, 64-bit Windows uses an LLP64 model --
> meaning long is 32 bits, long long and pointers are 64 bits. So, no.

I believe this is correct. I have been the 64-bit portability weenie on a large project at work, and been railing against int/* assignment. The overwhelming consensus among those of us that have actually dealt with making programs run on non-ILP32 machines is that int/* assignments are just plain wrong.

> Near as I can tell, C99 does not require that there be *any* integral
> type large enough to hold a pointer value (6.3.2.3 paragraph 6); and
> specifically, uintptr_t and intptr_t are optional types. However, I
> expect any C99 implementation we're likely to run across will have
> such a type, and will define [u]intptr_t. I don't have a copy of the
> C89 spec handy, though, and unfortunately that's where most compilers
> are these days.

My recent experience is that decent compilers are now essentially C99, and that Microsoft compilers are mostly C99 with a few defects. Our strategy has been to patch around the defective compilers by defining the things we need somewhat like:

#if defined(LOSING_COMPILER_A)
#ifdef i386
typedef unsigned long uintptr_t;
#else
#error DEFINE uintptr_t for your platform
#endif
#endif

and then just rely on the C99 definition. Surprisingly little fixup has been needed.

> In practice, it's also probably safe to use unsigned long long (or
> whatever Windows calls it) on the platforms that have it, and unsigned
> long on those that don't. But testing compiler properties in autoconf
> and then using them in your installed headers may tie you to a
> particular compiler when more than one may be available (e.g.,
> vendor's compiler and gcc).
If we have to store a pointer, we should just use void *. I see Ken's point about testing, but that quickly leads to madness as he notes.
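The claims above are easy to probe from a running process. A small Python sketch using ctypes (the sizes are platform-dependent by design, so this only reports what the current ABI does):

```python
import ctypes

# Compare the width of a data pointer with the widths of the integer
# types discussed above. On LP64 Unix, long and void* are both 8
# bytes; on LLP64 (64-bit Windows), long stays at 4 bytes while
# pointers are 8, so "unsigned long holds a pointer" is not portable.

ptr = ctypes.sizeof(ctypes.c_void_p)
ulong = ctypes.sizeof(ctypes.c_ulong)
ulonglong = ctypes.sizeof(ctypes.c_ulonglong)

print(f"void*: {ptr}  unsigned long: {ulong}  unsigned long long: {ulonglong}")
print("unsigned long can hold a pointer here:", ulong >= ptr)
```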
perlquestion Solo

It's difficult for me to title this question, because I'm not sure which Catalyst magic I'm breaking. I'm new to Catalyst and Moose and lost in all the pod.

I'm trying to add generic JSON support to a Catalyst model. To do so, I add a TO_JSON object to my result class(es):

    package MyApp::Schema::DB::Result::Table;
    ...
    extends 'DBIx::Class::Core';
    ...
    sub TO_JSON { return { $_[0]->get_inflated_columns }; }

and extend Catalyst::View::JSON with some code I found (I've stumbled on different approaches since -- opinions on best approach for this also welcomed):

    package MyApp::View::JSON;
    use Moose;
    use JSON::XS ();
    extends 'Catalyst::View::JSON';

    my $encoder = JSON::XS->new->utf8->pretty(0)->indent(0)
        ->allow_blessed(1)->convert_blessed(1);

    sub encode_json {
        my( $self, $c, $data ) = @_;
        $encoder->encode( $data );
    }

This works for each result class individually, but I'd like it to be DRYer. So, I think to create a result base class and extend that for my result classes...

    package MyApp::Schema::DB::Result::Base;
    use Moose;
    use namespace::autoclean;
    extends 'DBIx::Class::Core';
    sub TO_JSON {...}

    package MyApp::Schema::DB::Result::Table;
    ...
    extends 'MyApp::Schema::DB::Result::Base';
    ...

Catalyst doesn't like this approach, yet, and I'm having a hard time figuring out which doc to read.

Do I just need to tell Catalyst to ignore the '...::Base' class when loading namespaces (or whatever)?

Or should I be doing this some other way entirely?

--Solo
--
You said you wanted to be around when I made a mistake; well, this could be it, sweetheart.
public class CharString {
    public static void main(String[] args) {
        char ch = 'c';
        String st = Character.toString(ch);
        // Alternatively
        // st = String.valueOf(ch);
        System.out.println(st);
    }
}
If you have a char array instead of just a char, we can easily convert it to String using String methods as follows:
public class CharString {
    public static void main(String[] args) {
        char[] ch = {'a', 'e', 'i', 'o', 'u'};
        String st = String.valueOf(ch);
        String st2 = new String(ch);
        System.out.println(st);
        System.out.println(st2);
    }
}
We can also convert a string to char array (but not char) using String's method toCharArray().
import java.util.Arrays;

public class StringChar {
    public static void main(String[] args) {
        String st = "This is great";
        char[] chars = st.toCharArray();
        System.out.println(Arrays.toString(chars));
    }
}
In this article, we’ll demonstrate how to implement reCAPTCHA v2 in a React application and how to verify user tokens in a Node.js backend.
Jump ahead:
- Prerequisites
- What is CAPTCHA?
- What is reCAPTCHA?
- Implementing reCAPTCHA in React
- Setting up a sample React project
- Using the Reaptcha wrapper
Prerequisites
To follow along with the examples in the tutorial portion of this article, you should have a foundational knowledge of:
- React and its concepts
- Creating servers with Node.js and Express.js
- HTTP requests
What is CAPTCHA?
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a type of challenge-response security measure designed to differentiate between real website users and automated users, such as bots.
Many web services use CAPTCHAs to help prevent unwarranted and illicit activities such as spamming and password decryptions. CAPTCHAs require users to complete a simple test to demonstrate they are human and not a bot before giving them access to sensitive information.
What is reCAPTCHA?
There are several types of CAPTCHA systems, but the most widely used system is reCAPTCHA, a tool from Google. Luis von Ahn, the co-founder of Duolingo, created the tool back in 2007 and is being used by more than six million websites, including BBC, Bloomberg, and Facebook.
The first version of reCAPTCHA was made up of randomly generated sequences of distorted alphabetic and numeric characters and a text box.
To pass the test, a user needs to decipher the distorted characters and type them into the text box. Although computers are capable of creating images and generating responses, they can't read or interpret information in the same way a person can to pass the test.
reCAPTCHA generates a response token for each request a user makes and sends it to Google’s API for verification. The API returns a score that determines if the user is human or an automated program.
reCAPTCHA currently has two working versions: v2 and v3. Although v3 is the most recent version of reCAPTCHA (released in 2018), most websites still use reCAPTCHA v2, which was released in 2014.
reCAPTCHA v2 has two variants: checkbox and invisible. The checkbox variant, also known as "I'm not a robot," is the most popular option. It displays a checkbox widget that users can interact with to verify their identity.
The invisible variant displays a reCAPTCHA badge, indicating that the service is running in the background.
In some cases where a user’s behavior triggers suspicion, reCAPTCHA v2 will serve up a challenge that the user must pass to prove they’re not a bot.
Implementing reCAPTCHA in React
Now that we understand what reCAPTCHA is, let’s see how we can implement it in a React app. But first, we need to sign our app up for an API key in the Google reCAPTCHA console. The key pair consists of two keys: site key and secret key.
The site key invokes the reCAPTCHA service in our app. The secret key verifies the user’s response. It does this by authorizing the communication between our app’s backend and the reCAPTCHA server.
Go ahead and create your key pair here.
First, you’ll need to sign in with your Google account. Then, you’ll be redirected to the admin console. Next, you’ll fill out the registration form on the page to generate your site’s key pair.
The registration is fairly straightforward, but for clarity, I’ll explain what each form field means and how to fill each field.
Key pair registration
Label
For the label field, provide a name to help you recognize the purpose of the key pair that you’re creating. If you have more than one key pair set up on your account, the labels will help you distinguish between them.
Type
The type selector refers to the version of reCAPTCHA that you want to use on your site. You can choose either v3 or v2. Since this tutorial will only cover v2 implementation, go ahead and choose v2 and the “I am not a robot” variant.
Domains
The domains field is where you’ll set the domain names that will work with your reCAPTCHA. You can input a valid domain name or “localhost” if your project is still in development, and click + to add the domain.
Owner
The owner field is where you can provision access to your app’s reCAPTCHA to others. By default, you’ll be the owner, but you can add more individuals by providing their Google emails.
Once you’ve completed the form fields, check the necessary boxes and click Submit.
Now you should be able to see your site key and secret key. They will look similar to the ones shown here:
Next, we‘ll set up a React sample project and implement reCAPTCHA using the key pairs we just created.
Setting up a sample React project
To verify a user’s input with reCAPTCHA we require a server that’ll communicate with Google’s API. So we‘ll need to keep that in mind when setting up the project.
First, create a folder. For this sample project, I will name the folder
react-node-app, but you can use a different name of your choosing.
Next, open the folder in your preferred IDE and run the following command:
npm init -y
This will create a
package.json file that will help us manage our dependencies and keep track of our scripts.
Go ahead and bootstrap a React app with
create-react-app by typing the following command in your terminal:
npx create-react-app my-app
This command will create a
my-app folder inside the
react-node-app folder and will install React inside the
my-app folder.
After installation, open the
my-app folder and clean up the unnecessary boilerplate codes and files in the project folder, then create a
Form.js component within the
src folder.
Next, add the following code into the form component and import it inside the
App.js main component.
const Form = () =>{
    return(
        <form>
            <label htmlFor="name">Name</label>
            <input type="text" id="name" className="input"/>
            <button>Submit</button>
        </form>
    )
}

export default Form
The above code is a simple form with a single input element and a Submit button.
input element and a
Submit button.
Styling the form component isn’t necessary, but if you’d like to add a little flair, add the following CSS code inside the
App.css file in the project folder.
.input{
  width: 295px;
  height: 30px;
  border: rgb(122, 195, 238) 2px solid;
  display: block;
  margin-bottom: 10px;
  border-radius: 3px;
}

.input:focus{
  border: none;
}

label{
  display: block;
  margin-bottom: 2px;
  font-family: 'Courier New', Courier, monospace;
}

button{
  padding: 7px;
  margin-top: 5px;
  width: 300px;
  background-color: rgb(122, 195, 238);
  border: none;
  border-radius: 4px;
}
Now, start the development server with the command
npm start in the terminal.
You should see a form similar to this displayed on your browser:
N.B., It is recommended to use a framework that has support for SSR (server-side-rendering), like Next.js or Remix, when creating something similar for production.
Installing
react-google-recaptcha
The
react-google-recaptcha library enables the integration of Google reCAPTCHA v2 in React. The package provides a component that simplifies the process of handling and rendering reCAPTCHA in React with the help of useful props.
To install
react-google-recaptcha, type and run the following command:
npm install --save react-google-recaptcha
Adding reCAPTCHA
After installing
react-google-recaptcha, head over to the
form.js component file and import it, like so:
import ReCAPTCHA from "react-google-recaptcha"
Now add the
reCAPTCHA component to the form, just before or after the
Submit button. Your placement of the component is optional, the reCAPTCHA widget will appear wherever the
reCAPTCHA component is placed in the form when rendered.
<form >
    <label htmlFor="name">Name</label>
    <input type="text" id="name" className="input"/>
    <ReCAPTCHA />
    <button>Submit</button>
</form>
As mentioned previously, the
reCAPTCHA component accepts several props. However, the
sitekey prop is the only prop we need to render the component. This prop facilitates the connection between the site key we generated earlier from the reCAPTCHA key pair and the
reCAPTCHA component.
Here are other optional props of the
reCAPTCHA component:
- theme: changes the widget's theme to light or dark
- size: changes the size or type of CAPTCHA
- onErrored: fires a callback function if the test returns an error
- badge: changes the position of the reCAPTCHA badge (bottomright, bottomleft, or inline)
- ref: used to access the component instance API
<ReCAPTCHA sitekey={process.env.REACT_APP_SITE_KEY} />
Here we add a
sitekey prop to the
reCAPTCHA component and pass it an environment variable with the reCAPTCHA site key.
To do the same in your project, create a
.env file in the root folder of your project. Next, add the following code to the file:
/*.env*/ REACT_APP_SECRET_KEY = "Your secret key" REACT_APP_SITE_KEY = "your site key"
This way, you can use your secret keys safely in your app by referencing the variable names where they’re needed.
Now, if save your code and go to the browser, a reCAPTCHA box should appear where the reCAPTCHA component is placed in your code. In this example, it appears before the submit button.
After each verification, we need to reset the reCAPTCHA for subsequent checks. To accomplish this, we need to add a
ref prop to the
reCAPTCHA component.
To use the
ref prop, first, import the
useRef hook from React:
import React, { useRef } from 'react';
Next, store the
ref value in a variable, like so:
const captchaRef = useRef(null)
Then, add the
ref prop to the
reCAPTCHA component and pass it the
captchaRef variable:
<ReCAPTCHA sitekey={process.env.REACT_APP_SITE_KEY} ref={captchaRef} />
Here’s the entire code in our
Form component up to this point:
import { useRef } from "react";
import ReCAPTCHA from "react-google-recaptcha";

const Form = () => {
    const captchaRef = useRef(null);

    return(
        <form>
            <label htmlFor="name">Name</label>
            <input type="text" id="name" className="input"/>
            <ReCAPTCHA
                sitekey={process.env.REACT_APP_SITE_KEY}
                ref={captchaRef}
            />
            <button>Submit</button>
        </form>
    )
}

export default Form
Now that we have a working widget, we just need to complete three steps to get reCAPTCHA functioning:
- Get the response token from the
reCAPTCHAcomponent
- Reset the
reCAPTCHAcomponent for subsequent checks
- Verify the response token in the backend
Getting the response token
We can also use the
ref prop to get the generated token from our reCAPTCHA. All we have to do is get the value of the
ref with the following code:
const token = captchaRef.current.getValue();
Resetting reCAPTCHA for subsequent checks
If we add the above code to the form component, it will actually cause an error. This is because the value of the
ref is still null, since the reCAPTCHA is in an unchecked state. To solve this issue, we'll add an
onSubmit event handler to the form with a function that encapsulates the code:
const handleSubmit = (e) =>{
    e.preventDefault();
    const token = captchaRef.current.getValue();
    captchaRef.current.reset();
}

return(
    <form onSubmit={handleSubmit} >
    …
    </form>
)
In the above code, we created a
handleSubmit function. Inside this function, we added the
token variable for getting the response token from reCAPTCHA, as well as a code that resets the reCAPTCHA each time the form is submitted.
This way, the
getValue() method will only attempt to get the ref’s value, which is the response token, when the submit button is clicked.
Now if you log the
token variable to the console, check the reCAPTCHA box, and submit the form, you should see a generated response token similar to the one below in your console:
Verifying the token in the Node.js backend
The token we generated in the previous section is only valid for two minutes, which means we need to verify it before it expires. To do so, we’ll need to set up our app’s backend and send the token to Google’s API to check the user’s score.
Setting up the Node.js backend
To set up a Node.js server, navigate back to the
react-node-app folder, create a new folder, and name it
server. Inside the
server folder, create a new file and name it
index.js. This file will serve as the entry point for our Node app.
Next, cd into the
server folder and run the following command to install Express.js and Axios:
npm i express axios cors dotenv --save
Now, add the following code inside the
index.js file:
const express = require("express");
const router = express.Router();
const app = express();
const cors = require('cors');
const axios = require('axios');
const dotenv = require('dotenv').config()
const port = process.env.PORT || 2000;

//enabling cors
app.use(cors());

//Parse data
app.use(express.json());
app.use(express.urlencoded({extended: true}));

//add router in express
app.use("/", router);

//POST route
router.post("/post", async (req, res) => {
    //Destructuring response token from request body
    const {token} = req.body;

    //sends secret key and response token to google
    const response = await axios.post(
        `https://www.google.com/recaptcha/api/siteverify?secret=${process.env.SECRET_KEY}&response=${token}`
    );

    //check the verification result and send it back to the client-side
    if (response.data.success) {
        res.send("Human 👨 👩");
    }else{
        res.send("Robot 🤖");
    }
});

app.listen(port, () =>{
    console.log(`server is running on ${port}`);
});
In the above code, we set up an Express server and created a
/post route. Inside the endpoint function, we destructured the request body to get the
token data that will be sent from the client side.
Then we created an
axios.post request to Google’s API with our
SECRET_KEY passed in as an environment variable, as well as the
token from the client side.
To set up an environment variable in Node.js, cd back to the
react-node-app folder and run the following command:
npm install dotenv --save
After installation, create a
.env file within the
react-node-app folder, open the file, then add your site’s secret key.
Beneath the
axios.post request is an
if statement that checks the verification result returned by the API and sends the outcome back to the client side.
Ok, let’s move on. Navigate back to the
react-node-app folder, open the
package.json file, and replace the script command with the following:
…
"scripts": {
  "start": "node server/index.js"
},
…
The above code will let us start our server using the
npm start command when we run it in the terminal.
Save the project. Then, go to your terminal, open a new terminal tab, cd into the
server folder, and start the server by running
npm start.
Checking the user’s score
Next, we’ll send an
axios.post request from the client side (React app) to our server, with the generated token as the data.
To do this, navigate back to your React app and paste the following code inside the
handleSubmit function we created earlier:
const handleSubmit = async (e) => {
  e.preventDefault();
  const token = captchaRef.current.getValue();
  captchaRef.current.reset();
  await axios.post(process.env.REACT_APP_API_URL, { token })
    .then(res => console.log(res))
    .catch((error) => {
      console.log(error);
    });
};
This code is an
axios.post request that sends the generated token from reCAPTCHA to the Node.js backend.
If you save your code and run the app, you should see a reCAPTCHA form similar to this:
Using the
reaptcha wrapper
react.captcha (Reaptcha) is an alternative solution for implementing reCAPTCHA in React. The library shares similar features with react-google-recaptcha, but unlike the former, Reaptcha handles reCAPTCHA’s callbacks inside React components and automatically injects the reCAPTCHA script into the head DOM element.
This way, your applications would not have to depend on the library and directly communicate with the reCAPTCHA API when deployed.
To install Reaptcha, run the following command within your terminal:
npm install --save reaptcha
After installation, go to the
form.js file and import the Reaptcha component like so:
import Reaptcha from 'reaptcha';
The Reaptcha component provides several props that can be used to customize the rendering. Here is a list of the available props:
sitekey: This prop accepts the client key (site key we generated in the previous sections)
theme: an optional prop for changing the widget’s appearance (light or dark)
onLoad: an optional callback function that gets called when the Google reCAPTCHA script has been loaded
onVerify: an optional callback function that gets called when a user completes the captcha
onExpire: an optional callback function that gets called when the challenge is expired and has to be redone
explicit: an optional prop that allows the widget to be rendered explicitly, i.e., invisible
size: an optional prop that allows you to change the size of the widget to either of these: compact, normal, invisible
ref: prop used for accessing the component’s instance methods
Although most of these props look similar to the ones exposed by the react-google-recaptcha’s component, not all of them work as you’d expect. The
ref prop for one doesn’t have a method like
getValue() for getting the response token. Instead, it uses a
getResponse() instance method that returns the token with a promise.
Therefore, adding the component to the form component and retrieving the response token will be as follows:
const [captchaToken, setCaptchaToken] = useState(null);
const captchaRef = useRef(null);

const verify = () => {
  captchaRef.current.getResponse().then(res => {
    setCaptchaToken(res);
  });
};

return (
  <form onSubmit={handleSubmit}>
    <Reaptcha
      sitekey={process.env.REACT_APP_SITE_KEY}
      ref={captchaRef}
      onVerify={verify}
    />
  </form>
);
Here, we created a
verify function. Inside it, we’re fetching the response token from the
ref variable using the
getResponse() instance method. Since the method returns a promise, we chained a
then method to it and passed the response to the
captchaToken state variable.
We also pass the
verify function to the
onVerify prop on the component so that the function will only attempt to fetch the response token when a user completes the captcha.
The component’s instance methods are utility functions that can be called to perform certain actions. Just as we used the
getResponse method to grab the response token earlier, we can use other methods to perform different actions, like resetting the widget after every form submission. Here is a list of available instance methods:
reset
renderExplicitly
execute
getResponse
Visit the documentation to learn more about these methods and the Reaptcha library.
That’s it! You’ve successfully implemented a working Google reCAPTCHA and a backend server that verifies users’ responses in React.
Conclusion
In this article, we examined what reCAPTCHA is and how it works. We also walked through a tutorial to demonstrate how to implement reCAPTCHA in a React application and how to verify a user’s response token with a Node.js backend server.
I hope this article helps you build secure, bot-free applications.
4 Replies to “How to implement reCAPTCHA in a React application”
Great post. I ran into some trouble after naming my component in lower case, and React wasn't recognizing it as a component until I changed it to upper case.
Thank you, Mark. React components always start with uppercase letters. The library treats any component with lowercase initials as HTML elements.
Hi, I ran into an issue where the form still submits even when the reCAPTCHA is not clicked.
Hi Rasam, you have to perform a conditional check based on the response you get from the server. If it’s positive, submit the form. If not, do otherwise. I hope this helps. | https://blog.logrocket.com/implement-recaptcha-react-application/ | CC-MAIN-2022-40 | refinedweb | 3,218 | 53.81 |
Code security is a central concern in .NET development, and protecting Web sites against unauthorized access is a complex task for Web developers. ASP.NET provides web application protection with the help of the .NET Framework and IIS (Internet Information Services). In this article, we take a short tour of authentication and authorization concepts; it should be helpful for beginners.
I would like to thank Abhijit Jana for his nice article on IIS 6.0 for beginners; it encouraged me to write about authorization and authentication. This article will give you a basic idea of authentication and authorization and how they work in a Web application.
Authentication and authorization are two interrelated processes: authentication happens first, then authorization. Authentication answers the question "Is this a valid user?" More precisely, it is the process of obtaining identification credentials, such as a name and password, from a user and validating those credentials against some authority. Once the credentials are validated and the identity has been authenticated, the authorization process starts.
.NET uses the following authentication providers for authentication:
Before getting deeper in the authentication, let us have a look at authorization.
Authorization is the process of determining what rights an authenticated user has. By using authorization, we can limit access by granting or denying specific permissions to an authenticated identity. The purpose of authorization is to determine whether an identity should be granted the requested type of access to a given resource. There are two fundamental ways to authorize access to a given resource:
Let's start with the authentication providers.
This is the default authentication provider of .NET. ASP.NET uses windows authentication with the help of IIS.
Authentication is performed by IIS in the following ways:
When IIS authentication is complete, ASP.NET uses the authenticated identity to authorize access. IIS can be configured so that only Windows domain users can log in.
This authentication is also known as Windows NT Challenge/Response authentication. Integrated Windows authentication is enabled by default on Windows Server 2003 operating systems. Here the application uses challenge/response protocols or Kerberos to authenticate users.
Although Integrated Windows authentication is secure, it does have two limitations:
Integrated Windows authentication is best suited for an intranet environment.
Basic authentication requires a user name and password to connect over the network, but the password is sent in plain text; hence it is not a secure form of authentication on its own.
Basic authentication works roughly as follows: the client requests a protected resource; the server responds with a 401 status and a WWW-Authenticate: Basic header; the client resends the request with an Authorization header containing the Base64-encoded user name and password; and the server decodes and validates those credentials.
Its main advantage is near-universal client support; its main disadvantage is that credentials effectively cross the network in the clear unless the connection is protected by SSL.
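To see the "plain text" weakness concretely, here is a short sketch (the user name and password are made up): Basic credentials are only Base64-encoded, which any observer can reverse.

```python
import base64

# Why Basic authentication is insecure: the credentials are merely
# base64-encoded, not encrypted (user name and password are examples).
user, password = "alice", "s3cret"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
# Anyone who observes the traffic can trivially recover the credentials:
print(base64.b64decode(token).decode())  # -> alice:s3cret
```

This is why Basic authentication should only be used over an SSL-protected connection.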
In this type of authentication, the password is hashed before it is sent across the network: Digest authentication transmits credentials as an MD5 hash, or message digest. However, to use Digest authentication the client must be Internet Explorer 5.0 or above, and the user and the server running IIS must belong to the same domain.
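A much-simplified sketch of the digest idea follows (this is not the full RFC 2617 algorithm, which also hashes the method and URI and supports quality-of-protection options; all names and values here are illustrative):

```python
import hashlib

# Simplified illustration of Digest authentication: only an MD5 hash
# derived from the password crosses the network, never the password itself.
user, realm, password = "alice", "example.com", "s3cret"
nonce = "abc123"  # server-supplied challenge value, changes every time

# HA1 combines identity and password; the response mixes in the nonce.
ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
digest = hashlib.md5(f"{ha1}:{nonce}".encode()).hexdigest()
print(digest)  # this value, not the password, is sent to the server
```

Because the nonce changes per challenge, a captured digest cannot simply be replayed later.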
Anonymous access is completely open: when a user attempts to open the site, IIS does not perform any authentication check.
The user provides credentials and submits a logon form. If the user authenticates successfully, the system issues a cookie containing a ticket that re-establishes the identity on subsequent requests.
Forms authentication is a good choice if your application needs to collect its own user credentials at logon time through HTML forms. With this authentication we can customize content for known users. The system accepts credentials from the user (usually a user name and password), and application code checks them to confirm authenticity. If the credentials are valid, the application attaches a cookie containing the user name (not the password); if they fail, the request returns with an access-denied message.
Let the following picture clear the idea:
Microsoft .NET Passport is a user-authentication service and a component of the Microsoft .NET framework. Passport authentication is a centralized authentication service; .NET Passport uses standard Web technologies and techniques such as Secure Sockets Layer (SSL), HTTP redirects, cookies, Microsoft JScript, and strong symmetric-key encryption. Sign-in, sign-out, and registration pages are centrally hosted rather than being specific to an individual site.
There is no real-time or server-to-server communication between participating Web sites and the central .NET Passport servers.
To enable an authentication provider for an ASP.NET application, create an entry in Web.config file as follows:
//Web.config file
<authentication mode="[Windows|Forms|Passport|None]" />
The default authentication mode is Windows. If we set the authentication mode as None, then ASP.NET will not apply any authenticate checks on client request.
None authentication can be useful when you want to introduce custom authentication scheme or don't want to check any authentication for getting highest level of performance.
URL authorization maps users and roles to pieces of the URL namespace. By using this authorization, we can selectively allow or deny access to certain users or roles: just place lists of users and roles in the <allow> or <deny> elements of the <authorization> section.
There are two special identities that we can allow or deny:
Consider the following example which will emphasis the subject:
<authorization>
<allow users="ABC"/>
<allow roles="XXX"/>
<deny users="XYZ"/>
<deny users="?"/>
</authorization>
The above example grants access to ABC user and users of XXX roles. Whereas it denies access to XYZ user and anonymous users.
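As a rough mental model, the rules are checked top-down and the first matching <allow> or <deny> decides. The sketch below is a toy simplification of that behavior, not how ASP.NET is implemented; the final fall-through mirrors the machine-level default that allows all users.

```python
# Toy model of evaluating the <authorization> section above:
# rules are checked in order and the first match wins.
rules = [
    ("allow", {"users": {"ABC"}, "roles": set()}),
    ("allow", {"users": set(), "roles": {"XXX"}}),
    ("deny",  {"users": {"XYZ"}, "roles": set()}),
    ("deny",  {"users": {"?"}, "roles": set()}),   # "?" = anonymous users
]

def is_allowed(user, roles=frozenset(), anonymous=False):
    for action, match in rules:
        hit = (user in match["users"]
               or (anonymous and "?" in match["users"])
               or bool(set(roles) & match["roles"]))
        if hit:
            return action == "allow"
    return True  # no rule matched: default configuration allows

print(is_allowed("ABC"))                    # True  - explicitly allowed
print(is_allowed("XYZ"))                    # False - explicitly denied
print(is_allowed("guest", anonymous=True))  # False - anonymous denied
```

Note how rule order matters: if the deny entries came first, the same users would be rejected before their allow rules were ever consulted.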
We can give multiple users in a single element:
<authorization>
<allow users="ABC, XYZ"/>
</authorization>
If you want to deny access to all users, then the setting is as follows:
<authorization>
<deny users="*"/>
</authorization>
File authorization is active when you use Windows authentication. It performs an access control list (ACL) check on the .aspx or .asmx handler file to determine whether the user has access to that file. Applications can further use impersonation to control the resources they access. File access is checked against NTFS file permissions; the check ensures that the user has READ access to the requested file. The default user account is the ASPNET account.
Impersonation is the technique by which an application executes code using the identity of the authenticated user (or a configured account) rather than the default process account. By default, impersonation is not enabled. We can enable it in the web.config file:
<identity impersonate="true"/>
Alternatively, we can provide a user name and password for the impersonation; the userName attribute specifies on whose behalf the application accesses the site.
<identity impersonate="true" userName="administrator" password="pass"/>
We can enable these settings from IIS also.
Finally, that is all about the basic idea of authentication and authorization.
Let us take a snap tour of what we learnt. To improve the security of web applications, ASP.NET and IIS introduce Authentication and Authorization processes.
Authentication
Authorization
If your application runs on an intranet, you should use Windows authentication, which keeps track of all users on the intranet. Otherwise, Forms authentication is a good choice.
Knowledge is an endless entity; there is always more to learn.
1. How to build and install galario¶
1.2. Quickest installation: using conda¶
By far the easiest way to install galario is via conda.
If you are new to
conda, you may want to start with the minimal miniconda. With
conda all dependencies are
installed automatically and you get access to galario’s C++ core and python
bindings, both with support for multithreading.
To install galario:
conda install -c conda-forge galario
To create a conda environment for galario, see Section 1.4, step 2.
Due to technical limitations, the conda package does not support GPUs at the moment. If you want to use a GPU, read on as you have to build galario by hand.
1.3. Build requirements¶
To compile galario you will need:
- a working internet connection (to download 1.5 MB of an external library)
- either
g++>=4.8.1 or
clang++>=3.3 with full support of C++11. To use multiple threads, the compiler has to support openMP
cmake: download from the cmake website or install with
conda install -c conda-forge cmake
make
- the FFTW libraries, for the CPU version: more details are given below
- [optional] the CUDA toolkit >=8.0 for the GPU version: it can be easily installed from the NVIDIA website
- [optional] Python and numpy for Python bindings to the CPU and GPU
Warning
If you want to use the GNU compilers on Mac OS, you need to manually download and install them, e.g. following these instructions.
The default
gcc/
g++ commands shipped with the OS are aliases for the
clang compiler that supports openMP only as of version 3.7 but unfortunately Apple usually ships an older version of
clang.
1.4. Quick steps to build and install¶
Here a quick summary to compile and install galario with default options, below are more detailed instructions to fine-tune the build process.
The following procedure will always compile and install the CPU version of galario. On a system with a CUDA-enabled GPU card, also the GPU version will be compiled and installed. To manually turn ON/OFF the GPU CUDA compilation, see these instructions below.
-
Clone the repository and create a directory in which to build galario:

git clone
cd galario
mkdir build && cd build
-
to make the compilation easier, let’s work in a Python environment. galario works with both Python 2 and 3.
For example, if you are using the Anaconda distribution, you can create and activate a Python 3.6 environment with:

conda create --name galario3 python=3.6 numpy cython pytest scipy
source activate galario3
-
Use cmake to prepare the compilation from within galario/build/:

cmake ..
This command will produce configuration and compilation logs listing all the libraries and the compilers that are being used. It will use the internet connection to automatically download this additional library (1.5 MB).
-
Use make to build galario and make install to install it inside the active environment:

make && make install
If the installation fails due to permission problems, you either have to use
sudo make install, or see the instructions below to specify an alternate installation path. Permission problems may arise when you are using, e.g., a shared conda environment: in that case, it is preferable to create your own environment in a directory where you have write permissions.
These instructions should be sufficient in most cases, but if you have problems or want more fine-grained control, check out the details below. If you find issues or are stuck in one of these steps, consider writing us an email or opening an issue on GitHub.
Note
If you compile galario only for the CPU, gcc/g++ >= 4.0 works fine. If you
compile also the GPU version, check in the NVIDIA Docs which gcc/g++
versions are compatible with the
nvcc compiler shipped with your CUDA
Toolkit.
1.5. Detailed build instructions¶
The default configuration to build galario is
git clone
cd galario
mkdir build && cd build
cmake .. && make
There are many options to affect the build when cmake is invoked. When playing with options, it's best to remove the cmake cache first:
rm build/CMakeCache.txt
In the following, we assume
cmake is invoked from the
build directory.
1.5.1. Compiler¶
Set the C and C++ compiler
export CC="/path/to/bin/gcc" export CXX="/path/to/bin/g++" cmake .. # alternative cmake -DCMAKE_C_COMPILER=/path/to/gcc -DCMAKE_CXX_COMPILER=/path/to/g++ ..
When changing the compiler, it is best to start with a fresh empty build directory.
1.5.2. Optimization level¶
By default galario is built with all the optimizations ON. You can check this with:
cmake --help-variable CMAKE_BUILD_TYPE
The default built type is
Release, which is the fastest. If you want debug symbols as well, use
RelWithDebInfo.
To turn on even more aggressive optimization, pass the flags directly. For example for g++:
cmake -DCMAKE_CXX_FLAGS='-march=native -ffast-math' ..
Note that these further optimizations might not work on every system.
To turn off optimizations:
cmake -DCMAKE_BUILD_TYPE=Debug ..
1.5.3. Python¶
To build the python bindings, we require python 2.7 or 3.x,
numpy,
cython, and
pytest. To run the tests, we additionally need
scipy>0.14.
Specify a Python version if Python 2.7 and 3.x are in the system and
conflicting versions of the interpreter and the libraries are found
and reported by
cmake. In
build/, do
cmake -DPython_ADDITIONAL_VERSIONS=3.5 ..
galario should work with both python 2 and 3. For example, if you are using the Anaconda distribution, you can create conda environments with
# python 2 conda create --name galario2 python=2 numpy cython pytest source activate galario2 # or python3 conda create --name galario3 python=3 numpy cython pytest source activate galario3
To run the tests, install some more dependencies within the environment
conda install scipy
cmake may get confused by the conda python and the system python; this is a general problem.
A workaround to help cmake find the interpreter and the libs from the currently loaded conda environment is
cmake -DCMAKE_PREFIX_PATH=${CONDA_PREFIX} ..
If you still have problems, after the
cmake command, check whether the FFTW
libraries with openMP flags are found and whether the path to Python is
correctly set to the path of the conda environment in use, e.g.
/home/user/anaconda/envs/galario3.
1.5.4. FFTW¶
The FFTW libraries are required for the CPU version of galario.
You can check if they are installed on your system by checking if all libraries listed below are
present, for example in
/usr/lib or
/usr/local/lib/.
galario requires the following FFTW libraries:
libfftw3: double precision
libfftw3f: single precision
libfftw3_threads: double precision with pthreads
libfftw3f_threads: single precision with pthreads
galario has been tested with FFTW 3.3.6.
The easiest way to install FFTW is to use a package manager, for example
apt
on Debian/Ubuntu or
homebrew on the Mac. For example,
sudo apt-get install libfftw3-3 libfftw3-dev
If you really want to build FFTW from source, for example because you don’t have admin rights, read on.
1.5.4.1. Manual compilation¶
To compile FFTW, download the .tar.gz from FFTW website. On Mac OS, you have to explicitly
enable the build of dynamic (shared) library with the
--enable-shared option, while on Linux this
should be the default.
You can create the libraries listed above with the following lines:
cd fftw-<version>/ mkdir d_p && cd d_p && \ CC=/path/to/gcc ../configure --enable-shared && make && sudo make install && cd .. mkdir s_p && cd s_p && \ CC=/path/to/gcc ../configure --enable-shared --enable-single && make && sudo make install && cd .. mkdir d_p_omp && cd d_p_omp && \ CC=/path/to/gcc ../configure --enable-shared --enable-openmp && make && sudo make install && cd .. mkdir s_p_omp && cd s_p_omp && \ CC=/path/to/gcc ../configure --enable-shared --enable-single --enable-openmp && make && sudo make install && cd ..
If you have no
sudo rights to install FFTW libraries, then provide an installation directory via
make install --prefix="/path/to/fftw".
Note
Before building galario,
FFTW_HOME has to be set equal to the installation directory of FFTW, e.g. with:
export FFTW_HOME="/usr/local/lib/"
in the default case, or to the prefix specified during the FFTW installation.
Also, you need to update the
LD_LIBRARY_PATH to pick the FFTW libraries:
export LD_LIBRARY_PATH=$FFTW_HOME/lib:$LD_LIBRARY_PATH
To speedup building FFTW, you may add the -jN flag to the make commands above, e.g.
make -jN, where N is an integer
equal to the number of cores you want to use. E.g., on a 4-cores machine, you can do
make -j4. To use -j4 as default, you can
create an alias with:
alias make="make -j4"
1.5.4.2. Setting paths¶
To find FFTW3 in a nonstandard directory, say
$FFTW_HOME, tell
cmake about it:
cmake -DCMAKE_PREFIX_PATH=${FFTW_HOME} ..
For multiple directories, use a
; between directories:
cmake -DCMAKE_PREFIX_PATH=${FFTW_HOME};/opt/something/else ..
In case the directory with the header files is not inferred correctly:
cmake -DCMAKE_CXX_FLAGS="-I${FFTW_HOME}/include" ..
In case the openmp libraries are not in
${FFTW_HOME}/lib
cmake -DCMAKE_LIBRARY_PATH="${FFTW_OPENMP_LIBDIR}" ..
1.5.5. CUDA¶
cmake tests for compilation on the GPU with cuda by default except on Mac
OS, where version conflicts between the NVIDIA compiler and the C++ compiler
often lead to problems; see for example this issue.
To manually enable or disable checking for cuda, do
cmake -DGALARIO_CHECK_CUDA=0 .. # don't check cmake -DGALARIO_CHECK_CUDA=1 .. # check
If cuda is installed in a non-standard directory or you want to specify the exact version, you can point cmake
cmake -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-9.1 ..
1.5.6. Timing¶
For testing purposes, you can activate the timing features embedded in the code that produce detailed printouts to
stdout of various
portions of the functions. The times are measured in milliseconds. This feature is OFF by default and can be activated during the configuration stage with
cmake -DGALARIO_TIMING=1 ..
1.5.7. Documentation¶
This documentation should be available online here. If you want to build the documentation
locally, from within the
build/ directory run:
make docs
which creates output in
build/docs/html. The
docs are not built by default, only upon request.
First install the build requirements with
conda install sphinx pip install sphinx_py3doc_enhanced_theme sphinxcontrib-fulltoc
within the conda environment in use. This ensures that the
sphinx version matches the Python version used to compile
galario.
If you still have problems, remove the
CMakeCache.txt, rerun
cmake, and observe which location of
sphinx is reported in
CMakeCache.txt, for example:
-- Found Sphinx: /home/myuser/.local/miniconda3/envs/galario3/bin/sphinx-build
The galario library needs to be imported when building the documentation (the import would fail otherwise) to extract docstrings.
To delete the sphinx cache in case the docs don’t update as expected
rm -rf docs/_doctrees/
1.6. Install¶
To specify a path where to install the C libraries of galario (e.g., if you do not have
sudo rights to install it in
usr/local/lib),
do the conventional:
cmake -DCMAKE_INSTALL_PREFIX=/path/to/galario/lib ..
and, after building, run:
make install
This will install the C libraries of galario in
/path/to/galario/.
Note
By default the C libraries and the Python bindings are installed under the same prefix.
If you want to install the Python bindings elsewhere, there is an extra cache variable
GALARIO_PYTHON_PKG_DIR that you can edit with
ccmake . after running
cmake.
If you are working inside an active conda environment, both the libraries and the python wrapper are installed inside the environment defined by
$CONDA_PREFIX, e.g.:
conda activate myenv cmake .. make && make install
Example output during the
install step
-- Installing: /path/to/conda/envs/myenv/lib/libgalario.so -- Installing: /path/to/conda/envs/myenv/include/galario.h ... -- Installing: /path/to/conda/envs/myenv/lib/python2.7/site-packages/galario/single/__init__.py
From the environment
myenv it is now possible to import galario.
1.7. Tests¶
After building, just run
ctest -V --output-on-failure from within the
build/ directory.
Every time
python/test_galario.py is modified, it has to be copied over to the build directory: only when run there,
import pygalario works. The copy is performed in the configure step,
cmake detects changes so always run
make first.
py.test fails if it cannot collect any tests. This can be caused by C errors.
To debug the testing, first find out the exact command of the test:
make && ctest -V
py.test captures the output from the test, in particular from C to stderr.
Force it to show all output:
make && python/py.test.sh -sv python_package/tests/test_galario.py
By default, tests do not run on the GPU. Activate them by calling
py.test.sh --gpu=1 ....
To select a given parametrized test named
test_sample, just run
py.test.sh -k sample.
A cuda error such as
[ERROR] Cuda call /home/user/workspace/galario/build/src/cuda_lib.cu: 815 invalid argument
can mean that code cannot be executed on the GPU at all rather than that specific call being invalid.
Check if
nvidia-smi fails
$ nvidia-smi Failed to initialize NVML: Driver/library version mismatch | https://mtazzari.github.io/galario/install.html | CC-MAIN-2019-22 | refinedweb | 2,180 | 56.76 |
After using IronPython for a few days, I found a puzzling problem. Let us look at the following code:
import MyAssembly
from MyAssembly.MyFunction import * #there is a method named "ABC" without params in the class "MyAssembly.MyFunction"
from MyAssembly import * #there is a class named "ABC" in the namespace "MyAssembly"
a = ABC() # the name resolution here really confused me...
I don't know how IronPython will resolve ABC here: will it invoke the method or instantiate the class?
To take the discussion further, how does IronPython resolve duplicate names in general?
The way to think about this is to ask yourself "How would Python behave if these were Python modules instead of .NET namespaces?" You can always use the fully-qualified name "MyAssembly.MyFunction.ABC" to get at the ABC method after it's been replaced
in the current scope by the ABC class.
Some people believe that "from module import *" is not a good practice when developing robust code, because unrelated changes in MyAssembly may suddenly create this collision where it didn't exist before and end up changing the meaning of the code in this module.
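The accepted answer can be demonstrated with plain Python, no .NET involved (the names below are made up for illustration): in one scope, the most recent binding of a name wins, exactly as with successive wildcard imports.

```python
# A plain-Python sketch of the collision described above: in a single
# scope the latest binding of a name shadows the earlier ones.
def ABC():
    return "called the function"

function_ABC = ABC          # keep a reference, like a fully-qualified name

class ABC:                  # rebinding the same name, as a later import would
    pass

a = ABC()                   # instantiates the class: the latest binding wins
print(type(a) is ABC)       # -> True
print(function_ABC())       # -> called the function (still reachable)
```

This is why keeping a fully-qualified reference (or avoiding `import *`) sidesteps the ambiguity entirely.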
Apache Avro 1.7.7 Specification
- Introduction
- Schema Declaration
- Data Serialization
- Sort Order
- Object Container Files
- Protocol Declaration
- Protocol Wire Format
- Schema Resolution
- Parsing Canonical Form for Schemas
- Logical Types
Introduction
This document defines Apache Avro. It is intended to be the authoritative specification. Implementations of Avro must adhere to this document.
Schema Declaration
A Schema is represented in JSON by one of:
- A JSON string, naming a defined type.
- A JSON object, of the form:
{"type": "typeName" ...attributes...}where typeName is either a primitive or derived type name, as defined below. Attributes not defined in this document are permitted as metadata, but must not affect the format of serialized data.
- A JSON array, representing a union of embedded types.
Primitive Types
The set of primitive type names is:
- null: no value
- boolean: a binary value
- int: 32-bit signed integer
- long: 64-bit signed integer
- float: single precision (32-bit) IEEE 754 floating-point number
- double: double precision (64-bit) IEEE 754 floating-point number
- bytes: sequence of 8-bit unsigned bytes
- string: unicode character sequence
Primitive types have no specified attributes.
Primitive type names are also defined type names. Thus, for example, the schema "string" is equivalent to:
{"type": "string"}
Complex Types
Avro supports six kinds of complex types: records, enums, arrays, maps, unions and fixed.
Records
Records use the type name "record" and support three attributes:
- name: a JSON string providing the name of the record (required).
- namespace, a JSON string that qualifies the name;
- doc: a JSON string providing documentation to the user of this schema (optional).
- aliases: a JSON array of strings, providing alternate names for this record (optional).
- fields: a JSON array, listing fields (required). Each field is a JSON object with the following attributes:
- name: a JSON string providing the name of the field (required), and
- doc: a JSON string describing this field for users (optional).
- type: A JSON object defining a schema, or a JSON string naming a record definition (required).
- default: A default value for this field, used when reading instances that lack this field (optional). Permitted values depend on the field's schema type, according to the table below. Default values for union fields correspond to the first schema in the union. Default values for bytes and fixed fields are JSON strings, where Unicode code points 0-255 are mapped to unsigned 8-bit byte values 0-255.
- order: specifies how this field impacts sort ordering of this record (optional). Valid values are "ascending" (the default), "descending", or "ignore". For more details on how this is used, see the sort order section below.
- aliases: a JSON array of strings, providing alternate names for this field (optional).
For example, a linked-list of 64-bit values may be defined with:
{ "type": "record", "name": "LongList", "aliases": ["LinkedLongs"], // old name for this "fields" : [ {"name": "value", "type": "long"}, // each element has a long {"name": "next", "type": ["null", "LongList"]} // optional next element ] }
Enums
Enums use the type name "enum" and support the following attributes:
- name: a JSON string providing the name of the enum (required).
- namespace, a JSON string that qualifies the name;
- aliases: a JSON array of strings, providing alternate names for this enum (optional).
- doc: a JSON string providing documentation to the user of this schema (optional).
- symbols: a JSON array, listing symbols, as JSON strings (required). All symbols in an enum must be unique; duplicates are prohibited.
For example, playing card suits might be defined with:
{ "type": "enum", "name": "Suit", "symbols" : ["SPADES", "HEARTS", "DIAMONDS", "CLUBS"] }
Arrays
Arrays use the type name "array" and support a single attribute:
- items: the schema of the array's items.
For example, an array of strings is declared with:
{"type": "array", "items": "string"}
Maps
Maps use the type name "map" and support one attribute:
- values: the schema of the map's values.
Map keys are assumed to be strings.
For example, a map from string to long is declared with:
{"type": "map", "values": "long"}
Unions
Unions, as mentioned above, are represented using JSON arrays. For example, ["null", "string"] declares a schema which may be either a null or a string. (Note that when a default value is specified for a record field whose type is a union, the type of the default value must match the first element of the union. Thus, for unions containing "null", the "null" is usually listed first, since the default value of such unions is typically null.)
Unions may not contain more than one schema with the same type, except for the named types record, fixed and enum. For example, unions containing two array types or two map types are not permitted, but two types with different names are permitted. (Names permit efficient resolution when reading and writing unions.)
Unions may not immediately contain other unions.
Fixed
Fixed uses the type name "fixed" and supports two attributes:
- name: a string naming this fixed (required).
- namespace: a string that qualifies the name (optional).
- aliases: a JSON array of strings, providing alternate names for this fixed (optional).
- size: an integer, specifying the number of bytes per value (required).
For example, a 16-byte quantity may be declared with:
{"type": "fixed", "size": 16, "name": "md5"}
Names
Records, enums and fixed are named types. Each has a fullname that is composed of two parts: a name and a namespace. Equality of names is defined on the fullname.
The name portion of a fullname, record field names, and enum symbols must:
- start with [A-Za-z_]
- subsequently contain only [A-Za-z0-9_]
A namespace is a dot-separated sequence of such names. The empty string may also be used as a namespace to indicate the null namespace. Equality of names (including field names and enum symbols) as well as fullnames is case-sensitive.
In record, enum and fixed definitions, the fullname is determined in one of the following ways:
- A name and namespace are both specified. For example, one might use "name": "X", "namespace": "org.foo" to indicate the fullname org.foo.X.
- A fullname is specified. If the name specified contains a dot, then it is assumed to be a fullname, and any namespace also specified is ignored. For example, use "name": "org.foo.X" to indicate the fullname org.foo.X.
- A name only is specified, i.e., a name that contains no dots. In this case the namespace is taken from the most tightly enclosing schema or protocol. For example, if "name": "X" is specified, and this occurs within a field of the record definition of org.foo.Y, then the fullname is org.foo.X. If there is no enclosing namespace then the null namespace is used.
References to previously defined names are as in the latter two cases above: if they contain a dot they are a fullname, if they do not contain a dot, the namespace is the namespace of the enclosing definition.
Primitive type names have no namespace and their names may not be defined in any namespace.
A schema or protocol may not contain multiple definitions of a fullname. Further, a name must be defined before it is used ("before" in the depth-first, left-to-right traversal of the JSON parse tree, where the types attribute of a protocol is always deemed to come "before" the messages attribute.)
Aliases
Named types and fields may have aliases. An implementation may optionally use aliases to map a writer's schema to the reader's. This facilitates both schema evolution and the processing of disparate datasets.
Aliases function by re-writing the writer's schema using aliases from the reader's schema. For example, if the writer's schema was named "Foo" and the reader's schema is named "Bar" and has an alias of "Foo", then the implementation would act as though "Foo" were named "Bar" when reading. Similarly, if data was written as a record with a field named "x" and is read as a record with a field named "y" with alias "x", then the implementation would act as though "x" were named "y" when reading.
A type alias may be specified either as a fully namespace-qualified name, or relative to the namespace of the name it is an alias for. For example, if a type named "a.b" has aliases of "c" and "x.y", then the fully qualified names of its aliases are "a.c" and "x.y".
Data Serialization
Avro data is always serialized with its schema. Files that store Avro data should always also include the schema for that data in the same file. Avro-based remote procedure call (RPC) systems must also guarantee that remote recipients of data have a copy of the schema used to write that data.
Because the schema used to write data is always available when the data is read, Avro data itself is not tagged with type information. The schema is required to parse data.
In general, both serialization and deserialization proceed as a depth-first, left-to-right traversal of the schema, serializing primitive types as they are encountered.
Encodings
Avro specifies two serialization encodings: binary and JSON. Most applications will use the binary encoding, as it is smaller and faster. But, for debugging and web-based applications, the JSON encoding may sometimes be appropriate.
Binary Encoding
Primitive Types
Primitive types are encoded in binary as follows:
- null is written as zero bytes.
- a boolean is written as a single byte whose value is either 0 (false) or 1 (true).
- int and long values are written using variable-length zig-zag coding. Some examples:
  value  hex
      0  00
     -1  01
      1  02
     -2  03
      2  04
    ...
    -64  7f
     64  80 01
- a float is written as 4 bytes. The float is converted into a 32-bit integer using a method equivalent to Java's floatToIntBits and then encoded in little-endian format.
- a double is written as 8 bytes. The double is converted into a 64-bit integer using a method equivalent to Java's doubleToLongBits and then encoded in little-endian format.
- bytes are encoded as a long followed by that many bytes of data.
- a string is encoded as a long followed by that many bytes of UTF-8 encoded character data.
For example, the three-character string "foo" would be encoded as the long value 3 (encoded as hex 06) followed by the UTF-8 encoding of 'f', 'o', and 'o' (the hex bytes 66 6f 6f):
06 66 6f 6f
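These primitive rules can be sketched in a few lines of Python (the helper names below are my own, not part of any Avro library):

```python
def zigzag(n: int) -> int:
    """Zig-zag map: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ..."""
    return (n << 1) ^ (n >> 63)

def encode_long(n: int) -> bytes:
    """Zig-zag, then variable-length: 7 bits per byte, high bit set means 'more'."""
    z = zigzag(n)
    out = bytearray()
    while z >= 0x80:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_string(s: str) -> bytes:
    """A long byte count followed by the UTF-8 data."""
    data = s.encode("utf-8")
    return encode_long(len(data)) + data
```

With these helpers, encode_string("foo") reproduces the 06 66 6f 6f byte sequence shown above.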
Complex Types
Complex types are encoded in binary as follows:
Records
A record is encoded by encoding the values of its fields in the order that they are declared. In other words, a record is encoded as just the concatenation of the encodings of its fields. Field values are encoded per their schema.
For example, the record schema
{
  "type": "record",
  "name": "test",
  "fields" : [
    {"name": "a", "type": "long"},
    {"name": "b", "type": "string"}
  ]
}
An instance of this record whose a field has value 27 (encoded as hex 36) and whose b field has value "foo" (encoded as hex bytes 06 66 6f 6f), would be encoded simply as the concatenation of these, namely the hex byte sequence:
36 06 66 6f 6f
Enums
An enum is encoded by an int, representing the zero-based position of the symbol in the schema.
For example, consider the enum:
{"type": "enum", "name": "Foo", "symbols": ["A", "B", "C", "D"] }
This would be encoded by an int between zero and three, with zero indicating "A", and 3 indicating "D".
Arrays
Arrays are encoded as a series of blocks. Each block consists of a long count value, followed by that many array items. A block with count zero indicates the end of the array. Each item is encoded per the array's item schema.
For example, the array schema
{"type": "array", "items": "long"}
An array containing the items 3 and 27 could be encoded as the long value 2 (encoded as hex 04) followed by long values 3 and 27 (encoded as hex 06 36), terminated by zero:
04 06 36 00
The blocked representation permits one to read and write arrays larger than can be buffered in memory, since one can start writing items without knowing the full length of the array.
Maps
Maps are encoded as a series of blocks. Each block consists of a long count value, followed by that many key/value pairs. A block with count zero indicates the end of the map. Each item is encoded per the map's value schema.
The blocked representation permits one to read and write maps larger than can be buffered in memory, since one can start writing items without knowing the full length of the map.
Unions
A union is encoded by first writing a long value indicating the zero-based position within the union of the schema of its value. The value is then encoded per the indicated schema within the union.
For example, the union schema ["null","string"] would encode:
- null as zero (the index of "null" in the union):
00
- the string "a" as one (the index of "string" in the union, encoded as hex 02), followed by the serialized string:
02 02 61
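The same union example as a minimal Python sketch (function names are mine, not an Avro API):

```python
def encode_long(n: int) -> bytes:
    """Avro long: zig-zag then 7-bits-per-byte varint."""
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while z >= 0x80:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_nullable_string(value) -> bytes:
    """Encode a value against the union schema ["null", "string"]."""
    if value is None:
        return encode_long(0)                 # branch index 0 -> "null"
    data = value.encode("utf-8")
    return encode_long(1) + encode_long(len(data)) + data  # branch 1 -> "string"
```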
Fixed
Fixed instances are encoded using the number of bytes declared in the schema.
Sort Order
Avro defines a standard sort order for data. This permits data written by one system to be efficiently sorted by another system. This can be an important optimization, as sort order comparisons are sometimes the most frequent per-object operation. Note also that Avro binary-encoded data can be efficiently ordered without deserializing it to objects.
Data items may only be compared if they have identical schemas. Pairwise comparisons are implemented recursively with a depth-first, left-to-right traversal of the schema. The first mismatch encountered determines the order of the items.
Two items with the same schema are compared according to the following rules.
- null data is always equal.
- boolean data is ordered with false before true.
- int, long, float and double data is ordered by ascending numeric value.
- bytes and fixed data are compared lexicographically by unsigned 8-bit values.
- string data is compared lexicographically by Unicode code point. Note that since UTF-8 is used as the binary encoding for strings, sorting of bytes and string binary data is identical.
- array data is compared lexicographically by element.
- enum data is ordered by the symbol's position in the enum schema. For example, an enum whose symbols are ["z", "a"] would sort "z" values before "a" values.
- union data is first ordered by the branch within the union, and, within that, by the type of the branch. For example, an ["int", "string"] union would order all int values before all string values, with the ints and strings themselves ordered as defined above.
- record data is ordered lexicographically by field. If a field specifies that its order is:
- "ascending", then the order of its values is unaltered.
- "descending", then the order of its values is reversed.
- "ignore", then its values are ignored when sorting.
- map data may not be compared. It is an error to attempt to compare data containing maps unless those maps are in an "order":"ignore" record field.
Object Container Files
Avro includes a simple object container file format. A file has a schema, and all objects stored in the file must be written according to that schema, using binary encoding. Objects are stored in blocks that may be compressed. Synchronization markers are used between blocks to permit efficient splitting of files for MapReduce processing.
Files may include arbitrary user-specified metadata.
A file consists of:
- A file header, followed by
- one or more file data blocks.
A file header consists of:
- Four bytes, ASCII 'O', 'b', 'j', followed by 1.
- file metadata, including the schema.
- The 16-byte, randomly-generated sync marker for this file.
File metadata is written as if defined by the following map schema:
{"type": "map", "values": "bytes"}
All metadata properties that start with "avro." are reserved. The following file metadata properties are currently used:
- avro.schema contains the schema of objects stored in the file, as JSON data (required).
- avro.codec the name of the compression codec used to compress blocks, as a string. Implementations are required to support the following codecs: "null" and "deflate". If codec is absent, it is assumed to be "null". The codecs are described with more detail below.
A file header is thus described by the following schema:
{"type": "record", "name": "org.apache.avro.file.Header",
 "fields" : [
   {"name": "magic", "type": {"type": "fixed", "name": "Magic", "size": 4}},
   {"name": "meta", "type": {"type": "map", "values": "bytes"}},
   {"name": "sync", "type": {"type": "fixed", "name": "Sync", "size": 16}}
 ]
}
A file data block consists of:
- A long indicating the count of objects in this block.
- A long indicating the size in bytes of the serialized objects in the current block, after any codec is applied.
- The serialized objects. If a codec is specified, this is compressed by that codec.
- The file's 16-byte sync marker.
Thus, each block's binary data can be efficiently extracted or skipped without deserializing the contents. The combination of block size, object counts, and sync markers enables detection of corrupt blocks and helps ensure data integrity.
Protocol Declaration
Avro protocols describe RPC interfaces. Like schemas, they are defined with JSON text.
A protocol is a JSON object with the following attributes:
- protocol, a string, the name of the protocol (required);
- namespace, an optional string that qualifies the name;
- doc, an optional string describing this protocol;
- types, an optional list of definitions of named types (records, enums, fixed and errors). An error definition is just like a record definition except it uses "error" instead of "record". Note that forward references to named types are not permitted.
- messages, an optional JSON object whose keys are message names and whose values are objects whose attributes are described below. No two messages may have the same name.
The name and namespace qualification rules defined for schema objects apply to protocols as well.
Messages
A message has attributes:
- a doc, an optional description of the message,
- a request, a list of named, typed parameter schemas (this has the same form as the fields of a record declaration);
- a response schema;
- an optional union of declared error schemas. The effective union has "string" prepended to the declared union, to permit transmission of undeclared "system" errors. For example, if the declared error union is ["AccessError"], then the effective union is ["string", "AccessError"]. When no errors are declared, the effective error union is ["string"]. Errors are serialized using the effective union; however, a protocol's JSON declaration contains only the declared union.
- an optional one-way boolean parameter.
A request parameter list is processed equivalently to an anonymous record. Since record field lists may vary between reader and writer, request parameters may also differ between the caller and responder, and such differences are resolved in the same manner as record field differences.
The one-way parameter may only be true when the response type is "null" and no errors are listed.
Sample Protocol
For example, one may define a simple HelloWorld protocol with:
{
  "namespace": "com.acme",
  "protocol": "HelloWorld",
  "doc": "Protocol Greetings",

  "types": [
    {"name": "Greeting", "type": "record", "fields": [
      {"name": "message", "type": "string"}]},
    {"name": "Curse", "type": "error", "fields": [
      {"name": "message", "type": "string"}]}
  ],

  "messages": {
    "hello": {
      "doc": "Say hello.",
      "request": [{"name": "greeting", "type": "Greeting" }],
      "response": "Greeting",
      "errors": ["Curse"]
    }
  }
}
Protocol Wire Format
Message Transport
Messages may be transmitted via different transport mechanisms.
To the transport, a message is an opaque byte sequence.
A transport is a system that supports:
- transmission of request messages
- receipt of corresponding response messages
Servers may send a response message back to the client corresponding to a request message. The mechanism of correspondence is transport-specific. For example, in HTTP it is implicit, since HTTP directly supports requests and responses. But a transport that multiplexes many client threads over a single socket would need to tag messages with unique identifiers.
Transports may be either stateless or stateful. In a stateless transport, messaging assumes no established connection state, while stateful transports establish connections that may be used for multiple messages. This distinction is discussed further in the handshake section below.
HTTP as Transport
When HTTP is used as a transport, each Avro message exchange is an HTTP request/response pair. All messages of an Avro protocol should share a single URL at an HTTP server. Other protocols may also use that URL. Both normal and error Avro response messages should use the 200 (OK) response code. The chunked encoding may be used for requests and responses, but, regardless, the Avro request and response are the entire content of an HTTP request and response. The HTTP Content-Type of requests and responses should be specified as "avro/binary". Requests should be made using the POST method.
HTTP is used by Avro as a stateless transport.
Message Framing
Avro messages are framed as a list of buffers.
Framing is a layer between messages and the transport. It exists to optimize certain operations.
The format of framed message data is:
- a series of buffers, where each buffer consists of:
- a four-byte, big-endian buffer length, followed by
- that many bytes of buffer data.
- A message is always terminated by a zero-length buffer.
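The framing format above can be sketched with Python's struct module (helper names are mine; real implementations choose buffer boundaries per the policy discussed below):

```python
import struct

def frame(message: bytes, buffer_size: int = 8192) -> bytes:
    """Split a message into length-prefixed buffers plus an empty terminator."""
    out = bytearray()
    for i in range(0, len(message), buffer_size):
        chunk = message[i:i + buffer_size]
        out += struct.pack(">I", len(chunk))  # four-byte, big-endian length
        out += chunk
    out += struct.pack(">I", 0)               # zero-length buffer ends the message
    return bytes(out)

def unframe(data: bytes) -> bytes:
    """Reassemble a framed message, stopping at the zero-length terminator."""
    out, pos = bytearray(), 0
    while True:
        (length,) = struct.unpack_from(">I", data, pos)
        pos += 4
        if length == 0:
            return bytes(out)
        out += data[pos:pos + length]
        pos += length
```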
Framing is transparent to request and response message formats (described below). Any message may be presented as a single or multiple buffers.
Framing can permit readers to more efficiently get different buffers from different sources and for writers to more efficiently store different buffers to different destinations. In particular, it can reduce the number of times large binary objects are copied. For example, if an RPC parameter consists of a megabyte of file data, that data can be copied directly to a socket from a file descriptor, and, on the other end, it could be written directly to a file descriptor, never entering user space.
A simple, recommended framing policy is for writers to create a new segment whenever a single binary object is written that is larger than a normal output buffer. Small objects are then appended in buffers, while larger objects are written as their own buffers. When a reader then tries to read a large object, the runtime can hand it an entire buffer directly, without having to copy it.
Handshake
The purpose of the handshake is to ensure that the client and the server have each other's protocol definition, so that the client can correctly deserialize responses, and the server can correctly deserialize requests. Both clients and servers should maintain a cache of recently seen protocols, so that, in most cases, a handshake will be completed without extra round-trip network exchanges or the transmission of full protocol text.
RPC requests and responses may not be processed until a handshake has been completed. With a stateless transport, all requests and responses are prefixed by handshakes. With a stateful transport, handshakes are only attached to requests and responses until a successful handshake response has been returned over a connection. After this, request and response payloads are sent without handshakes for the lifetime of that connection.
The handshake process uses the following record schemas:
{
  "type": "record",
  "name": "HandshakeRequest", "namespace": "org.apache.avro.ipc",
  "fields": [
    {"name": "clientHash",
     "type": {"type": "fixed", "name": "MD5", "size": 16}},
    {"name": "clientProtocol", "type": ["null", "string"]},
    {"name": "serverHash", "type": "MD5"},
    {"name": "meta", "type": ["null", {"type": "map", "values": "bytes"}]}
  ]
}
{
  "type": "record",
  "name": "HandshakeResponse", "namespace": "org.apache.avro.ipc",
  "fields": [
    {"name": "match",
     "type": {"type": "enum", "name": "HandshakeMatch",
              "symbols": ["BOTH", "CLIENT", "NONE"]}},
    {"name": "serverProtocol", "type": ["null", "string"]},
    {"name": "serverHash",
     "type": ["null", {"type": "fixed", "name": "MD5", "size": 16}]},
    {"name": "meta", "type": ["null", {"type": "map", "values": "bytes"}]}
  ]
}
- A client first prefixes each request with a HandshakeRequest containing just the hash of its protocol and of the server's protocol (clientHash!=null, clientProtocol=null, serverHash!=null), where the hashes are 128-bit MD5 hashes of the JSON protocol text. If a client has never connected to a given server, it sends its hash as a guess of the server's hash, otherwise it sends the hash that it previously obtained from this server.
- The server responds with a HandshakeResponse containing one of:
- match=BOTH, serverProtocol=null, serverHash=null if the client sent the valid hash of the server's protocol and the server knows what protocol corresponds to the client's hash. In this case, the request is complete and the response data immediately follows the HandshakeResponse.
- match=CLIENT, serverProtocol!=null, serverHash!=null if the server has previously seen the client's protocol, but the client sent an incorrect hash of the server's protocol. The request is complete and the response data immediately follows the HandshakeResponse. The client must use the returned protocol to process the response and should also cache that protocol and its hash for future interactions with this server.
- match=NONE if the server has not previously seen the client's protocol. The serverHash and serverProtocol may also be non-null if the server's protocol hash was incorrect.
In this case the client must then re-submit its request with its protocol text (clientHash!=null, clientProtocol!=null, serverHash!=null) and the server should respond with a successful match (match=BOTH, serverProtocol=null, serverHash=null) as above.
The meta field is reserved for future handshake enhancements.
Call Format
A call consists of a request message paired with its resulting response or error message. Requests and responses contain extensible metadata, and both kinds of messages are framed as described above.
The format of a call request is:
- request metadata, a map with values of type bytes
- the message name, an Avro string, followed by
- the message parameters. Parameters are serialized according to the message's request declaration.
When the empty string is used as a message name a server should ignore the parameters and return an empty response. A client may use this to ping a server or to perform a handshake without sending a protocol message.
When a message is declared one-way and a stateful connection has been established by a successful handshake response, no response data is sent. Otherwise the format of the call response is:
- response metadata, a map with values of type bytes
- a one-byte error flag boolean, followed by either:
- if the error flag is false, the message response, serialized per the message's response schema.
- if the error flag is true, the error, serialized per the message's effective error union schema.
Schema Resolution
A reader of Avro data can always parse that data, because the writer's schema must be provided along with it; however, the reader may be programmed to read the data into a different schema. This section specifies how such schema differences should be resolved. Among other rules, the writer's schema may be promoted to the reader's schema as follows:
- int is promotable to long, float, or double
- long is promotable to float or double
- float is promotable to double
- string is promotable to bytes
- bytes is promotable to string
A schema's "doc" fields are ignored for the purposes of schema resolution. Hence, the "doc" portion of a schema may be dropped at serialization.
Parsing Canonical Form for Schemas
One of the defining characteristics of Avro is that a reader is assumed to have the "same" schema used by the writer of the data the reader is reading. This assumption leads to a data format that's compact and also amenable to many forms of schema evolution. However, the specification so far has not defined what it means for the reader to have the "same" schema as the writer. Does the schema need to be textually identical? Well, clearly adding or removing some whitespace to a JSON expression does not change its meaning. At the same time, reordering the fields of records clearly does change the meaning. So what does it mean for a reader to have "the same" schema as a writer?
Parsing Canonical Form is a transformation of a writer's schema that lets us define what it means for two schemas to be "the same" for the purpose of reading data written against the schema. It is called Parsing Canonical Form because the transformations strip away parts of the schema, like "doc" attributes, that are irrelevant to readers trying to parse incoming data. It is called Canonical Form because the transformations normalize the JSON text (such as the order of attributes) in a way that eliminates unimportant differences between schemas. If the Parsing Canonical Forms of two different schemas are textually equal, then those schemas are "the same" as far as any reader is concerned, i.e., there is no serialized data that would allow a reader to distinguish data generated by a writer using one of the original schemas from data generated by a writer using the other original schema. (We sketch a proof of this property in a companion document.)
The next subsection specifies the transformations that define Parsing Canonical Form. But with a well-defined canonical form, it can be convenient to go one step further, transforming these canonical forms into simple integers ("fingerprints") that can be used to uniquely identify schemas. The subsection after next recommends some standard practices for generating such fingerprints.
Transforming into Parsing Canonical Form
Assuming an input schema (in JSON form) that's already UTF-8 text for a valid Avro schema (including all quotes as required by JSON), the following transformations will produce its Parsing Canonical Form:
- [PRIMITIVES] Convert primitive schemas to their simple form (e.g., int instead of {"type":"int"}).
- [FULLNAMES] Replace short names with fullnames, using applicable namespaces to do so. Then eliminate namespace attributes, which are now redundant.
- [STRIP] Keep only attributes that are relevant to parsing data, which are: type, name, fields, symbols, items, values, size. Strip all others (e.g., doc and aliases).
- [ORDER] Order the appearance of fields of JSON objects as follows: name, type, fields, symbols, items, values, size. For example, if an object has type, name, and size fields, then the name field should appear first, followed by the type and then the size fields.
- [STRINGS] For all JSON string literals in the schema text, replace any escaped characters (e.g., \uXXXX escapes) with their UTF-8 equivalents.
- [INTEGERS] Eliminate quotes around and any leading zeros in front of JSON integer literals (which appear in the size attributes of fixed schemas).
- [WHITESPACE] Eliminate all whitespace in JSON outside of string literals.
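As a minimal sketch of just the [WHITESPACE] rule (a full canonicalizer would apply the other six transformations as well; the function name is mine):

```python
def strip_json_whitespace(text: str) -> str:
    """Remove whitespace outside JSON string literals, honoring escapes."""
    out = []
    in_string = escaped = False
    for ch in text:
        if in_string:
            out.append(ch)
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
            out.append(ch)
        elif not ch.isspace():
            out.append(ch)
    return "".join(out)
```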
Schema Fingerprints
"[A] fingerprinting algorithm is a procedure that maps an arbitrarily large data item (such as a computer file) to a much shorter bit string, its fingerprint, that uniquely identifies the original data for all practical purposes" (quoted from [Wikipedia]). In the Avro context, fingerprints of Parsing Canonical Form can be useful in a number of applications; for example, to cache encoder and decoder objects, to tag data items with a short substitute for the writer's full schema, and to quickly negotiate common-case schemas between readers and writers.
In designing fingerprinting algorithms, there is a fundamental trade-off between the length of the fingerprint and the probability of collisions. To help application designers find appropriate points within this trade-off space, while encouraging interoperability and ease of implementation, we recommend using one of the following three algorithms when fingerprinting Avro schemas:
- When applications can tolerate longer fingerprints, we recommend using the SHA-256 digest algorithm to generate 256-bit fingerprints of Parsing Canonical Forms. Most languages today have SHA-256 implementations in their libraries.
- At the opposite extreme, the smallest fingerprint we recommend is a 64-bit Rabin fingerprint. Below, we provide pseudo-code for this algorithm that can be easily translated into any programming language. 64-bit fingerprints should guarantee uniqueness for schema caches of up to a million entries (for such a cache, the chance of a collision is 3E-8). We don't recommend shorter fingerprints, as the chances of collisions is too great (for example, with 32-bit fingerprints, a cache with as few as 100,000 schemas has a 50% chance of having a collision).
- Between these two extremes, we recommend using the MD5 message digest to generate 128-bit fingerprints. These make sense only where very large numbers of schemas are being manipulated (tens of millions); otherwise, 64-bit fingerprints should be sufficient. As with SHA-256, MD5 implementations are found in most libraries today.
These fingerprints are not meant to provide any security guarantees, even the longer SHA-256-based ones. Most Avro applications should be surrounded by security measures that prevent attackers from writing random data and otherwise interfering with the consumers of schemas. We recommend that these surrounding mechanisms be used to prevent collision and pre-image attacks (i.e., "forgery") on schema fingerprints, rather than relying on the security properties of the fingerprints themselves.
Rabin fingerprints are cyclic redundancy checks computed using irreducible polynomials. In the style of the Appendix of RFC 1952 (pg 10), which defines the CRC-32 algorithm, here's our definition of the 64-bit AVRO fingerprinting algorithm:
long fingerprint64(byte[] buf) {
  if (FP_TABLE == null) initFPTable();
  long fp = EMPTY;
  for (int i = 0; i < buf.length; i++)
    fp = (fp >>> 8) ^ FP_TABLE[(int)(fp ^ buf[i]) & 0xff];
  return fp;
}

static long EMPTY = 0xc15d213aa4d7a795L;
static long[] FP_TABLE = null;

void initFPTable() {
  FP_TABLE = new long[256];
  for (int i = 0; i < 256; i++) {
    long fp = i;
    for (int j = 0; j < 8; j++)
      fp = (fp >>> 1) ^ (EMPTY & -(fp & 1L));
    FP_TABLE[i] = fp;
  }
}
Readers interested in the mathematics behind this algorithm may want to read this book chapter. (Unlike RFC-1952 and the book chapter, we prepend a single one bit to messages. We do this because CRCs ignore leading zero bits, which can be problematic. Our code prepends a one-bit by initializing fingerprints using EMPTY, rather than initializing using zero as in RFC-1952 and the book chapter.)
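As an illustrative port of the pseudo-code (not taken from the Avro codebase), the same table-driven loop can be written in Python; since the values stay non-negative and below 2^64, Python's `>>` behaves like Java's unsigned `>>>` here:

```python
EMPTY = 0xC15D213AA4D7A795

# Precompute the 256-entry table, mirroring initFPTable above.
FP_TABLE = []
for i in range(256):
    fp = i
    for _ in range(8):
        fp = (fp >> 1) ^ (EMPTY if fp & 1 else 0)  # (EMPTY & -(fp & 1)) in Java
    FP_TABLE.append(fp)

def fingerprint64(buf: bytes) -> int:
    """64-bit Rabin fingerprint of buf, initialized with EMPTY."""
    fp = EMPTY
    for byte in buf:
        fp = (fp >> 8) ^ FP_TABLE[(fp ^ byte) & 0xFF]
    return fp
```

Note that fingerprint64 of the empty input is simply EMPTY, since the loop body never runs.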
Logical Types
A logical type is an Avro primitive or complex type with extra attributes to represent a derived type. The attribute logicalType must always be present for a logical type, and is a string with the name of one of the logical types listed later in this section. Other attributes may be defined for particular logical types.
A logical type is always serialized using its underlying Avro type so that values are encoded in exactly the same way as the equivalent Avro type that does not have a logicalType attribute. Language implementations may choose to represent logical types with an appropriate native type, although this is not required.
Language implementations must ignore unknown logical types when reading, and should use the underlying Avro type. If a logical type is invalid, for example a decimal with scale greater than its precision, then implementations should ignore the logical type and use the underlying Avro type.
Decimal
The decimal logical type represents an arbitrary-precision signed decimal number of the form unscaled × 10^(-scale).
A decimal logical type annotates Avro bytes or fixed types. The byte array must contain the two's-complement representation of the unscaled integer value in big-endian byte order. The scale is fixed, and is specified using an attribute.
The following attributes are supported:
- scale, a JSON integer representing the scale (optional). If not specified the scale is 0.
- precision, a JSON integer representing the (maximum) precision of decimals stored in this type (required).
For example, the following schema represents decimal numbers with a maximum precision of 4 and a scale of 2:
{ "type": "bytes", "logicalType": "decimal", "precision": 4, "scale": 2 }
Precision must be a positive integer greater than zero. If the underlying type is a fixed, then the precision is limited by its size. An array of length n can store at most floor(log_10(2^(8n - 1) - 1)) base-10 digits of precision.
Scale must be zero or a positive integer less than or equal to the precision.
For the purposes of schema resolution, two schemas that are decimal logical types match if their scales and precisions match.
Apache Avro, Avro, Apache, and the Avro and Apache logos are trademarks of The Apache Software Foundation. | http://avro.apache.org/docs/current/spec.html | CC-MAIN-2015-27 | refinedweb | 5,744 | 61.87 |
In This Chapter
- Windows
- Controls
- N-Tier Architecture
- Menus
Windows Forms are the Graphical User Interface (GUI) libraries of the Microsoft .NET Frameworks. The Windows Forms library contains most of the graphical controls familiar to GUI programmers. All of the concepts learned in previous chapters are applied when doing GUI programming. Of special significance is the use of events to connect GUI controls, such as buttons, to the code that implements the program's behavior related to that control.
Windows Forms is not included in the proposed Common Language Infrastructure (CLI) submission to European Computer Manufacturers Association (ECMA). However, it is of such importance to development that its coverage is provided here. Specific emphasis is placed on how C# is used to produce GUIs, and the language constructs involved. The same C# language features are likely to be applied to any future GUI library implementations.
Examples in this chapter begin with the basic element of Windows Forms programming: the window. Then there is an introduction to the standard window controls such as buttons and text boxes. The menu, a common element of GUIs, is included.
Windows
The basic element of most GUI programming in Windows Forms is the window. Essentially, everything on a GUI screenbuttons, text boxes, and iconsare windows. Because of this, most of the windows and controls in the Windows Forms package have the same characteristics. For instance, they all have a Text property. How they use the property is up to the specific type of window.
Building a Windows Forms application is easy once a few basic concepts are understood. This section covers some of these concepts and provides a starting point from which to proceed. Listing 16.1 shows a relatively simple Windows Forms application. To compile the code in Listing 16.1, use the command line in Listing 16.2.
Listing 16.1 A Simple Windows Forms Application
using System; using System.Windows.Forms; using System.ComponentModel; using System.Drawing; public class FirstForm : Form { private Container components; private Label howdyLabel; public FirstForm() { InitializeComponent(); } private void InitializeComponent() { components = new Container (); howdyLabel = new Label (); howdyLabel.Location = new Point (12, 116); howdyLabel.Text = "Howdy, Partner!"; howdyLabel.Size = new Size (267, 40); howdyLabel.AutoSize = true; howdyLabel.Font = new Font ( "Microsoft Sans Serif", 26, System. Drawing.FontStyle.Bold); howdyLabel.TabIndex = 0; howdyLabel.Anchor = AnchorStyles.None; howdyLabel.TextAlign = ContentAlignment.MiddleCenter; Text = "First Form"; Controls.Add (howdyLabel); } public static void Main() { Application.Run(new FirstForm()); } }
Listing 16.2 Command Line for Listing 16.1
csc /r:System.Windows.Forms.DLL _/r:System.Drawing.DLL FirstForm.cs
Listing 16.2 contains a command line that can be used to compile the code from Listing 16.1. The command line references a few dynamic link libraries by using the /r: <dllname> option. The System.Windows.Forms.DLL and System.Drawing.DLL libraries contain all the routines required to present graphical components, such as forms and controls, on the screen.
At the top of the file are a few new namespaces to be familiar with. The most familiar is the System namespace, holding all the basic class libraries. The System.Windows.Forms namespace holds definitions of all the Windows Forms windows and controls. It also has other supporting types including interfaces, structs, delegates, and enumerations supporting the window types. The System.ComponentModel namespace contains several classes and interfaces (language as opposed to graphical interfaces) for providing generalized support of components. The System.Drawing namespace provides access to the operating system graphics functionality.
The first two class members of the FirstForm class are components and howdyLabel. The components field is a Container object from the System.ComponentModel namespace. This object doesn't participate in the graphical presentation of the program. However, it does do a lot of behind-the-scenes work to support timers, multithreading, and cleanup when the program ends. Its declaration and instantiation are mandatory. The other field, howdyLabel, is a Windows Forms Label Control. It is used in this program to display the "Howdy, Partner!" message in the window that is created.
The FirstForm constructor calls the InitializeComponent() method. The InitializeComponent() method creates and instantiates the Windows Forms Controls and Forms that make up the graphical interface of this program. It begins by instantiating the components and howdyLabel fields as objects. The following paragraphs explain the rest of this method.
The first group of statements initializes the howdyLabel label. Labels are well suited to presenting static text. That's exactly what howdyLabel does.
Labels have a Location property, keeping track of where the Label is placed on the screen. The Location property accepts a Point structure, which is a member of the System.Drawing namespace. The Point struct is used frequently in Windows Forms applications to specify X and Y screen coordinates. In the example, the Location of the howdyLabel is 12 pixels from the left and 116 pixels from the top of the main form. Here's how the Location property of the howdyLabel label is set:
howdyLabel.Location = new Point (12, 116);
The static text of a Label is set through the Text property. The following statement sets the text of the Label to the string "Howdy, Partner!:
howdyLabel.Text = "Howdy, Partner!";
A Label also has a Size property that takes a Size structure. In Listing 16.1, the size of howdyLabel is set to 267 pixels wide by 40 pixels high:
howdyLabel.Size = new Size (267, 40);
The AutoSize property accepts a Boolean value, which tells whether a Label can automatically resize itself to accommodate its contents. For instance, in this program the actual size of the howdyLabel contents exceeds its set size from the previous statement, so the label must grow to fully show the entirety of its text. Here's how the AutoSize property of the howdyLabel label is set.
howdyLabel.AutoSize = true;
A Label can change its typeface through the Font property. It accepts a Font object. The constructor for the Font object in the following statement accepts three parameters: the font name, the font size, and a font style. The font style is from the FontStyle enum in the System.Drawing namespace.
howdyLabel.Font = new Font ( "Microsoft Sans Serif", 26, System.Drawing.FontStyle.Bold);
When there are multiple controls on a form, each control that can accept input can have its TabIndex property set. This permits the user to press the Tab key to move to the next control on the form, based on TabIndex. In this example, the TabIndex of howdyLabel is set to 0. This is for illustrative reasons only. The fact is that a Label can never be a tab stop because it doesn't normally accept user input. Furthermore, for this program, this is the only control on the form. There isn't any other control to tab to. Here's how the TabIndex property of the howdyLabel label is set:
howdyLabel.TabIndex = 0;
Window layout in Windows Forms is done with the techniques of anchoring and docking. Docking specifies the location of the form that a control will reside in. Anchoring tells which side of a control will be attached to another control. These two techniques permit any type of layout a window design would need. The following code line states that howdyLabel will not be anchored. It uses the AnchorStyles enumeration to set the Anchor property.
howdyLabel.Anchor = AnchorStyles.None;
The horizontal alignment of a Label may be set with the TextAlign property, which accepts a ContentAlignment enumeration. The following statement sets the horizontal alignment of howdyLabel to be centered between its left and right margins.
howdyLabel.TextAlign = ContentAlignment.MiddleCenter;
The next few statements perform initialization on the main form. Since FirstForm is a Form object, it is considered the main form.
Text = "First Form";
All forms and controls have Text properties. What they do with them is unique to each form or control. A Form object sets its title bar with the value of the Text property. This example sets the program's title bar to say "First Form."
Controls.Add (howdyLabel);
A form's Controls object holds a collection of all of its controls. When the form has to redraw itself, it iterates through this collection and sets itself up according to several factors, including the anchoring and docking properties of each control. This example adds howdyLabel to the FirstForm Controls collection object.
public static void Main() { Application.Run(new FirstForm()); }
The Main() method simply gets the program running. It calls the static Run() method of the Application class. Its parameter is a new instance of the FirstForm class. When the HowdyPartner program runs, it looks like the window shown in Figure 16.1.
Figure 16.1 The HowdyPartner Windows Forms | http://www.informit.com/articles/article.aspx?p=27316&seqNum=4 | CC-MAIN-2017-34 | refinedweb | 1,444 | 50.63 |
Release 0.18.022 Apr 2018
MDAnalysis version 0.18.0 has been released.
This release brings various fixes and new features and users should update with either
pip install -U MDAnalysis or
conda install -c conda-forge mdanalysis.
One exciting new feature is the addition of duecredit
to keep track of what citations are appropriate.
Once you have written an analysis script (e.g.,
myanalysis.py)
and have installed the
duecredit package (
pip install duecredit),
you can either set the environment variable
DUECREDIT_ENABLE=yes and run your script
python myanalysis.py or to run
python -m duecredit myanalysis.py to be given a report of what citable software you have used.
You can then use the data written by duecredit to export the bibliography and have it ready to be imported into your reference manager.
We hope that this will allow all contributors of analysis packages within MDAnalysis to get properly cited
and we are working on retroactively adding all required citations to duecredit.
The
AtomGroup.groupby method now supports using multiple attributes to group by,
for example one could create a dictionary which allows a particular residue name and atom name to be quickly queried:
>>> grouped = ag.groupby(['resnames', 'names']) >>> grouped['MET', 'CA'] <AtomGroup with 6 atoms>
When writing GRO files,
previously all atoms would have their indices reset so that they ran sequentially from 1.
To preserve their original index, the
reindex option has been added to the
GROWriter.
For example:
>>> u = mda.Universe() >>> u.atoms.write('out.gro', reindex=False)
or
>>> with mda.Writer('out.gro', reindex=False) as w: ... w.write(u.atoms)
Gromacs users can benefit from a new feature when reading TPR files. Now, when the topology is read from a TPR file, the atoms have a
moltype and a
molnum attribute. The
moltype attribute is the molecule type as defined in ITP files, the
molnum attribute is the index of the molecule. These attributes can be accessed for an atom group using the plural form:
>>> u = mda.Universe(TPR, XTC) >>> u.atoms.moltypes >>> u.atoms.molnums
These attributes can be used in
groupby:
>>> u.atoms.groupby('moltypes') >>> u.atoms.groupby('molnums')
to provide access to all atoms of a specific moleculr type or that are part of a particular molecule.
The
AtomGroup.split method of atom groups can also work on molecules:
>>> u.atoms.split('molecule')
and will create a list of AtomGroup instances, one for each molecule.
For convenience, various Group classes have been moved to the top namespace (namely,
AtomGroup,
ResidueGroup,
SegmentGroup):
import MDAnalysis as mda u = mda.Universe(topology, trajectory) # for creating AtomGroups from arrays of indices ag = mda.AtomGroup([11, 15, 16], u) # or for checking an input in a function: def myfunction(thing): if not isinstance(thing, mda.AtomGroup): raise TypeError("myfunction requires AtomGroup")
And finally, this release includes fixes for many bugs. This includes a smaller memory footprint when reading NetCDF trajectories, better handling of time when reading DCD trajectories and adding support for Gromacs 2018 TPR files. For more details see the CHANGELOG entry for release 0.18.0.
As ever, this release of MDAnalysis was the product of collaboration of various researchers around the world featuring the work of 12 different contributors. We would especially like to welcome and thank our six new contributors: Ayush Suhane, Mateusz Bieniek, Davide Cruz, Navya Khare, Nabarun Pal, and Johannes Zeman. | https://www.mdanalysis.org/2018/04/22/release-0.18.0/ | CC-MAIN-2018-47 | refinedweb | 563 | 55.74 |
LINK(2) BSD Programmer's Manual LINK(2)
link - make a hard file link
#include <unistd.h> int link(const char *name1, const char *name2);
The link() function atomically creates the specified directory entry (hard link) name2 with the attributes of the underlying object pointed at by name1. If the link is successful: the link count of the underlying ob- ject.
Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and errno is set to indicate the error.
link() will fail and no link will be created if: [ENOTDIR] A component of either path prefix is not a directory. [ENAMETOOLONG] A component of a pathname exceeded {NAME_MAX} characters, or an entire path name exceeded {PATH_MAX} characters. permission. [EACCES] The requested link requires writing in a directory with a mode that denies write permission. [ELOOP] Too many symbolic links were encountered in translating one of the pathnames. [ENOENT] The file named by name1 does not exist. [EEXIST] The link named by name2 does exist. [EPERM] The file named by name1 is a directory and the effective user ID is not superuser, or the file system containing directory. al- located address space.
readlink(2), symlink(2), unlink(2)
The link() function is expected to conform. | http://www.mirbsd.org/htman/sparc/man2/link.htm | CC-MAIN-2014-42 | refinedweb | 210 | 56.86 |
Politics Often Hold the Community Back
In this part of the interview series, I’m interviewing this channel’s editor, Bruno Škvorc, along with the feedback from Gary Hockin from Roave. We’ll start with Gary.
Gary Hockin
What lead you to PHP?
Simply, classic ASP. I was working as a general dogsbody in a huge steelworks in Port Talbot, and we were paying thousands of pounds for licenses, to display data from an Oracle database on lots of screens.
I started looking into ways of displaying that data on multiple screens for free – each machine had IE6 (at the time) so I explored ASP.
From there I realised that I loved the role of web development and quickly learned PHP in my own time, as I realised then that open source was where the future was.
What have been the things about PHP that bit you?
In a positive way, originally it’s the fact that you can create these amazing interactive webpages which you can push to a $10 shared host and point your friends to it to play with.
When I was a “kid” that was what kept me coming back to messing around with PHP, and ultimately why I shifted my career around it. More recently it’s the community. The community is simply amazing.
What have been the highlights or redeeming features
Of my career in PHP? I touched on it early, the community has been truly an epiphany for me. About 5 years ago I was sitting around as a Ghost Developer (), working in semi-object-oriented PHP, when I suddenly realised that someone HAD to have created a standard framework for building PHP applications.
A quick Google led me to a list as long as your arm, but because we were already using Zend Guard on distributed servers I decided to use Zend Framework (1). From there I found the #zftalk IRC channel, and from there I got totally immersed in the ZF community.
Since then I’ve been involved in the PHP community at large and attended and spoken at many conferences. I’m probably earning £10-15k per year more since I discovered the PHP community.
What are the compelling PHP features for you?
Of PHP, honestly, I don’t really have any compelling features. Everyone knows that PHP is the glue of the web, and it should be treated as such.
Realistically I love the way you can just push something into Redis for later retrieval, or dump some logging into ElasticSearch and forget about it. It’s compelling features are all based around how easy it is to build scalable, dependable applications.
What do you want to see added to the language?
Scalar type hinting. A contentious point I know, but honestly I think it would solve so many future problems that it would be well worth the hassle. On a side note, I’d also like to see some more standard PHP objects.
It’s crazy that every framework/library writes its own HTTP Request and Response object, surely this could be included in the SPL. Or maybe even the FIG could create some interfaces for that. FIG have done some great work so far, but I’d love to see that go much further.
Why PHP over Ruby, Python, Go, etc?
Guess – community of course. I can get answers from my peers in a few minutes and nobody wants anything back. I work a bit in Python and do a fair bit in Javascript (client side that is), but honestly I feel as if I’ve paid my dues in the PHP community (specifically the ZF community) and can get help and direction from people much, much smarter than me. Why would you give that up?
Do you see yourself moving to another language in the future?
Never say never. I’m constantly playing around with other languages (when I get the time). Moving into Node.js seems like a given at some point. I’m not actively looking at moving into another language but I certainly wouldn’t deliberately not move just because it’s not PHP.
Do you have a custom framework/setup?
Everyone writes their own framework at some point, and I’m no different. I wrote our api at Yamgo in my own framework as an educational exercise more than anything else. Of course, I regret that now, 18 months later.
Realistically, with the advent of Composer I’m leaning more towards Slim + Packages for most of my new projects. Of course, anything that needs full stack and I’m straight onto ZF2.
How have you implemented deployment?
We have a simple git hook that posts to our deployment server which in turn does a git deployment to each of the webheads. Currently we don’t have a full deployment chain with testing, code sniffing etc, but it’s definitely something I’d like to implement in the near future.
What is in your standard toolchain?
PHPUnit and PHPStorm – daily. Ansible and Vagrant more and more. I also use thinks like Navicat for MySQL (because I bought it before I realised Sequel Pro existed) and other little helpers – nothing unique or spectacular really.
What testing tips can you share?
The whole process of UNIT Testing became much clearer to me when I realised that the UNIT in question was a single method of a single class. Of course, that’s simplifying but when I realised it was just checking that a given method did what was expected then my whole development architecture changed.
Not only does testing save you time and effort in the future, it makes you write more elegant code because it encourages you to stick to the principles of SOLID. My tip is, I spent far too long not testing due to pure laziness, I wish I’d started unit testing years ago.
Are there tools you use which you can’t live without?
Google.
How does your team work?
At Yamgo/Adpsruce we are a small team who are largely based in the office in Swansea. At Roave, we are a small team distributed throughout the world. It’s a massive difference, but honestly Roave works amazingly well because we are all around on IRC and Hangouts throughout every working day.
We actually have a persistent hangout you can dip in and out of that some people use when they just want company or the feel of office working. I don’t mind either way of working, I feel like remote working is fine as long as you trust your team to communicate effectively.
What did you think? Gary had loads to share and I know I learned a lot myself. Let’s wrap up this series with my Bruno’s contributions. I hope you enjoy it!
Bruno Škvorc
What lead you to PHP?
Ease of entry, as with most people. Then I got thrown into the fire at my first job being told to fix a ZF1 site and there’s no learning like learning on ZF1. After 2 years of fixing that mess, I became the lead developer of the team that would rewrite it to ZF2 there. In the meanwhile, I dealt with side projects, other frameworks, built my own, and basically just sort of stuck around in the PHP world.
What have been the things about PHP that bit you?
Fragmentation of the expert level community. I don’t care much for the needle/haystack problem and similar arguments others use to bash PHP, but I do mind that we can act worse than the JavaScript community sometimes. The JS community is perfectly willing and able to love a tool, then in the very next moment absolutely loathe it because a new one appeared that does things two keypresses faster (gulp vs grunt).
This bipolarity is something I’ve witnessed in PHP on many occasions – from you “can” test, to you “must” test to you “should no longer” do test-first; from ServiceLocators/Managers being awesome to them being used in an absolutely silly manner in ZF2 which completely prevented you from using a ZF2 component outside ZF2; from arguments about names of class types (Facades in Laravel), etc – irrelevant things that waste everyone’s energy because everyone has an opinion and everyone seems to change it eventually.
The experts seem more focused on arguing than on writing quality code. Then there’s the various gender and sexual orientation based wars that have taken the tech sector by storm this year and our high quality people suddenly shy away from each other because of differences in opinion, and purely because some can’t seem to separate church from state. This political situation is what demotivates me sometimes – politics often hold the community back.
What have been the highlights or redeeming features
I’d like to go with community again. While it does have its downsides, it’s undeniably awesome. Yes, for every Anthony Ferrara there will be fifty people telling you to use
mysql_real_escape_string, but once you learn to separate the wheat and the chaff, the amount of mindblowing knowledge out there to be had is astonishing.
What are the compelling PHP features for you?
Well, I really like the features they’ve been adding during the past few versions. A built-in web server? Yes please! If they make it more production-worthy, I’d be compelled to actually use it. Traits and namespaces? Still vastly underused yet so powerful and versatile. But most of all, I like how easy it is to have something of high quality up and running in PHP almost instantly. With so many frameworks and bootstraps pre-built, you can literally skip all the configuration and tedium that usually comes with new projects and get right down into the logic of it. Not many other languages have that advantage.
What do you want to see added to the language?
The things Hack is doing – static typing being my most desired feature. I would also like them to upgrade the built-in server to something we can use in production, eliminating the need for Nginx and/or Apache. NodeJS can serve files to production, Dart can do it, why not PHP? Let’s go!
Why PHP over Ruby, Python, Go, etc?
I wrote about this extensively here but the gist of it is – it’s easy to learn, relatively easy to become professional at it, and it lets you build good things very, very fast. The only language I’d consider a worthy alternative right now is Dart, but it’s very clunky still and harder to get started in for a web project than PHP.
Do you see yourself moving to another language in the future?
I moved to PHP from Actionscript and JavaScript, but eventually yes, Dart. I’ll never depart PHP fully, especially with Hack being publicly available now, but I do intend to master more than one high quality language.
Do you have a custom framework/setup?
I did build my own framework for a master thesis at uni, but I discarded it later on when I met Phalcon. I always say there’s a very specific lifecycle in every PHP dev’s career: don’t use frameworks, use frameworks, build your own framework, realize someone did it better. For me, Phalcon was the jump into the last step. Some people say there is another step, identical to the first one in that you no longer use frameworks, but I wholeheartedly disagree with that notion.
How have you implemented deployment?
Depends on the project. More often than not that ends up being a push of the production branch in Git, and a script that propagates changes across the rest of the servers when triggered. I’m looking at more automatic solutions and trying to wrap my head around all that’s on offer today, but there’s just so few hours in a day.
What is in your standard toolchain?
Definitely Vagrant and PhpStorm. Vagrant helps me get up and running with a new project almost instantly regardless of the platform I’m on, and PhpStorm lets me develop instantly regardless of the platform I’m on. My configuration settings are pulled from Google Drive and I’m ready to start working on any machine I get my hands on, as soon as I install Vagrant, Virtualbox and the IDE. This is a huge productivity boost. Add to that the fact that PhpStorm can do Dart with the Dart plugin, and I literally never have to change my development environment. There’s also the standard stuff – MySQL workbench, various code sniffing filters in PhpStorm, etc, but these are my main tools.
What testing tips can you share?
Just write tests and read I guess, nothing else to be said. You need to write tests and that’s that. Oh, don’t skip them – don’t tell yourself “I’ll do it later”. You won’t. Do your tests NOW.
Are there tools you use which you can’t live without?
Development-wise? Vagrant and PhpStorm, without a doubt. If StackOverflow counts as a tool, I’ll mention that too.
How does your team work?
When I do have a team (rarely, as I freelance right now) we use various collaboration tools like those from Atlassian to code review, collaborate, discuss and brainstorm things. I don’t do Agile, tried it and it didn’t work. Instead, I like a component based approach – a base framework is put in place, and a map of responsibilities is drawn out. These responsibilities have quantifiers assigned to them depending on their complexity and importance, and they’re agreed on by the entire team. Each team member has a pool of points he can spend on claiming a feature, and a bug pool. If a dev has 7 points in his feature pool, he can claim up to 7 total points of features to implement (i.e. email system for 4 points, Google+ auth for 3 points). A dev can have at most three bugs, or one critical bug before they’re in “bug jail” and need to solve them before moving on with the feature pool. There are usually no strict deadlines because I’ve gotten pretty good at recognizing the skill of people I work with and the features that’ll be anticipated before the client knows it, so it’s usually rather relaxed, though, of course, things go wrong sometimes.
Wrapping Up
Though the introduction to this series had quite a provocative title, I don’t want you to see it as being a bait and switch.
My aim is to help turn the tide of negativity towards PHP and show how, especially with the most recent revisions, PHP has really grown up and is deserving of a reputation equal to other languages, such as Ruby and Python. In the end, no matter what language you use, I hope that you truly see it for the wonder it is and continue becoming the best you can be at your craft. | https://www.sitepoint.com/interview-gary-hocken-matthew-setter/ | CC-MAIN-2018-51 | refinedweb | 2,522 | 71.14 |
In this tutorial series (part free, part Premium) we'll create a high-performance 2D shoot-em-up using the new hardware-accelerated
Stage3D rendering engine. We will be taking advantage of several hardcore optimization techniques to achieve great 2D sprite rendering performance. In this part, we'll build a high-performance demo that draws hundreds of moving sprites on-screen at once.
Final Result Preview
Let's take a look at the final result we will be working towards: a high-performance 2D sprite demo that uses Stage3D with optimizations that include a spritesheet and object pooling.
Introduction: Flash 11 Stage3D
If you're hoping to take your Flash games to the next level and are looking for loads of eye-candy and amazing framerate, Stage3D is going to be your new best friend.
The incredible speed of the new Flash 11 hardware accelerated Stage3D API is just begging to be used for 2D games. Instead of using old-fashioned Flash sprites on the DisplayList or last-gen blitting techniques as popularized by engines such as FlashPunk and Flixel, the new breed of 2D games uses the power of your video card's GPU to blaze through rendering tasks at up to 1000x the speed of anything Flash 10 could manage.
Although it has 3D in its name, this new API is also great for 2D games. We can render simple geometry in the form of 2D squares (called quads) and draw them on a flat plane. This will enable us to render tons of sprites on screen at a silky-smooth 60fps.
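To make the idea of quads concrete, here is a small sketch (not part of the tutorial's engine code, which comes later) of how a single textured 2D quad is typically defined for Stage3D: four vertices, each carrying an x/y position and a u/v texture coordinate, indexed as two triangles.

```actionscript
// a 2D quad on a flat plane: four vertices, two triangles
// each vertex holds x, y (position) and u, v (texture coordinate)
var quad:Vector.<Number> = Vector.<Number>([
    -0.5, -0.5, 0, 1,  // bottom-left
     0.5, -0.5, 1, 1,  // bottom-right
     0.5,  0.5, 1, 0,  // top-right
    -0.5,  0.5, 0, 0   // top-left
]);
var indices:Vector.<uint> = Vector.<uint>([0, 1, 2, 0, 2, 3]);

// uploaded to the GPU once, then drawn many times per frame
var vb:VertexBuffer3D = context3D.createVertexBuffer(4, 4); // 4 vertices, 4 Numbers each
vb.uploadFromVector(quad, 0, 4);
var ib:IndexBuffer3D = context3D.createIndexBuffer(6);
ib.uploadFromVector(indices, 0, 6);
```

Uploading geometry like this once and reusing it every frame is exactly what lets the GPU blaze through sprite rendering so much faster than old-fashioned blitting.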
We'll make a side-scrolling shooter inspired by retro arcade titles such as R-Type and Gradius, written in ActionScript using Flash 11's Stage3D API. It isn't half as hard as some people say it is, and you won't need to learn AGAL, Stage3D's assembly-like shader language.
In this 6-part tutorial series, we are going to program a simple 2D shoot-'em-up that delivers mind-blowing rendering performance. We are going to build it using pure AS3, compiled in FlashDevelop (read more about it here). FlashDevelop is great because it is 100% freeware - no need to buy any expensive tools to get the best AS3 IDE around.
Step 1: Create a New Project
If you don't already have it, be sure to download and install FlashDevelop. Once you're all set up (and you've allowed it to install the latest version of the Flex compiler automatically), fire it up and start a new "AS3 Project."
FlashDevelop will create a blank template project for you. We're going to fill in the blanks, piece-by-piece, until we have created a decent game.
Step 2: Target Flash 11
Go into the project menu and change a few options:
- Target Flash 11.1
- Change the size to 600x400px
- Change the background color to black
- Change the FPS to 60
- Change the SWF filename to a name of your choosing
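If you prefer to pin these settings down in code rather than in the project dialog, the same options can be expressed with the [SWF] compiler metadata tag at the top of Main.as (the values shown simply mirror the settings above):

```actionscript
// equivalent to the project options chosen above
[SWF(width="600", height="400", backgroundColor="#000000", frameRate="60")]
```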
Step 3: Imports
Now that our blank project is set up, let's dive in and do some coding. To begin with, we will need to import all the Stage3D functionality required. Add the following to the very top of your
Main.as file.
// Stage3D Shoot-em-up Tutorial Part 1 //.geom.Rectangle; import flash.utils.getTimer;
Step 4: Initialize Stage3D
The next step is to wait for our game to appear on the Flash stage. Doing things this way allows for the future use of a preloader. For simplicity, we will be doing most of our game in a single little class that inherits from the Flash Sprite class as follows.
public class Main extends Sprite
{
    // the hardware accelerated 3d rendering context
    private var context3D:Context3D;
    // the current size of the stage
    private var _width:Number = 600;
    private var _height:Number = 400;
    // (the sprite engine, entity manager and gui members are added in later steps)

    public function Main():void
    {
        if (stage) init();
        else addEventListener(Event.ADDED_TO_STAGE, init);
    }

    private function init(e:Event = null):void
    {
        removeEventListener(Event.ADDED_TO_STAGE, init);
        trace("Simple Stage3D Sprite Demo v1");
        // set up the stage
        stage.scaleMode = StageScaleMode.NO_SCALE;
        stage.align = StageAlign.TOP_LEFT;
        stage.addEventListener(Event.RESIZE, onResizeEvent);
        // request a hardware accelerated 3d context
        stage.stage3Ds[0].addEventListener(Event.CONTEXT3D_CREATE, onContext3DCreate);
        stage.stage3Ds[0].addEventListener(ErrorEvent.ERROR, errorHandler);
        stage.stage3Ds[0].requestContext3D(Context3DRenderMode.AUTO);
        trace("Requesting Stage3D Context...");
    }
After setting some stage-specific properties, we request a Stage3D context. This can take a while (a fraction of a second) as your video card is configured for hardware rendering, so we need to wait for the
onContext3DCreate event.
We also want to detect any errors that may occur, especially since Stage3D content does not run if the HTML embed code that loads your SWF doesn't include the parameter
"wmode=direct". These errors can also happen if the user is running an old version of Flash or if they don't have a video card capable of handling pixel shader 2.0.
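To make the requirement concrete, here is what the relevant portion of a typical HTML embed might look like (a minimal sketch; the SWF filename and dimensions are just example values):

```html
<object type="application/x-shockwave-flash" data="game.swf"
        width="600" height="400">
  <param name="movie" value="game.swf" />
  <!-- required for Stage3D hardware acceleration: -->
  <param name="wmode" value="direct" />
</object>
```

Without `wmode=direct`, the context request fails and the error handler below will fire.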
Step 5: Handle Any Events
Add the following functions that detect any events that might be triggered as specified above. In the case of errors due to running old Flash plugins, in future versions of this game we might want to output a message and remind the user to upgrade, but for now this error is simply ignored.
For users with old video cards (or drivers) that don't support shader model 2.0, the good news is that Flash 11 is smart enough to provide a software renderer. It doesn't run very fast but at least everyone will be able to play your game. Those with decent gaming rigs will get fantastic framerate like you've never seen in a Flash game before.
        // the Stage3D context is ready: store it and start the game
        private function onContext3DCreate(e:Event):void
        {
            context3D = stage.stage3Ds[0].context3D;
            initSpriteEngine();
        }

        private function errorHandler(e:ErrorEvent):void
        {
            // for now, errors are simply ignored
            // (old Flash plugin, or missing wmode=direct)
        }

        private function onResizeEvent(e:Event):void
        {
            // update the size of the Stage3D viewport
            _width = stage.stageWidth;
            _height = stage.stageHeight;
            if (_entities)
                _entities.setPosition(new Rectangle(0, 0, _width, _height));
        }
The event handling code above detects when Stage3D is ready for hardware rendering and sets the variable
context3D for future use. Errors are ignored for now. The resize event simply updates the size of the stage and batch rendering system dimensions.
Step 6: Init the Sprite Engine
Once the
context3D has been received, we are ready to start the game running. Continuing with
Main.as, add the following function:

        private function initSpriteEngine():void
        {
            // init a gpu sprite system the size of the stage
            var stageRect:Rectangle = new Rectangle(0, 0, _width, _height);
            _spriteStage = new LiteSpriteStage(stage.stage3Ds[0], context3D, stageRect);
            // create a single rendering batch
            // which will draw all sprites in one pass
            _entities = new EntityManager(stageRect);
            _entities.createBatch(context3D);
            _spriteStage.addBatch(_entities._batch);
            // add the first entity right now
            _entities.addEntity();
            // tell the gui where to grab statistics from
            _gui.statsTarget = _entities;
            // start the render loop
            stage.addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }
This function creates a sprite rendering engine (to be implemented below) on the stage, ready to use the full size of your flash file. We then add the entity manager and batched geometry system (which we will discuss below). We are now able to give a reference to the entity manager to our stats GUI class so that it can display some numbers on screen regarding how many sprites have been created or reused. Lastly, we start listening for the
ENTER_FRAME event, which will begin firing at a rate of up to 60 times per second.
Step 7: Start the Render Loop
Now that everything has been initialized, we are ready to play! The following function will be executed every single frame. For the purposes of this first tech demo, we are going to add one new sprite to the stage each frame. Because we are going to implement an object pool (which you can read more about in this tutorial) instead of infinitely creating new objects until we run out of RAM, we are going to be able to reuse old entities that have moved off screen.
After spawning another sprite, we clear the Stage3D area of the screen (setting it to pure black). Next we update all the entities that are being controlled by our entity manager. This will move them a little more each frame. Once all sprites have been updated, we tell the batched geometry system to gather them all up into one large vertex buffer and blast them on screen in a single draw call, for efficiency. Finally, we tell the context3D to update the screen with our final render.
        // this function draws the scene every frame
        private function onEnterFrame(e:Event):void
        {
            try
            {
                // keep adding more sprites - FOREVER!
                // this is a test of the entity manager's
                // object reuse "pool"
                _entities.addEntity();
                // erase the previous frame
                context3D.clear(0, 0, 0, 1);
                // move/animate all entities
                _entities.update(getTimer());
                // draw all entities in one big batched draw call
                _spriteStage.render();
                // update the screen with our final render
                context3D.present();
            }
            catch (err:Error)
            {
                // this can happen if the context3D has been lost
                // (for example, after ctrl+alt+del on Windows)
            }
        }
That's it for the inits! As simple as it sounds, we have now created a template project that is ready to blast out an insane number of sprites. We are not going to use any vector art. We aren't going to put any old-fashioned Flash sprites on the stage apart from the Stage3D window and a couple of GUI overlays. All the work of rendering our in-game graphics is going to be handled by Stage3D, so that we can enjoy improved performance.
Going Deeper: Why Is Stage3D So Fast?
Two reasons:
- It uses hardware acceleration, meaning that all drawing commands are sent to the 3D GPU on your video card in the same way that XBOX360 and PlayStation3 games get rendered.
- These rendering commands are processed in parallel to the rest of your ActionScript code. This means that once the commands are sent to your video card, all rendering is done at the same time as other code in your game is running - Flash doesn't have to wait for them to be finished. While pixels are being blasted onto your screen, Flash gets to do other things like handle the player input, play sounds and update enemy positions.
That said, many Stage3D engines seem to get bogged down by a few hundred sprites. This is because they have been programmed without regard to the overhead that each draw command adds. When Stage3D first came out, some of the first 2D engines would draw each and every sprite individually in one giant (slow and inefficient) loop. Since this article is all about extreme optimization for a next-gen 2D game with fabulous framerate, we are going to implement an extremely efficient rendering system that buffers all geometry into one big batch so we can draw everything in only one or two commands.
How to Be Hardcore: Optimize!
Hardcore gamedevs love optimizations. In order to blast the most sprites on screen with the fewest number of state changes (such as switching textures, selecting a new vertex buffer, or having to update the transform once for each and every sprite on screen), we are going to take advantage of the following three performance optimizations:
- object pooling
- spritesheet (texture atlas)
- batched geometry
These three hardcore gamedev tricks are the key to getting awesome FPS in your game. Let's implement them now. Before we do, we need to create some of the tiny classes that these techniques will make use of.
Step 8: The Stats Display
If we're going to be doing tons of optimizations and using Stage3D in an attempt to achieve blazingly fast rendering performance, we need a way to keep track of the statistics. A few little benchmarks can go a long way toward proving that what we're doing is having a positive effect on the framerate. Before we go further, create a new class called
GameGUI.as and implement a super-simple FPS and stats display as follows.
// Stage3D Shoot-em-up Tutorial Part 1
// by Christer Kaitila
// GameGUI.as
// A typical simplistic framerate display for benchmarking performance,
// plus a way to track rendering statistics from the entity manager.
package
{
    import flash.events.Event;
    import flash.events.TimerEvent;
    import flash.text.TextField;
    import flash.text.TextFormat;
    import flash.utils.getTimer;

    public class GameGUI extends TextField
    {
        public var titleText:String = "";
        public var statsText:String = "";
        public var statsTarget:EntityManager;
        private var frameCount:int = 0;
        private var timer:int;
        private var ms_prev:int;
        private var lastfps:Number = 60;

        public function GameGUI(title:String = "", inX:Number = 8, inY:Number = 8, inCol:int = 0xFFFFFF)
        {
            super();
            titleText = title;
            x = inX;
            y = inY;
            width = 500;
            selectable = false;
            defaultTextFormat = new TextFormat("_sans", 9, 0, true);
            text = "";
            textColor = inCol;
            this.addEventListener(Event.ADDED_TO_STAGE, onAddedHandler);
        }

        public function onAddedHandler(e:Event):void
        {
            stage.addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onEnterFrame(evt:Event):void
        {
            timer = getTimer();
            if (timer - 1000 > ms_prev)
            {
                lastfps = Math.round(frameCount / (timer - ms_prev) * 1000);
                ms_prev = timer;
                // grab the stats from the entity manager
                if (statsTarget)
                {
                    statsText = statsTarget.numCreated + ' created ' +
                        statsTarget.numReused + ' reused';
                }
                text = titleText + ' - ' + statsText + " - FPS: " + lastfps;
                frameCount = 0;
            }
            // count each frame to determine the framerate
            frameCount++;
        }
    } // end class
} // end package
Step 9: The Entity Class
We are about to implement an entity manager class that will be the "object pool" as described above. We first need to create a simplistic class for each individual entity in our game. This class will be used for all in-game objects, from spaceships to bullets.
Create a new file called
Entity.as and add a few getters and setters now. For this first tech demo, this class is merely an empty placeholder without much functionality, but in later tutorials this is where we will be implementing much of the gameplay.
// Stage3D Shoot-em-up Tutorial Part 1
// by Christer Kaitila
// Entity.as
// The Entity class will eventually hold all game-specific entity logic
// for the spaceships, bullets and effects in our game. For now,
// it simply holds a reference to a gpu sprite and a few demo properties.
// This is where you would add hit points, weapons, ability scores, etc.
package
{
    public class Entity
    {
        private var _speedX:Number;
        private var _speedY:Number;
        private var _sprite:LiteSprite;
        public var active:Boolean = true;

        public function Entity(gs:LiteSprite = null)
        {
            _sprite = gs;
            _speedX = 0.0;
            _speedY = 0.0;
        }

        public function get sprite():LiteSprite { return _sprite; }
        public function set sprite(gs:LiteSprite):void { _sprite = gs; }
        public function get speedX():Number { return _speedX; }
        public function set speedX(val:Number):void { _speedX = val; }
        public function get speedY():Number { return _speedY; }
        public function set speedY(val:Number):void { _speedY = val; }

        public function die():void
        {
            // allow this entity to be reused by the entity manager
            active = false;
            // skip all drawing and updating
            sprite.visible = false;
        }
    } // end class
} // end package
Step 10: Make a Spritesheet
An important optimization technique we are going to use is a spritesheet - sometimes referred to as a texture atlas. Instead of uploading dozens or hundreds of individual images to video RAM for use during rendering, we are going to make a single image that holds all the sprites in our game. This way, we can use a single texture to draw tons of different kinds of enemies or terrain.
Using a spritesheet is considered a best practice by veteran gamedevs who need to ensure their games run as fast as possible. The reason it speeds things up is much the same as the reason we are going to use geometry batching: instead of having to tell the video card over and over to use a particular texture to draw a particular sprite, we can simply tell it to always use the same texture for all draw calls.
This cuts down on "state changes" which are extremely costly in terms of time. We no longer need to say "video card, start using texture 24... now draw sprite 14" and so on. We just say "draw everything using this one texture" in a single pass. This can increase performance by an order of magnitude.
For our example game we will be using a collection of legal-to-use freeware images by the talented DanC, which you can get here. Remember that if you use these images you should credit them in your game as follows: "Art Collection Title" art by Daniel Cook (Lostgarden.com).
Using Photoshop (or GIMP, or whatever image editor you prefer), cut and paste the sprites your game will need into a single PNG file that has a transparent background. Place each sprite on an evenly-spaced grid with a couple pixels of blank space between each. This small buffer is required to avoid any "bleeding" of edge pixels from adjacent sprites that can occur due to bilinear texture filtering that happens on the GPU. If each sprite is touching the next, your in-game sprites may have unwanted edges where they should be completely transparent.
For optimization reasons, GPUs work best with images (called textures) that are square and whose dimensions are equal to a power of two and evenly divisible by eight. Why? Because of the way that the pixel data is accessed, these magic numbers happen to align in VRAM in just the right way to be fastest to access, because the data is often read in chunks.
Therefore, ensure that your spritesheet is either 64x64, 128x128, 256x256, 512x512 or 1024x1024. As you might expect, the smaller the better - not just in terms of performance but because a smaller texture will naturally keep your game's final SWF smaller.
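If you ever need to verify a texture size at run time, a tiny helper can round any dimension up to the next power of two. (This is a hypothetical utility for illustration; it is not part of the game's classes.)

```actionscript
// round a dimension up to the nearest power of two,
// e.g. 100 becomes 128, 512 stays 512
public function nextPowerOfTwo(val:int):int
{
    var result:int = 1;
    while (result < val)
        result *= 2; // keep doubling until we reach or pass val
    return result;
}
```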
Here is the spritesheet that we will be using for our example. "Tyrian" art by Daniel Cook (Lostgarden.com).
Right-click to download
Step 11: The Entity Manager
The first optimization technique we're going to take advantage of to achieve blazing performance is the use of "object pools". Instead of constantly allocating more ram for objects like bullets or enemies, we're going to make a reuse pool that recycles unused sprites over and over again.
This technique ensures that RAM use stays very low and GC (garbage collection) hiccups rarely occur. The result is that framerate will be higher and your game will run smoothly no matter how long you play.
Create a new class in your project called
EntityManager.as and implement a simple recycle-on-demand mechanism as follows.
// Stage3D Shoot-em-up Tutorial Part 1
// by Christer Kaitila
// EntityManager.as
// The entity manager handles a reusable pool of game entities.
// This is where you would add all in-game simulation steps,
// such as gravity, movement, collision detection and more.
package
{
    import flash.display.Bitmap;
    import flash.display3D.*;
    import flash.geom.Point;
    import flash.geom.Rectangle;

    public class EntityManager
    {
        // the sprite sheet image
        private var _spriteSheet:LiteSpriteSheet;
        private const SpritesPerRow:int = 8;
        private const SpritesPerCol:int = 8;
        [Embed(source="../assets/sprites.png")]
        private var SourceImage:Class;
        // a reusable pool of entities
        private var _entityPool:Vector.<Entity>;
        // all the polygons that make up the scene
        public var _batch:LiteSpriteBatch;
        // for statistics
        public var numCreated:int = 0;
        public var numReused:int = 0;
        private var maxX:int;
        private var minX:int;
        private var maxY:int;
        private var minY:int;

        public function EntityManager(view:Rectangle)
        {
            _entityPool = new Vector.<Entity>();
            setPosition(view);
        }
Step 12: Set Boundaries
Our entity manager is going to recycle entities when they move off the left edge of the screen. The function below is called during inits or when the resize event is fired. We add a few extra pixels to the edges so that sprites don't suddenly pop in or out of existence.
        public function setPosition(view:Rectangle):void
        {
            // allow moving fully offscreen before looping around
            maxX = view.width + 32;
            minX = view.x - 32;
            maxY = view.height;
            minY = view.y;
        }
Step 13: Set Up the Sprites
The entity manager runs this function once at startup. It creates a new geometry batch using the spritesheet image that was embedded in our code above. It sends the
bitmapData to the spritesheet class constructor, which will be used to generate a texture that has all the available sprite images on it in a grid. We tell our spritesheet that we're going to use 64 different sprites (8 by 8) on the one texture. This spritesheet will be used by the batch geometry renderer.
If we wanted, we could use more than one spritesheet, by initializing additional images and batches as required. In the future, this might be where you create a second batch for all terrain tiles that go underneath your spaceship sprites. You could even implement a third batch which is layered on top of everything for fancy particle effects and eye candy. For now, this simple tech demo only needs a single spritesheet texture and geometry batch.

        public function createBatch(context3D:Context3D):LiteSpriteBatch
        {
            var sourceBitmap:Bitmap = new SourceImage();
            // create a spritesheet from the embedded image,
            // which holds an 8x8 grid of sprites
            _spriteSheet = new LiteSpriteSheet(sourceBitmap.bitmapData, 8, 8);
            // Create new render batch
            _batch = new LiteSpriteBatch(context3D, _spriteSheet);
            return _batch;
        }
Step 14: The Object Pool
This is where the entity manager increases performance. This one optimization (an object reuse pool) will allow us to create new entities only on demand (when there aren't any inactive ones that can be reused). Note how we reuse any sprites that are currently marked as inactive, unless they are all currently being used, in which case we spawn a new one. This way, our object pool only ever holds as many sprites as are visible at the same time. After the first few seconds that our game has been running, the entity pool will remain constant - rarely will a new entity need to be created once there are enough to handle what's going on on-screen.
Continue adding to
EntityManager.as as follows:
        // search the pool for a reusable, inactive entity -
        // or create a brand new one if none are available
        private function respawn(sprite:uint = 0):Entity
        {
            var currentEntityCount:int = _entityPool.length;
            var anEntity:Entity;
            // search for an inactive entity in the pool
            for (var i:int = 0; i < currentEntityCount; i++)
            {
                anEntity = _entityPool[i];
                if (!anEntity.active)
                {
                    // reuse an old, inactive entity
                    anEntity.active = true;
                    anEntity.sprite.visible = true;
                    numReused++;
                    return anEntity;
                }
            }
            // all entities are in use: create a brand new one
            anEntity = new Entity(_batch.createChild(sprite));
            _entityPool.push(anEntity);
            numCreated++;
            return anEntity;
        }

        // for this test, create random entities that move
        // from right to left with random speeds and scales
        public function addEntity():void
        {
            var anEntity:Entity;
            var randomSpriteID:uint = Math.floor(Math.random() * 64);
            // try to reuse an inactive entity (or create a new one)
            anEntity = respawn(randomSpriteID);
            // give it a new position and velocity
            anEntity.sprite.position.x = maxX;
            anEntity.sprite.position.y = Math.random() * maxY;
            anEntity.speedX = (-1 * Math.random() * 10) - 2;
            anEntity.speedY = (Math.random() * 5) - 2.5;
            anEntity.sprite.scaleX = 0.5 + Math.random() * 1.5;
            anEntity.sprite.scaleY = anEntity.sprite.scaleX;
            anEntity.sprite.rotation = 15 - Math.random() * 30;
        }
The functions above are run whenever a new sprite needs to be added on screen. The entity manager scans the entity pool for one that is currently not in use and returns it when possible. If the list is full of active entities, a brand new one needs to be created.
Step 15: Simulate!
The final function that is the responsibility of our entity manager is the one that gets called every frame. It is used to do any simulation, AI, collision detection, physics or animation as required. For the current simplistic tech demo, it simply loops through the list of active entities in the pool and moves each one according to its current velocity. Just for fun, they are set to spin a little each frame as well.
Any entity that goes past the left side of the screen is "killed" and is marked as inactive and invisible, ready to be reused in the functions above. If an entity touches the other three screen edges, the velocity is reversed so it will "bounce" off that edge. Continue adding to
EntityManager.as as follows:
        // called every frame: used to update the simulation
        // this is where you would perform AI, physics, etc.
        public function update(currentTime:Number):void
        {
            var anEntity:Entity;
            for (var i:int = 0; i < _entityPool.length; i++)
            {
                anEntity = _entityPool[i];
                if (anEntity.active)
                {
                    anEntity.sprite.position.x += anEntity.speedX;
                    anEntity.sprite.position.y += anEntity.speedY;
                    anEntity.sprite.rotation += 0.1;
                    if (anEntity.sprite.position.x > maxX)
                    {
                        anEntity.speedX *= -1;
                        anEntity.sprite.position.x = maxX;
                    }
                    else if (anEntity.sprite.position.x < minX)
                    {
                        // if we go past the left edge, become inactive
                        // so the sprite can be respawned
                        anEntity.die();
                    }
                    if (anEntity.sprite.position.y > maxY)
                    {
                        anEntity.speedY *= -1;
                        anEntity.sprite.position.y = maxY;
                    }
                    else if (anEntity.sprite.position.y < minY)
                    {
                        anEntity.speedY *= -1;
                        anEntity.sprite.position.y = minY;
                    }
                }
            }
        }
    } // end class
} // end package
Step 16: The Sprite Class
The final step to get everything up and running is to implement the four classes that make up our "rendering engine" system. Because the word Sprite is already in use in Flash, the next few classes will use the term
LiteSprite, which is not just a catchy name but implies the lightweight and simplistic nature of this engine.
To begin, we will create the simple 2D sprite class that our entity class above refers to. There will be many sprites in our game, each of which is collected into a large batch of polygons and rendered in a single pass.
Create a new file in your project called
LiteSprite.as and implement some getters and setters as follows. We could probably get away with simply using public variables, but in future versions changing some of these values will require running some code first, so this technique will prove invaluable.
// Stage3D Shoot-em-up Tutorial Part 1
// by Christer Kaitila
// LiteSprite.as
// A 2d sprite that is rendered by Stage3D as a textured quad
// (two triangles) to take advantage of hardware acceleration.
package
{
    import flash.geom.Point;
    import flash.geom.Rectangle;

    public class LiteSprite
    {
        internal var _parent:LiteSpriteBatch;
        internal var _spriteId:uint;
        internal var _childId:uint;
        private var _pos:Point;
        private var _visible:Boolean;
        private var _scaleX:Number;
        private var _scaleY:Number;
        private var _rotation:Number;
        private var _alpha:Number;

        public function get visible():Boolean { return _visible; }
        public function set visible(isVisible:Boolean):void { _visible = isVisible; }
        public function get alpha():Number { return _alpha; }
        public function set alpha(a:Number):void { _alpha = a; }
        public function get position():Point { return _pos; }
        public function set position(pt:Point):void { _pos = pt; }
        public function get scaleX():Number { return _scaleX; }
        public function set scaleX(val:Number):void { _scaleX = val; }
        public function get scaleY():Number { return _scaleY; }
        public function set scaleY(val:Number):void { _scaleY = val; }
        public function get rotation():Number { return _rotation; }
        public function set rotation(val:Number):void { _rotation = val; }
        public function get rect():Rectangle { return _parent._sprites.getRect(_spriteId); }
        public function get parent():LiteSpriteBatch { return _parent; }
        public function get spriteId():uint { return _spriteId; }
        public function set spriteId(num:uint):void { _spriteId = num; }
        public function get childId():uint { return _childId; }

        // LiteSprites are typically constructed by calling LiteSpriteBatch.createChild()
        public function LiteSprite()
        {
            _parent = null;
            _spriteId = 0;
            _childId = 0;
            _pos = new Point();
            _scaleX = 1.0;
            _scaleY = 1.0;
            _rotation = 0;
            _alpha = 1.0;
            _visible = true;
        }
    } // end class
} // end package
Each sprite can now keep track of where it is on screen, as well as how big it is, how transparent it is, and what angle it is facing. The spriteId property is a number used during rendering to look up which UV (texture) coordinates need to be used as the source rectangle for the pixels of the spritesheet image it uses.
Step 17: The Spritesheet Class
We now need to implement a mechanism to process the spritesheet image that we embedded above and use portions of it on all our rendered geometry. Create a new file in your project called
LiteSpriteSheet.as and begin by importing the functionality required, defining a few class variables and a constructor function.
// Stage3D Shoot-em-up Tutorial Part 1
// by Christer Kaitila
// LiteSpriteSheet.as
// Chops up a spritesheet image (texture atlas) and stores
// the UV coordinates and pixel rectangle of each sprite.
package
{
    import flash.display.BitmapData;
    import flash.display3D.Context3D;
    import flash.display3D.Context3DTextureFormat;
    import flash.display3D.textures.Texture;
    import flash.geom.Matrix;
    import flash.geom.Rectangle;

    public class LiteSpriteSheet
    {
        public var _texture:Texture;
        protected var _spriteSheet:BitmapData;
        protected var _rects:Vector.<Rectangle>;
        protected var _uvCoords:Vector.<Number>;

        public function LiteSpriteSheet(SpriteSheetBitmapData:BitmapData, numSpritesW:int = 8, numSpritesH:int = 8)
        {
            _uvCoords = new Vector.<Number>();
            _rects = new Vector.<Rectangle>();
            _spriteSheet = SpriteSheetBitmapData;
            createUVs(numSpritesW, numSpritesH);
        }
The class constructor above is given a
BitmapData for our spritesheet as well as the number of sprites that are on it (in this demo, 64).
Step 18: Chop It Up
Because we are using a single texture to store all of the sprite images, we need to divide the image into several parts (one for each sprite on it) when rendering. We do this by assigning different coordinates for each vertex (corner) of each quad mesh used to draw a sprite.
These coordinates are called UVs; each goes from 0 to 1 and represents where on the texture Stage3D should start sampling pixels when rendering. The UV coordinates and pixel rectangles are stored in an array for later use during rendering so that we don't have to calculate them every frame. We also store the size and shape of each sprite (which in this demo are all identical) so that when we rotate a sprite we know its radius (which is used to keep the pivot in the very centre of the sprite).
        public function createUVs(numSpritesW:int, numSpritesH:int):void
        {
            var destRect:Rectangle;
            for (var y:int = 0; y < numSpritesH; y++)
            {
                for (var x:int = 0; x < numSpritesW; x++)
                {
                    // the UV (texture coordinate) of each corner of this sprite
                    _uvCoords.push(
                        x / numSpritesW, (y + 1) / numSpritesH,
                        x / numSpritesW, y / numSpritesH,
                        (x + 1) / numSpritesW, y / numSpritesH,
                        (x + 1) / numSpritesW, (y + 1) / numSpritesH);
                    // the size and shape of this sprite in pixels
                    destRect = new Rectangle();
                    destRect.left = 0;
                    destRect.top = 0;
                    destRect.right = _spriteSheet.width / numSpritesW;
                    destRect.bottom = _spriteSheet.height / numSpritesH;
                    _rects.push(destRect);
                }
            }
        }

        public function getUVCoords(spriteId:uint):Vector.<Number>
        {
            // eight numbers: the UVs of this sprite's four corners
            var startIdx:uint = spriteId * 8;
            return _uvCoords.slice(startIdx, startIdx + 8);
        }

        public function getRect(spriteId:uint):Rectangle
        {
            return _rects[spriteId];
        }
Step 19: Generate Mipmaps
Now we need to process this image during the init. We are going to upload it for use as a texture by your GPU. As we do so, we are going to create smaller copies called "mipmaps". Mip-mapping is used by 3D hardware to further speed up rendering by using smaller versions of the same texture whenever it is seen from far away (scaled down) or, in true 3D games, when it is being viewed at an oblique angle. This avoids any "moiré" effects (flickers) that can happen if mipmapping is not used. Each mipmap is half the width and height of the previous one.
Continuing with
LiteSpriteSheet.as, let's implement the routine we need that will generate mipmaps and upload them all to the GPU on your video card.
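A minimal sketch of this routine, following the standard Stage3D mipmap upload pattern, might look like the following. (The method name uploadTexture matches how the batch calls it later; the loop details and temporary bitmap handling are assumptions, so adapt them as needed.)

```actionscript
public function uploadTexture(context3D:Context3D):void
{
    if (_texture == null)
    {
        _texture = context3D.createTexture(
            _spriteSheet.width, _spriteSheet.height,
            Context3DTextureFormat.BGRA, false);
    }
    // upload mipmap level 0 (the full-size image) plus
    // progressively smaller copies, down to 1x1 pixel
    var level:int = 0;
    var ws:int = _spriteSheet.width;
    var hs:int = _spriteSheet.height;
    var tmp:BitmapData = new BitmapData(ws, hs, true, 0x00000000);
    var transform:Matrix = new Matrix();
    while (ws >= 1 && hs >= 1)
    {
        // scale the spritesheet down into the temp bitmap (smoothed)
        tmp.draw(_spriteSheet, transform, null, null, null, true);
        _texture.uploadFromBitmapData(tmp, level);
        // prepare the next (half-size) mip level
        transform.scale(0.5, 0.5);
        level++;
        ws >>= 1;
        hs >>= 1;
        if (ws >= 1 && hs >= 1)
        {
            tmp.dispose();
            tmp = new BitmapData(ws, hs, true, 0x00000000);
        }
    }
    tmp.dispose();
}
```

Note that uploading every mip level down to 1x1 is required whenever the fragment shader samples with a mip filter (as ours does with "mipnearest" below).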
Step 20: Batched Geometry
The final hardcore optimization we are going to implement is a batched geometry rendering system. This "batched geometry" technique is often used in particle systems. We are going to use it for everything. This way, we can tell your GPU to draw everything in one go instead of naively sending hundreds of draw commands (one for each sprite on screen).
In order to minimize the number of draw calls and render everything in one go, we will be batching all game sprites into a long list of (x,y) coordinates. Essentially, the geometry batch is treated by your video hardware as a single 3D mesh. Then, once per frame, we will upload the entire buffer to Stage3D in a single function call. Doing things this way is far faster than uploading the individual coordinates of each sprite separately.
Create a new file in your project called
LiteSpriteBatch.as and begin by including all the imports for functionality it will need, the class variables it will use, and the constructor as follows:
// Stage3D Shoot-em-up Tutorial Part 1
// by Christer Kaitila
// LiteSpriteBatch.as
// An optimization used to increase performance that renders multiple
// sprites in a single pass by grouping all polygons together,
// allowing Stage3D to treat it as a single mesh that can be
// rendered in a single drawTriangles call. Each frame, the position
// of each vertex is updated and re-uploaded to video ram.
package
{
    import com.adobe.utils.AGALMiniAssembler;
    import flash.display.BitmapData;
    import flash.display3D.Context3D;
    import flash.display3D.Context3DBlendFactor;
    import flash.display3D.Context3DCompareMode;
    import flash.display3D.Context3DProgramType;
    import flash.display3D.Context3DTextureFormat;
    import flash.display3D.Context3DVertexBufferFormat;
    import flash.display3D.IndexBuffer3D;
    import flash.display3D.Program3D;
    import flash.display3D.VertexBuffer3D;
    import flash.display3D.textures.Texture;
    import flash.geom.Matrix;
    import flash.geom.Matrix3D;
    import flash.geom.Point;
    import flash.geom.Rectangle;

    public class LiteSpriteBatch
    {
        internal var _sprites:LiteSpriteSheet;
        internal var _verteces:Vector.<Number>;
        internal var _indeces:Vector.<uint>;
        internal var _uvs:Vector.<Number>;
        protected var _context3D:Context3D;
        protected var _parent:LiteSpriteStage;
        protected var _children:Vector.<LiteSprite>;
        protected var _indexBuffer:IndexBuffer3D;
        protected var _vertexBuffer:VertexBuffer3D;
        protected var _uvBuffer:VertexBuffer3D;
        protected var _shader:Program3D;
        protected var _updateVBOs:Boolean;

        public function LiteSpriteBatch(context3D:Context3D, spriteSheet:LiteSpriteSheet)
        {
            _context3D = context3D;
            _sprites = spriteSheet;
            _verteces = new Vector.<Number>();
            _indeces = new Vector.<uint>();
            _uvs = new Vector.<Number>();
            _children = new Vector.<LiteSprite>();
            _updateVBOs = true;
            setupShaders();
            updateTexture();
        }
Step 21: Batch Parent and Children
Continue by implementing getters and setters and functionality for handling the addition of any new sprites to the batch. The parent refers to the sprite stage object used by our game engine, while the children are all the sprites in this one rendering batch. When we add a child sprite, we add more data to the list of vertices (which supplies the locations on screen of that particular sprite) as well as the UV coordinates (the location on the spritesheet texture where that particular sprite is stored). When a child sprite is added or removed from the batch, we set a boolean variable to tell our batch system that the buffers need to be re-uploaded now that they have changed.
        public function get parent():LiteSpriteStage { return _parent; }
        public function set parent(parentStage:LiteSpriteStage):void { _parent = parentStage; }
        public function get numChildren():uint { return _children.length; }

        // Constructs a new child sprite and attaches it to the batch
        public function createChild(spriteId:uint):LiteSprite
        {
            var sprite:LiteSprite = new LiteSprite();
            addChild(sprite, spriteId);
            return sprite;
        }

        public function addChild(sprite:LiteSprite, spriteId:uint):void
        {
            sprite._parent = this;
            sprite._spriteId = spriteId;
            // Add to list of children
            sprite._childId = _children.length;
            _children.push(sprite);
            // Add vertex data required to draw child
            var childVertexFirstIndex:uint = (sprite._childId * 12) / 3;
            _verteces.push(0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1); // placeholders
            _indeces.push(childVertexFirstIndex, childVertexFirstIndex + 1, childVertexFirstIndex + 2,
                childVertexFirstIndex, childVertexFirstIndex + 2, childVertexFirstIndex + 3);
            var childUVCoords:Vector.<Number> = _sprites.getUVCoords(spriteId);
            _uvs.push(
                childUVCoords[0], childUVCoords[1],
                childUVCoords[2], childUVCoords[3],
                childUVCoords[4], childUVCoords[5],
                childUVCoords[6], childUVCoords[7]);
            _updateVBOs = true;
        }

        public function removeChild(child:LiteSprite):void
        {
            var childId:uint = child._childId;
            if ((child._parent == this) && childId < _children.length)
            {
                child._parent = null;
                _children.splice(childId, 1);
                // Update child id (index into array of children) for remaining children
                var idx:uint;
                for (idx = childId; idx < _children.length; idx++)
                {
                    _children[idx]._childId = idx;
                }
                // Realign vertex data with updated list of children
                var vertexIdx:uint = childId * 12;
                var indexIdx:uint = childId * 6;
                _verteces.splice(vertexIdx, 12);
                _indeces.splice(indexIdx, 6);
                _uvs.splice(vertexIdx, 8);
                _updateVBOs = true;
            }
        }
Step 22: Set Up the Shader
A shader is a set of commands that is uploaded directly to your video card for extremely fast rendering. In Flash 11 Stage3D, you write them in a kind of assembly language called AGAL. This shader needs only be created once, at startup. You don't need to understand assembly language opcodes for this tutorial. Instead, simply implement the creation of a vertex program (which calculates the locations of your sprites on screen) and a fragment program (which calculates the color of each pixel) as follows.
        protected function setupShaders():void
        {
            var vertexShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
            vertexShaderAssembler.assemble(Context3DProgramType.VERTEX,
                "dp4 op.x, va0, vc0 \n" + // transform from stream 0 to output clipspace
                "dp4 op.y, va0, vc1 \n" + // do the same for the y coordinate
                "mov op.z, vc2.z \n" +    // we don't need to change the z coordinate
                "mov op.w, vc3.w \n" +    // unused, but we need to output all data
                "mov v0, va1.xy \n" +     // copy UV coords from stream 1 to fragment program
                "mov v0.z, va0.z \n"      // copy alpha from stream 0 to fragment program
            );
            var fragmentShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
            fragmentShaderAssembler.assemble(Context3DProgramType.FRAGMENT,
                "tex ft0, v0, fs0 <2d,clamp,linear,mipnearest> \n" + // sample the texture
                "mul ft0, ft0, v0.zzzz \n" + // multiply by the alpha transparency
                "mov oc, ft0 \n"             // output the final pixel color
            );
            _shader = _context3D.createProgram();
            _shader.upload(vertexShaderAssembler.agalcode, fragmentShaderAssembler.agalcode);
        }

        protected function updateTexture():void
        {
            _sprites.uploadTexture(_context3D);
        }
Step 23: Move the Sprites Around
Just before being rendered, each sprite's vertex coordinates on screen will have most likely changed as the sprite moves around or rotates. The following function calculates where each vertex (corner of the geometry) needs to be. Because each quad (the square that makes up one sprite) has four vertices each, and each vertex needs an x, y and z coordinate, there are twelve values to update. As a little optimization, if the sprite is not visible we simply write zeroes into our vertex buffer to avoid doing unnecessary calculations.
protected function updateChildVertexData(sprite:LiteSprite) : void { var childVertexIdx:uint = sprite._childId * 12; if ( sprite.visible ) { var x:Number = sprite.position.x; var y:Number = sprite.position.y; var rect:Rectangle = sprite.rect; var sinT:Number = Math.sin(sprite.rotation); var cosT:Number = Math.cos(sprite.rotation); var alpha:Number = sprite.alpha; var scaledWidth:Number = rect.width * sprite.scaleX; var scaledHeight:Number = rect.height * sprite.scaleY; var centerX:Number = scaledWidth * 0.5; var centerY:Number = scaledHeight * 0.5; _verteces[childVertexIdx] = x - (cosT * centerX) - (sinT * (scaledHeight - centerY)); _verteces[childVertexIdx+1] = y - (sinT * centerX) + (cosT * (scaledHeight - centerY)); _verteces[childVertexIdx+2] = alpha; _verteces[childVertexIdx+3] = x - (cosT * centerX) + (sinT * centerY); _verteces[childVertexIdx+4] = y - (sinT * centerX) - (cosT * centerY); _verteces[childVertexIdx+5] = alpha; _verteces[childVertexIdx+6] = x + (cosT * (scaledWidth - centerX)) + (sinT * centerY); _verteces[childVertexIdx+7] = y + (sinT * (scaledWidth - centerX)) - (cosT * centerY); _verteces[childVertexIdx+8] = alpha; _verteces[childVertexIdx+9] = x + (cosT * (scaledWidth - centerX)) - (sinT * (scaledHeight - centerY)); _verteces[childVertexIdx+10] = y + (sinT * (scaledWidth - centerX)) + (cosT * (scaledHeight - centerY)); _verteces[childVertexIdx+11] = alpha; } else { for (var i:uint = 0; i < 12; i++ ) { _verteces[childVertexIdx+i] = 0; } } }
Step 24: Draw the Geometry
Finally, continue adding to the
LiteSpriteBatch.as class by implementing the drawing function. This is where we tell stage3D to render all the sprites in a single pass. First, we loop through all known children (the individual sprites) and update the verterx positions based on where they are on screen. We then tell stage3D which shader and texture to use, as well as set the blend factors for rendering.
What is a blend factor? It defines whether or not we should use transparency, and how to deal with transparent pixels on our texture. You could change the options in the
setBlendFactors call to use additive blanding, for example, which looks great for particle effects like explosions, since pixels will increase the brightness on screen as they overlap. In the case of regular sprites, all we want is to draw them at the exact color as stored in our spritesheet texture and to allow transparent regions.
The final step in our draw function is to update the UV and index buffers if the batch has changed size, and to always upload the vertex data because our sprites are exected to be constantly moving. We tell stage3D which buffers to use and finally render the entire giant list of geometry as if it were a single 3D mesh, so that it gets drawn using a single, fast,
drawTriangles call.
public function draw() : void { var nChildren:uint = _children.length; if ( nChildren == 0 ) return; // Update vertex data with current position of children for ( var i:uint = 0; i < nChildren; i++ ) { updateChildVertexData(_children[i]); } _context3D.setProgram(_shader); _context3D.setBlendFactors(Context3DBlendFactor.ONE, Context3DBlendFactor.ONE_MINUS_SOURCE_ALPHA); _context3D.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, _parent.modelViewMatrix, true); _context3D.setTextureAt(0, _sprites._texture); if ( _updateVBOs ) { _vertexBuffer = _context3D.createVertexBuffer(_verteces.length/3, 3); _indexBuffer = _context3D.createIndexBuffer(_indeces.length); _uvBuffer = _context3D.createVertexBuffer(_uvs.length/2, 2); _indexBuffer.uploadFromVector(_indeces, 0, _indeces.length); // indices won't change _uvBuffer.uploadFromVector(_uvs, 0, _uvs.length / 2); // child UVs won't change _updateVBOs = false; } // we want to upload the vertex data every frame _vertexBuffer.uploadFromVector(_verteces, 0, _verteces.length / 3); _context3D.setVertexBufferAt(0, _vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); _context3D.setVertexBufferAt(1, _uvBuffer, 0, Context3DVertexBufferFormat.FLOAT_2); _context3D.drawTriangles(_indexBuffer, 0, nChildren * 2); } } // end class } // end package
Step 25: The Sprite Stage Class
The final class required by our fancy (and speedy) hardware-accelerated sprite rendering engine is the sprite stage class. This stage, much like the traditional Flash stage, holds a list of all the batches that are used for your game. In this first demo, our stage will only be using a single batch of sprites, which itself only uses a single spritesheet.
Create one last file in your project called
LiteSpriteStage.as and begin by creating the class as follows:
// Stage3D Shoot-em-up Tutorial Part 1 // by Christer Kaitila - // LiteSpriteStage.as // The stage3D renderer of any number of batched geometry // meshes of multiple sprites. Handles stage3D inits, etc. //.Stage3D; import flash.display3D.Context3D; import flash.geom.Matrix3D; import flash.geom.Rectangle; public class LiteSpriteStage { protected var _stage3D : Stage3D; protected var _context3D : Context3D; protected var _rect : Rectangle; protected var _batches : Vector.<LiteSpriteBatch>; protected var _modelViewMatrix : Matrix3D; public function LiteSpriteStage(stage3D:Stage3D, context3D:Context3D, rect:Rectangle) { _stage3D = stage3D; _context3D = context3D; _batches = new Vector.<LiteSpriteBatch>; this.position = rect; }
Step 26: The Camera Matrix
In order to know exactly where on screen each sprite needs to go, we will track the location and size of the rendering window. During our game's initializations (or if it changes) we create a model view matrix which is used by Stage3D to transform the internal 3D coordinates of our geometry batches to the proper on-screen locations.
public function get position() : Rectangle { return _rect; } public function set position(rect:Rectangle) : void { _rect = rect; _stage3D.x = rect.x; _stage3D.y = rect.y; configureBackBuffer(rect.width, rect.height); _modelViewMatrix = new Matrix3D(); _modelViewMatrix.appendTranslation(-rect.width/2, -rect.height/2, 0); _modelViewMatrix.appendScale(2.0/rect.width, -2.0/rect.height, 1); } internal function get modelViewMatrix() : Matrix3D { return _modelViewMatrix; } public function configureBackBuffer(width:uint, height:uint) : void { _context3D.configureBackBuffer(width, height, 0, false); }
Step 27: Handle Batches
The final step in the creation of our Stage3D game demo is to handle the addition and removal of geometry batches as well as a loop that calls the draw function on each batch. This way, when our game's main
ENTER_FRAME event is fired, it will move the sprites around on screen via the entity manager and then tell the sprite stage system to draw itself, which in turn tells all known batches to draw.
Because this is a heavily optimized demo, there will only be one batch in use, but this will change in future tutorials as we add more eye candy.
public function addBatch(batch:LiteSpriteBatch) : void { batch.parent = this; _batches.push(batch); } public function removeBatch(batch:LiteSpriteBatch) : void { for ( var i:uint = 0; i < _batches.length; i++ ) { if ( _batches[i] == batch ) { batch.parent = null; _batches.splice(i, 1); } } } // loop through all batches // (this demo uses only one) // and tell them to draw themselves public function render() : void { for ( var i:uint = 0; i < _batches.length; i++ ) { _batches[i].draw(); } } } // end class } // end package
Step 28: Compile and Run!
We're almost done! Compile your SWF, fix any typos, and check out the graphical goodness. You should have a demo that looks like this:
If you are having difficulties compiling, note that this project needs a class that was made by Adobe which handles the compilation of AGAL shaders, which is included in the source code .zip file download.
Just for reference, and to ensure that you've used the correct filenames and locations for everything, here is what your FlashDevelop project should look like:
Tutorial Complete: You Are Awesome
That's it for tutorial one in this series! Tune in next week to watch the game slowly evolve into a great-looking, silky-smooth 60 FPS shoot-em-up. In the next part, we will implement player controls (using the keyboard to move around) and add some movement, sounds and music to the game.
I'd love to hear from you regarding this tutorial. I warmly welcome all readers to get in touch with me via twitter: @mcfunkypants or my blog mcfunkypants.com or on Google+ any time. I'm always looking for new topics to write future tutorials on, so feel free to request one. Finally, I'd love to see the games you make using this code!
Thanks for reading. See you next week. Good luck and HAVE FUN!
><< | https://code.tutsplus.com/tutorials/build-a-stage3d-shoot-em-up-sprite-test--active-11005 | CC-MAIN-2021-10 | refinedweb | 7,012 | 55.54 |
New Hope for Digital Identity
Identity is personal. You need to start there.
In the natural world where we live and breathe, personal identity can get complicated, but it's not broken. If an Inuit family from Qikiqtaaluk wants to name their kid Anuun or Issorartuyok, they do, and the world copes. If the same kid later wants to call himself Steve, he does. Again, the world copes. So does Steve.
Much of that coping is done by Steve not identifying himself unless he needs to, and then by not revealing more than what's required. In most cases Steve isn't accessing a service, but merely engaging with other people, and in ways so casual that in most cases no harm is done if the other person forgets Steve's name or how he introduced himself. In fact, most of what happens in the social realms of the natural world are free of identifiers, and that's a feature rather than a bug. Dunbar's number exists for a reason. So does the fact that human memory is better at forgetting details than at remembering them. This too is a feature. Most of what we know is tacit rather than explicit. As the scientist and philosopher Michael Polanyi puts it (in perhaps his only quotable line), "We know more than we can tell." This is why we can easily recognize a person without being able to describe exactly how we do that, and without knowing his or her name or other specific "identifying" details about them.
Steve's identity can also be a claim that does not require proof, or even need to be accurate. For example, he may tell the barista at a coffee shop that his name is Clive to avoid confusion with the guy ahead of him who just said his name is Steve.
How we create and cope with identity in the natural world has lately come to be called self-sovereign, at least among digital identity obsessives such as myself. Self-sovereign identity starts by recognizing that the kind of naming we get from our parents, tribes and selves is at the root level of how identity works in the natural world, and needs to frame our approaches in the digital one as well.
Our main problem with identity in the digital world is that we understand it entirely in terms of organizations and their needs. These approaches are administrative rather than personal or social. They work for the convenience of organizations first. In administrative systems, identities are just records, usually kept in databases. Aside from your business card, every name imprinted on a rectangle in your wallet was issued to you by some administrative system: the government, the Department of Motor Vehicles, the school, the drug store chain. None are your identity. All are identifiers used by organizations to keep track of you.
For your inconvenience, every organization's identity system is also a separate and proprietary silo, even if it is built with open-source software and methods. Worse, an organization might have many different silo'd identity systems that know little or nothing about each other. Even an organization as unitary as a university might have completely different identity systems operating within HR, health care, parking, laundry, sports and IT—as well as within its scholastic realm, which also might have any number of different departmental administrative systems, each with its own record of students past and present.
While ways of "federating" identities between silos have been around since the last millennium, there is still no standard or open-source way for you to change, say, your surname or your mailing address with all the administrative systems you deal with, in one move. In fact, doing so is unthinkable as long as our understanding of identity remains framed inside the norms of silo'd administrative systems and thinking.
Administrative systems have been built into civilized life for as long as we've had governments, companies and churches, to name just three institutions. But every problem we ever had with any of those only got worse once we had ways to digitize what was wrong with them, and then to network the same problems. This is why our own ability to administrate the many different ways we are known to the world's identity systems only gets worse every time we click "accept" to some site's, service's or app's terms and conditions, and create yet another login, password and namespace to manage.
Unfortunately, the internet was first provisioned to the mass market over dial-up lines, and both ISPs and website developers made client-server the defaulted way to deal with people. By design, client-server is slave-master, because it puts nearly all power on the server side. The client has no more agency or identity than the server allows it.
True, a website works (or ought to work) by answering client requests for files. But we see how much respect that gets by looking at the history of Do Not Track. Originally meant as a polite request by clients for servers to respect personal privacy, it was opposed so aggressively by the world's advertisers and commercial publishers that people took matters into their own hands by installing browser extensions for blocking ads and tracking. Then the W3C itself got corrupted by commercial interests, morphing Do Not Track into "tracking preference expressions" If individuals had full agency on the web in the first place, this never would have happened. But they didn't, and it did.
So we won't solve forever-standing identity problems with client-server, any more than we would have solved the need for personal computing with more generous mainframes.
If we want fully human digital identity to work on the internet, we have to respect the deeply human need for self-determination. That requires means for individuals to assert self-sovereign identities, and for systems to require only verified claims when they need useful identity information. Anything else will be repeating mistakes of the past.
It should help to remember that most human interaction is not with big administrative systems. For example, around 99% of the world's businesses are small. (See "Small is the New Big": Even if every business of every size becomes digital and connected, they need to be able to operate without requiring outside (such as government or platform) administrative systems, for the simple reason that most of the ways people identify each other in the offline world is both minimally and on a need-to-know basis. It is only inside administrative systems that fixed identities and identifiers are required. And even they only really need to deal with verified claims.
So we need to recognize three things, in this order:
- That everybody comes to the networked world with sovereign-source identities of their own, that they need to be able to make verifiable claims for various identity-related purposes; but that they don't need to do either at all times and in all circumstances.
- That the world is still full of administrative systems, and that those systems can come into alignment once they recognize the self-sovereign nature of human beings. That means seeing human beings as fully human and not just as "consumers" or "users" of products and services provided by organizations. And it means coming up, at last, with standard and trusted ways individual human beings can alter identity information with many different administrative
- There are billions (the World Bank says 2.5) of people in the world who lack any "official identification". Thus "official ID for all" is a goal of the United Nations, the World Bank and other large organizations trying to help masses of people who will be coming online during the next few years, especially refugees. Some of these people have good reasons not to be known, while others have good reasons to be known. It's complicated. Still, the commitment is there. The UN's Sustainable Development Goal 16.9 says "By 2030, provide legal identity for all, including birth registration".
What we need for all of these is an open-source and distributed approach that's NEA: Nobody owns it, Everybody can use it and Anybody can improve it. Within that scope, much is possible.
In "Rebooting the Web of Trust", Joe Andrieu says "Identity is how we keep track of people and things and, in turn, how they keep track of us." Among many other helpful things in that piece, Joe says this:
Engineers, entrepreneurs, and financiers have asked "Why are we spending so much time with a definition of identity? Why not just build something and fix it if it is broken?" The vital, simple reason is human dignity.
When we build interconnected systems without a core understanding of identity, we risk inadvertently compromising human dignity. We risk accidentally building systems that deny self-expression, place individuals in harm's way, and unintentionally oppress those most in need of self-determination.
There are times when the needs of security outweigh the need for human dignity. Fine. It's the job of our political systems—local, national, and international—to minimize abuse and to establish boundaries and practices that respect basic human rights.
But when engineers unwittingly compromise the ability of individuals to self-express their identity, when we expose personal information in unexpected ways, when our systems deny basic services because of a flawed understanding of identity, these are avoidable tragedies. What might seem a minor technicality in one conversation could lead to the loss of privacy, liberty, or even life for an individual whose identity is unintentionally compromised.
That's why it pays to understand identity, so the systems we build intentionally enable human dignity instead of accidentally destroy it.
Phil Windley, whom I have sourced often in these columns (see "Doing for User Space What We Did for Kernel Space" and "The Actually Distributed Web", for example), has lately turned optimistic about developing decentralized identity approaches His own work chairing the Sovrin Foundation is toward what he calls "a global utility for identity" based on a distributed ledger such as blockchain. And, of course, open source. He writes:
A universal decentralized identity platform offers the opportunity for services to be decentralized...I don't have to be a sharecropper for some large corporation. As an example, I can imagine a universal, decentralized identity system giving rise to apps that let anyone share rides in their car without the overhead of a Lyft or Uber because the identity system would let others vouch for the driver and the passenger.
That vouching is done by a verified claim. Not by calling on some centralized "identity provider".
Phil, Kaliya (Identity Woman) and I put on the Internet Identity Workshop twice a year at the Computer History Museum in Silicon Valley. We had our 25th just last month. All three of our obsessions with identity go back to the last millennium. At no time since then have I felt more optimistic than I do now about the possibility that we might finally solve this thing. But we'll need help. I invite everyone here who wants to get in on a good thing soon to weigh in and help out. | https://www.linuxjournal.com/content/new-hope-digital-identity?quicktabs_1=2 | CC-MAIN-2018-30 | refinedweb | 1,883 | 50.36 |
PDFs are a common way to share text. PDF stands for Portable Document Format and uses the .pdf file extension. It was created in the early 1990s by Adobe Systems.
Reading PDF documents using python can help you automate a wide variety of tasks.
In this tutorial we will learn how to extract text from a PDF file in Python.
Let’s get started.
Reading and Extracting Text from a PDF File in Python
For the purpose of this tutorial we are creating a sample PDF with 2 pages. You can do so using any Word processor like Microsoft Word or Google Docs and save the file as a PDF.
Text on page 1:
Hello World. This is a sample PDF with 2 pages. This is the first page.
Text on page 2:
This is the text on Page 2.
Using PyPDF2 to Extract PDF Text
You can use PyPDF2 to extract text from a PDF. Let’s see how it works.
1. Install the package
To install PyPDF2 on your system enter the following command on your terminal. You can read more about the pip package manager.
pip install pypdf2
2. Import PyPDF2
Open a new python notebook and start with importing PyPDF2.
import PyPDF2
3. Open the PDF in read-binary mode
Start with opening the PDF in read binary mode using the following line of code:
pdf = open('sample_pdf.pdf', 'rb')
This will create a PdfFileReader object for our PDF and store it to the variable ‘pdf’.
4. Use PyPDF2.PdfFileReader() to read text
Now you can use the PdfFileReader() method from PyPDF2 to read the file.
pdfReader = PyPDF2.PdfFileReader(pdf)
To get the text from the first page of the PDF, use the following lines of code:
page_one = pdfReader.getPage(0) print(page_one.extractText())
We get the output as:
Hello World. !This is a sample PDF with 2 pages. !This is the first page. ! Process finished with exit code 0
Here we used the getPage method to store the page as an object. Then we used extractText() method to get text from the page object.
The text we get is of type String.
Similarly to get the second page from the PDF use:
page_one = pdfReader.getPage(1) print(page_one.extractText())
We get the output as :
This is the text on Page 2.
Complete Code to Read PDF Text using PyPDF2
The complete code from this section is given below:
import PyPDF2 pdf = open('sample_pdf.pdf', 'rb') pdfReader = PyPDF2.PdfFileReader(pdf) page_one = pdfReader.getPage(0) print(page_one.extractText())
If you notice, the formatting of the first page is a little off in the output above. This is because PyPDF2 is not very efficient at reading PDFs.
Luckily, Python has a better alternative to PyPDF2. We are going to look at that next.
Using PDFplumber to Extract Text
PDFplumber is another tool that can extract text from a PDF. It is more powerful as compared to PyPDF2.
1. Install the package
Let’s get started with installing PDFplumber.
pip install pdfplumber
2. Import pdfplumber
Start with importing PDFplumber using the following line of code :
import pdfplumber
3. Using PDFplumber to read pdfs
You can start reading PDFs using PDFplumber with the following piece of code:
with pdfplumber.open("sample_pdf.pdf") as pdf: first_page = pdf.pages[0] print(first_page.extract_text())
This will get the text from first page of our PDF. The output comes as:
Hello World. This is a sample PDF with 2 pages. This is the first page. Process finished with exit code 0
You can compare this with the output of PyPDF2 and see how PDFplumber is better when it comes to formatting.
PDFplumber also provides options to get other information from the PDF.
For example, you can use .page_number to get the page number.
print(first_page.page_number)
Output :
1
To learn more about the methods under PDFPlumber refer to its official documentation.
Conclusion
This tutorial was about reading text from PDFs. We looked at two different tools and saw how one is better than the other.
Now that you know how to read text from a PDF, you should read our tutorial on tokenization to get started with Natural Language Processing! | https://www.askpython.com/python/examples/process-text-from-pdf-files | CC-MAIN-2021-31 | refinedweb | 696 | 76.82 |
One more question...
What is 'ContextManager' and where/how can I get it?
In my (WASCE v2.1.1.3 based) project I am unable to resolve this:
import org.apache.geronimo.security.ContextManager;
David Frahm
Huber & Associates
Office: 573-634-5000, Mobile: 573-298-1040
-----David Jencks <david_jencks@yahoo.com> wrote: -----
To: user@geronimo.apache.org
From: David Jencks <david_jencks@yahoo.com>
Date: 02/08/2011 04:01PM
Subject: Re: Stateless/sessionless servlet consuming too much memory
Morten, David,
I think this is a bug. I opened
to track progress on it.
As a temporary workaround (that I haven't tested for breaking other stuff) as the
last thing in your servlet you should be able to call
Subject subject = ContextManager.getCurrentCaller();
ContextManager.unregisterSubject(subject);
which will remove the identity hash map entry.
Many thanks for identifying this problem! Actual fixes will probably be somewhat
different in 2.1 and 2.2 but I expect the above workaround should work for either. Only
use it for basic auth though -- it will probably really break form auth.
thanks
david jencks
On Feb 8, 2011, at 11:54 AM, Morten Svanæs wrote:
> Hi David!
>
> Ok, thanks for the clarification regarding http sessions, sorry for
> the maybe strange question I'm quite new to Geronimo and ejb security.
> The "server" is a servlet using stateless ejb's. The servlet is
> configured to use http basic as authentication method, we have our own
> login module based on GenericSecurityRealm and SQLLoginModule.
>
> I'm using a java test client that makes many small http requests
> requests to the server.
> The test client is only connecting as one user.
>
> The strange memory behavior is only seen after I run the program for a
> while ( I'm running the java program in bash while loop making about
> 500 small http requests :)
> When I inspect the heap dump on start when the client only has run to
> or three times everything seems fine, it's only after 20-50 iterations
> I clearly see that the ContextManger.IdentityHashMap related objects
> dominate and seems to slowly grow.
>
> When I comment out all security in the web.xml the memory usage stays
> totally stable for many hundred iterations and there is no sign of the
> ContextManager objects.
>
> To me it seems like something don't get cleaned up properly.
> Is there something a need to do make sure the requests get cleaned up
> after they has been used maybe?
>
>
> Regards
> Morten
>
>
>
>
>
> On Tue, Feb 8, 2011 at 6:23 PM, David Jencks <david_jencks@yahoo.com> wrote:
>> Hi Morten,
>>
>> I'm not sure why this is happening, it might be a bug. Just to be sure
we investigate the right context, is this
>>
>> - a servlet
>> - a pojo web service (if so, jaxrpc, jaxws, or something else)
>> - an ejb web service?
>>
>> The ContextManager doesn't have anything to do with http sessions, it is more concerned
with keeping the user identity in a threadlocal during each request so it is always available
for authorization decisions.
>>
>> Thanks for your investigations so far!
>>
>> david jencks
>>
>> On Feb 8, 2011, at 4:51 AM, Morten Svanæs wrote:
>>
>>> Hi,
>>>
>>
>>
>
>
>
> --
>
>
>
> Mvh.
> Morten Svanæs
> Mobil: 40478335 | http://mail-archives.apache.org/mod_mbox/geronimo-user/201102.mbox/%3COFFF9264E9.FE102D7B-ON86257835.0020DB2A-86257835.0020DB2C@teamhuber.com%3E | CC-MAIN-2016-26 | refinedweb | 527 | 65.93 |
A day in the life of the buildbot:
At a bare minimum, you'll need the following (for both the buildmaster and a buildslave):
Buildbot requires python-2.3 or later, and is primarily developed against python-2.4. It is also tested against python-2.5. To run the unit test suite, use Twisted's Trial tool: trial buildbot.test
This should run up to 192 tests, depending upon what VC tools you have installed. On my desktop machine it takes about five minutes to complete.
Once you've picked a directory, use the buildbot create-master command to create the directory and populate it with startup files:
buildbot create-master BASEDIR
The upgrade-master command is idempotent. It is safe to run it multiple times. After each upgrade of the buildbot code, you should use upgrade-master on all your buildmasters.

To create a buildslave, use the buildbot create-slave <options> DIR <params> command. You can type buildbot create-slave --help for a summary of the available options. To use these, just include them on the buildbot create-slave command line, like this:
buildbot create-slave --umask=022 BASEDIR MASTERHOST:PORT SLAVENAME PASSWORD
There are certain configuration changes that are not handled cleanly by buildbot reconfig. If this occurs, buildbot restart is a more robust tool to fully switch over to the new configuration.
is a more robust tool to fully switch over to the new configuration. really the only VC system still in widespread use that has per-file revisions../Baz/Bazaar, or a labeled tag used in CVS)3. The SHA1 revision ID used by Monotone, Mercurial, and Git.
To verify that the config file is well-formed and contains no deprecated or invalid elements, use the “checkconfig” command:
% buildbot checkconfig master.cfg
  warnings.warn(m, DeprecationWarning)
Config file is good!
If the config file is simply broken, that will be caught too:
% buildbot checkconfig master.cfg
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/buildbot/scripts/runner.py", line 834, in doCheckConfig
    ConfigLoader(configFile)
  File "/usr/lib/python2.4/site-packages/buildbot/scripts/checkconfig.py", line 31, in __init__
    self.loadConfig(configFile)
  File "/usr/lib/python2.4/site-packages/buildbot/master.py", line 480, in loadConfig
    exec f in localDict
  File "/home/warner/BuildBot/master/foolscap/master.cfg", line 90, in ?
    c[bogus] = "stuff"
NameError: name 'bogus' is not defined
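As the traceback's "exec f in localDict" frame suggests, checkconfig works by executing master.cfg in a fresh dictionary and then validating what it finds there, so errors in the file surface directly. A much-simplified sketch of that idea (the real ConfigLoader performs far more validation):

```python
def check_config(path):
    # Execute the config file in a fresh dictionary, the way buildbot's
    # ConfigLoader does. Any NameError or SyntaxError in the file
    # propagates directly to the caller.
    local_dict = {'basedir': '.', '__file__': path}
    with open(path) as f:
        exec(f.read(), local_dict)
    config = local_dict.get('BuildmasterConfig')
    if config is None:
        raise KeyError("config file must define BuildmasterConfig")
    return config
```
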
Triggerable Scheduler
A more general way to coordinate builds is by “triggering” schedulers from builds. The Triggerable scheduler waits to be triggered by a Trigger step in another build; that step can optionally wait for the Triggerable's builds to complete. Waiting for a Triggerable's builds to complete provides a form of "subroutine call", where one or more builds can "call" a scheduler to perform some work for them, perhaps on other buildslaves.
from buildbot import scheduler
from buildbot.steps import trigger

checkin = scheduler.Scheduler("checkin", None, 5*60, ["checkin"])
nightly = scheduler.Scheduler("nightly", ... , ["nightly"])
mktarball = scheduler.Triggerable("mktarball", ["mktarball"])
build = scheduler.Triggerable("build-all-platforms", ["build-all-platforms"])
test = scheduler.Triggerable("distributed-test", ["distributed-test"])
package = scheduler.Triggerable("package-all-platforms", ["package-all-platforms"])
c['schedulers'] = [checkin, nightly, mktarball, build, test, package]

# on checkin, make a tarball, build it, and test it
checkin_factory = factory.BuildFactory()
checkin_factory.addStep(trigger.Trigger(schedulerNames=['mktarball'], waitForFinish=True))
checkin_factory.addStep(trigger.Trigger(schedulerNames=['build-all-platforms'], waitForFinish=True))
checkin_factory.addStep(trigger.Trigger(schedulerNames=['distributed-test'], waitForFinish=True))

# and every night, make a tarball, build it, and package it
nightly_factory = factory.BuildFactory()
nightly_factory.addStep(trigger.Trigger(schedulerNames=['mktarball'], waitForFinish=True))
nightly_factory.addStep(trigger.Trigger(schedulerNames=['build-all-platforms'], waitForFinish=True))
nightly_factory.addStep(trigger.Trigger(schedulerNames=['package-all-platforms'], waitForFinish=True))

A buildslave can be configured to notify an administrator by email if it goes missing:

c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                          notify_on_missing="bob@example.com"),
              ]
manhole.AuthorizedKeysManhole
manhole.PasswordManhole
manhole.TelnetManhole
# some examples:
from buildbot import manhole
c['manhole'] = manhole.AuthorizedKeysManhole(1234, "authorized_keys")
c['manhole'] = manhole.PasswordManhole(1234, "alice", "mysecretpassword")
c['manhole'] = manhole.TelnetManhole(1234, "alice", "mysecretpassword")

The classes can also be imported directly, for example:

from buildbot.manhole import PasswordManhole
c['manhole'] = PasswordManhole(1234, "alice", "mysecretpassword")

(Note that using any Manhole requires that you be using Twisted version 2.0 or later.)
The chosen ChangeSource is assigned to c['change_source']:
from buildbot.changes.mail import SyncmailMaildirSource
c['change_source'] = SyncmailMaildirSource("~/maildir-buildbot", prefix="/trunk/")
The second component to setting up an email-based ChangeSource is to parse the actual notices. This is highly dependent upon the VC system and commit script in use.
A couple of common tools are used to create these change emails; parsers for the most common formats are described below.
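To make the parsing task concrete, here is a hypothetical, much-simplified parser for a commit-notification email. The layout (files in the body, summary in the Subject header) is an assumption for illustration only; real parsers such as those in buildbot.changes.mail are specific to each notification format.

```python
from email.parser import Parser

def parse_commit_mail(text, prefix=None):
    # Turn a simple commit-notification email into a Change-like dict.
    msg = Parser().parsestr(text)
    files = msg.get_payload().split()
    if prefix:
        # keep only files under the prefix, with the prefix stripped
        files = [f[len(prefix):] for f in files if f.startswith(prefix)]
    return {'who': msg['From'], 'files': files, 'comments': msg['Subject']}
```
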
This parser works with the CVSToys knows for to parse these CVSToys
messages and turn them into Change objects. It can be given two
parameters: the directory name of the maildir root, and the prefix to
strip.
from buildbot.changes.mail import FCMaildirSource
c['change_source'] = FCMaildirSource("~/maildir-buildbot")
SyncmailMaildirSource knows how to parse the message format used by
the CVS “syncmail” script.
from buildbot.changes.mail import SyncmailMaildirSource
c['change_source'] = SyncmailMaildirSource("~/maildir-buildbot")
BonsaiMaildirSource parses messages sent out by Bonsai, the CVS
tree-management system built by Mozilla.
from buildbot.changes.mail import BonsaiMaildirSource
c['change_source'] = BonsaiMaildirSource("~/maildir-buildbot")

Other systems are handled by the contrib/arch_buildbot.py and
contrib/hg_buildbot.py tools, and the
buildbot.changes.hgbuildbot hook. All
arguments below are optional.
port
If None (which is the default), it shares the port used for buildslave connections. Not Implemented; always set to
None.
user and
passwd
Default to
change and
changepw. Not Implemented;
user is currently always set to
change, and
passwd is always set to
changepw.
P4Source periodically polls a Perforce depot for changes. It accepts the following arguments:
p4base
p4port
p4user
p4passwd
split_file
A function that maps a pathname, without the leading
p4base, to a (branch, filename) tuple. The default just returns (None, branchfile), which effectively disables branch support. You should supply a function which understands your repository structure.
pollinterval
histmax
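The split_file contract described above — mapping a depot-relative path to a (branch, filename) tuple — can be sketched as follows. The layout assumed here (branch name as the first path component under p4base) is hypothetical, chosen only to illustrate the return-value contract:

```python
# Sketch of a P4Source-style split_file, assuming a hypothetical layout
# where the branch is the first path component under p4base.
def split_file(branchfile):
    parts = branchfile.split('/', 1)
    if len(parts) == 2:
        branch, filename = parts
        return (branch, filename)
    # no branch component: mirror the default (None, branchfile) behaviour
    return (None, branchfile)
```

Any function with this signature and return shape will do; the point is only that the poller hands you the path below p4base and expects the (branch, filename) pair back.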
This configuration uses the
P4PORT,
P4USER, and
P4PASSWD
specified in the buildmaster's environment. It watches a project in which the
branch name is simply the next path component, and the file is all path
components after.
from buildbot.changes import p4poller
s = p4poller.P4Source(p4base='//depot/project/',
                      split_file=lambda branchfile: branchfile.split('/',1),
                      )
c['change_source'] = s
Please take a look at the BonsaiPoller docstring for details about the arguments it accepts.
The
buildbot.changes.svnpoller.SVNPoller is a ChangeSource
which periodically polls a Subversion repository for new revisions, by running the
svn
log command in a subshell. It can watch a single branch or multiple
branches.
SVNPoller accepts the following arguments:
svnurl
split_file
This function must accept a single string and return a two-entry tuple. There are a few utility functions in
buildbot.changes.svnpoller that can be used as a
split_file function; see below for details.
The default value always returns (None, path), which indicates that all files are on the trunk.
Subclasses of
SVNPoller can override the
split_file
method instead of using the
split_file= argument.
svnuser
If set, a --username argument will be added to all
svn commands. Use this if you have to authenticate to the svn server before you can do
svn info or
svn log commands.
svnpasswd
Like svnuser, this will cause a
--password argument to be passed to all svn commands.
pollinterval
histmax
SVNPoller asks for the last HISTMAX changes and looks through them for any ones it does not already know about. If more than HISTMAX revisions have been committed since the last poll, older changes will be silently ignored. Larger values of histmax will cause more time and memory to be consumed on each poll attempt.
histmax defaults to 100.
svnbin
This controls the svn executable to use. If subversion is installed in a weird place on your system (outside of the buildmaster's
$PATH), use this to tell
SVNPoller where to find it. The default value of “svn” will almost always be sufficient.
One common layout is to have all the various projects that share a repository get a single top-level directory each. Then under a given project's directory, you get two subdirectories, one named “trunk” and another named “branches”. Under “branches” you have a bunch of other directories, one per branch, with names like “1.5.x” and “testing”. It is also common to see directories like “tags” and “releases” next to “branches” and “trunk”.
For example, the Twisted project has a subversion server on “svn.twistedmatrix.com” that hosts several sub-projects. The repository is available through a SCHEME of “svn:”. The primary sub-project is Twisted, of course, with a repository root of “svn://svn.twistedmatrix.com/svn/Twisted”. Another sub-project is Informant, with a root of “svn://svn.twistedmatrix.com/svn/Informant”, etc. Inside any checked-out Twisted tree, there is a file named bin/trial (which is used to run unit test suites).
The trunk for Twisted is in
“svn://svn.twistedmatrix.com/svn/Twisted/trunk”, and the
fully-qualified SVN URL for the trunk version of
trial would be
“svn://svn.twistedmatrix.com/svn/Twisted/trunk/bin/trial”. The same
SVN URL for that file on a branch named “1.5.x” would be
“svn://svn.twistedmatrix.com/svn/Twisted/branches/1.5.x/bin/trial”.
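The URL layout just described follows one simple rule — trunk contributes the directory “trunk”, and a branch contributes its own directory name. A rough sketch of the composition (the helper name is made up):

```python
# Sketch of composing a fully-qualified SVN URL from the convention above.
# branch=None is the usual way of saying "trunk".
def svn_url(repo_root, branch, filepath):
    branchdir = "trunk" if branch is None else branch
    return "/".join([repo_root, branchdir, filepath])
```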
To set up a
SVNPoller that watches the Twisted trunk (and
nothing else), we would use the following:
from buildbot.changes.svnpoller import SVNPoller c['change_source'] = SVNPoller("svn://svn.twistedmatrix.com/svn/Twisted/trunk")
In this case, every Change that our
SVNPoller produces will
have its
.branch attribute set to None, indicating that it is on the trunk of “.../Twisted”. A split_file function can instead turn a path like “branches/1.5.x/bin/trial” into
branch=”branches/1.5.x” and
filepath=”bin/trial”. Note that we want to see “branches/1.5.x” rather than just “1.5.x” because when we perform the SVN checkout, we will probably
append the branch name to a baseURL.
This function is provided as
buildbot.changes.svnpoller.split_file_branches for your
convenience.
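A function with the behaviour described — (None, path) for trunk files, the retained “branches/...” prefix for branch files, and None for everything else — might look roughly like this. This is a sketch of the documented behaviour, not the actual library code:

```python
# Rough sketch of a split_file_branches-style splitter.
def split_file_branches(path):
    pieces = path.split('/')
    if pieces[0] == 'trunk':
        return (None, '/'.join(pieces[1:]))
    if pieces[0] == 'branches':
        # keep the "branches/" component, as explained above
        return ('/'.join(pieces[0:2]), '/'.join(pieces[2:]))
    return None  # e.g. tags/ — not a file we care about
```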
You can do an
svn co with that URL and get a copy of the latest
version.
A path like “trunk/Nevow/formless/webform.py” yields a filepath of “Nevow/formless/webform.py” and a branch of None. A path like
“branches/1.5.x/Nevow/formless/webform.py” keeps its “branches/1.5.x” prefix as the branch name. Files belonging to other projects, such as
“trunk/Quotient/setup.py”
or
“branches/1.5.x/Quotient/setup.py”, are ignored by having
my_file_splitter return None.
The
[hgbuildbot] section has two other parameters that you
might specify, both of which control the name of the branch that is
attached to the changes coming from this hook.
One common branch naming policy for Mercurial repositories is to use
it just like Darcs: each branch goes into a separate repository. For that layout, use:

[hgbuildbot]
master = buildmaster.example.org:9987
branchtype = dirname
Another approach:
[hgbuildbot]
master = buildmaster.example.org:9987
branchtype = inrepo
Finally, if you want to simply specify the branchname directly, for
all changes, use
branch = BRANCHNAME. This overrides
branchtype:
[hgbuildbot]
master = buildmaster.example.org:9987
branch = BRANCHNAME

A minimal build factory looks like this:

from buildbot.steps import source, shell
from buildbot.process import factory

f = factory.BuildFactory()
f.addStep(source.SVN(svnurl=""))
f.addStep(shell.ShellCommand(command=["make", "all"]))
f.addStep(shell.ShellCommand(command=["make", "test"]))

A branch-aware configuration combines a PBChangeSource, an AnyBranchScheduler, and a baseURL checkout:

from buildbot.steps import source, shell
c['change_source'] = PBChangeSource()
s1 = AnyBranchScheduler('main',
                        ['trunk', 'features/newthing', 'features/otherthing'],
                        10*60, ['test-i386', 'test-ppc'])
c['schedulers'] = [s1]
f = factory.BuildFactory()
f.addStep(source.SVN(mode='update',
                     baseURL='svn://svn.example.org/MyProject/',
                     defaultBranch='trunk'))
f.addStep(shell.Compile(command="make all"))
f.addStep(shell.ShellCommand(command=["make", "test"]))
Mercurial build step performs a
Mercurial (aka “hg”) checkout
or update.
Branches are handled just like Darcs (see Darcs).
The Mercurial step takes the following arguments:
repourl
(required unless baseURL is provided): the URL at which the Mercurial source code is available. The step runs
hg clone, so it takes the same
arguments.

The Bzr step takes the following arguments:

repourl
(required unless baseURL is provided): the URL at which the Bzr source code is available, handed to the
bzr checkout command.
The
P4 build step creates a Perforce client specification and performs an update.
p4base
defaultBranch
p4port
p4user
p4passwd
p4extra_views
p4client
The
Git build step clones or updates a Git repository and checks out the specified branch or revision. Note
that the buildbot supports Git version 1.2.0 and later: earlier
versions (such as the one shipped in Ubuntu 'Dapper') do not support
the git init command that the buildbot uses.
The Git step takes the following arguments:
repourl
branch

The exception is PYTHONPATH, which is merged with (actually prepended to) any existing $PYTHONPATH setting. The value is treated as a list of directories to prepend, and a single string is treated like a one-item list. For example, to prepend both /usr/local/lib/python2.3 and /home/buildbot/lib/python to any existing $PYTHONPATH setting, you would do something like the following:
f.addStep(ShellCommand, command=["make", "test"], env={'PYTHONPATH': ["/usr/local/lib/python2.3", "/home/buildbot/lib/python"] })
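The prepend-and-merge rule can be sketched as a small helper. The function name and the use of ":" as the separator are assumptions for illustration (":" is the Unix convention):

```python
# Sketch of the described merging: new directories are prepended to any
# existing $PYTHONPATH; a single string behaves like a one-item list.
def merge_pythonpath(new_dirs, existing):
    if isinstance(new_dirs, str):
        new_dirs = [new_dirs]
    parts = list(new_dirs)
    if existing:
        parts.append(existing)
    return ":".join(parts)
```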
description
descriptionDone
description, this may either be a list of short strings or a single string.
If neither
description nor
descriptionDone are set, the
actual command arguments will be used to construct the description.
This may be a bit too wide to fit comfortably on the Waterfall
display.
f.addStep(ShellCommand,
          command=["make", "test"],
          description=["testing"],
          descriptionDone=["tests"])

The Compile step scans the build output for warning messages, a summary log is
created with any problems that were seen, and the step is marked as
WARNINGS if any were discovered. The number of warnings is stored in a
Build Property named “warnings-count”, which is accumulated over all
Compile steps (so if two warnings are found in one step, and three are
found in another step, the overall build will have a
“warnings-count” property of 5).
The default regular expression used to detect a warning is
'.*warning[: ].*' , which is fairly liberal and may cause
false-positives. To use a different regexp, provide a
warningPattern= argument, or use a subclass which sets the
warningPattern attribute:
f.addStep(Compile(command=["make", "test"], warningPattern="^Warning: "))
The
warningPattern= can also be a pre-compiled python regexp
object: this makes it possible to add flags like
re.I (to use
case-insensitive matching).
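The counting behaviour can be illustrated with plain re; the log lines below are made up:

```python
import re

# The default liberal pattern from above; re.I gives case-insensitive matching.
warning_re = re.compile(r'.*warning[: ].*', re.I)

step1_log = ["WARNING: deprecated API", "all ok"]
step2_log = ["foo.c:12: warning: unused variable",
             "bar.c:3: Warning: shadowed declaration",
             "build finished"]

# One count per Compile step, accumulated into a single total,
# like the "warnings-count" build property described above.
counts = [sum(1 for line in log if warning_re.match(line))
          for log in (step1_log, step2_log)]
total = sum(counts)
```

With these inputs the per-step counts are 1 and 2, giving an accumulated total of 3 — the same accumulation the "warnings-count" property performs across steps.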
(TODO: this step needs to be extended to look for GCC error messages as well, and collect them into a separate logfile, along with the source code filenames involved.)

from buildbot.steps.shell import ShellCommand, WithProperties
f.addStep(ShellCommand,
          command=["tar", "czf",
                   WithProperties("build-%s.tar.gz", "revision"),
                   "source"])
If this BuildStep were used in a tree obtained from Subversion, it would create a tarball with a name like build-1234.tar.gz.
The
WithProperties function does
printf-style string
interpolation, using strings obtained by calling
build.getProperty(propname). Note that for every
%s (or
%d, etc), you must have exactly one additional argument to
indicate which build property you want to insert.
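The positional form can be mimicked with ordinary %-formatting. This interpolate helper is a made-up stand-in for what WithProperties is described as doing, not the real implementation:

```python
# Hypothetical sketch of positional, printf-style property interpolation:
# one extra argument names the property behind each %s / %d.
def interpolate(fmt, properties, *propnames):
    return fmt % tuple(properties[name] for name in propnames)

build_props = {"revision": 1234}
name = interpolate("build-%s.tar.gz", build_props, "revision")
```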
You can also use python dictionary-style string interpolation by using
the
%(propname)s syntax. In this form, the property name goes
in the parentheses, and WithProperties takes no additional
arguments:
f.addStep(ShellCommand, command=["tar", "czf", WithProperties("build-%(revision)s.tar.gz"), "source"])
Don't forget the extra “s” after the closing parenthesis! This is
the cause of many confusing errors.
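The dictionary form maps directly onto Python's own %(name)s formatting, which is presumably why the trailing “s” is mandatory:

```python
# Python's native dict-style %-formatting, the same syntax WithProperties
# is described as using: the property name goes inside the parentheses.
build_props = {"revision": 1234}
name = "build-%(revision)s.tar.gz" % build_props
# Dropping the trailing "s" ("%(revision).tar.gz") raises a ValueError,
# because the character after the parentheses is parsed as a conversion spec.
```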
The following build properties are set when the build is started, and are available to all steps.
branch
This will be None (which interpolates into
WithProperties as an empty string) if the build is on the default branch, which is generally the trunk. Otherwise it will be a string like “branches/beta1.4”. The exact syntax depends upon the VC system being used.
revision
If the “force build” button was pressed, the revision will be
None, which means to use the most recent revision available.
This is a “trunk build”. This will be interpolated as an empty
string.
got_revision
This is the same as
revision, except for trunk builds, where
got_revision indicates what revision was current when the checkout was performed. This can be used to rebuild the same source code later.
Note that for some VC systems (Darcs in particular), the revision is a
large string containing newlines, and is not suitable for
interpolation into a filename.
buildername
buildnumber
slavename
Here are some BuildSteps that are specifically useful for projects implemented in Python.
epydoc is a tool for generating API documentation for Python modules from their docstrings. It reads all the .py files from your source tree, processes the docstrings therein, and creates a large tree of .html files (or a single .pdf file).
The
buildbot.steps.python.BuildEPYDoc step runs epydoc over the source tree. Add --pdf to the command to generate a PDF file instead of a large tree of HTML files.
The API docs are generated in-place in the build tree (under the workdir, in the subdirectory controlled by the “-o” argument). To make them useful, you will probably have to copy them to somewhere they can be read. A command like rsync -ad apiref/ dev.example.com:~public_html/current-apiref/ might be useful. You might instead want to bundle them into a tarball and publish it in the same place where the generated install tarball is placed.
from buildbot.steps.python import BuildEPYDoc ... f.addStep(BuildEPYDoc, command=["epydoc", "-o", "apiref", "source/mypkg"])
from buildbot.steps.python import PyFlakes
...
f.addStep(PyFlakes, command=["pyflakes", "src"])

To move generated files around, see FileUpload and FileDownload. Remember that
neither of these commands will create missing directories for you.
The counterpart to the Triggerable described in section see Build Dependencies is the Trigger BuildStep.
from buildbot.steps.trigger import Trigger
f.addStep(Trigger, schedulerNames=['build-prep'],
          waitForFinish=True,
          updateSourceStamp=True)
While it is a good idea to keep your build process self-contained in the source code tree, sometimes it is convenient to put more intelligence into your Buildbot configuration. One was.
TODO: add more description of
For a complete list of the methods you can call on a LogFile, please
see the docstrings on the
IStatusLog class in
buildbot/interfaces.py.

In the Trial step,
Trial.__init__ ends with the following clause:
# this counter will feed Progress along the 'test cases' metric
counter = TrialTestCaseCounter()
self.addLogObserver('stdio', counter)
This creates a TrialTestCaseCounter and tells the step that the
counter wants to watch the “stdio” log. The observer is
automatically given a reference to the step in its
.step
attribute.
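A counter like the one being described might be sketched as follows. The method name and the line format matched here (Trial-style “test_name ... [OK]” lines) are assumptions for illustration:

```python
import re

# Hypothetical sketch of a log observer that counts finished test cases
# by watching each line of the "stdio" log.
class TrialTestCaseCounter:
    _line_re = re.compile(r'^.* \.\.\. \[([^\]]+)\]$')

    def __init__(self):
        self.num_tests = 0

    def outLineReceived(self, line):
        # every line matching "NAME ... [RESULT]" counts as one finished case
        if self._line_re.match(line):
            self.num_tests += 1
```

Feeding it log output line by line yields a running total that can drive the 'test cases' progress metric.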
The most common way to run a
WebStatus is on a regular TCP
port. To do this, just pass in the TCP port number when you create the
WebStatus instance; this is called the
http_port argument:
from buildbot.status.html import WebStatus c['status'].append(WebStatus(8080))
The
http_port argument is actually a “strports specification” string. In addition, adding one or more “category=” query arguments to the URL will limit the display to Builders that were defined with one of the given categories.

A 'show_events=true' query argument causes the display to include non-Build events, like slaves attaching and detaching, as well as reconfiguration events. 'show_events=false' hides these events. The default is to show them.

Other pages are available too: /builders/$BUILDERNAME/builds/$BUILDNUM for access to individual builds,
/changes for access to information about source code changes,
etc.
By default, the message will be sent to the Interested Users list (see Doing Things With Users), which includes all developers who made changes in the build. You can add additional recipients with the extraRecipients argument.

from buildbot.status.mail import MailNotifier
mn = MailNotifier(fromaddr="buildbot@example.org",
                  sendToInterestedUsers=False,
                  extraRecipients=['listaddr@example.org'])
fromaddr
sendToInterestedUsers
extraRecipients
subject
%(builder)s will be replaced with the name of the builder which provoked the message.
mode
all
failing
problem
builders
categories
addLogs
relayhost
lookup
(an object which provides IEmailLookup): responsible for mapping User names (which come from the VC system) into valid email addresses. If not provided, the notifier will only be able to send mail to the addresses in the extraRecipients list.

from buildbot.status import words
irc = words.IRC("irc.example.org", "botnickname",
                channels=["channel1"],
                password="mysecretpassword")
c['status'].append(irc)
Take a look at the docstring for
words.IRC for more details on
configuring this service. The
password argument, if provided,
will be sent to Nickserv to claim the nickname: some IRC servers will
not allow clients to send private messages until they have logged in
with a password.
import buildbot.status.client
pbl = buildbot.status.client.PBListener(port=int, user=str,
                                        passwd=str)
c['status'].append(pbl)
This sets up a PB listener on the given TCP port, to which a PB-based
status client can connect and retrieve status information.
buildbot statusgui (see statusgui) is an example of such a
status client. The
port argument can also be a strports
specification string.
TODO: this needs a lot more examples

buildbot create-slave BASEDIR MASTERHOST:PORT SLAVENAME PASSWORD
This creates a new directory and populates it with files that let it be used as a buildslave's base directory. You must provide several arguments, which are used to create the initial buildbot.tac file.
buildbot create).
If you have set up a PBListener (see .
(see PBChangeSource) running in the buildmaster (by being set in
c['change_source']).

buildbot.changes.mail.BonsaiMaildirSource: BonsaiMaildirSource
buildbot.changes.mail.FCMaildirSource: FCMaildirSource
buildbot.changes.mail.SVNCommitEmailMaildirSource: SVNCommitEmailMaildirSource
buildbot.changes.mail.SyncmailMaildirSource: SyncmailMaildirSource
buildbot.scheduler.Triggerable: Build Dependencies
[1] this @reboot syntax is understood by Vixie cron, which is the flavor usually provided with linux systems. Other unices may have a cron that doesn't understand @reboot
[2] except Darcs, but since the Buildbot never modifies its local source tree we can ignore the fact that Darcs uses a less centralized model
[3] many VC systems provide more complexity than this: in particular the local views that P4 and ClearCase can assemble out of various source directories are more complex than we're prepared to take advantage of here
[4]
[12] Apparently this is the same way displays build status
I'd like to reflect on what I've learned this week. There aren't any specific concepts that are of focus, this is essentially the net of what I googled throughout the week.
Pattern matching on an empty map
Suppose you have a map that contains the following properties
foo = %{bar: "baz"}
Suppose that you want to apply a function to the map
foo. If you want to check whether or not the map
foo is empty, naively you would want to write a function declaration of the following
def some_fun(%{}) do
  # ... do some stuff
end

def some_fun(%{bar: bar} = some_args) do
  # ... do some other stuff
end
You would think that passing
foo into
some_fun that the first declaration would be called if
foo were empty. Well, nope, that's not the case. Instead the first function declaration would match on any parameter that is of type
map even if it were empty. Instead to write a function declaration that would match empty maps you would need to leverage a guard statement in the function declaration.
def some_fun(some_args) when some_args == %{} do
  # ... do some stuff
end
Anonymous functions don't need an explicit parameter
I was surprised by this, but anonymous functions also support pattern matching that doesn't require an explicit parameter. This seems like a nice way to get reusable case statements.
handle_result = fn
  {:ok, result} ->
    # do some work
  {:error, error} ->
    # do some work
end
Here you could reuse this in a set of statements that have common logic.
iex
iex is an interactive repl that allows you to work with your application without the overhead of running your application in full.
Prototyping
Prototyping in elixir is a breath of fresh air. You don't need to worry about setting break points in your code in order to interact with the current program state. Instead you can use iex to load in all of your elixir applications to work with functions and application state all from within a repl.
Running
iex -S mix allows you to load all of your application into memory. From there you can run any functions defined in modules of your applications that are loaded by your
mix.exs.
For the project I work on at Vetspire, we utilize Phoenix. Phoenix is a web framework written for elixir. Suppose there's an in memory cache that I want to introspect after making a call to an API in the app that's run by Phoenix. I could set a debugger statement at the point in which we modify the API and review the cache while the breakpoint is being hit, or I could simply run the application, make a call to the API and then invoke a get function in the iex repl.
# Load and run the Phoenix application and begin an interactive repl session
iex -S mix phx.server
# Make a call to the API externally
iex(0)> MyApp.SomeCache.list_all()
# get a listing of entries that were added to the cache.
I don't know about you, but I find this changes the way that I look at building applications. Normally while going through early prototyping phases you have to make changes to the code, wait for a new compilation phase and then make a call to the corresponding API from the client application to test those changes. Instead I can make direct calls to application code while the application is being run.
Libraries
Timex
If you're looking for a good datetime library, this seems to be the way to go. I especially like the fact that the goal of the library is to get the work merged into the standard language of elixir.
Memento
If you need to work with persistent and in-memory databases, Memento makes that much easier to do. It adds a nice api to working with both
:ets and
mnesia, the in-memory and distributed persistent databases that are provided by erlang, respectively.
Quantum
If you need simple cron behaviors for jobs in your app, this is the way to go.
Also, I seem to be getting:
Bad flag (Unrecognized or unsupported array type)
Yes, it seems I am getting empty frames from the capture even in the first iteration. Why would this be happening? Already removed the useless line
Sorry about that, it was for something I was trying earlier!
Ive tried both using '/' and a '\', still get the same error.
I am trying to apply MOG background subtraction to an acquired video, stored in .avi format.
I have taken the link at here but have changed it.
Im using the CaptureVideo as:
#include "stdafx.h"
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
#include <cvaux.h>
#include <opencv2/video/background_segm.hpp>
#include <stdio.h>
#include <opencv2/video/video.hpp>
using namespace cv;
using namespace std;
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by the MOG method
int keyboard; //input from keyboard
Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor
void processImages(char* firstFrameFilename);
void processVideo(string name);//(char* videoFilename);
string path = "C:\\Users\\Suramrit Singh\\Documents\\Visual Studio 2010\\Projects\\opencvtest\\opencvtest\\walk2.avi";
int main()
{
namedWindow("Frame");
namedWindow("FG Mask MOG");
pMOG= new BackgroundSubtractorMOG(); //MOG approach
//processImages("asd1.jpg");--- cant work on single images
processVideo(path);//("tusharwalk2.avi");
destroyAllWindows();
return EXIT_SUCCESS;
}
void processVideo(string name){//(char* videoFilename) {
//create the capture object
VideoCapture capture(path);
printf("Sucess?%d",capture.isOpened());
for(int i=0;i<6000;i++)
{
capture >> frame;
capture.read(frame);
pMOG->operator()(frame, fgMaskMOG); // -----------------creatingproblem----------------------
IplImage* image2=cvCloneImage(&(IplImage)frame);
printf("%d", i);
imshow("FG Mask MOG", fgMaskMOG);
keyboard = waitKey(1);
}
//delete capture object
capture.release();
}
Now if I change it to default cam VideoCapture capture(0), it works, but does not do so with the .avi file path provided and gives the following runtime error:
"Unhandled exception at 0x000007fd7835811c in opencvtest.exe: Microsoft C++ exception: cv::Exception at memory location 0x0081eb60.."
Please, I'm a newcomer to image processing and any help will be highly appreciated as I've been stuck at this problem for a few days now and can't seem to figure it out.
So you mean I take a measured object in the image and use that to calculate the distance of the step?
How do I use this reference, any ideas?
As a requirement for a project, I need to find the walking step distance taken by a person, whose image while walking has been acquired.
I have acquired a binary silhouette of the person using background subtraction and now need to calculate the actual physical distance of the step.
Is there a method to convert pixel distance to physical distance in OpenCV?
And if not, then what method can be applied to achieve the same?
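One common approach, building on the "measured object" suggestion above: place an object of known physical size in the scene at roughly the same depth as the person, measure it in pixels, and scale. A rough Python sketch of the idea (the function name is made up, and it assumes the reference object and the step lie at the same distance from the camera):

```python
# Convert a pixel distance to a physical distance using a reference object
# of known real-world size, assumed to be at the same depth as the subject.
def pixels_to_metres(step_pixels, ref_pixels, ref_metres):
    metres_per_pixel = ref_metres / ref_pixels
    return step_pixels * metres_per_pixel
```

For example, if a 1 m reference stick spans 300 px, a 150 px step length corresponds to about 0.5 m. For anything more accurate (varying depth, lens distortion) you would need proper camera calibration.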
GNU libextractor works immediately with C and C++ code. Bindings for Java, Mono, Ruby, Perl, PHP and Python are available for download from the main GNU libextractor website. Documentation for these bindings (if available) is part of the downloads for the respective binding. In all cases, a full installation of the C library is required before the binding can be installed.
Compiling the GNU libextractor Java binding follows the usual process of running configure and make. The result will be a shared C library libextractor_java.so with the native code and a JAR file (installed to $PREFIX/share/java/libextractor.java).
A minimal example for using GNU libextractor's Java binding would look like this:
import org.gnu.libextractor.*;
import java.util.ArrayList;

public static void main(String[] args) {
  Extractor ex = Extractor.getDefault();
  for (int i=0;i<args.length;i++) {
    ArrayList keywords = ex.extract(args[i]);
    System.out.println("Keywords for " + args[i] + ":");
    for (int j=0;j<keywords.size();j++)
      System.out.println(keywords.get(j));
  }
}
The GNU libextractor library and the libextractor_java.so JNI binding have to be in the library search path for this to work. Furthermore, the libextractor.jar file should be on the classpath.
Note that the API does not use Java 5 style generics in order to work with older versions of Java.
This binding is undocumented at this point.
This binding is undocumented at this point.
This binding is undocumented at this point.
This binding is undocumented at this point.
This binding is undocumented at this point.
The IDE As a Bad Programming Language Enabler
Soulskill posted about 2 years ago | from the real-coders-use-notepad-and-mt-dew dept.
But eclipse is terrible at navigation (0)
MichaelSmith (789609) | about 2 years ago | (#41815441)
I wrote a simple file system indexer at my last job. Enter part of a class name and it loads the source file in an editor. Eclipse failed because projects have to be small. This application had > 200 projects and any non-trivial tasks would have required working with 100 projects at a time.
Re:But eclipse is terrible at navigation (5, Funny)
santax (1541065) | about 2 years ago | (#41815477)
Re:But eclipse is terrible at navigation (5, Funny)
hcs_$reboot (1536101) | about 2 years ago | (#41815777)
being head developer of Apple Maps must earn really good!
Well, he's just been fired [wsj.com] . So now he has enough time to spend on slashdot, lucky him. And welcome!
Re:But eclipse is terrible at navigation (5, Interesting)
Anonymous Coward | about 2 years ago | (#41815673):But eclipse is terrible at navigation (0)
Anonymous Coward | about 2 years ago | (#41815697)
I must say this is my favorite feature of Slickedit - quick open even with projects with 1000s of files.
Re:But eclipse is terrible at navigation (0)
Anonymous Coward | about 2 years ago | (#41815721)
Emacs could do that already in the last century.
Re:But eclipse is terrible at navigation (0)
Anonymous Coward | about 2 years ago | (#41815763)
But in Eclipse you would just ctrl-T, then enter the initials (or name, or wildcarded name) of the class you want, and it'll show up for opening. Or ctrl-R for resources search.
Eclipse works excellently with large projects comprised of multiple sub-projects. But you do need to learn how to use it effectively.
Word (5, Insightful)
genocism (2577895) | about 2 years ago | (#41815459)
Re:Word (5, Funny)
equex (747231) | about 2 years ago | (#41815473)
Re:Word (5, Funny)
gmhowell (26755) | about 2 years ago | (#41815549)
i code with a battery, a resistor and hit the cpu pins with it
Still haven't mastered butterflies, n00bz?
Re:Word (5, Funny)
Anonymous Coward | about 2 years ago | (#41815839)
i code with a battery, a resistor and hit the cpu pins with it
Still haven't mastered butterflies, n00bz?
What did you think where Sandy came from?
Re:Word (1)
WillKemp (1338605) | about 2 years ago | (#41815829)
The toggle switches on the front panel are broken from overuse, i guess?
Re:Word (0)
santax (1541065) | about 2 years ago | (#41815489)
Re:Word (1)
Anonymous Coward | about 2 years ago | (#41815523)
Emacs is the right tool for every job!
Re:Word (3, Funny)
santax (1541065) | about 2 years ago | (#41815537)
Re:Word (0)
Anonymous Coward | about 2 years ago | (#41815725)
How will I save the time using vim? I know my way around that UI, I just do not see any time saving features I can not get in Eclipse.
Re:Word (1, Informative)
WillKemp (1338605) | about 2 years ago | (#41815857)
If you're writing it (and the language is supported) do it in Vim.
If you've got a clue about programming, it doesn't matter if the language is supported or not. All you need is a text editor. But not vim, not unless you're doing a quick fix of something. I use vim heaps, but not for programming. I use geany for that. Gui text editors are so much more convenient than vim for anything other than editing config files.
Re:Word (4, Insightful)
lattyware (934246) | about 2 years ago | (#41815531)
Re:Word (3, Insightful)
mwvdlee (775178) | about 2 years ago | (#41815705)
So basically his entire argument boils down to "My programming language is better than yours"?
Re:Word (4, Insightful)
lattyware (934246) | about 2 years ago | (#41815823)
Re:Word (4, Funny)
WillKemp (1338605) | about 2 years ago | (#41815871)
So basically his entire argument boils down to "My programming language is better than yours"?
Of course. My programming language is always better than yours. This is slashdot after all!
Re:Word (1)
MartinSchou (1360093) | about 2 years ago | (#41815833)
Serious question:
If the language itself is generating setters and getters, how do you differentiate between truly private variables and variables that should be accessible from outside (but through verified inputs/outputs)?
Re:Word (2)
lattyware (934246) | about 2 years ago | (#41815895)
Re:Word (5, Insightful)
daem0n1x (748565) | about 2 years ago | (#41815689)

Re:Word (5, Insightful)
RobinH (124750) | about 2 years ago | (#41815813)
Re:Word (3, Interesting)
daem0n1x (748565) | about 2 years ago | (#41815949)
According to the Wikipedia page: "It is designed to introduce programming to artists and other newcomers unfamiliar with software development".
I train engineers, not artists.
Re:Word (1, Troll)
chrismcb (983081) | about 2 years ago | (#41815931)
Re:Word (1)
daem0n1x (748565) | about 2 years ago | (#41815961)
Who the fuck is Willie? (4, Insightful)
thammoud (193905) | about 2 years ago | (#41815465)
/.? (0)
Anonymous Coward | about 2 years ago | (#41815743)
Ah. So emacs is an IDE now?
Re:Who the fuck is Willie? (5, Funny)
91degrees (207121) | about 2 years ago | (#41815825)
Just needs a good editor.
Re:Who the fuck is Willie? (1)
bhaak1 (219906) | about 2 years ago | (#41815879)
No, it's always been one. Certainly for Emacs Lisp, with additional code (that is included nowadays) also for other programming languages. Surprisingly even Wikipedia agrees with that.
Re:Who the fuck is Willie? (1)
Hognoxious (631665) | about 2 years ago | (#41815751)
Groundskeeper Willie out of The Simpsons, who is a true Scotsman?
Which brings us nicely, and why not, to this from TFA:
Not sure I agree with the conclusion... (4, Interesting)
MadKeithV (102058) | about 2 years ago | (#41815467)

Re:Not sure I agree with the conclusion... (2)
lattyware (934246) | about 2 years ago | (#41815517)
Re:Not sure I agree with the conclusion... (3, Informative)
bickerdyke (670000) | about 2 years ago | (#41815545)

Did he already heard about integrated debugger? (5, Insightful)
Anonymous Coward | about 2 years ago | (#41815469):Did he already heard about integrated debugger (1)
lattyware (934246) | about 2 years ago | (#41815511)
I would argue needing a debugger is also a sign of language flaws. Debuggers help you find issues with your code while it runs. I've found that so much of the time those kinds of issues are from stuff like null objects - where you get an exception from a null object and then have to crawl up your code finding out where it came from. If the language was sane and threw exceptions on problems rather than returning null, there would be far fewer issues.
Not saying that debuggers are useless or that every problem a debugger is useful for could be solved, just that if you find yourself needing it often, maybe it's a sign something is going wrong with the language.
Re:Did he already heard about integrated debugger (0, Flamebait)
Anonymous Coward | about 2 years ago | (#41815551)
I would argue needing a debugger is also a sign of language flaws.
You would lose that argument.
Re:Did he already heard about integrated debugger (1)
lattyware (934246) | about 2 years ago | (#41815573)
Re:Did he already heard about integrated debugger (1)
Anonymous Coward | about 2 years ago | (#41815627)
I'll take a shot then.
A debugger can give you lots of information when the exception occurs. You can visually watch the flow of program execution, inspect the values of all variables in scope, modify them on the fly, or even execute arbitrary code statements while you are in the middle of debugging the issue, say at a breakpoint just above where the error occurs.
A thrown exception doesn't give you any of these things. It provides better logs, maybe, but a debugger gives you so much more. An exception gives you a stack trace; a debugger opens up the guts and lets you poke around as much as you want.
Your assertion that needing a debugger is a sign of language flaws is absurd. Period.
Re:Did he already heard about integrated debugger (1)
lattyware (934246) | about 2 years ago | (#41815793)
Re:Did he already heard about integrated debugger (2)
Alkonaut (604183) | about 2 years ago | (#41815643)
Re:Did he already heard about integrated debugger (1)
Molt (116343) | about 2 years ago | (#41815679)
Exceptions are intended to be used when a program hits unexpected or fatal issues which cannot be handled locally, and often the low-level library code isn't in a position to be able to judge whether something qualifies as worthy of an exception or whether it's an expected part of the processing cycle and can be safely be ignored.
Take for example asking to open a file for reading and the file not being available for some reason. If I'm just copying a large directory structure then I can reasonably expect to not be able to open a few files due to permissions and, while I'll likely want to log these and display them, treating it as an exception wouldn't be suitable. If my code is part of an online application and it was failing to open a configuration file which it needed to connect to the database, then the error would be worthy of promoting to an exception if it could not be rectified in the code which detected it. Ultimately though, for this type of 'It could be serious, it could be nothing' situation, the decision should be left to the client code rather than the library.
In my mind a better approach to fixing this type of error is better support for types which cannot be set to null. For example, if I have a FileHandle variable which cannot be set to null and the File.Open() method returns a nullable type, then there's going to be a compile-time error, which is the best type of error. This will point out where I'm 'assuming' that the value is not null, and as I'm fixing the compile error I'll naturally add the correct checks, as the 'This could be null..' issue has been highlighted for me.
Re:Did he already heard about integrated debugger (3, Insightful)
lattyware (934246) | about 2 years ago | (#41815809)
Re:Did he already heard about integrated debugger (4, Insightful)
mwvdlee (775178) | about 2 years ago | (#41815737)
Re:Did he already heard about integrated debugger (1)
lattyware (934246) | about 2 years ago | (#41815847)
Re:Did he already heard about integrated debugger (1)
chrismcb (983081) | about 2 years ago | (#41815963)
Re:Did he already heard about integrated debugger (1)
Hognoxious (631665) | about 2 years ago | (#41815779)
TypeMismatchException: expected integer, found float at or near line 237. Bailing...
Re:Did he already heard about integrated debugger (2)
dkf (304284) | about 2 years ago | (#41815845)
Re:Did he already heard about integrated debugger (0)
Anonymous Coward | about 2 years ago | (#41815947)
I'd argue against it.
More importantly, they allow you to examine the state of your program when an issue arises. In other words, you can test, quite on the spot, any assumptions you have about the state of the program, and thus can more easily determine what went wrong.
Yes, about everything you could do with a debugger you could do with well-placed print statements, and lots of recompiling and rerunning. But why waste your time with that if you can just look up the value in the debugger without changing the code?
Re:Did he already heard about integrated debugger (1)
santax (1541065) | about 2 years ago | (#41815565)
Re:Did he already heard about integrated debugger (3, Informative)
darkat (697582) | about 2 years ago | (#41815921)
Exaggeration quite much? (0)
Anonymous Coward | about 2 years ago | (#41815481)
I would still use an IDE when given the chance (if it's easy enough to set up), even if a simple one. I use Code::Blocks for C programs for editing, the projects (it keeps track of source files and build options) and for the build button (so, text editor and make all-in-one). Granted, I don't use many of the features, such as autocompletion (which always gets in the way) or smart tabs (sometimes get in the way; I'm happy enough with just copying the indentation of the previous line).
Re:Exaggeration quite much? (4, Insightful)
mrbluze (1034940) | about 2 years ago | (#41815555)
Good luck with that (4, Insightful)
Kergan (780543) | about 2 years ago | (#41815491)
only an IDE could love AbstractSingletonProxyFactoryBean.
Uh huh? Naturally, class names such as ASPFB and GDMF and RSAP are evidently more lovable. So much simpler to write...
Re:Good luck with that (0)
Anonymous Coward | about 2 years ago | (#41815561)
Methinks the issue is more that in Java we need to proxy a lot of stuff, and it isn't exactly a one-liner to do anything related to proxies; there are actually no language features to help here.
Re:Good luck with that (2)
91degrees (207121) | about 2 years ago | (#41815811)
Re:Good luck with that (4, Funny)
Robert Zenz (1680268) | about 2 years ago | (#41815935)
Uh huh? Naturally, class names such as ASPFB and GDMF and RSAP are evidently more lovable. So much simpler to write...
MyClass, MyConn and MyFunc come to mind...*shudders*
100% (4, Insightful)
lattyware (934246) | about 2 years ago | (#41815495)
Re:100% (2)
digitalchinky (650880) | about 2 years ago | (#41815731)
Re:100% (2, Insightful)
lattyware (934246) | about 2 years ago | (#41815835)
Re:100% (0)
Anonymous Coward | about 2 years ago | (#41815977)
Exactly the same here. No problem coding Java like that, using a simple Makefile and ant to compile. Moreover I stay right away from syntax highlighting, with simple brace and bracket matching being sufficient.
Re:100% (0)
Anonymous Coward | about 2 years ago | (#41815815)
But at least three major IDEs exist for Java, so there is plenty to select from. It is a little like criticizing a language for the amount of work it would take to manually translate the code to native code if no compiler existed.
Re:100% (1)
lattyware (934246) | about 2 years ago | (#41815905)
I code in C#, (3, Informative)
gigaherz (2653757) | about 2 years ago | (#41815497)
Re:I code in C#, (1)
santax (1541065) | about 2 years ago | (#41815683)
Eclipse is better if you are a beginner (5, Insightful)
bhaak1 (219906) | about 2 years ago | (#41815507)
Re:Eclipse is better if you are a beginner (0)
Anonymous Coward | about 2 years ago | (#41815749)
Re:Eclipse is better if you are a beginner (1)
bhaak1 (219906) | about 2 years ago | (#41815929)
Maybe it's a bit of a ramble. But apparently the rest of my post is good enough to straighten out the weak intro.
;-)
IDE pros & cons (5, Insightful)
Anonymous Coward | about 2 years ago | (#41815563)
More like Willie Failer (0)
Anonymous Coward | about 2 years ago | (#41815569)
amiright?
Attention seeker (5, Insightful)
Anonymous Coward | about 2 years ago | (#41815571)
Re:Attention seeker (1)
lattyware (934246) | about 2 years ago | (#41815611)
Re:Attention seeker (1)
Anonymous Coward | about 2 years ago | (#41815693)
(Same AC)
I see what you are saying. Certainly there are rough edges around any language; getters and setters in Java are annoying. In .NET the syntax is much more succinct.
However, there needs to be a distinction here between language and practice. You aren't forced to use getters and setters. You use them because it is considered good practice. You use them because they give you something you need.
If there were no getters and setters, what would take their place whilst maintaining encapsulation and compile-time type safety? If type safety isn't a concern then you wouldn't be using Java. If encapsulation wasn't a concern you wouldn't use getters and setters. There has to be something there, and it isn't going to write itself. So what's the alternative? How could we avoid this boilerplate code but still keep encapsulation and type safety? How can the language infer this for you? (Again, this is where the .NET properties syntax is far superior - but I wouldn't use .NET without an IDE either.)
The argument seems to be more about static vs dynamic languages at this point. Boilerplate code is the cost of type safety and writing maintainable and testable software. That doesn't make the language bad, especially not for large code bases used by large teams.
Nor does it mean that the need for an IDE is a 'smell'. The IDE gives you the boilerplate code which gives you the type safety and encapsulation. If you don't need the latter then don't use a language that is written with the assumption that you do. And if you do need the latter, then something has to be written, and you're going to be much better off with the IDE. The IDE is controlled by you, tailored to do what you want. The language, being far more general purpose than what you want to use it for, can't be. That's why the IDE is needed.
Re:Attention seeker (1)
lattyware (934246) | about 2 years ago | (#41815861)
Re:Attention seeker (1)
Anonymous Coward | about 2 years ago | (#41815741)
Java does not force you to write getters and setters. In fact, mindlessly writing / generating trivial getters and setters is not a very good practice, and Java does it right by forcing you to write / generate them in the cases where they are needed. The problem is that people don't apply encapsulation because encapsulation means you have to write reasonably structured code and not a fucking mess.
Java has many real, documented problems, but IDEs are not one of them.
Re:Attention seeker (1)
lattyware (934246) | about 2 years ago | (#41815867)
Re:Attention seeker (3, Insightful)
dkf (304284) | about 2 years ago | (#41815819)
Refactoring? (0)
Anonymous Coward | about 2 years ago | (#41815575)
I call bullshit on this. One of the major reasons to use a modern IDE is the support for refactorings. Renaming variables, fields, and functions/methods is a major pain in the butt in any language if you have to do it manually. Having tools to do this instantly is a major reason for using IDEs and the like.
Also, whether classes are located in a single file or across multiple files is completely irrelevant. Having a tool that allows you to navigate efficiently in the code improves performance of the programmer in any case.
The reason that people aren't using IDEs for niche languages is probably that the tool support is nowhere near what it is for Java.
Refactoring can be done by any decent editor (1)
Viol8 (599362) | about 2 years ago | (#41815757)
"Renaming variables, fields, and functions/methods is a major pain in the butt in any language if you have to do it manually. Having tools to do this instantly is a major reason for using IDEs and the like."
You mean like "%s/old name/new name/g" that the vi editor has had since the 80s? Perhaps you should try using a programmers editor. You might be surprised at just how little extra an IDE gives you.
"Having a tool that allows you to navigate efficiently in the code improves performance of the programmer in any case."
2 or more xterms and "grep". I find that combination far more efficient than some bloatware IDE. Admittedly I'm a C++ coder, not Java, but I've used some IDEs and found them wanting. They want to be an entire coding "environment". Thanks, but I already have one - it's called the operating system. However YMMV.
Re:Refactoring can be done by any decent editor (0)
Anonymous Coward | about 2 years ago | (#41815849)
"%s/old name/new name/g" will replace all occurrences of the text 'old name', regardless of its semantic meaning in the program, and only in this single file. For example if you have a local field 'foo' and you want to change the name to 'bar', replacing the text will also break the point in your code where you call some external method also called 'foo'. Also it won't update places in other files where they refer to 'foo' in your file. This won't happen with refactoring support in an IDE.
You already knew that...
Re:Refactoring can be done by any decent editor (4, Interesting)
91degrees (207121) | about 2 years ago | (#41815865)
Re:Refactoring can be done by any decent editor (1)
anarcobra (1551067) | about 2 years ago | (#41815915)
That only works if I have globally unique names for every variable I might want to rename.
It also means I have to do the same thing for each separate file if the editor doesn't do replace in several files at once.
Even for emacs they made cedet, which gives you a bunch of these features.
Java IDEs (0)
Anonymous Coward | about 2 years ago | (#41815587)
The problem is that Eclipse is one of the worst IDEs ever, and Java, as a whole, has crappy IDEs. IntelliJ is by far the best, in my opinion, since it is less buggy, crash-prone, and confusing than Eclipse, which is more a collection of components than it is an IDE - but even IntelliJ is still subject to Java GC pauses. Really, you can't write a good IDE in Java, and therefore Java developers have bad IDEs. Visual Studio isn't a really great IDE, but at least it doesn't pause while you are typing, and it has a unified sense to it that you don't get with Eclipse. MonoDevelop is pretty good too, or FlashDevelop, or really any IDE where at least the UI thread isn't subject to GC.
IDEs are very useful,and it's not language's fault (5, Informative)
coder111 (912060) | about 2 years ago | (#41815597)
I like Eclipse except for one flaw (2)
GoodNewsJimDotCom (2244874) | about 2 years ago | (#41815605)
Re:I like Eclipse except for one flaw (5, Insightful)
slim (1652) | about 2 years ago | (#41815709)
Re:I like Eclipse except for one flaw (1)
91degrees (207121) | about 2 years ago | (#41815911)
Re:I like Eclipse except for one flaw (1)
dkf (304284) | about 2 years ago | (#41815771)
Eclipse starts to get annoying type lag around 30k lines of code in a single file, as you get bigger and bigger upwards to 100k or more, it can take several seconds between characters you type.
I've emphasized your problem and it's not Java-specific at all; it'd be horrible in any language at all. Even in C, I consider 10kloc to be an excessively long file. I have one project with an 11kloc file, and I hate how long it is; it's in a module that someone else is in charge of though, so I'm hesitant to interfere; the project also has a separate file with a 6kloc function, but that can't be shortened as it is a bytecode execution core (before anyone asks, the project style is not very dense by comparison with how a lot of other people write C; different styles would be a lot shorter).
Split that code up a bit into sensible-sized pieces. Do it today. Define pieces that have sensible responsibilities and which expose sane APIs, and then don't violate those APIs. Sure, it takes a little time to do but it makes it so much easier to understand. As a side benefit, your tools will also become better able to help you, but that's not why you should do this: the greater comprehensibility to you (and your coworkers, assuming that's relevant) is far more important!
Re:I like Eclipse except for one flaw (0)
Anonymous Coward | about 2 years ago | (#41815887)
Nonetheless, there are times when you need to load 100KLOC files - maybe log files, or trace output, or some auto-generated XML (or even auto-generated C code). A serious IDE needs to handle large text files.
But yeah, *vomit* at a coder who puts entire 100KLOC projects into a single file. But, also, *vomit* at the teams that put that amount of code in, say, 5,000 files.
And IDE is just like any other tool . . . (4, Insightful)
PolygamousRanchKid (1290638) | about 2 years ago | (#41815609) (5, Insightful)
Alkonaut (604183) | about 2 years ago | (#41815621)
Light Table (1)
undulato (2146486) | about 2 years ago | (#41815649)
A conundrum (1)
Coisiche (2000870) | about 2 years ago | (#41815651)
So why do Java coders turn to Eclipse?
I don't know. I just don't know.
Re:A conundrum (0)
Anonymous Coward | about 2 years ago | (#41815773)
I don't know about IntelliJ, but I have the impression that since about 6.7, Netbeans is so far ahead of Eclipse for Java development that it is not even a contest. If you happen to use Maven, the comparison is specially ridiculous.
Granted, Eclipse has plugins for everything, and I guess it's better for Android development and all those "not really Java" things.
Why I like Eclipse (1)
Misagon (1135) | about 2 years ago | (#41815681)
While I agree with the author about Java, there are other things why I prefer to use Eclipse (over other editors/IDEs)
* The compare editor. Especially in conjunction with the SVN plugin. Very very useful.
* I can have more than one project open, and edit and compare files in both. It may seem like something trivial, but too many other IDEs are deficient in this regard.
What is he on.. (5, Insightful)
Rexdude (747457) | about 2 years ago | (#41815753)
I disagree. (0)
Anonymous Coward | about 2 years ago | (#41815789)
I have done projects in both jEdit and Eclipse. I have also done C++ projects, a language that allows you to stuff the interfaces in a few or even a single file. I have also dabbled in multiple other languages.
Basically, regardless of language, if you want to be productive, you want auto-completion. Yes, you can do without. Yes, having APIs in a few files makes it a bit more convenient to look them up. However, looking them up alone is too slow, especially if you are working on a bigger project with multiple people. Even if the interfaces are ALL in a single file (bad idea, generally), you still have to spend time to sift through it and you'll need to sacrifice precious screen real estate to display it. It's simply too slow to look up every time whether your coworker has called a function wiggle-the-frob or jiggle-the-frob (and yes, you keep forgetting it if the project is large enough).
And no, "you should be thinking a lot about each line of code" generally does not apply in real projects: the really smart parts of the code can take 0.5-2 days of deep thinking to write, but then you spend the next 2 weeks on code that just churns and moves data around, which is mind-numbingly dumb but necessary work. In the latter part of the coding, which takes up a lot more time, you already have the code roughly mapped out in your head before you even start writing it and you just want to write it down as fast as possible to get back to the interesting parts.
Also, refactoring: Regardless of language, if you want to rename a method, function, whatever, you want something that does it for you and understands the language. No, sed is not good enough, you don't want to fire some sed command, which has no awareness of the code semantics at 500+k lines of code and hope it does the right thing.
As far as the "thousands of small files, each class has its own file" thing goes: It has advantages and disadvantages, the plus side being that classes are self-contained. You can easily take them out, move them around, put them somewhere else without having to shred apart some big interface file and hope that you caught all the method declarations. Also, when you organize interfaces in larger units like namespaces etc., you do have to think about how to organize those. Yes, it may be worth it, or it may not be, but it's an overhead if you decide to do it and you'll have to stick with it, too.
As far as the "Java is so verbose" meme: sometimes it is the language, sometimes it is the people. People in large Java projects tend to over-engineer things (I'm not entirely sure why people do this; maybe abstraction is over-emphasized) with abstractions layered upon abstractions, and then you end up with something like the "AbstractSingletonProxyFactoryBean". But you don't have to do that; if you put some effort into it, you can keep it simple.
There are some parts that I really find needlessly verbose in Java like catching "checked" exceptions: Even when you provide PROOF that the exception CANNOT occur, you have to add 6 lines (in Allman style, it's a bit less in canonical Java/K&R style) of completely useless code. And yes, functional languages are very nice and elegant and I love them to death, but if one is honest, they haven't caught on, maybe that's a smell on its own.
The biggest enabler of bad software is (0)
Anonymous Coward | about 2 years ago | (#41815983)
the absolute refusal of C-typists to learn to program properly. And the blind refusal of good programmers to understand the difference between typing lots of code and actual software engineering. In my 22 years in industry, the number of actual software engineers I've encountered can be counted on one hand (literally). There were a fair number of good programmers. But for the most part, it is pretty dire.
And don't get me started on software architects.....
No amount of IDE religious belief will make you a good typist/programmer/engineer. | http://beta.slashdot.org/story/177053 | CC-MAIN-2014-35 | refinedweb | 4,963 | 66.98 |
Discover how to build models for multivariate and multi-step time series forecasting with LSTMs and more in my new book, with 25 step-by-step tutorials and full source code.
Let’s get started.
- Updated Oct/2016: Replaced graphs with more accurate versions, commented on the limited performance of the first method.
- Updated Mar/2017: Updated for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.
- Updated Apr/2019: Updated the link to dataset.
- Updated Sep/2019: Updated for Keras 2.2.5.
- Download the dataset (save as .
Once loaded we can easily plot the whole dataset. The code to load and plot the dataset is listed below.
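A minimal sketch of the load-and-plot step is below. The CSV filename is an assumption (use whatever name you saved the dataset under); the synthetic fallback series exists only so the sketch runs even without the file, and the real file may need footer rows skipped depending on how it was saved.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')            # headless backend so the script runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

try:
    # Hypothetical filename -- adjust to the name you saved the CSV under.
    # usecols=[1] keeps only the passenger-count column.
    series = pd.read_csv('airline-passengers.csv',
                         usecols=[1], engine='python').values.astype('float32')
except FileNotFoundError:
    # Stand-in with the same upward trend and yearly periodicity:
    # 144 monthly values, like the real dataset.
    t = np.arange(144)
    series = (100 + 2.5 * t + 30 * np.sin(2 * np.pi * t / 12)) \
        .astype('float32').reshape(-1, 1)

plt.plot(series)
plt.savefig('airline-passengers.png')
```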
You can see an upward trend in the plot.
You can also see some periodicity to the dataset that probably corresponds to the northern hemisphere summer holiday period.
Plot of the Airline Passengers Dataset
We are going to keep things simple and work with the data as-is.
Normally, it is a good idea to investigate various data preparation techniques to rescale the data and to make it stationary.
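As a small illustration of one such preparation step, here is a min-max rescaling sketch on a handful of hypothetical values (the tutorial itself works with the unscaled data):

```python
import numpy as np

# Min-max normalization to [0, 1]: one common rescaling for neural networks.
series = np.array([112.0, 118.0, 132.0, 129.0, 121.0])
lo, hi = series.min(), series.max()
scaled = (series - lo) / (hi - lo)
print(scaled.round(2))  # [0.   0.3  1.   0.85 0.45]
```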
Need help with Deep Learning for Time Series?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
Download Your FREE Mini-Course
Multilayer Perceptron Regression
We want to phrase the time series prediction problem as a regression problem: given the number of passengers this month, what is the number of passengers next month? We will look at constructing a differently shaped dataset in the next section.
Let’s take a look at the effect of this function on the first few rows of the dataset.
If you compare these first 5 rows to the original dataset sample listed in the previous section, you can see the X=t and Y=t+1 pattern in the numbers.
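A minimal sketch of a create_dataset() helper that produces this X=t, Y=t+1 framing is below. Treat the exact loop bounds as an assumption — indexing conventions vary between versions of this code, as the comments at the end of the post discuss.

```python
import numpy as np

def create_dataset(dataset, look_back=1):
    # Each sample uses `look_back` consecutive values as input (X)
    # and the value immediately after them as the target (Y).
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back):
        dataX.append(dataset[i:i + look_back])
        dataY.append(dataset[i + look_back])
    return np.array(dataX), np.array(dataY)

# First few observations of the airline series, in thousands of passengers.
series = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
X, y = create_dataset(series, look_back=1)
print(X[:3].flatten())  # [112. 118. 132.]
print(y[:3])            # [118. 132. 129.]
```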
Let’s use this function to prepare the train and test datasets ready for modeling.
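The split itself can be sketched like this; the dummy series and the 67/33 ratio stand in for the real passenger data:

```python
import numpy as np

# Hypothetical stand-in for the 144 monthly passenger counts.
dataset = np.arange(144, dtype=float)

# Split into 67% train and 33% test, preserving temporal order
# (no shuffling -- the test set must come after the training set in time).
train_size = int(len(dataset) * 0.67)
train, test = dataset[:train_size], dataset[train_size:]
print(len(train), len(test))  # 96 48
```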
I tried a few rough parameters and settled on the configuration below, but by no means is the network listed optimized.
Once the model is fit, we can estimate the performance of the model on the train and test datasets. This will give us a point of comparison for new models.
Finally, we can generate predictions using the model for both the train and test dataset to get a visual indication of the skill of the model.
Because of how the dataset was prepared, we must shift the predictions so that they align on the x-axis with the original dataset. Once prepared, the data is plotted, showing the original dataset in blue, the predictions for the train dataset in green, and the predictions on the unseen test dataset in red.
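The shifting logic can be sketched as follows. The dummy predictions and the exact offsets are assumptions — the offsets depend on how create_dataset indexes the data, which is exactly what some of the comments below debate.

```python
import numpy as np

look_back = 1                          # prior time steps used as input
dataset = np.arange(10, dtype=float).reshape(-1, 1)
train_predict = np.full((5, 1), 1.0)   # dummy train predictions
test_predict = np.full((3, 1), 2.0)    # dummy test predictions

# NaN-filled arrays the same length as the dataset, so untouched
# positions simply don't appear on the plot.
train_plot = np.full_like(dataset, np.nan)
train_plot[look_back:look_back + len(train_predict)] = train_predict

test_plot = np.full_like(dataset, np.nan)
# Test predictions start after the train predictions, plus the
# look_back offset consumed at each boundary.
start = look_back + len(train_predict) + look_back
test_plot[start:start + len(test_predict)] = test_predict
```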
Tying this all together, the complete example is listed below.
Running the example reports the model performance.
Note: Your specific results may vary given the stochastic nature of the learning algorithm. Consider running the example a few times and compare the average performance.
Taking the square root of the performance estimates, we can see that the model has an average error of 23 passengers (in thousands) on the training dataset and 48 passengers (in thousands) on the test dataset.
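The square-root step is just this; the MSE values below are hypothetical scores, chosen only to roughly match the 23- and 48-passenger errors quoted in the text:

```python
import math

# Keras' evaluate() reports mean squared error; taking the square root
# converts it back to the original units (thousands of passengers).
train_mse = 531.71    # hypothetical training score
test_mse = 2353.35    # hypothetical test score

train_rmse = math.sqrt(train_mse)
test_rmse = math.sqrt(test_mse)
print('Train RMSE: %.2f' % train_rmse)  # Train RMSE: 23.06
print('Test RMSE: %.2f' % test_rmse)    # Test RMSE: 48.51
```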
From the plot, we can see that the model did a pretty poor job of fitting both the training and the test datasets. It basically predicted the same input value as the output.
Naive Time Series Predictions With Neural Network
Blue=Whole Dataset, Green=Training, Red=Predictions
Multilayer Perceptron Using the Window Method
We can also phrase the problem so that multiple recent time steps can be used to make the prediction for the next time step.
This is called the window method, and the size of the window is a parameter that can be tuned for each problem.
When phrased as a regression problem the input variables are t-2, t-1, t and the output variable is t+1.
The create_dataset() function we used in the previous section lets us create this formulation of the problem by increasing the look_back argument from 1 to 3. We will increase the network capacity to handle the additional information. The first hidden layer is increased to 14 neurons and a second hidden layer is added with 8 neurons. The number of epochs is also increased to 400.
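A sketch of the dataset-preparation helper with a window of three time steps is below (the exact loop bounds are an assumption; indexing conventions vary between versions of this code):

```python
import numpy as np

def create_dataset(dataset, look_back=1):
    # Each sample uses `look_back` consecutive values as input (X)
    # and the next value as the target (Y).
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back):
        dataX.append(dataset[i:i + look_back])
        dataY.append(dataset[i + look_back])
    return np.array(dataX), np.array(dataY)

series = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
X, y = create_dataset(series, look_back=3)
print(X[0], y[0])  # [112. 118. 132.] 129.0
print(X[1], y[1])  # [118. 132. 129.] 121.0
```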
The whole code listing with just the window size change is listed below for completeness.
Running the example provides the following output.
Note: Your specific results may vary given the stochastic nature of the learning algorithm. Consider running the example a few times and compare the average performance.
We can see that the error was not significantly reduced compared to that of the previous section.
Looking at the graph, we can see more structure in the predictions.
Again, the window size and the network architecture were not tuned; this is just a demonstration of how to frame a prediction problem.
Taking the square root of the performance scores we can see the average error on the training dataset was 23 passengers (in thousands per month) and the average error on the unseen test set was 47 passengers (in thousands per month).
Window Method For Time Series Predictions With Neural Networks
Blue=Whole Dataset, Green=Training, Red=Predictions
Summary
In this post, you discovered how to develop a neural network model for a time series prediction problem using the Keras deep learning library.
After working through this tutorial you now know:
- About the international airline passenger prediction time series dataset.
- How to frame time series prediction problems as regression problems and develop a neural network model.
- How to use the window approach to frame a time series prediction problem and develop a neural network model.
Do you have any questions about time series prediction with neural networks or about this post?
Ask your question in the comments below and I will do my best to answer.
Hi Jason,
This is a new tool for me so an interesting post to get started!
It looks to me like your plot for the first method is wrong. As you’re only giving the previous time point to predict the next, the model is going to fit (close to) a straight line and won’t pull out the periodicity your plot suggests. The almost perfect fit of the red line to the blue line also doesn’t reflect the much worse fit suggested in the model score!
Hope that’s helpful.
Hi Jason,
How can you use this technique to forecast into the future?
Thanks!
This example is forecasting t+1 in the future.
In order to forecast t+2, t+3, t+n…., is it recommended to use the previous prediction (t+1) as the assumed data point.
For example, if I wanted to forecast t+2, I would use the available data including my prediction at t+1.
I understand that the error would increase the further out the forecast due to relying on predictions as data points.
Thoughts?
Yes, using this approach will provide multiple future data points. As you suggest, the further in the future you go, the more likely errors are to compound.
Give it a go, it’s good to experiment with these models and see what they are capable of.
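The recursive multi-step approach described above can be sketched like this. The model here is a dummy stand-in (it just extrapolates a fixed trend); with a real Keras model you would call model.predict() on the current window instead:

```python
import numpy as np

class TrendModel:
    """Dummy stand-in for a trained model: predicts last value + 2."""
    def predict(self, window):
        return window[:, -1:] + 2.0

model = TrendModel()
history = [112.0, 118.0, 132.0]   # last observed values
look_back = 3                      # window size the model was trained with

forecasts = []
for _ in range(3):                 # forecast t+1, t+2, t+3
    window = np.array(history[-look_back:]).reshape(1, look_back)
    yhat = float(model.predict(window)[0, 0])
    forecasts.append(yhat)
    history.append(yhat)           # feed the prediction back in as input

print(forecasts)  # [134.0, 136.0, 138.0]
```

As the reply notes, each forecast is built on earlier forecasts, so errors compound the further out you go.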
Also, when running the full code snippet using the window method, the graph produced does not match the one shown.
This is what I’m getting
I did update the plotting code with a minor change and did not update the images accordingly. I will update them ASAP.
Could you show an example where maybe there was a couple more features. So, say you wanted to predict how many passengers, and you knew about temperature and day of the week (Mon-Sun).
Hi Andy,
Yes, I am working on more sophisticated time series tutorials at the moment, they should be on the blog soon.
Look forward to these time series forecast with multiple features examples – when do you expect to post them to your blog?
As always thx for this valuable resource and for sharing your experience !
Perhaps a month. No promises. I am taking my time to ensure they are good.
Sir please, share some tutorial on tensorflow and what are the differences to make models in tensorflow and keras. thanks
Tensorflow is like coding in assembly, Keras is like coding in Python.
Keras is so much simpler and makes you more productive, but gives up some speed and flexibility, a worthy trade-off for most applications.
sir can you have done any example for more than one column for time series prediction like stock data? If yes, please share the link of that. Thanks
See here:
Super! Will take some time on it soon. Thanks so much, Jason!
Hello,
Thank you for a great article. I have a big doubt and also related to the plot posted in the earlier comment which shows a sort of lag in the prediction. Here we are training the model on t to get predictions for t+1.
Given this I would assume that when the model sees an input of 112 it should predict around 118 (first data point in the training set). But that’s not what the predictions show. Copying the top 5 train points and their subsequent predictions generated by the code given in this post for the first example:
trainX[:5] trainPredict[:5]
[ 112.], [112.56],
[ 118.], [118.47],
[ 132.], [132.26],
[ 129.], [129.55],
[ 121.] [121.57],
I am trying to understand from a model perspective as to why is it predicting with a lag?
Thanks Keshav, I have updated the description and the graphs.
Just as Steve Buckley pointed out, your first method seems to be wrong. The model indeed just fits a straight line ( yPred = a*X+b) , which can be verified by calculating predictions on an input such as arange(200).
Because you shift the results afterwards before plotting, the outcome seems very good. However, from a conceptual point of view, it should be impossible to predict X_t+1 correctly based on only X_t, as the latter contains no trend or seasonal information.
Here is what I’ve got after trying to reproduce your results:
X Y yPred
0 112.0 118.0 112.897537
1 118.0 132.0 118.847107
2 132.0 129.0 132.729446
3 129.0 121.0 129.754669
….
as you can see, the yPred is way off ( it should be equal to Y), but looks good when shifted one period.
Yep, right on Jev, thanks. I have updated the description and the graphs.
Hi, Jason
I also have to agree with Jev, I would expect using predict(trainX) would give values closer to trainY values not trainX values.
They do Max, you’re right. I have updated the graphs to better reflect the actual predictions made.
Hi Jason,
Thanks for such a wonderful tutorial!
I was just wondering whether, in the function create_dataset, there should be range(len(dataset)-1) in the loop. Then, for the plotting logic, it should be:
…
trainPredictPlot[lb:len(train),:] = trainPredict
…
testPredictPlot[len(train)+lb:len(dataset),:] = testPredict
I am just very confused by the indexing and am getting a somewhat different plot for look_back=3:
Hey, thanks for a most helpful tutorial, any ideas why this seems to work better than the time series predictions using RNNs and LSTM in the sister tutorial? My intuition predicts the opposite.
I’m glad you like it Veltzer.
Great question, the LSTMs probably require more fine tuning I expect.
Hey there! Great blog and articles – the examples really help a lot! I’m new to this so excuse the stupid question if applicable – I want to predict the next three outputs based on the same input. Is that doable in the LSTM framework? This is for predicting the water temperature for the next 3 days.
Yes, this is called sequence to sequence prediction.
I see two main options:
– Run the LSTM 3 times and feed output as input.
– Change the LSTM to output 3 numbers.
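The second option can be sketched in Keras with a dense output layer; this is a hypothetical model shape, with the window size and layer width chosen only for illustration:

```python
import numpy as np
from tensorflow import keras

# Hypothetical: 3 lag values in, 3 future values out (e.g. next 3 days
# of water temperature). Layer sizes are illustrative only.
model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(3),  # one output neuron per future time step
])
model.compile(optimizer="adam", loss="mse")

window = np.array([[20.1, 20.4, 20.9]])  # the last 3 observations
forecast = model.predict(window, verbose=0)
print(forecast.shape)  # (1, 3)
```

The training targets must be reframed accordingly, so each input window maps to the next 3 observations.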
This particular time-series has strong seasonality and looks exponential in trend. In reality, the growth rate of this time series is more important. Could you plot the year-on-year growth rate?
There would be benefit in modeling a stationary version of the data, I agree.
I agree with Steve Buckley. The code is predicting x[i+1] = x[i] (approximately); that's why the last part of the code, which is supposed to fix the shift, couldn't get it right.
Try the following: pick any point in your testX, say testX[i], use the model to predict testY[i], then instead of using testX[i+1], use testY[i] as the input parameter for model.predict(), and so on. You will end up with a nearly straight line.
I'd like to thank you for your wonderful posts on neural networks, which helped me a lot when learning them. However, this particular code is not correct.
Thanks for the great article! It is really helpful for me. I have one question: if I have two more variables, what can I do? For example, my data looks like the following:
date windspeed rain price
20160101 10 100 1000
20160102 10 80 1010
…
I’d like to predict the price.
Hi Jeremy, each input would be a feature. You could then use the window method to frame multiple time steps of multiple features as new features.
For example:
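A sketch of that framing, with made-up numbers from the comment above and a hypothetical helper make_windows that flattens a window of all features into one input row:

```python
import numpy as np

# Hypothetical series with 3 features per day: windspeed, rain, price.
data = np.array([
    [10, 100, 1000],
    [10,  80, 1010],
    [12,  75, 1005],
    [11,  90, 1020],
], dtype=float)

def make_windows(series, look_back=2):
    """Flatten look_back rows of all features into one input vector;
    the target is the price (last column) of the following row."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back].flatten())
        y.append(series[i + look_back, -1])
    return np.array(X), np.array(y)

X, y = make_windows(data)
print(X.shape, y.shape)  # (2, 6) (2,)
```

Each input row then carries look_back time steps of every feature as ordinary MLP features.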
Hi Jason,
Thanks for your great explanation!
I have one question like Jeremy’s. Is there any suggestion for me if I want to predict 2 variables? Data frame shown as below:
Date X1 X2 X3 X4 Y1 Y2
I want to predict Y1 and Y2. Also, Y1 and Y2 have some correlations.
hi Shimin,
Yes, this is often called a sequence prediction problem in deep learning or a multi-step prediction problem in time series prediction.
You can use an LSTM with two outputs or you can use an MLP with two outputs to model this problem. Be sure to prepare your data into this form.
I hope that helps.
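As a rough sketch of the two-output MLP idea (layer sizes and input values are assumptions for illustration, not a recommendation):

```python
import numpy as np
from tensorflow import keras

# Hypothetical: 4 input features (X1..X4) jointly predicting 2 targets
# (Y1, Y2); shared hidden units can exploit correlation between Y1 and Y2.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(2),  # one neuron each for Y1 and Y2
])
model.compile(optimizer="adam", loss="mse")

row = np.array([[0.2, 1.5, 3.0, 0.7]])  # one made-up input row
pred = model.predict(row, verbose=0)
print(pred.shape)  # (1, 2)
```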
Jason,
Great writeup on using Keras for TS data. My dataset is something like below:
Date Time Power1 Power2 Power3 Meter1 Meter2
12/02/2012 02:53:00 2.423 0.118 0.0303 0.020 1.1000
My feature vectors/predictors are Date, Time, Power1, Power2, Power3, Meter1. I am trying to predict Meter2.
I would like to use an RNN/LSTM instead of an MLP for the above time series prediction.
Can you please suggest whether this is possible? If yes, any pointers would help.
thanks
Sunny
Hi Sunny, this tutorial will help to get you started:
Hello, nice tutorial.
I have one question: it would be useful to have similar material for live data. Let's say I have access to some real-time data (software downloads, stock prices, etc.); would it require retraining the model each time new data is available?
I agree nicoad, a real-time example would be great. I’ll look into it.
A great thing about neural networks is that they can be updated with new data and do not have to be re-trained from scratch.
Hi, your original post's code uses 1 (or 3) values of X to predict the next single Y. What if I want to use 48 values of X to predict the 49th and 50th? What I mean is increasing the number of time units predicted, to 3 or even 10. Under such a condition, does that mean I just change the output_dim of the last output layer:
model.add(Dense(output_dim=3))
Is that right?
Yes, that looks right. Let me know how you go.
Hi Jason, I made a quick experiment in a Jupyter notebook and published it on GitHub:
The code works.
One output cell with 2 dimensions and 2 output cells with 1 dimension each are different.
I discussed it with another person on GitHub.
It seems I should use seq2seq or the TimeDistributed wrapper.
I am still exploring this and have not found a solution.
What is your suggestion?
That does sound like good advice. Treat the problem as sequence to sequence problem.
Hi Jason, I made an experiment in a Jupyter notebook and published it on GitHub. The code can output 2 columns of data.
One output cell with 2 dimensions and 2 output cells with 1 dimension each are different.
The input dimension and the output dimension will be tricky for the NN.
Thanks Jason for the conceptual explanation. I have one question about the Keras package:
It looks like you input the raw data (x=118, etc.) to Keras. Do you know whether Keras needs the data standardized (normalized) to (0,1) or (-1,1), or to a distribution with mean 0?
— Xiao
Great question Xiao,
It is a good idea to standardize data or normalize data when working with neural networks. Try it on your problem and see if it affects the performance of your model.
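For example, min-max normalization takes a few lines of NumPy (shown here on the first values of the airline series; in practice the min/max should come from the training split only):

```python
import numpy as np

# First few values of the airline passengers series.
values = np.array([112.0, 118.0, 132.0, 129.0, 121.0])

# Min-max normalization to [0, 1]; compute the min/max on the training
# split only, to avoid leaking test-set information.
lo, hi = values.min(), values.max()
scaled = (values - lo) / (hi - lo)
print(scaled)  # values mapped into [0, 1]

# Invert the transform after predicting to return to original units.
restored = scaled * (hi - lo) + lo
```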
Wasn’t the data normalized in an early version of this post?
I don’t believe so Satoshi.
Normalization is a great idea in general when working with neural nets, though.
I keep getting this error dt = datetime.datetime.fromordinal(ix).replace(tzinfo=UTC)
ValueError: ordinal must be >= 1
Sorry charith, I have not seen this error before.
In your text you say the window size is 3, but in your code you use look_back = 10?
Thanks Trex.
That is a typo from some experimenting I was doing at one point. Fixed.
No problem,
I have another question:
What the algorithm does now is predict 1 value. I want this MLP to predict n values.
How would this work?
Reframe your training dataset to match what you require and change the number of neurons in the output layer to the number of outputs you desire.
Hey Sir,
great Tutorial.
I am trying to build a NN for time series prediction, but my data is different from yours.
I want to predict a whole next day, where a whole day is defined as 48 values.
Some lines of the raw data:
2016-11-10 05:00:00.000 0
2016-11-10 05:30:00.000 0
2016-11-10 06:00:00.000 1
2016-11-10 06:30:00.000 3
2016-11-10 07:00:00.000 12
2016-11-10 07:30:00.000 36
2016-11-10 08:00:00.000 89
2016-11-10 08:30:00.000 120
2016-11-10 09:00:00.000 209
2016-11-10 09:30:00.000 233
2016-11-10 10:00:00.000 217
2016-11-10 10:30:00.000 199
2016-11-10 11:00:00.000 244
There is a value for each half hour of the day.
I want to predict the values for every half hour of the next few days. How could this work?
Could you do an example for a Multivariate Time Series? 🙂
Yes, there are some tutorials scheduled on the blog. I will link to them once they’re out.
Why doesn't the ReLU activation function need the input data normalized between 0 and 1?
If I use the sigmoid activation function, the input data must be normalized.
But why doesn't ReLU need that?
Generally, because the bounds of the sigmoid function impose hard limits on values outside of 0-1.
The rectifier function is quite different; you can read up on it here:
I'd recommend implementing it in Excel or Python and having a play with inputs and outputs.
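A minimal Python sketch of the difference: sigmoid saturates for inputs far outside 0-1, while the rectifier leaves positive values unbounded, so input scale matters much less.

```python
import math

def sigmoid(x):
    # Squashes any input into (0, 1); large inputs saturate near 1.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive values through unchanged; negatives become 0.
    return max(0.0, x)

for x in [0.5, 5.0, 500.0]:
    print(x, round(sigmoid(x), 4), relu(x))
```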
Another Question:
Your input layer uses ReLU as the activation function.
But why does your output layer have no activation function? Is there a default activation function which Keras uses if you don't give one as a parameter? If yes, which is it? If no, why is it possible to have a layer without an activation function?
Thanks 🙂
Yes, the default is linear, which is a desirable activation function for regression problems.
dont give one as parameter*
A stupid question, sir.
Suppose I have a dataset with two fields: “date” (timestamp), “amount” (float32) describing a year.
On the first day of each month the amount is set to -200.
This is true for 11 months, but not for the 12th (December).
Is there a way to train a NN so that it returns 12, marking December as not having such an amount on its first day?
Sorry Dmitry, I’m not sure I really understand your question.
Perhaps you’re able to ask it a different way or provide a small example?
Is it common to only predict the single next time point? Or are there times/ways to predict 2, 3, and 4 time points into the future, and if so, how do you assess performance metrics for those predictions?
Good question Thomas.
The forecast time horizon is problem-specific. You can predict multiple steps with an MLP or LSTM using multiple neurons in the output layer.
Evaluation is problem specific but could be RMSE across the entire forecast or per forecast lead time.
Thanks for Jason's post, I benefited a lot from it. Now I have a problem: how can I get the passengers for 1961-01? Awaiting your reply.
You can train your model on all available data, then call model.predict() to forecast the next out of sample observation.
It seems the model can't forecast the next month in the future?
What do you mean exactly zhou?
Sorry. I want to forecast the passengers in the future. What should I do?
Thanks for the tutorial, Jason. It's very useful. It would be nice to also know how you chose the different parameters for the MLP, and how you'd go about optimizing them.
Thanks Viktor, I hope to cover more tutorials on this topic.
You can see this post on how to best tune an MLP:
In the first case, if I shift the predictions to the left, it looks like a good model for forecasting because the predicted values fit the original data quite well. Is it possible to do that?
Can you give an example of what you mean?
Is there any specific condition for using activation functions? How do you decide which activation function is more suitable for linear or nonlinear datasets?
There are some rules.
ReLU in hidden layers because it works really well. Sigmoid for binary outputs, linear for regression outputs, softmax for multi-class classification.
Often you can transform your data for the bounds of a given activation function (e.g. 0,1 for sigmoid, -1,1 for tanh, etc.)
I hope that helps as a start.
How do you decide on the optimizer? Is there any relation to the activation function?
Not really. It’s a matter of taste it seems (speed vs time).
What kind of validation are you using in this tutorial? is it cross validation?
No, a train-test split.
Are there any other deep learning algorithms that can be used for time series prediction? Why prefer the multilayer perceptron for time series prediction?
Yes, you can use Long Short-Term Memory (LSTM) networks.
Hi Jason,
I always have a question: if we only predict 1 time step ahead (t+1), the most accurate predicted result is just a copy of the value at t, as the first figure shows. When we add more inputs like (t-2, t-1, t), the predictions get worse. Even compared with other prediction methods like ARIMA and RNNs, this conclusion is perhaps still correct. To better exhibit the power of these prediction methods, should we try to predict more time steps ahead: t+2, t+3, and so on?
Thanks
It is a good idea to make the input data stationary and scale it. Then the network needs to be tuned for the problem.
Dear Jason.
Thanks for sharing your information here. Anyway, I was not able to reproduce your last figure. On my machine it still looks like the "bad" figure.
I used the code as stated above. Where is my misunderstanding here?
Thank You!
silly me 🙂
Perhaps try fitting the network for longer?
Thanks for this post. Actually, I am referring to it for my work. My dataset is linear. Can I use softplus or ELU as an activation function for linear data?
Yes, but your model may be more complex than is needed. In fact, you may be better off with a linear model like Linear Regression or Logistic Regression.
Firstly, thanks Jason. I tried MLP and LSTM based models on my time series data and got some RMSE values (e.g. train RMSE 10 and test RMSE 11; my example count is 1400, min value 21, max value 210). What is an acceptable value of RMSE?
Nice work!
An acceptable RMSE depends on your problem and how much error you can bear.
Great article, thank you.
Is it possible to make a DNN with several outputs? For example the output layer has several neurons responsible for different flight directions. What difficulties can arise?
Yes, try it.
Skill at future time steps often degrades quickly.
Hello, Jason, I am a student, and recently I have been learning from your blog. Could you show how to display the deep learning model training history in this article? I would really appreciate it, because I am a newcomer. Thank you!
This example shows you how to display training history:
Does anybody have an idea/code snippet for how to store observations of this example code in a variable, so that the variable can be used to make predictions beyond the airline dataset (one step in the future)?
Would it be logically incorrect to extend the testX array with, for example, [0,0,0] to forecast unseen data/a step in the future?
It would not be required.
Fit your model on all available data. When a new observation arrives, scale it appropriately, gather it with the other lag observations your model requires as input and call model.predict().
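Shaping that input is often the sticking point; here is a sketch with made-up observation values (the trained model itself is assumed to already exist, so the predict call is left commented):

```python
import numpy as np

look_back = 3

# Hypothetical: the three most recent raw observations, newest last.
recent = [432.0, 461.0, 390.0]

# Keras expects a 2D array of shape (n_samples, n_features), so one
# prediction input is one row of look_back lag values.
x_input = np.array(recent).reshape(1, look_back)
print(x_input.shape)  # (1, 3)
# next_value = model.predict(x_input)  # with a model fit on all data
```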
Is there a magic trick to get the right array-format for a prediction based on observations?
I always get the wrong format:
obsv1 = testPredict[4]
obsv2 = testPredict[5]
obsv3 = testPredict[6]
dataset = obsv1, obsv2, obsv3
dataX = []
dataX.append(dataset)
#dataX.append(obsv2)
#dataX.append(obsv3)
myNewX = numpy.array(dataX)
Update:
After several days I managed to make a prediction on unseen data in this example (code below).
Is this way correct?
How many observations should be used to get a good prediction on unseen data?
Are there standard tools available to measure corresponding performances and suggest the amount of observations?
Is this topic the same as choosing the right window size for time series analysis, or where would the difference be?
Code:
obsv1 = float(testPredict[4])
obsv2 = float(testPredict[5])
obsv3 = float(testPredict[6])
dataX = []
myNewX = []
dataX.append(obsv1)
dataX.append(obsv2)
dataX.append(obsv3)
myNewX.append(dataX)
myNewX = numpy.array(myNewX)
futureStepPredict = model.predict(myNewX)
print(futureStepPredict)
Looks fine.
The number of obs required depends on how you have configured your model.
The “best” window size for a given problem is unknown, you must discover it through trial and error, see this post:
Is there a method or trial and error-strategy to find out how many lag observations are ‘best’ for a forecast of unseen data?
Is there a relation between look_back (window size) and lag observations?
In theory I could use all observations to predict one step of unseen data. Would this be useful?
Look-back defines the lag.
You can use ACF and PACF plots to discover the most relevant lag obs:
The “promise” of LSTMs is that they can learn the appropriate time dependence structure without having it explicitly specified.
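The usual tools are plot_acf and plot_pacf from statsmodels; as a dependency-free sketch, the sample autocorrelation at a given lag can also be computed directly, shown here on a made-up series with period 4:

```python
import numpy as np

def autocorr(series, lag):
    """Sample autocorrelation of a series at the given lag."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

# A hypothetical series repeating with period 4: the lag-4
# autocorrelation should dominate the lag-1 autocorrelation.
series = [1, 5, 2, 0, 1, 5, 2, 0, 1, 5, 2, 0]
print(autocorr(series, 1), autocorr(series, 4))
```

Strong spikes at particular lags suggest those lags are worth including as input features.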
If I fill the model with 3 obs, I get 3 predictions/data points of unseen data.
If I only want to predict one step in the future, should I build an average of the resulting 3 predictions,
or should I simply use the last of the 3 prediction steps?
Thank you.
I would recommend changing the model to make one prediction if only one time step prediction is required.
How would you change the Multilayer Perceptron model of this site in this regard?
I had a misconception here. Don't do the same, fellow reader!
With “obsv(n) = float(testPredict[n])” I took predictions of the test dataset as observations.
THAT’S WRONG!
Instead, we take a partition of the original raw data as x/observations to predict unseen data with a trained/fitted model, IN EVERY CASE.
Like in R:
Is this right Jason?
If you need a 2D array with 1 row and 2 columns, you can do something like:
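For instance, with NumPy (the two observation values here are arbitrary):

```python
import numpy as np

obs = [118.0, 132.0]  # two lag observations

# Either construct the 2D shape directly from a nested list...
x1 = np.array([obs])
# ...or reshape a flat array into (1, n) form.
x2 = np.array(obs).reshape(1, 2)

print(x1.shape, x2.shape)  # (1, 2) (1, 2)
```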
Hello Sir,
This is Arman from Malaysia. I am a student at Multimedia University. I want to do "Self-Tuning Performance of Hadoop using Deep Learning". Which architecture should I consider for this sort of problem: DBM, DBN, CNN, RNN?
I need your suggestion.
With best regards
Arman
I would recommend following this process:
Are there any more concerns about this code? Or is it updated and technically correct now?
We can always do things better.
For this example, I would recommend exploring providing the data as time steps and explore larger networks fit for more epochs.
Hm, I’m not sure if I understand it right.
I believe I'm already feeding it with time steps, like so:
return datetime.strptime(x, '%Y-%m-%d')
My raw data items have a decent date column. Is this what you meant?
How do we explore larger networks fit for more epochs?
I have everything parameterized in a central batch file now (pipeline).
Should I increase the epochs for…
model.fit(trainX, trainY, epochs=myEpochs, batch_size=myBatchSize, verbose=0)
Thank you.
I’m trying to adapt some code from:
…and build the variable EXPECTED in the context of this script.
Unfortunately I don’t know how to do it right. I’m a little bit frustrated at this point.
for i in range(len(test)):  # what should I better use here?
    expected = dataset[len(train) + i + 1]  # what should I better use here?
    print(expected)
This looks cool so far, could I use the index to retrieve a var called EXPECTED?
for i in range(len(testPredict)):
    pre = '%.3f' % testPredict[i]
    print(pre)
A code example would help to solve my index-confusions.
This is a great example that machine learning is often much more than knowing how to use the algorithms / libraries. It’s always important to understand the data we are working with. For this example as it is 1 dimensional this is luckily quite easily done.
In the first example we are giving the algorithm one previous value and asking it: "What will the next value be?"
Since we use a neural net that does not take any time behavior into account, this mapping is ambiguous. There are a lot of values at the y value 290, for example. For half of them the values decline, for half of them the values increase. If we don't give the algorithm any indication, how should it know which direction it would be for the test datapoint? There is just not enough information.
One idea could be to additionally give the algorithm the gradient, which would help in deciding whether a rising or a falling value follows (which is somehow what we do when adding a lookback of 2). Yet, the results obviously do not improve significantly.
Here I want to come back to “understand the data you are dealing with”. If we look at the plot, there are two characteristics which are obvious. A generally rising trend and a periodicity. We want the algorithm to cover both. Only then, will the prediction be accurate. We see that there is an obvious 12 month periodicity (think of summer vacation, christmas). If we want the algorithm to cover that periodicity without including model knowledge (as we are using an ANN) we have to at least provide it the data in a format to deduct this property.
Hence: extending the lookback to 12 months (12 datapoints in X) will lead to a significantly improved "1 month ahead" prediction! Now, however, we have a higher feature dimension, which might not be desired for computational reasons (it doesn't matter for this toy example, but anyway…). The next thing we do is take only 3-month steps in the lookback (still look back 12 months but skip 2 months in the data). We still cover the periodicity but reduce the feature count. The algorithm provides almost the same performance for the "1 month ahead" prediction.
Another possibility would surely be to add the month (Jan, Feb, etc.) as a categorical feature.
Thanks Stefan, very insightful.
Hello Jason! Thanks for the great example! I was looking for this kind of example.
I'm learning about neural networks these days and trying to predict a number (a temperature) like in this example, but I have more inputs for predicting the temperature.
Then should I change pandas.read_csv(..., usecols=[1], ...) to use all of the columns if I have 5 inputs?
Thanks in advance!
Best,
Paul
I mean something like below
X1 X2 X3 X4 X5 Y1
380 17.00017 9.099979 4 744 889.7142
Thank you!
This post might help you frame your prediction problem:
Thanks for replying! 🙂 And sorry for the late response.
When I clicked the link you wrote, it required a username and password. :'(
Sorry about that, fixed. Please try again.
NVM. I figured out it was
machinelearningmastery. instead of
mlmastery.staging.wpengine.com🙂
Thanks. 🙂
Best,
Paul
Yes, for some reason I linked to the staging version of my site, sorry about that.
Hey, I am trying to handle the case where the test set is not given and the model should predict the future of the time series. Hence, I wrote code which takes the last row of the train data, predicts a value from it, then puts the predicted value at the end of that row and makes a prediction again, repeating this procedure len(testX) times. It ended up looking like an exponential graph. I can upload it if you want to check it out. My code is given below. I don't understand why it works like that. I hope you can enlighten me.
prediction = numpy.zeros((testX.shape[0], 1))
test_initial = trainX[-1].copy()
testPredictFirst = model.predict(test_initial.reshape(1, 3))
new_ = create_pred(test_initial, testPredictFirst[0][0])
prediction[0] = testPredictFirst
for k in range(1, len(testX)):
    testPredict = model.predict(new_.reshape(1, 3))
    # e.g. if new_ is [1, 2, 3] and testPredict[0][0] is 4, the output is [2, 3, 4]
    new_ = create_pred(new_, testPredict[0][0])
    prediction[k] = testPredict
Really awesome and useful too.
Thanks.
Hi,
It's an awesome article. Very helpful. I implemented these concepts in my categorical time series forecasting problem, but the result I got was very unexpected.
My time series can take only 10 values, from 0 to 9. I have approx. 15k rows of data. I want to predict the next value in the time series.
But the issue is that '1' appears in the time series most of the time. So starting from the 2nd or 3rd epoch, the LSTM predicts only '1' for whatever input. I tried varying hyperparameters but it's not working out. Can you please point out what approach could solve the problem?
Perhaps your problem is too challenging for the chosen model.
Try testing with an MLP with a large window size. Then search the hyperparameters of the model.
I'm new to coding. How can I predict t+1 from your example code? I mean, from your code I want the value at t+1, or can you explain more about where the code predicts t+1?
Perhaps start with something simpler if you are new to coding, for example simpler linear models:
Hi Jason,
Why do you think making the data stationary is a good idea in this approach? I know ARIMA assumes the data is stationary, but is it also valid for neural networks in general? I thought normalization would be enough.
Yes, it will make the problem easier to model.
I am getting this error:
Help me please, I am new here. I am using TensorFlow.
Traceback (most recent call last):
File “international-airline-passengers.py”, line 49, in
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
ValueError: could not broadcast input array from shape (94,1) into shape (46,1)
I found my error. It was a silly mistake.
thanks
Glad to hear you worked it out.
Hi Jason,
Thank you so much for all this. I have a question! Why is the accuracy of regression models in terms of MSE not good when trained using Theano, TensorFlow or Keras? However, if we train an MLP or any other model using MATLAB's neural network tool, the models show very good accuracy, with errors on the order of e to a negative power. Why is that so?
Accuracy is a score for classification algorithms that predict a label, RMSE is a score for regression algorithms that predict a quantity.
Hi, Firstly Thank you for this tutorial. I am implementing this within my design but I am getting an error in this line:
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(Y1)-1] = testPredict

ValueError: could not broadcast input array from shape (19) into shape (0)
My complete code is:
I would really appreciate your help as I know this is probably something small but I cannot get passed it. Thank you
Hi Jason,
Maybe I am not understanding something.
You say something like
“We can see that the model did a pretty poor job of fitting both the training and the test datasets. It basically predicted the same input value as the output.”
when talking about the first image. I don’t understand how that prediction is bad. It looks very very good to me. I am asking because I tried your code with my own dataset and I obtained something similar, i.e. it looked perfect except it was slightly shifted. But how is it bad?
Also in the following section you say
“Looking at the graph, we can see more structure in the predictions.”
How do we see the structure? To me it looks like it is less precise than the first one.
Apologies if I quoted you twice, but I don’t really understand…
If the model cannot do better than predicting the input as the output, then the model is not skillful as you may as well just use a persistence model:
Hi Jason
Do you know how to train data in PyCharm with a dynamic CNN?
Please give us more explanation.
thank you
Hi Jason,
I think I’m a little confused.
Your post seems to address how to forecast t+1 from t.
The output however looks pretty poor as it ends up performing as a persistence model.
What is the value of using keras to achieve the same goal as a persistence model then?
How would you modify your network to try to perform better than a common persistence model?
What would the model structure look like?
Thanks in advance!
I would recommend an MLP tuned to the problem with many lag variables as input.
Hi Jason, thanks for the great tutorial, but I can't find the value for t+1. And can we use it for predicting stock prices?
Use a persistence model to predict short-term stock prices:
Hi Jason,
This article as well as the following comments are really helpful. I have tried this on stock price prediction with more lookbacks, say 10-30, or more layers. But after I add one more layer to the network, it becomes harder/slower to get the loss to decrease, which produces bad results even over 10,000+ epochs. Do you have any idea about that?
Thank you.
I believe security prices are a random walk and are not predictable:
Thanks for the prompt reply.
Yes, I believe it's quite hard to predict stock prices. However, I can get a similar result to yours in this article with one hidden layer of 8 neurons and the ReLU function after a few epochs. So it may be just a coincidence?
Dear Jason,
I’m studying time-series prediction and I was impressed when I saw your results on the airline passengers prediction problem. I was amazed by the fact that the prediction of such a complicated non-linear problem was so perfect!
However, when I looked at the code, I realised that what you’re showing is not really a prediction, or at least it’s not very fair. In fact, when you predict the results for the testing data, you’re only predicting the results for the next timestamp, and not for the entire sequence.
To say that in other words, you’re predicting the future of next datapoint, given the previous datapoint.
Maybe I misunderstood the aim of the problem, but from what I understood, you were trying to predict the passengers for a time in the future, given a previous time in the past.
To make a fair comparison, it would be interesting to see what happens when the network predicts the future based exclusively on the past data. For example, you can predict the first testing point based on the last training point and then continue the prediction using the previous predictions. I tried doing this, and results are just shit 🙂
I wonder now how it could be possible to write a network that actually predicts the future events based on the past events. I also tried with your LSTM example, but results were still disappointing…
Cheers,
Alessandro
Great point, I have better examples here listed here:
Hello, I would like to ask you something: what exactly does the number that verbose prints for one epoch mean?
For example, I have "0s – loss: 23647.2512"; what does that number mean?
Good question.
It reports how long the epoch took in seconds and the loss (a measure of error) on the samples in the training set for that epoch.
But why does each epoch show such a big loss?
Example: – 0s – loss: 543.4524 – val_loss: 2389.2405
… Why is the loss so big? And in the final graph, the training and testing predictions are very similar to the original dataset?
Good question, I cannot answer that. I suspect it has something to do with the scale of your data. Perhaps you need to rescale your data.
Understood, and one last question, please. This dataset represents airline passengers from which country? Just curious 🙂
I don’t know, sorry.
Thanks for the tutorial!
Do you see any problem with shuffling the data? I.e. using numpy.random.shuffle(train_test_data) to randomly select training and test data?
(as used here)
In general no, with time series, yes. You should not shuffle time series data.
Learn more here:
Hi,
Thank you for this tutorial. However, when using the exact same code in the look_back=3 case, the graph seems much more similar to the first graph shown (look_back=1) than to the second one! Also, isn't it a bit confusing to compare the error on test vs. train, as the slopes are steeper in the second part of the dataset? What I mean is, if we were to train on the last 67% of the dataset and test on the first 33%, the error on the test set would shrink while the error on the train set would grow. It is kind of confusing to present the results this way (maybe the evaluation measure should be relative to the range of values in the current time window?)
Thanks anyway!
Hi Jason,
Great tutorial!
You fit the model with the default value of shuffle, which is True, shown below.
model.fit(trainX, trainY, epochs=200, batch_size=2, verbose=2)
I remember you indicated in another tutorial that one should not shuffle a time series when training. Would you have any comments?
Regards
Yes, shuffling the data is a bad idea for time series!
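In Keras this is a single argument to fit; a minimal sketch on toy data (the tiny model and series are made up for illustration):

```python
import numpy as np
from tensorflow import keras

# Toy in-order series: y is just 2x, kept in time order.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = X * 2.0

model = keras.Sequential([keras.Input(shape=(1,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# fit() shuffles by default; disable it to keep samples in time order.
history = model.fit(X, y, epochs=1, batch_size=2, shuffle=False, verbose=0)
print(len(history.history["loss"]))  # 1
```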
def create_dataset(dataset, look_back=1):
I am getting some errors when I input this code:
File “”, line 1
def create_dataset(dataset, look_back=1):
^
SyntaxError: unexpected EOF while parsing
Can you help me understand what i am doing wrong. Thank you
Ensure you maintain the indenting of the code.
Also, save the code to a file and run the file from the command line, more details here:
Hi Jason,
Great tutorial!
But how do you determine the learning rate? And what is the learning rate in the code above?
Trial and error or use a method like Adam to adapt it automatically.
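For example, Adam can be configured explicitly in Keras (0.001 is its documented default initial rate):

```python
from tensorflow import keras

# Adam adapts per-parameter step sizes as training proceeds;
# the initial learning rate is still configurable.
opt = keras.optimizers.Adam(learning_rate=0.001)
print(float(opt.learning_rate))  # 0.001

# It would then be passed to compile, e.g.:
# model.compile(optimizer=opt, loss="mse")
```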
Hello. I have a question for this chapter.
I understand it like this:
During the "training" period, the weights are calculated from 67% of all data.
After that, we make predictions with the remaining 33% of the data.
Question:
During the "test" period, are we predicting y_hat(t+1) from y(t) using the weights calculated from the 67% of data?
I want to know which data is used to predict y_hat(t+1) in the "test" period.
You can use real obs as input if they are available or you can use predictions as input (e.g. recursive).
Hi Jason, how do you get this to predict, say, t + 60 ?
This is called multi-step forecasting:
I have many examples, perhaps start here:
Hola Jason:
Nice Tutorial, as usual !. Thanks.
I have 3 questions:
1) If we would have multi-steps forecasting (e.g. for regression analysis) we would need as many outputs units (neurons) as number of output steps, right? something similar to classification problems where we have to use as many outputs neurons as classes ?
2) I modify the neural model, in the case of the single step input (i.e. look_back ==1), using a wider model (more units or neurons) and deeper (more layers in a similar way as you do in your multi-steps inputs or window) and… surprisingly… I got “very much worst MSE and RMSE scores or metrics !
How can we explained this? because of model overfitting? is something similar at when you perform with windows (taken into consideration more back-steps or look-baks steps , that you only get in your case similar scores ?
I am really surprise for this model ANN behavior against common sense? what is your explanation, if anyone exist?
3) for me one of the core of the TIMESERIES analysis , in comparison of classical image analysis and features data model, in addition to introducing other techniques of RNN such as LSTM is, the previous work of preparing the FRAMING of your DATA SERIES and, splitting or building-up the the input-s X, and the output-s Y (or labels), from the original TIMESERIES, as you do with your function definition called : def create_dataset(dataset, look_back): do you agree?
thank you in advance for your time, effort and consideration!
regards
JG
You have many options for multi-step, such as recursive use of a one-step model, vector output or seq2seq. I have tutorials on each.
Perhaps the model is overfitting, analysis would be required.
Yes, framing a problem is the point of biggest leverage.
Hi Jason thank you so much. My question might be very dumb but I was wondering if you could do me a favor and answer that for me.
I used pivot table to clean dataset and create a dataset that can be used for time series analysis( from a larger dataset)
my date column is being treated as index so I only have one column.
Completed_dt
2005-01-31 5.0
2005-02-28 3.0
2005-03-31 5.0
2005-04-30 2.0
2005-05-31 6.0
2005-06-30 5.0
2005-07-31 6.0
2005-08-31 4.0
2005-09-30 6.0
2005-10-31 4.0
when I use your code:
train_size = int(len(B1) * 0.67)
test_size = len(B1) – train_size
train, test = B1[0:train_size,:], B1[train_size:len(B1),:]
print(len(train), len(test))
I get this error: IndexError: too many indices for array
I know the reason is because you have two coulmns( the date column is probably not index in your data) and I only have one column.
I tried to reset the index so I can fix the error but when I did it other errors popped up.
So do you know what change should I make in this line of code in order to solve the error?
train, test = B1[0:train_size,:], B1[train_size:len(B1),:]
Thank you
Perhaps try removing the date-time column first?
Hi Jason
i need soluation of this problem below is the link (Python or any language)
Hi Jason, I have a question. Can this method used in noncontinuous inputs? Like I am researching on activity daily steps of elders. I found their walking pattern shows periodically as weekly change. So can I use the feature of only t-7 and t-14 (also two inputs, just not consequent) to predict t+1?
Sure, you can formulate any inputs you wish, it’s a great idea to try ideas like this in order to lift performance.
Hi Jason,
Thanks for this very clear implementation !
I just started a program that is supposed to forecast the energy production given historical data and this techniques might be very useful !
I saw you made other post (especially the one taking multiple inputs) that could be better but wanted to go step by step.
My question is the following :
I’m not sure to understand the shape of the data (dataset, input, output) : is it (n_value,) ?
Because in the create_dataset method, it seems like the dataset is not just an array ?
“a = dataset[i:(i+look_back), 0] ”
Should the input be shaped like : (1, n_value) ?
Good questions, I recommend starting here with these more up to date tutorials:
Hi it’s me again,
I ran the network with as an input historical data on power production so for the training train_X = array_of_int and train_Y = shifted_array_of_int.
When I tried to put a prediction as an input (to predict t+2, …, t+n) I ended up getting an almost straight line …
Do you have an explanation ?
Thanks
Nice content.
Thanks.
I tried to use your code with my dataset, which is similar to yours, but after training, the program presents the following error
print(‘Train Score: %.2f MSE (%.2f RMSE)’ % (trainScore, math.sqrt(trainScore)))
TypeError: must be real number, not list
Do you have any ideas on how to fix this problem?
Thanks
Perhaps try debugging?
E.g. print out the raw elements in the line causing the problem and understand why they are not as expected?
Hi,I have tried to sent a email to the account jason@MachineLearningMastery.com while no reply to me, so I come here to repeat my question.
I want to buy the e-book “deep learning with python” authored by on your website online while I want to know whether I can get a recipe after purchase.
I am a student at college and short of money, so it would be better if I can get a receipt , because I need it to reimburse the costs by some ways.
thanks a lot.
Yes, I have replied to your email.
Yes, I can provide a tax receipt after purchase, more details here:
Hi Jason.
I have another question, I tried to calculate the Mean Absolute Percentage Error (MAPE) and it’s huge and I would like to know why, do you have any suggestions?
Perhaps scale your data first?
Perhaps try alternate models or model configurations?
why dont we dont replace dense layer with LSTM layer ?
You can, see this:
great article!!
Thanks.
How to implement multiple input Time Series Prediction With LSTM and GRU in deep learning
I give many examples, you can get started here: | https://machinelearningmastery.com/time-series-prediction-with-deep-learning-in-python-with-keras/ | CC-MAIN-2019-47 | refinedweb | 8,645 | 66.33 |
Blue square not showing oh screen
Hello folk. I just started using opencv--python.
Intially I typed in some code found here.
But when I typed in the code to my windows 7 64 bit computer and ran it. After a few syntax errors and typos on my part it ran just fine. I dectected the web camera that I have plugged in and it displayed the cameras video in a window. But I do not get that blue square that says it has detected my face. I am using a celron 2200 mhz with 2gb of ram so its not the fastest computer . Is it just that my computer is slow or is there something wrong with the code?
Ok here is the code. It was in the youtube video on the link that I posted,
import numpy as np import cv2 #some comment i a foreign language which I cannot understand. I will put it #thru Altavista later face_cascade = cv2.CascadeClassifier('haarscascade_frontalface_default.xml') video_capture = cv2.VideoCapture(0) while True: ret, frame = video_capture.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) #capture test with image. #img = cv2.imread("wyld2.jpg") # gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray,1.3,5) for(x,y,w,h) in faces: cv2.rectangle(frame,(x,y),(x+w,y+h),(255,0,0),2) cv2.imshow('video', frame) #some more foreign comments if cv2.waitKey(1) & 0xFF == ord('q'): break video_capture.release() cv2.destroyAllWindows()
Post your code.
without seeing anything of your stuff, i bet a beer, that it did not find the xml file for the cascade, because the path was wrong.
please try like:
cv2.CascadeClassifier('complete/path/to/haarscascade_frontalface_default.xml')
it's most probably in opencv/data/haarcascades
also:
Like @berak said, 98% of cascade errors result from not reading correctly the model and not capturing the return value! | https://answers.opencv.org/question/53320/blue-square-not-showing-oh-screen/ | CC-MAIN-2019-43 | refinedweb | 311 | 69.48 |
Created on 2007-10-29 21:21 by amaury.forgeotdarc, last changed 2007-10-30 00:29 by gvanrossum..
Thanks!!
The patch as given doesn't quite work -- it gives an error message on
string literals like '\s'. By reverting some of your code and only
keeping the bounds checks for the octal escape code I got it to work
though. See my checkin.
Committed revision 58707.
This needs to be backported too; it seems a pure coincidence it didn't
trigger before!
(Hold on, the assert I added triggers in debug mode.)
I don't like this assert either: this function is part of the public
API, and should not crash the interpreter just because of a trailing \.
To test easily:
import ctypes
decode = ctypes.pythonapi.PyUnicodeUCS2_DecodeUnicodeEscape
decode.restype = ctypes.py_object
decode(b'\\1', 1, None)
This should gently raise a UnicodeDecodeError IMO.
Better fix:
Committed revision 58708. | http://bugs.python.org/issue1359 | crawl-002 | refinedweb | 149 | 68.67 |
Rescued from the old AUR :)
Now it installs to /usr instead of /usr/local.
Search Criteria
Package Details: nsnake 3.0.1-2
Dependencies (1)
Required by (0)
Sources (1)
Latest Comments
Anonymous comment on 2015-08-30 13:48
Anonymous comment on 2015-08-08 18:03
Rescued from the old AUR :)
UnsolvedCypher commented on 2015-01-24 20:46
I seem to be getting an error when installing:
error: failed to commit transaction (conflicting files)
nsnake: /usr/local/share/man exists in filesystem
Errors occurred, no packages were upgraded.
alexdantas commented on 2014-07-29 04:57
Updated to v3.0.0, nice new features!
Also, dropped dependency on `yaml-cpp`
alexdantas commented on 2014-07-14 22:41
Updated to v2.0.8, quite a lame update
MazeChaZer commented on 2014-06-01 14:34
The compliation issue has resolved itself with the latest update.
Thanks for maintaining the package!
alexdantas commented on 2014-05-31 01:08
Hello, guys! I'm nsnake's author and started maintaining this package.
Updated to the latest version - 2.0.5
MazeChaZer commented on 2014-05-16 08:18
I just checked the most recent source tarball (version 2.0.0) from Sourceforge and it compiles flawlessly. Perhaps you might consider using this instead of the git repository? The release seems pretty up-to-date (it's from march this year).
MazeChaZer commented on 2014-05-16 08:09
Doesn't compile here:
[...]
# Compiling src/Flow/StateManager.cpp...
In file included from src/Flow/StateManager.cpp:7:0:
src/Config/INI.hpp:7:38: fatal error: iniparser.h: No such file or directory
#include <iniparser.h> // local files
^
compilation terminated.
Makefile:138: recipe for target 'src/Flow/StateManager.o' failed
make: *** [src/Flow/StateManager.o] Error 1
make: *** Waiting for unfinished jobs....
Seninha commented on 2014-03-04 00:31
Done! Thank you, godofgrunts!
I've recently moved to Gentoo full-time, so I'm disowning all my packages. Farewell! | https://aur.archlinux.org/packages/nsnake/ | CC-MAIN-2018-17 | refinedweb | 330 | 53.17 |
Every once in a while you might want to know what fields, properties, or events a certain type of object contains at runtime. A common use for this information is serialization. .NET contains lots of different serialization techniques, like binary and XML, but sometimes you just have to roll your own. This tutorial is going to demonstrate how to get a list of public fields, properties, and events from objects at runtime.
First, let's create a simple object that contains some fields, properties, and events. I'll be using this object throughout the rest of the tutorial.
public class MyObject { //public fields public string myStringField; public int myIntField; public MyObject myObjectField; //public properties public string MyStringProperty { get; set; } public int MyIntProperty { get; set; } public MyObject MyObjectProperty { get; set; } //public events public event EventHandler MyEvent1; public event EventHandler MyEvent2; }
The .NET class that gives us access to all of this is the Type class. To get a Type object, we simply use the typeof keyword:
Type myObjectType = typeof(MyObject);
To get a list of public fields in an object, we'll use Type's GetFields method:
Type myObjectType = typeof(MyObject); System.Reflection.FieldInfo[] fieldInfo = myObjectType.GetFields(); foreach (System.Reflection.FieldInfo info in fieldInfo) Console.WriteLine(info.Name); // Output: // myStringField // myIntField // myObjectField
An important thing to note here is that the fields are not guaranteed to come out in any particular order. If you use GetFields, you should never depend on the order being consistent. The FieldInfo class that gets returned actually contains a lot of useful information. It also contains the ability to set that field on an instance of MyObject - that's where the real power comes in.
MyObject myObjectInstance = new MyObject(); foreach (System.Reflection.FieldInfo info in fieldInfo) { switch (info.Name) { case "myStringField": info.SetValue(myObjectInstance, "string value"); break; case "myIntField": info.SetValue(myObjectInstance, 42); break; case "myObjectField": info.SetValue(myObjectInstance, myObjectInstance); break; } } //read back the field information foreach (System.Reflection.FieldInfo info in fieldInfo) { Console.WriteLine(info.Name + ": " + info.GetValue(myObjectInstance).ToString()); } // Output: // myStringField: string value // myIntField: 42 // myObjectField: MyObject
Combining this ability with the ability to create custom attributes provides a framework on which almost any serialization technique can be built.
Properties and events are retrieved almost identically to fields:
Type myObjectType = typeof(MyObject); //Get public properties System.Reflection.PropertyInfo[] propertyInfo = myObjectType.GetProperties(); foreach (System.Reflection.PropertyInfo info in propertyInfo) Console.WriteLine(info.Name); // Output: // MyStringProperty // MyIntProperty // MyObjectProperty //Get events System.Reflection.EventInfo[] eventInfo = myObjectType.GetEvents(); foreach (System.Reflection.EventInfo info in eventInfo) Console.WriteLine(info.Name); // Output: // MyEvent1 // MyEvent2
The PropertyInfo class is very similar to the FieldInfo class and also contains the ability to set the value of the property on an instance. It also gives you the ability to individually receive the get and set accessors as MethodInfo classes through the GetAccessors method.
An EventInfo object gives you lots of information about the event and the ability to add events to instances of MyObject.
I think that about does it. Hopefully this helps anyone out there wanting to get information about objects at runtime.
Thanks, this is the first, clear tutorial about reflection. Will solve some of my problems I had.
What a helpful and beautifully written page. No bullshit, no crappy code, just what I was looking for.
Thank you so much.
Can you get the name of the origional object through reflection as well? In you example you are getting information about members of the object you've been passed. I need to get the origional name of the object instance. In one of your first example this would be "myObjectInstance". Is this accessible at all?
The JIT doesn't really keep variable names around at runtime, so I don't believe it's possible.
How to find the list of classes available in .cs file using reflection......
Thanks for the explanation. its so clear
super tutorial....keep it up....
THANKSSSS!!!
is it possible to get the property of the property myObject?
great... simple and effective
how can we get summary information too?
Thanks!!!
nice example and well explained..thanks
This is just too beautiful. Thanks pal (sniff)
"the fields are not guaranteed to come out in any particular order"
Is there a way to get the correct order of the objects?
Since the order doesn't matter at compile-time or run-time, there is no concept of 'correct order'. If you mean the order in which they appear in your source code, then no, there's no way to do that.
I agree. Finally a Reflection article thats totally understandable and accesssible. Thank you! Very well written, and has given me enough knowledge to "take it from here". (after quite alot of googling)
Excellent Article. If it is hard to get the order of the field/property as it is declared in source code using GetFields or GetProperties, I got a question, how DataGridView class gets these field order details from source code, when binding the list of object to it.
thank you so much, this is a very helpful and well explained topic.
What is the name of the object??
Suppose i am writing Sample s = new Sample(), i am creating an object of sample class but i want to know what is the name of this object.
In your example, what is the name? If it's "s" then reflection cannot be used to retrieve that. What you name your variables has no impact on the definition or description of an object.
Thanks a lot
Very nice and absolutely and no blurred spots and clear to understand. | http://tech.pro/tutorial/841/csharp-tutorial-using-reflection-to-get-object-information | CC-MAIN-2013-20 | refinedweb | 930 | 50.63 |
Keyboard script isn’t saving/writing files.
- AceNinjaFire
ive just started getting how to work the ui and keyboard modules. And just recently figured out how to save rgb values to a json file and have them load automatically into my ui.View subclass and apply to my view and subview backgrounds. I originally started this in Pythonista app itself on some of my pyui apps. And I was looking to do the same thing in my keyboard scripts, however, as far as I’ve seen it has no problem opening files but it just never wants to save/write to files at all. I originally was trying to create a clipboard ui that could save multiple things and display them for use. But no matter how hard I’ve tried it just doesn’t want to work saving files. I’ve made a ui to set and save the color for my backgrounds from the keyboard, but again it seems to not want to save the information to the file. Anyone have any ideas? Is it even possible? I’ll leave my ui subclass and the contents of my json file below for everyone to critique as they will.
import keyboard import ui import json as j def open_settings_file(): with open('settings.json','r') as f: settings = j.load(f) f.close() return settings def save_settings(settings): with open('settings.json','w', encoding = 'UTF-8') as f: j.dump(settings,f) f.close() class main(ui.View): def __init__(self): self.settings = open_settings_file() def did_load(self): #gets subview list from pyui file subviews = self.subviews settings = open_settings_file() #Gets the background and ui sections #from settings.json file parts = list(settings.keys()) #for testing to see if the program gets this far print(parts) #gets the background and ui colors and applys them for i in parts: r,g,b = settings[i]['Color'].values() if i == 'Background': self.background_color = (r,g,b) elif i == 'Ui': for j in subviews: j.background_color = (r,g,b) def selection(self): #this is for the buttons to open #the multiple seperate ui files #from the keyboard name = self.name v = ui.load_view(name) if name in ['Html','color','border','other','test']: v.present() if name == 'color': v = self.superview v['scrollview1'].scroll_enabled = True else: keyboard.set_view(v) def colors(self): #getting superview of the buttons/segment #controll/sliders of my color settings #keyboard ui page v = self.superview #getting subviews of the ui page subviews = v.subviews #getting the segment strings and getting #selected index in order to change color #of only the background or ui without #affecting the other choices = v['Choice'].segments index = v['Choice'].selected_index choice = choices[index] #getting the rgb values from the #individual rgb sliders R = v['R'].value G = v['G'].value B = v['B'].value #combining them to make a tuple color = (R,G,B) if choice == 'Background': v.background_color = color elif choice == 'Ui': for i in subviews: i.background_color = color def set(self): v = self.superview settings = open_settings_file() choices = v['Choice'].segments index = v['Choice'].selected_index choice = 
choices[index] R = v['R'].value G = v['G'].value B = v['B'].value color = [R,G,B] colors = 'RGB' for l in range(len(colors)): settings[choice]['Color'][colors[l]] = color[l] save_settings(settings) self.title = 'Done' v = ui.load_view('main') if keyboard.is_keyboard(): keyboard.set_view(v) else: v.present()
{"Background": {"Color": {"R": 1, "G": 1, "B": 1}}, "Ui": {"Border": {"Width": 0, "Radius": 0}, "Color": {"R": 1, "G": 1, "B": 1}}}
@AceNinjaFire, just to be sure, can you share the code for
save_settingsas well?
Please make sure that you have Full Access enabled for the Pythonista keyboard in the Settings app. Otherwise, there is no way for the keyboard and the app to share any files (because that could theoretically be abused to transfer input data over the network, so you basically need to "trust" the keyboard to enable these things).
- AceNinjaFire
- AceNinjaFire
Btw thank you for pythonista, it was the one thing that actually got me started with programming and made me realize how much I love it. It’s been my hobby for the last 5 months overtaking my love for video games😂 | https://forum.omz-software.com/topic/6225/keyboard-script-isn-t-saving-writing-files | CC-MAIN-2021-17 | refinedweb | 697 | 60.21 |
Overview
Nanojit is a small, cross-platform C++ library that emits machine code. Both the Tamarin JIT and the SpiderMonkey JIT (a.k.a. TraceMonkey) use Nanojit as their back end.
You can get Nanojit by cloning the
tamarin-redux Mercurial repository at. It's in the
nanojit directory.
The input for Nanojit is a stream of Nanojit LIR instructions. The term LIR is compiler jargon for a language used internally in a compiler that is usually cross-platform but very close to machine language. It is an acronym for "low-level intermediate representation". A compiler's LIR is typically one of several partly-compiled representations of a program that a compiler produces on the way from raw source code to machine code.
An application using Nanojit creates a
nanojit::LirBuffer object to hold LIR instructions. It creates a
nanojit::LirBufWriter object to write instructions to the buffer. Then it wraps the
LirBufWriter in zero or more other
LirWriter objects, all of which implement the same interface as
LirBufWriter. This chain of
LirWriter objects forms a pipeline for the instructions to pass through. Each
LirWriter can perform an optimization or other task on the program as it passes through the system and into the
LirBuffer.
Once the instructions are in the
LirBuffer, the application calls
nanojit::compile() to produce machine code, which is stored in a
nanojit::Fragment. Internally to Nanojit, another set of filters operates on the LIR as it passes from the
LirBuffer toward the assembler. The result of compilation is a function that the application can call from C via a pointer to the first instruction.
Example
The following code works with SpiderMonkey's hacked version of Nanojit. Figuring out how to compile it is left as an exercise for the reader; the following works when run in the object directory of an
--enable-debug SpiderMonkey shell:
g++ -DDEBUG -g3 -Wno-invalid-offsetof -fno-rtti -include js-confdefs.h -I dist/include/ -I.. -I ../nanojit -o jittest ../jittest.cpp libjs_static.a
-DDEBUGif you have not compiled SpiderMonkey with
--enable-debug, and use whatever you called the sample source file in place of
jittest.cpp.
#include <stdio.h> #include <stdint.h> #include "jsapi.h" #include "jstracer.h" #include "nanojit.h" using namespace nanojit; const uint32_t CACHE_SIZE_LOG2 = 20; static avmplus::GC gc = avmplus::GC(); static avmplus::AvmCore core = avmplus::AvmCore(); int main() { LogControl lc; #ifdef DEBUG lc.lcbits = LC_ReadLIR | LC_Assembly; #else lc.lcbits = 0; #endif // Set up the basic Nanojit objects. Allocator *alloc = new VMAllocator(); CodeAlloc *codeAlloc = new CodeAlloc(); Assembler *assm = new (&gc) Assembler(*codeAlloc, *alloc, &core, &lc); Fragmento *fragmento = new (&gc) Fragmento(&core, &lc, CACHE_SIZE_LOG2, codeAlloc); LirBuffer *buf = new (*alloc) LirBuffer(*alloc); #ifdef DEBUG fragmento->labels = new (*alloc) LabelMap(*alloc, &lc); buf->names = new (*alloc) LirNameMap(*alloc, fragmento->labels); #endif // Create a Fragment to hold some native code. Fragment *f = fragmento->getAnchor((void *)0xdeadbeef); f->lirbuf = buf; f->root = f; // Create a LIR writer LirBufWriter out(buf); //); // Emit a LIR_loop instruction. It won't be reached, but there's // an assertion in Nanojit that trips if a fragment doesn't end with // a guard (a bug in Nanojit). LIns *rec_ins = out.insSkip(sizeof(GuardRecord) + sizeof(SideExit)); GuardRecord *guard = (GuardRecord *) rec_ins->payload(); memset(guard, 0, sizeof(*guard)); SideExit *exit = (SideExit *)(guard + 1); guard->exit = exit; guard->exit->target = f; f->lastIns = out.insGuard(LIR_loop, out.insImm(1), rec_ins); //; }
Code Explanation
Interesting part are the lines 46-50:
//);:
//;</addtwofn>
This upper half of this snippet includes code where the raw LIR is first converted into machine code.(where compile(fragmento->assm(), f); is called basically).
Then a pointer to a function is used, which takes an int as input and returns the sum of that parameter with two. (typedef JS_FASTCALL int32_t (*AddTwoFn)(int32_t); )
Then, printf is hardcoded to call it with a parameter 5, and on linking with nanojit library, the following program will display
2+5=7
Now, what I need to do is generate output for this:
start two = int 2 twoPlusTwo = add two, two ret twoPlusTwo
This adds two and two in the most hardcoded way possible. The conversion from LIR to a program like one shown above is the task of the parser.
Guards
Guards are special LIR instructions, similar to conditional branches, with the difference that when they are called, instead of going to a particular address, they leave the JIT code entirely, and stop the trace.
Need
Guards are required in a cross platform dynamic language like JavaScript. Certain assumptions are made when a particular JIT code is generated.
For example, in an instruction INR x, a guard would check that x doesn't overflow the range for a 32 bit integer. The JIT code would have a guard checking this condition(an xt guard), and would return to the interpreter if the condition turns out to be true. The interpreter is then equipped to handle the overflow.
Hence, guards are needed to prevent certain erroneous behaviour that might result from the assumptions that are generally made while JIT is generated.
TODO: Explain guards, guard records,
VMSideExit,
Fragmento,
VerboseWriter::formatGuard... | https://developer.mozilla.org/en-US/docs/Archive/Mozilla/Nanojit | CC-MAIN-2019-09 | refinedweb | 846 | 54.83 |
In message: 15963.57541.244513.169979@montanaro.dyndns.org Skip Montanaro skip@pobox.com writes:
It's probably somewhat invalid to equate number of system calls with application runtime. I redumped my last ktrace file just now with timestamps. Here are some computed intervals:
interval time
start -> open hammiefilter.pyc 0.071 open hammiefilter.pyc -> open hammie.db 0.516 open hammie.db -> close hammie.db 0.084 close hammie.db -> program end 0.011
This is good info. Can you add in the time intervals between loading each of the modules? That might point out which modules are actually expensive (or if it's none in particular).
- Alex
>> interval time >> -------- ---- >> start -> open hammiefilter.pyc 0.071 >> open hammiefilter.pyc -> open hammie.db 0.516 >> open hammie.db -> close hammie.db 0.084 >> close hammie.db -> program end 0.011
Alex> This is good info. Can you add in the time intervals between Alex> loading each of the modules? That might point out which modules Alex> are actually expensive (or if it's none in particular).
That would be a bit tedious to do manually for the dozens of modules which are loaded. I'll see what I can come up with though.
Skip
Alex> This is good info. Can you add in the time intervals between Alex> loading each of the modules? That might point out which modules Alex> are actually expensive (or if it's none in particular).
Okay, here's a bit more information. I instrumented hammiefilter.py with code like
marker = 0 import os file("os%d"%marker,"w"); os.unlink("os%d"%marker); marker+=1 import sys file("sys%d"%marker,"w"); os.unlink("sys%d"%marker); marker+=1 import getopt file("getopt%d"%marker,"w"); os.unlink("getopt%d"%marker); marker+=1 from spambayes import hammie, Options, mboxutils file("hammie%d"%marker,"w"); os.unlink("hammie%d"%marker); marker+=1
then scored a single message under ktrace control and dumped the ktrace data with timestamps. (This could just have easily have been done with time.clock() or time.time() calls, but after awhile of staring at ktrace results, this seemed just as easy.)
The instrumentation gave me a larger number of smaller intervals with these meanings:
interval time start through first import (os) 0.166 import sys < 0.001 import getopt 0.055 import hammie, Options, mboxutils 0.660 (!!!) to start of HammieFilter class defn < 0.001 to start of main() < 0.001 create HammieFilter instance 0.005 parse cmd line options < 0.001 get msg from stdin 0.006 score msg 0.224 write scored msg to stdout 0.002
Focusing on the hammie-related imports, I split that import into three lines, reinstrumented and ran it again. Those individual imports then expanded to
import hammie 0.340 import Options < 0.001 import mboxutils < 0.001
(As you can see, the times are only relative (large vs small) and don't seem to be all that reproducible across individual runs.)
One more marker insertion pass, this time in hammie.py, yielded these intervals from that file:
import mboxutils 0.215 import storage 0.072 import options < 0.001 import tokenize 0.052 define Hammie class < 0.001 define open function < 0.001
It appears something in the mboxutils import is the culprit. I'm about to go home for the day though, so I'll let others pick up from there.
Skip
Note that spambayes/mboxutils.py imports email.Message, which effectively imports the entire email package. That's a lot of code (one file per class).
--Guido van Rossum (home page:) | https://mail.python.org/archives/list/python-dev@python.org/thread/3LQYWZCIAGDUCXD6Z6KE635JUHD32VRP/ | CC-MAIN-2021-43 | refinedweb | 599 | 72.63 |
Details
- Type:
New Feature
- Status: Open
- Priority:
Minor
- Resolution: Unresolved
- Affects Version/s: 1.2.9, 1.2.10
- Fix Version/s: 1.2 Maintenance Release
-
- Labels:None
- Environment:From sourceforege - 775175 - Daniel Cazzulino (kzu) - dcazzulino
Description
Activity
Mark, I looked at your patch and started writing some test cases and realized that you added support for storing assembly information about the repository. I think what Daniel and Nicko were talking about was wanting to add assembly information from where the log event was generated. For example:
log.Info("Hello World");
What version of the Company.BusinessLayer assembly did that come from?
Company.BusinessLayer, Version=1.2.3.4, Culture=neutral, PublicKeyToken=null
Company.BusinessLayer, Version=1.2.3.5, Culture=neutral, PublicKeyToken=null
Daniel's suggestion about "a single static call per-class using logging" seems to hint that when the first logging event is generated from a class, we capture information from the calling assembly. In other words, I think they're suggesting we add an Assembly property to one of the log classes (i.e. LogImpl) instead of (or in addition to) ILoggerRepository. If that's the case, we'd need a way to differentiate whether the %asm pattern means assembly information about the repository or assembly information about the calling code.
Does that make sense?
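To make the distinction concrete, the calling-code information in question can be read with plain .NET reflection. This is only a sketch; Company.BusinessLayer and class X are hypothetical names standing in for the examples above:

```csharp
using System;
using System.Reflection;

namespace Company.BusinessLayer
{
    public class X
    {
        public static void Main()
        {
            // The assembly that class X was compiled into: this is what a
            // caller-side %asm would need to report.
            Assembly asm = typeof(X).Assembly;
            Console.WriteLine(asm.FullName);
            // e.g. Company.BusinessLayer, Version=1.2.3.4, Culture=neutral, PublicKeyToken=null

            // Or just the version component:
            Console.WriteLine(asm.GetName().Version);
        }
    }
}
```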
The attached DLL file was built with the latest patch file I've uploaded applied.
Normally, to apply a patch file you just check out the SVN revision the patch specifies, and use svn's apply-patch mechanism. This applies the modified/patched code. You can then build the project to get the benefits of the patch.
-Mark
Hi,
thx for the enhancement on log4net. I really need this. Can someone explain to me how I can install this patch file, please?
When will the version 1.2.11 be released?
thx in advance.
cheers
Johannes
Should be applied to Revision 675697
Implements all attributes described in Ron's comment, except with the syntax
%asm{Version}
%asm{Title}
%asm{Trademark}
%asm{Description}
We should probably expose all the Assembly attributes:
%v{Description}
%v{Title}
%v{Version}
%v{Trademark}
...
Note: the patch was produced against revision 489241
Here is a potential solution for adding the assembly version to the output. I also included the assembly description attribute, as in our build process we stamp every assembly with the svn revision as part of its version field, and the svn url for its description field. Other assembly patterns useful to people could easily be added.
%asm-ver is the pattern for version
%asm-desc is the pattern for description
The only reason I didn't use %v for the version attribute is because I also added the description attribute.
There seems to be low demand for this feature. Changing the priority to Minor.
The one LogManager.GetLogger per class pattern is only a suggestion, a rather strong suggestion, but there are certainly a number of projects that don't follow this pattern.
Ron,
You are right – I added the assembly information about the logging repository. I think adding the assembly information about the calling code is the goal, but may not be simple to implement in an efficient way.
The reason I implemented the assembly information about the repository is to support the following use case (which I think covers most of the uses of this type of feature):
public class X
{
    private static readonly ILog logger = LogManager.GetLogger(typeof(X));
    ...
}
Whenever the logger in the example above calls a log method, the assembly information I store in the logger repository is the same as the assembly for class X, so whenever the logger outputs log messages the proper assembly information is displayed. The information does not need to be dynamically found for every log call; it is set up in the LogManager.GetLogger(typeof(X)) method, only the first time.
The area where this will not work is if users do not use the pattern of one logger per class or use named loggers. However, I think enough people use the one logger per class mechanism that it might warrant having two approaches, especially since this way is probably going to be much faster than dynamically determining the calling assembly for every logging statement.
Thoughts?
-Mark
CodePlex - Project Hosting for Open Source Software
Looking for upgrade issues when I upgrade from 1.3 to latest version 2.1. Does anyone have some info on that?
Before we released 2.0 as open source we used the opportunity to refactor our namespaces, so there are breaking changes in 2.0 compared to 1.3. If you have written custom assemblies/app_code you need to update your code. Typically the fix is to update "using" statements from the old namespace to the new one. Version 1.3 was released before Composite C1 became open source, so you should be able to get help for this directly from Composite: write the help desk (support at composite.net).
Control Group v2¶
This is the authoritative documentation on the design, interface and conventions of cgroup v2. It describes all userland-visible aspects of cgroup including core and specific controller behaviors. All future changes must be reflected in this document. Documentation for v1 is available under Control Groups version 1.
Introduction¶
Terminology¶
“cgroup” stands for “control group” and is never capitalized. The singular form is used to designate the whole feature and also as a qualifier as in “cgroup controllers”. When explicitly referring to multiple individual control groups, the plural form “cgroups” is used.
What is cgroup?¶.
Basic Operations¶
Mounting¶.
cgroup v2 currently supports the following mount options.
nsdelegateConsiderOnly.
memory_recursiveprotRec ‘bypass’ protection values at higher tree levels).
Organizing Processes and Threads¶
Processes¶)
Threads¶.¶.
Controlling Controllers¶
Enabling and Disabling¶.
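An illustrative root-shell session (a sketch under stated assumptions: it presumes a cgroup2 mount at /sys/fs/cgroup and root privileges, and the "demo" cgroup name is made up) shows how controllers are enabled for a subtree via cgroup.subtree_control:

```shell
# Illustrative admin session -- requires root and a cgroup2 mount.
# Guarded so it is a harmless no-op elsewhere.
CG=/sys/fs/cgroup
if [ -w "$CG/cgroup.subtree_control" ]; then
    cat "$CG/cgroup.controllers"                        # controllers available here
    echo "+cpu +memory" > "$CG/cgroup.subtree_control"  # enable them for children
    mkdir -p "$CG/demo"                                 # create a child cgroup
    cat "$CG/demo/cgroup.controllers"                   # children now see cpu, memory
    echo "-memory" > "$CG/cgroup.subtree_control"       # '-' disables again
fi
```

'+' enables a controller for the children and '-' disables it, matching the cgroup.subtree_control semantics described later in this document.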
Top-down Constraint¶.
No Internal Process Constraint¶¶
Model of Delegation¶¶.
-.
For delegations to namespaces, containment is achieved by requiring that both the source and destination cgroups are reachable from the namespace of the process which is attempting the migration. If either is not reachable, the migration is rejected with -ENOENT.
Guidelines¶
Organize Once and Control¶.
Avoid Name Collisions¶ ‘_’s but never begins with an ‘_’.
Resource Distribution Models¶
cgroup controllers implement several resource distribution schemes depending on the resource type and expected use cases. This section describes major schemes in use along with their expected behaviors.
Weights¶.
Limits¶.
Protections¶
A cgroup is protected up to the configured amount of the resource as long as the usages of all its ancestors are under their protected levels.
Allocations¶.
Interface Files¶
Format¶.
Conventions¶
Settings for a single feature should be contained in a single file.
The root cgroup should be exempt from resource control and thus shouldn’t have resource control interface files.
The default time unit is microseconds. If a different unit is ever used, an explicit unit suffix must be present.
A parts-per quantity should use a percentage decimal with at least two digit fractional part - e.g. 13.40.
Core Interface Files¶.
- ‘+’ or ‘-‘ can be written to enable or disable controllers. A controller name prefixed with ‘+’ enables the controller and ‘-‘.
-¶
CPU¶¶
All time durations are in microseconds.
- cpu.stat
-
A read-only flat-keyed file.
- cpu.pressure
-
A read-write nested-keyed file.
Shows pressure stall information for CPU. See PSI - Pressure Stall Information.
Memory¶.
Memory Interface Files¶ there is no reclaimable memory available..
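As an illustrative sketch (assumptions: a child cgroup named "demo" already exists, you are root, and the 100M/80M values are arbitrary), the memory interface files are read and written like ordinary files:

```shell
# Illustrative -- guarded so it is a no-op without root/cgroup2.
CG=/sys/fs/cgroup/demo
if [ -w "$CG/memory.max" ]; then
    echo 100M > "$CG/memory.max"    # hard limit: reclaim, then OOM beyond this
    echo  80M > "$CG/memory.high"   # throttle threshold below the hard limit
    cat "$CG/memory.current"        # bytes currently charged to this cgroup
    cat "$CG/memory.events"         # low/high/max/oom/oom_kill counters
fi
```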
In default configuration regular 0-order allocations always succeed unless OOM killer chooses current task as a victim.
Some kinds of allocations don’t invoke the OOM killer. Caller could retry them differently, return into userspace as -ENOMEM or silently ignore in cases like disk readahead..
This event is not raised if the OOM killer is not considered as an option, e.g. for failed high-order allocations or if caller asked to not retry attempts.
-.
-!
If an entry has no per-node counter (and so does not show in memory.numa_stat), we use the 'npn' (non-per-node) tag to indicate that it will not appear in memory.numa_stat.
- anon
- Amount of memory used in anonymous mappings such as brk(), sbrk(), and mmap(MAP_ANONYMOUS)
- file
- Amount of memory used to cache filesystem data, including tmpfs and shared memory.
- kernel_stack
- Amount of memory allocated to kernel stacks.
- pagetables
- Amount of memory allocated for page tables.
- percpu(npn)
- Amount of memory used for storing per-cpu kernel data structures.
- sock(npn)
-
- anon_thp
- Amount of memory used in anonymous mappings backed by transparent hugepages
- file_thp
- Amount of cached filesystem data backed by transparent hugepages
- shmem_thp
- Amount of shm, tmpfs, shared anonymous mmap()s backed by transparent hugepages
- inactive_anon, active_anon, inactive_file, active_file, unevictable
-
Amount of memory, swap-backed and filesystem-backed, on the internal memory management lists used by the page reclaim algorithm.
As these represent internal list state (eg. shmem pages are on anon memory management lists), inactive_foo + active_foo may not be equal to the value for the foo counter, since the foo counter is type-based, not list-based.
- slab_reclaimable
- Part of “slab” that might be reclaimed, such as dentries and inodes.
- slab_unreclaimable
- Part of “slab” that cannot be reclaimed on memory pressure.
- slab(npn)
- Amount of memory used for storing in-kernel data structures.
- workingset_refault_anon
- Number of refaults of previously evicted anonymous pages.
- workingset_refault_file
- Number of refaults of previously evicted file pages.
- workingset_activate_anon
- Number of refaulted anonymous pages that were immediately activated.
- workingset_activate_file
- Number of refaulted file pages that were immediately activated.
- workingset_restore_anon
- Number of restored anonymous pages which have been detected as an active workingset before they got reclaimed.
- workingset_restore_file
- Number of restored file pages which have been detected as an active workingset before they got reclaimed.
- workingset_nodereclaim
- Number of times a shadow node has been reclaimed
- pgfault(npn)
- Total number of page faults incurred
- pgmajfault(npn)
- Number of major page faults incurred
- pgrefill(npn)
- Amount of scanned pages (in an active LRU list)
- pgscan(npn)
- Amount of scanned pages (in an inactive LRU list)
- pgsteal(npn)
- Amount of reclaimed pages
- pgactivate(npn)
- Amount of pages moved to the active LRU list
- pgdeactivate(npn)
- Amount of pages moved to the inactive LRU list
- pglazyfree(npn)
- Amount of pages postponed to be freed under memory pressure
- pglazyfreed(npn)
- Amount of reclaimed lazyfree pages
- thp_fault_alloc(npn)
- Number of transparent hugepages which were allocated to satisfy a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
- thp_collapse_alloc(npn)
- Number of transparent hugepages which were allocated to allow collapsing an existing range of pages. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
- memory.numa_stat
-
A read-only nested-keyed file which exists on non-root cgroups.
The entries can refer to the memory.stat.
- memory.swap.current
-
A read-only single value file which exists on non-root cgroups.
The total amount of swap currently being used by the cgroup and its descendants.
-.
Healthy workloads are not expected to reach this limit.
-.
- high
- The number of times the cgroup’s swap usage was over the high threshold.
- memory.pressure
-
A read-write nested-keyed file.
Shows pressure stall information for memory. See PSI - Pressure Stall Information for details.
Usage Guidelines¶
.
Memory Ownership¶.
IO¶.
IO Interface Files¶
- io.stat
-
A read-only nested-keyed file.
Lines are keyed by $MAJ:$MIN device numbers and not ordered. The following nested keys are defined..
When “ctrl” is “auto”, the kernel may change all parameters dynamically. When “ctrl” is set to “user” or any other parameters are written to, “ctrl” become “user” and the automatic changes are disabled.
When “model” is “linear”, the following model parameters are defined.
- io.pressure
-
A read-write nested-keyed file.
Shows pressure stall information for IO. See PSI - Pressure Stall Information for details.
Writeback¶, btrfs, f2fs, and.
IO Latency¶¶¶
-¶.
PID Interface Files¶
-.
Cpuset¶¶
- all all.
- cpuset.cpus.partition
-
A read-write single value file which exists on non-root cpuset-enabled cgroups. This flag is owned by the parent cgroup and is not delegatable.
It accepts only the following input values when written to.
“root” - a partition.
- The “cpuset.cpus” is not empty and the list of CPUs are exclusive, i.e. they are not shared by any of its siblings.
- The parent cgroup is a partition root.
- The “cpuset.cpus” is also a proper subset of the parent’s “cpuset.cpus.effective”.
-¶¶
The “rdma” controller regulates the distribution and accounting of RDMA resources.
RDMA Interface Files¶
-
HugeTLB¶
The HugeTLB controller allows limiting HugeTLB usage per control group and enforces the limit during page fault.
HugeTLB Interface Files¶
- hugetlb.<hugepagesize>.current
- Shows current usage for "hugepagesize" hugetlb. It exists for all cgroups except the root.
- hugetlb.<hugepagesize>.max
- Set/show the hard limit of "hugepagesize" hugetlb usage. The default value is "max". It exists for all cgroups except the root.
- hugetlb.<hugepagesize>.events
-
A read-only flat-keyed file which exists on non-root cgroups.
- max
- The number of allocation failure due to HugeTLB limit
- hugetlb.<hugepagesize>.events.local
- Similar to hugetlb.<hugepagesize>.events but the fields in the file are local to the cgroup i.e. not hierarchical. The file modified event generated on this file reflects only the local events.
Misc¶
Non-normative information¶
This section contains information that isn’t considered to be a part of the stable kernel API and so is subject to change.
CPU controller root cgroup process behaviour¶).
Namespace¶
Basics¶ ‘.
The Root and Views¶
The ‘c (‘/’) ‘/batchjobs/container_id2’, then it will see:
# cat /proc/7353/cgroup 0::/../container_id2/sub_cgrp_1
Note that the relative path always starts with '/' to indicate that it is relative to the cgroup namespace root of the caller.
Migration and setns(2)¶:
- the process has CAP_SYS_ADMIN against its current user namespace
-¶.
Information on Kernel Programming¶
This section contains kernel programming information in the areas where interacting with cgroup is necessary. cgroup core and controllers are not covered.
Filesystem Support for Writeback¶ and the corresponding request queue. This must be called after a queue (device) has been associated with the bio and before submission.
- wbc_account_cgroup_owner() and using bio_associate_blkg() directly.
Deprecated v1 Core Features¶
-¶
Multiple Hierarchies¶.
Thread Granularity¶.
Competition Between Inner Nodes and Threads¶.
Other Interface Issues¶.
Controller Issues and Remedies¶
Memory¶’s within its effective low, which makes delegation of subtrees possible. It also enjoys having reclaim pressure proportional to its overage when above its effective low.. | https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v2.html | CC-MAIN-2022-05 | refinedweb | 1,519 | 50.84 |
Wednesday Java Quiz
(Mark Volkmann suggested this quiz.)
Q: Will the following Java source compile and run without exceptions?
import java.util.SortedMap;
import java.util.TreeMap;

public class Main {
    public static void main(String[] args) {
        SortedMap sm = new TreeMap();
        sm.put("one", 1);
        sm.put(2, "two");
        System.out.println("value for one is " + sm.get("one"));
        System.out.println("value for 2 is " + sm.get(2));
    }
}
Strict rules: No actually running the Java compiler.
Re: Wednesday Java Quiz
That's correct! But is that what it should do ... if you were in charge of designing the behavior? It seems to me that it should work. It can't compare the Integer 2 to the String "one", so maybe it should catch the ClassCastException, take that to mean it didn't match any existing key, and add it to the map. Why would this be a bad thing?
Re: Wednesday Java Quiz
There are no type parameters on the TreeMap, so you've indicated you don't care about the type safety of the members. Trying to compare an Integer and a String is clearly wrong in the vast majority of cases, so I believe it is appropriate to force the user to explicitly define their own ordering of an Integer and a String for this to work. An alternative would be to declare SortedMap<K extends Comparable, V> instead of SortedMap<K,V>, but that would break the ability to provide a Comparator for classes which don't implement Comparable.
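The runtime behaviour under discussion is easy to demonstrate. A small self-checking variant of the quiz (class and method names are my own, not from the original post): the raw-typed put compiles with only an unchecked warning, but TreeMap compares the new key 2 against the existing key "one" via compareTo and throws ClassCastException at run time. With generics, the bad put would not compile at all.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class TreeMapQuiz {
    // Returns true if putting a mixed-type key into a raw TreeMap throws.
    static boolean mixedKeyThrows() {
        SortedMap sm = new TreeMap();      // raw types, as in the quiz
        sm.put("one", 1);                  // fine: first key establishes the ordering
        try {
            sm.put(2, "two");              // TreeMap calls Integer.compareTo("one")
            return false;
        } catch (ClassCastException e) {
            return true;                   // Integer cannot be compared with String
        }
    }

    public static void main(String[] args) {
        // With generics -- e.g. SortedMap<String, Integer> -- the bad put
        // would be rejected at compile time instead of failing at run time.
        System.out.println("throws ClassCastException: " + mixedKeyThrows());
    }
}
```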
Papa's Perspective
JavaScript can be unwieldy. But using techniques like separation and the Revealing Module Pattern can get it under control.
Welcome to the first article of my new column, Papa's Perspective -- and thanks for reading it!
As this is the first of the series (and because my editor ordered me to introduce myself), I'll spend a quick moment sharing a little about me and my vision for this column. Some of you may know me through my books, articles, watching Silverlight TV on Channel 9, speaking at conferences or on Twitter (@john_papa). If you do, you know that I value good development practices, speaking in terms everyone can relate to and being casual.
JavaScript Behaving Badly?
JavaScript gets a bad rap sometimes for being a poor development language. Global variables that can easily be confused with the wrong scope, 3,000-line-long .js files, and cats and dogs living together can all contribute to the utter chaos that makes maintaining JavaScript difficult. But it doesn't have to be that way. I'll say it plainly: JavaScript can be productive and fun.
This month I'll discuss a common technique to separate structure (HTML), behavior (JavaScript) and presentation (CSS) with modern Web development. This type of separation is commonly accepted as a good practice with other technologies. I'll show a simple timer application and offer a few tips to make your JavaScript more manageable.
Separation
First let's take a look at the sample application (Figure 1), a simple timer with buttons for starting, pausing and resetting. The code for this article has a file named TimerAllInOneFile.html, which contains the HTML structure for the timer, the CSS that styles it to look like Figure 1 and the JavaScript to make it function. The file does a pretty good job of separation, but this is a simple example with only a few dozen lines of HTML, CSS and JavaScript.
The problem with this approach is that the code can easily become more difficult to manage. For example, scripts could be created in line with HTML structure or anywhere in the HTML file. In general I find it easier to maintain and debug code when I know the JavaScript is in one place separated from the HTML. Most browsers' debuggers support debugging JavaScript files, but they don't all support debugging script code in HTML files:
<div id="buttonSurface" style=
"width: 236px; margin: auto;">
<div id="startButton" class="Button ButtonStart"
onclick="startTimer();"></div>
<div id="pauseButton" class="Button ButtonPause"
onclick="pauseTimer();"></div>
<div id="clearButton" class="Button ButtonClear"
onclick="clearTimer();"></div>
</div>
You may notice that the buttonSurface DIV has an embedded style. This is an easy trap to fall into when all of your code is in one file. Having embedded styles, styles in the same page in a STYLE tag and linked styles can make it difficult to diagnose problems with presentation. Sometimes there are valid reasons to override styles, but as a general rule I recommend keeping the styles in one place (my preferred location is their own CSS file).
You can see the separation of the HTML, CSS and JavaScript in the three files: Timer.html, Styles.css and Timer.js.
I made two other separation changes as well. First, I removed the embedded style on buttonSurface and moved it to Styles.css, which is arguably more maintainable. I also removed the embedded event handlers and assigned them in code in the Timer.js file, as shown here:
function init(startButton, pauseButton, clearButton) {
document.getElementById(startButton).
addEventListener(
"click", startTimer, false);
document.getElementById(pauseButton).
addEventListener(
"click", pauseTimer, false);
document.getElementById(clearButton).
addEventListener(
"click", clearTimer, false);
displayTimer();
};
The Fight Against Global Variables
All JavaScript variables share a single global namespace. They're created whenever a variable is explicitly declared within the global scope or whenever a variable is implicitly declared. I'm no fan of implicit declaration of variables, because they make debugging and maintenance very difficult. This can make it easier to collide with same-named variables (your own or ones in other script file code) and to lose track of the lifetime.
Figure 2 shows the Internet Explorer 9 debugger displaying the functions and variables used by Timer.html and Timer.js, all of which are in the global namespace. It's conceivable that the HTML page might use another script file that might have a variable named init, seconds or any of the other members listed in Figure 2. There are two easy ways to avoid globals: declaring variables within your own namespace and enclosing your variables within a module.
Namespaces are easy to create and often used by popular libraries like jQuery (which uses $) and KnockoutJS (which uses ko). For my main application, I like to create a namespace called my, which can be done easily using the following code:
var my = my || {};
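As a quick sketch (the counter names here are hypothetical, not from the timer app), everything an application exposes can then hang off that single my object, so only one name lands in the global namespace:

```javascript
var my = my || {};                      // single global namespace

my.counter = (function () {
    var count = 0;                      // private: invisible outside the module

    function increment() { return ++count; }
    function current()  { return count; }

    return { increment: increment, current: current };
}());

my.counter.increment();
my.counter.increment();
console.log(my.counter.current());     // 2
```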
For simpler apps I don't always use namespaces. But I do use modules quite often, specifically the Revealing Module Pattern.
The Revealing Module Pattern
The Revealing Module Pattern helps encapsulate similar logic and avoid polluting the global namespace. The idea is simply to wrap all related logic in a module and only expose or reveal the parts of it that need to be accessible outside of the module. The basic way to use this pattern is to create a function that wraps all of the JavaScript logic, and return an object from this function that contains the functions that need to be accessible outside your module.
This pattern is quite easy to implement. Simply create a variable and set it to an empty function. Then put your JavaScript code inside that function, as shown here:
var TimerRMP = function () {
/* your JavaScript code goes here */
}();
As an example, take a look at the complete code in TimerRMP.js. The same JavaScript in Timer.js has been copied to TimerRMP.js, then used inside of the function that's set to the variable TimerRMP.
The following code shows the beginning of TimerRMP.js, which specifically demonstrates how the TimerRMP variable is declared. Notice TimerRMP is declared and set to be a function, and then all the code below it is pretty much the same:
var TimerRMP = (function (window) {
var
timerId = -1,
interval = 25,
ms = 0,
seconds = 0,
minutes = 0,
startTimer = function () {
if(timerId === -1) {
timerId = window.setInterval(
"TimerRMP.turnTimerOn()", interval);
}
},
The pattern is called Revealing because the module reveals only the functions returned from the object. Notice that the only functions exposed by the function are init and turnTimerOn.
This means that the only variable added to the global namespace is TimerRMP, and the only functions exposed are TimerRMP.init and TimerRMP.turnTimerOn. This is a dramatic reduction from all the global namespace pollution in Timer.js. Code outside of the module can't access the other variables and functions because they're now hidden inside the module's enclosure:
return {
init: init,
turnTimerOn: turnTimerOn
};
The TimerRMP variable is set immediately because the function is called immediately (self- instancing), as shown in the last line of the TimerRMP.js code:
} (window));
Notice that window is being passed into the function. This allows the window object to be explicitly set to a variable inside of the function and thus avoids any assumption that that variable exists. Some say this is unneeded, while others like how clean it is. Also, if you prefer to turn on the strict option for your JavaScript, you can add the "use strict" to the first line inside of your function.
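A compact sketch of that enclosure (the Widget name is hypothetical; globalThis stands in for window so the snippet also runs outside a browser):

```javascript
var Widget = (function (global) {
    "use strict";                        // strict mode, scoped to this module

    var name = "widget";                 // private state, never leaks globally

    function describe() {
        return name + " sees " + typeof global;
    }

    return { describe: describe };       // reveal only what callers need
}(globalThis));                          // pass the global object in explicitly

console.log(Widget.describe());         // "widget sees object"
```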
Benefits of Separation and the Revealing Module Pattern
I've found it valuable to use separation patterns, whether it be Silverlight with Model-View-ViewModel, ASP.NET with MVC or even for unit testing. I don't need a fancy acronym for JavaScript separation to know it's a good thing for maintainability, testing and debugging. Along with separation, I find that the Revealing Module Pattern is easy to follow, reduces variables in the global namespace and makes it simple to encapsulate logic in a way that makes sense. Plus, it keeps JavaScript clean. | https://visualstudiomagazine.com/articles/2011/09/01/pdpap_effective-javascript.aspx | CC-MAIN-2021-31 | refinedweb | 1,349 | 62.38 |
Angular Data Binding
Data binding is simply connecting data to its source. In Angular, data binding keeps the model and the view in sync. Binding can be two-way: when the data in the model changes, the view reflects the change, and if the value in the view changes, the model is updated.
Curly braces {{}} are used to bind a value into the template. This is called interpolation.
Let's look at app.component.ts, where the variable title is assigned the value "myfirstapp".
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'myfirstapp';
}
Now in app.component.html we use {{title}} to display the value in the browser.
We can see the title is displayed in the browser.
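To make the idea concrete, here is a hypothetical sketch of what interpolation does conceptually. This is NOT Angular's real implementation (Angular compiles templates and tracks changes far more efficiently); it only illustrates replacing {{prop}} markers with values from a model object:

```typescript
// Replace {{prop}} markers in a template with values read from the model.
function interpolate(template: string, model: Record<string, unknown>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_m, key) => String(model[key]));
}

const model = { title: "myfirstapp" };
console.log(interpolate("<h1>{{title}}</h1>", model)); // <h1>myfirstapp</h1>
```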
Debugging Spiders¶
This document explains the most common techniques for debugging spiders. Consider the following Scrapy spider:
import scrapy
from myproject.items import MyItem

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = (
        '',
        '',
    )

    def parse(self, response):
        # collect `item_urls`
        for item_url in item_urls:
            yield scrapy.Request(item_url, self.parse_item)

    def parse_item(self, response):
        item = MyItem()
        # populate `item` fields
        # and extract item_details_url
        yield scrapy.Request(item_details_url, self.parse_details,
                             meta={'item': item})

    def parse_details(self, response):
        item = response.meta['item']
        # populate more `item` fields
        return item
Basically this is a simple spider which parses two pages of items (the start_urls). Items also have a details page with additional information, so we use the meta functionality of Request to pass a partially populated item.
Parse Command¶
The most basic way of checking the output of your spider is to use the parse command. It allows you to check the behaviour of different parts of the spider at the method level. It has the advantage of being flexible and simple to use, but does not allow debugging code inside a method.
In order to see the item scraped from a specific url:
$ scrapy parse --spider=myspider -c parse_item -d 2 <item_url>
[ ... scrapy log lines crawling example.com spider ... ]

>>> STATUS DEPTH LEVEL 2 <<<
# Scraped Items  ------------------------------------------------------------
[{'url': <item_url>}]

# Requests  -----------------------------------------------------------------
[]
Using the --verbose or -v option we can see the status at each depth level:
$ scrapy parse --spider=myspider -c parse_item -d 2 -v <item_url>
[ ... scrapy log lines crawling example.com spider ... ]

>>> DEPTH LEVEL: 1 <<<
# Scraped Items  ------------------------------------------------------------
[]

# Requests  -----------------------------------------------------------------
[<GET item_details_url>]

>>> DEPTH LEVEL: 2 <<<
# Scraped Items  ------------------------------------------------------------
[{'url': <item_url>}]

# Requests  -----------------------------------------------------------------
[]
Checking items scraped from a single start_url, can also be easily achieved using:
$ scrapy parse --spider=myspider -d 3 ''
Scrapy Shell¶
While the parse command is very useful for checking the behaviour of a spider, it is of little help for checking what happens inside a callback, besides showing the response received and the output. How do you debug the situation when parse_details sometimes receives no item?
Fortunately, the shell is your bread and butter in this case (see Invoking the shell from spiders to inspect responses):
from scrapy.shell import inspect_response

def parse_details(self, response):
    item = response.meta.get('item', None)
    if item:
        # populate more `item` fields
        return item
    else:
        inspect_response(response, self)
See also: Invoking the shell from spiders to inspect responses.
Open in browser¶
Sometimes you just want to see how a certain response looks in a browser; you can use the open_in_browser function for that. Here is an example of how you would use it:
from scrapy.utils.response import open_in_browser

def parse_details(self, response):
    if "item name" not in response.body:
        open_in_browser(response)
open_in_browser will open a browser with the response received by Scrapy at that point, adjusting the base tag so that images and styles are displayed properly.
Logging¶
Logging is another useful option for getting information about your spider run. Although not as convenient, it comes with the advantage that the logs will be available in all future runs should they be necessary again:
def parse_details(self, response):
    item = response.meta.get('item', None)
    if item:
        # populate more `item` fields
        return item
    else:
        self.logger.warning('No item received for %s', response.url)
For more information, check the Logging section. | http://doc.scrapy.org/en/1.2/topics/debug.html | CC-MAIN-2019-39 | refinedweb | 534 | 62.58 |
NAME
bcopy - copy byte sequence
SYNOPSIS
#include <strings.h> void bcopy(const void *src, void *dest, size_t n);
DESCRIPTION
The bcopy() function copies n bytes from src to dest. The result is correct, even when both areas overlap.
RETURN VALUE
None.
CONFORMING TO
4.3BSD. This function is deprecated (marked as LEGACY in POSIX.1-2001): use memcpy(3) or memmove(3) in new programs. Note that the first two arguments are interchanged for memcpy(3) and memmove(3).
SEE ALSO
memccpy(3), memcpy(3), memmove(3), strcpy(3), strncpy(3)
COLOPHON
This page is part of release 3.15 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/jaunty/man3/bcopy.3.html | CC-MAIN-2015-22 | refinedweb | 120 | 70.29 |
I had a requirement to create a custom base class for web parts, to provide a common library for web part development within MOSS. I started researching how this could be done and found very little information explaining the whole process; the WSS 3.0 SDK (August 2007) contained virtually no reference to the topic. So I decided to do something to help the development community.
So let's cut to the chase and start by showing you how this is done.
Firstly we need to create a new class project in Visual Studio; let's call it WebPartBase. In this example we will retrieve the web part's ID and title and place them in protected strings called WebPartID and WebPartTitle; I have also included some of the site properties in the base class. Our base class will inherit from the System.Web.UI.WebControls.WebParts.WebPart base class, which is the base class Microsoft provides for developing web parts. Our custom web part will then inherit our custom base class, which for the purposes of this blog exposes only two properties from the web part and three properties from the Site object, placed into strings. As soon as we inherit our base class we automatically get these strings in our web part. To put this into perspective, you have the whole .NET Framework available to you as a developer to build a common library for tasks you may need across all your web part development, e.g. application logging, standardising variable names across your line-of-business applications, custom behaviour across the board for all web parts, etc. I think you get the picture.
Lets Get Started.
1. Create a new class project within Visual Studio 2005 and call it WebPartBase.
Add the following code to the class so it looks something like this:
using System;
using System.Collections.Generic;
using System.Text;
using System.Web;

using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;
using Microsoft.SharePoint.WebPartPages;

namespace bobby.habib.base
{
    public class PortletBase : System.Web.UI.WebControls.WebParts.WebPart
    {
        /// <summary>
        /// Public default constructor.
        /// </summary>
        public PortletBase()
        {
        }

        #region Web Part Settings

        protected string WebPartID
        {
            get { return base.ID; }
        }

        protected string WebPartTitle
        {
            get { return base.DisplayTitle; }
        }

        #endregion

        #region Site Settings

        // Note: the SPWeb returned by SPControl.GetContextWeb() is owned by
        // the current request context and must NOT be disposed by our code.

        protected string SiteID
        {
            get { return SPControl.GetContextWeb(Context).ID.ToString(); }
        }

        protected string SiteTitle
        {
            get { return SPControl.GetContextWeb(Context).Title; }
        }

        protected string SiteName
        {
            get { return SPControl.GetContextWeb(Context).Name; }
        }

        #endregion
    }
}
2. Compile this code. Note that any web part inheriting this class automatically picks up a number of values that can be used as part of your framework library for web part development.
3. The next step is to create a web part that inherits from the custom base class we have just built, rather than from the System.Web.UI.WebControls.WebParts.WebPart base class. So let's create a new web part using Visual Studio; when creating your web part you must add a reference in your web part project to the custom base class we have just created.
Your web part project should look like this:
using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using bobby.habib.base;

namespace Bobby.Habib.WebParts.BaseTest
{
    public class BobbyHTest : PortletBase
    {
        public BobbyHTest()
        {
            this.ExportMode = WebPartExportMode.All;
        }

        #region GUI Controls

        // GUI Panels
        private Panel _MainPanel;

        #endregion

        protected override void Render(HtmlTextWriter writer)
        {
            _MainPanel.RenderControl(writer);
        }

        protected override void CreateChildControls()
        {
            #region Control Declaration

            _MainPanel = new Panel();

            Table outerTable = new Table();
            outerTable.CellPadding = 0;
            outerTable.CellSpacing = 0;
            outerTable.Width = 400;

            // Row 1
            TableRow row = new TableRow();
            TableCell cell = new TableCell();
            cell.Width = 800;

            string strTxT = string.Empty;
            strTxT += "<b>Web Part Information</b><br>";
            strTxT += "base.ID - " + WebPartID + "<br>";
            strTxT += "base.Title - " + WebPartTitle + "<br>";
            strTxT += "<b>Site Information</b><br>";
            strTxT += "Site ID - " + SiteID + "<br>";
            strTxT += "Site Name - " + SiteName + "<br>";
            strTxT += "Site Title - " + SiteTitle + "<br>";
            cell.Text = strTxT;

            row.Cells.Add(cell);
            outerTable.Rows.Add(row);
            _MainPanel.Controls.Add(outerTable);

            #endregion

            base.CreateChildControls();
        }
    }
}
4. Compile the above code.
5. If you have Visual Studio 2005 with the Visual Studio 2005 extensions for WSS 3.0 installed, you can configure deployment from the web part project properties and deploy from there.
6. You may need to copy your custom base class to the GAC, if you have not configured your solution to copy it there for you. If the project starts erroring during deployment, the likely cause is that your web part cannot see the custom base class in the GAC and throws a reflection error. Ensure that the base class is in the GAC and is registered as a safe control in the web.config file of the site you are deploying to.
7. Add the web part to a site page in MOSS and you will see the property values inherited from our custom base class displayed in the web part.
I hope this helps a developer.
Suppose that I write a program in Turbo C:
Q1) Is the operating system program in RAM? (yes or no)
Q2) Is the Turbo C application in RAM? (yes or no)
Suppose that I have written this simple code:
#include <stdio.h>
#include <conio.h>

void main()
{
    int a = 2, b = 3, c;  /* declarations must come first in Turbo C (C89) */
    clrscr();
    c = a + b * 3;
    printf("%d", c);
    getch();
}
<before compilation>
Q3) As I write this code, does everything go into RAM until I finish? (yes or no)
3a) If every character does not go into RAM, then where does it go?
3b) After the program is written, in what form is it stored: as the symbols and characters that appear on screen, as ASCII codes, as assembly code (MOV, JMP, etc.), or as 0s and 1s? (before compilation)
<during/after compilation>
Q4) I want to know what happens after assembly code is generated. I am confused about the concepts of 'relocatable' machine code, the linker-loader, and the 'relocatable' object file.
4a) What does it mean that library functions are in relocatable format? Aren't they stored in physical memory (RAM) while the Turbo C application is running?
Q5) In the final executable file, are the bodies of the three library functions copied to where they are called (is this why they are relocatable?), or is control transferred to the memory locations of these functions?
Q6) After the final code is generated, is it placed in contiguous memory locations?
Although some of the questions may seem stupid, I want to get a clear picture of what is going on inside the computer, so I would love a detailed explanation with examples that even a kid could understand.
// Created by Jaken Herman on 7/13/16.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Check whether the operating system is Windows. */
#ifdef _WIN32
#include <windows.h>  /* Windows only */
#include <ctype.h>    /* tolower() for the registry check */
#else
#include <unistd.h>   /* Unix systems only */
#endif

/* vm_score tracks the likelihood of a virtual machine. Each check that
   passes increments it by 1; if vm_score reaches 3, we assume that we
   are running on a virtual machine. */
int vm_score = 0;

void number_of_cores(void);
void run_command(char *cmd, char *detphrase, int dp_length);
#ifdef _WIN32
void registry_check(void);
#endif

int main(int argc, const char *argv[])
{
    /* Run number_of_cores() first, as it works on both Windows and
       Unix machines. */
    number_of_cores();

#ifdef _WIN32
    /* What systeminfo should report for System Manufacturer when a
       VMware hypervisor is running. */
    char *vmware_sys = "System Manufacturer: \t VMware, Inc.";

    /* Run systeminfo piped through find and compare the first 36
       characters of the output with vmware_sys. */
    run_command("systeminfo | find \"System Manufacturer\"", vmware_sys, 36);

    /* Look for vmware in the registry. */
    registry_check();
#else
    /* The code below only runs on non-Windows systems. Only tested
       with openSUSE. */
    /* dmesg piped to grep for hypervisor; expected string is 34 chars. */
    run_command("dmesg | grep -i hypervisor",
                "[    0.000000] Hypervisor detected", 34);
    /* dmidecode system manufacturer; expected string is 6 chars. */
    run_command("sudo dmidecode -s system-manufacturer", "VMware", 6);
#endif

    /* If vm_score is less than 3, we are likely on physical hardware. */
    if (vm_score < 3) {
        printf("No virtual machine detected\n");
    } else {
        printf("Virtual machine detected.\n");
    }
    return 0;
}

/* number_of_cores checks how many cores the system exposes. One core
   or fewer is a good hint that we are running in a virtual machine. */
void number_of_cores(void)
{
#ifdef _WIN32
    SYSTEM_INFO sysinf;          /* WinAPI system-info structure */
    GetSystemInfo(&sysinf);
    if (sysinf.dwNumberOfProcessors <= 1) {
        vm_score++;
    }
#else
    /* sysconf() as outlined in the man pages. */
    if (sysconf(_SC_NPROCESSORS_ONLN) <= 1) {
        vm_score++;
    }
#endif
}

/* run_command runs a shell command (systeminfo, dmesg, dmidecode, ...)
   and compares the first dp_length characters of its OUTPUT with
   detphrase. Unlike a plain system() call, popen() lets us capture the
   output and take substrings of it. */
void run_command(char *cmd, char *detphrase, int dp_length)
{
#define BUFSIZE 128
    char buf[BUFSIZE];
    FILE *fp;

    /* popen() is essentially system() with the output captured. */
    if ((fp = popen(cmd, "r")) == NULL) {
        printf("Error\n");
        return;
    }
    if (fgets(buf, BUFSIZE, fp) != NULL) {
        char detection[dp_length + 1];       /* +1 for the terminator */
        strncpy(detection, buf, dp_length);  /* substring of the output */
        detection[dp_length] = '\0';
        if (strcmp(detphrase, detection) == 0) {
            vm_score++;
        }
    }
    if (pclose(fp)) {
        printf("Command not found or exited with error status\n");
    }
}

#ifdef _WIN32
/* registry_check is a modified version of Sudeep Singh's method from
   the "Breaking the Sandbox" document, reduced to the vmware case. */
void registry_check(void)
{
    HKEY hkey;
    char *buffer;
    int i = 0;
    DWORD size = 256;
    char *vm_name = "vmware";

    buffer = (char *) malloc(sizeof(char) * size);

    /* Use RegOpenKeyEx and RegQueryValueEx, as described in the
       WinAPI, to read the disk enumeration value. */
    RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                 "SYSTEM\\ControlSet001\\Services\\Disk\\Enum",
                 0, KEY_READ, &hkey);
    RegQueryValueEx(hkey, "0", NULL, NULL, (LPBYTE) buffer, &size);

    /* Lowercase the value, then look for "vmware" inside it. */
    while (*(buffer + i)) {
        *(buffer + i) = (char) tolower(*(buffer + i));
        i++;
    }
    if (strstr(buffer, vm_name) != NULL) {
        vm_score++;
    }
    free(buffer);
}
#endif
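The core-count heuristic above is easy to model outside C as well. A minimal illustrative sketch in Python (not part of the original post; `os.cpu_count()` plays the role of `GetSystemInfo`/`sysconf`):

```python
import os

def core_vm_score(cpu_count: int) -> int:
    """Return 1 (suspicious) when at most one CPU is visible,
    mirroring the number_of_cores() check in the C code."""
    return 1 if cpu_count <= 1 else 0

# In practice you would pass the machine's own count:
score = core_vm_score(os.cpu_count() or 1)
```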
Virtual Machine Detection
Posted 18 April 2017 - 10:56 AM
Heavily commented code I wrote about a year ago. Only works for VMWare right off the bat, but is easily extensible to VirtualBox etc. Not foolproof.
Page 1 of 1 | https://www.dreamincode.net/forums/topic/403230-virtual-machine-detection/ | CC-MAIN-2019-09 | refinedweb | 927 | 55.34 |
OK, first some disclaimers: this is my first "project" in C#, and while I have some previous experience in graphics coding, it has been in Direct3D using C++, where different issues need to be considered. I wrote this as an exercise in C# and almost exclusively for the purpose of posting it, so if you use it somewhere else please mention this site.
Now about the code itself. I wanted to create a set of classes that handle basic sprite manipulation using nothing more than GDI+. I started with the class SpriteCanvas.Canvas. This class encapsulates the image file used to create sprites, including the sprite layout, sprite size calculations, etc. It is a pretty simple class, and I put it in a user control so you can just drag it onto a form, select an image file, and select a layout. Here is an example of a canvas image with a layout of 10x6.
(canvas image: a 10x6 sprite sheet)
After some thought (and a few trial-and-error approaches) I created the following classes: Sprite, which encapsulates individual sprite properties like position, size, whether it is animated, etc.; SpriteLibrary, which is really just a collection of sprites; WorldView, which does the rendering; and World, which serves as glue between the sprites (SpriteLibrary) and the rendering class (WorldView) and also has a number of methods to manipulate sprites, such as resizing and moving them. There is also a Rectangle collection, RectLibrary, because I got tired of type casting all the time. I put all these classes into a namespace called SpriteWorld, in case you are wondering where they are. I also created functions for custom clipping (more on that later), z-ordering, time-stamped animation, etc.
The approach to creating an application with the above classes is this: you create a form and drag Canvas controls onto it, then set an image and layout for each. In this app I used only one canvas, but you can use any number you like. Next you put a PictureBox on the form, whose image will be used as the background for a viewport. Then you add some controls to draw on. It could be a PictureBox, a Panel, or whatever, as long as you can get its Graphics object (device context); you can have more than one of these, as the demo shows. You will also need at least one timer, although I strongly recommend two: one with an interval as short as possible (say 1 ms) that we will use to check if any rendering needs to be done, and another with an interval of around 33 ms that will update animated sprites.
A word about animated sprites: all animations are time-stamped, so the duration of an animation sequence is the same no matter how often you update it. Meaning, if you have a sprite animation composed of 10x6 = 60 frames and you set its oFPS property to 30 fps, it will take two seconds to run the complete animation sequence. Each time you call the UpdateAnimation method it calculates which frame should be shown based on the time stamp, and shows the appropriate frame. You can see how this works in the demo app by playing with the slider that adjusts the sprite's fps.
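The UpdateAnimation implementation isn't shown here, but the time-stamped frame selection it performs reduces to one small calculation. An illustrative sketch in Python (names are mine, not from the library):

```python
def current_frame(start_ts: float, now_ts: float,
                  fps: float, frame_count: int) -> int:
    """Pick the frame to show from elapsed wall-clock time, so the
    sequence duration is independent of how often you call this."""
    elapsed = now_ts - start_ts
    return int(elapsed * fps) % frame_count

# 60 frames at 30 fps: one full sequence every 2 seconds,
# no matter how often the timer fires.
```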
OK, back to creating an app. I should really have created a user control based on the World class, but I didn't, so you have to declare it somewhere in your main app like this:
public SpriteWorld.World myWorld=new SpriteWorld.World();
So now you have everything you need (class wise) and you should do the following: create some viewports, create some test objects and start timers. Your Form constructor should look like this:
public Form1()
{
InitializeComponent();
//create two viewports, one default size and origin
myWorld.CreateViewport(pictureBox1,Background.Image);
//... the other somewhat smaller and showing different
// portion of "world"
myWorld.CreateViewport(View2,View2.CreateGraphics(),
new Point(145,25),
new Rectangle(30,30,120,120),
Background.Image);
//create some sprites
CreateTestObjects();
//main loop timer
timer.Start();
//animation loop timer
AnimationTimer.Start();
}
You create objects like this:
private void CreateTestObjects()
{
//animated
myWorld.AddSprite(canvas1,new Point(20,60),new Point(0,59),30,true);
//also animated
myWorld.AddSprite(canvas1,new Point(175,60),new Point(0,59),60,true);
//static
myWorld.AddSprite(canvas1,new Point(80,50));
//static
myWorld.AddSprite(canvas1,new Point(70,80));
//start updating FPS monitor
FPStimer.Start();
//show some numbers
textBox3.Text=Convert.ToString(myWorld.Library.Item(1).oFPS);
textBox1.Text=Convert.ToString(myWorld.Library.Item(2).oFrame);
}
You also need to create timer callbacks:
private void timer_Tick(object sender, System.EventArgs e)
{
//check if we need some rendering
myWorld.RenderingLoop();
}
private void AnimationTimer_Tick(object sender, System.EventArgs e)
{
//we animate sprites here
myWorld.UpdateAnimated();
}
private void pictureBox1_Paint(object sender,
System.Windows.Forms.PaintEventArgs e)
{
//this is used only when we have some area that needs to be
// redrawn like when we start, or min. then restore app, or
// bring it to front from behind some other app
myWorld.RePaint(sender,e.Graphics, e.ClipRectangle);
}
Now, the World class contains some functions to manipulate sprites: resizing them, selecting them, and moving them. I didn't implement collision detection, as it is really trivial; instead I allowed overlapping and did z-ordering of overlapping sprites. In the demo it is implemented like this: you click once on a sprite to "select" it (then release the button), move it around, and click again to drop it in a new place. As sprites can overlap, I decided to pick all sprites underneath the cursor and put them on top of the z-order while maintaining their relative z-order. Again, if you want to choose just the topmost sprite, that is trivial. It actually works like this: if you have a pile of sprites and you click on them, you select all of them for the move while bringing them to the top of the z-order, so they are drawn on top of the other sprites. Be aware that the actual sprite area extends beyond the visible area of the sprite, so it is possible to select more than one sprite when they are close together, even if you intended to pick only one. To fix this we would have to test the cursor position against each sprite's alpha mask, which is too involved right now.
For the end, word about clipping. It was probably the hardest thing to do. Consider the following picture:
Let's say that we got rendering requests from sprite 1 and sprite 3, and we want to create one clip region. So we create a clip region (the yellow outline) that contains both sprite 1 and sprite 3. But wait: if we draw the background and only these two sprites, sprites 2, 4 and 5 will also get partially erased when we draw the background in this clip. Clearly, we end up with a much larger clip region, and instead of drawing two sprites we now have to draw five. That is why I allow a variable number of clips in the WorldView class and keep that number in the variable pClips. Depending on your application (how close sprites are to each other, how many of them are animated, etc.) you may decide to use more than one clip, or you can give each sprite its own clip, in which case you set pClips = 0. If you want to redraw the complete scene each time (no clipping) you should call the RequestAll() function, which requests the background and all sprites, but it is very inefficient. The function that reduces the number of clips does so by merging rectangles in a loop: it finds the merged pair with the minimum area (which is the keeper), eliminates the two rectangles it originated from, and continues merging until the desired number of clips is left. Of course, each time we merge two clips we have to check for possible additions in the form of overdrawn sprites and take them into consideration as well.
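The merge loop described above can be sketched compactly. The following Python model (an illustration of the idea, not the C# implementation) greedily merges the pair of clip rectangles whose union has the smallest area until at most a given number of regions remain; the overdrawn-sprite check is omitted:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    w: int
    h: int

    @property
    def area(self) -> int:
        return self.w * self.h

def union(a: Rect, b: Rect) -> Rect:
    """Smallest rectangle covering both a and b."""
    x1, y1 = min(a.x, b.x), min(a.y, b.y)
    x2 = max(a.x + a.w, b.x + b.w)
    y2 = max(a.y + a.h, b.y + b.h)
    return Rect(x1, y1, x2 - x1, y2 - y1)

def reduce_clips(rects: list, max_clips: int) -> list:
    """Greedily merge the pair whose union has the smallest area
    until at most max_clips regions remain."""
    rects = list(rects)
    while len(rects) > max_clips:
        best = None
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                u = union(rects[i], rects[j])
                if best is None or u.area < best[0]:
                    best = (u.area, i, j, u)
        _, i, j, u = best
        rects = [r for k, r in enumerate(rects) if k not in (i, j)] + [u]
    return rects
```

Nearby rectangles get merged first, while a far-away dirty region keeps its own clip.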
If you are interested in the exact implementation of the above, please look at the source code, as I have a tendency to get long-winded and you may find it easier to look it up yourself. I am aware of at least a couple of shortcomings right now: I didn't correctly implement rendering for the case where you don't want a background image, and sprite selection (used to move sprites around) doesn't check where the call comes from, so only the first viewport works (it should be easy to fix; every viewport has a reference to its parent, so you should be able to detect where the call came from), etc. If you find something useful feel free to use it; also, if you have some improvements it would be nice if you let me know in the forum below or at: djurovic@nyc.rr
Storing a large number of files
Good health, readers!
While working on a dating-site project, it became necessary to organize the storage of user photos. According to the terms of reference, the number of photos for one user is limited to 10 files. But there can be tens of thousands of users, especially considering that the project in its present form has existed since the early 2000s; that is, there are already thousands of users in the database. Almost any file system, as far as I know, reacts badly to a large number of child nodes in a folder. From experience I can say that the problems begin after 1000-1500 files/folders in a parent folder.
Similar problems have been discussed before (for example here or here), but I did not find a single solution exactly to my taste. Besides, in this article I only share my own experience of solving the problem.
Theory
In addition to the storage task itself, there was a condition in the terms of reference requiring captions and titles for the photos. Naturally, you cannot do without a database. So the first thing we do is create a table that maps meta-data (captions, titles, etc.) to the files on disk. Each file corresponds to one row in the database, and accordingly each file has an identifier.
A small digression about auto-increment. A dating site may have ten or twenty thousand users at a time. The question is how many users pass through the project over its whole lifetime. For example, the active audience of "dating-ru" is several hundred thousand; now imagine how many users have been deleted during the lifetime of that project, and how many were never activated. Add to that our legislation, which requires storing information about users for at least six months. Sooner or later the four-billion-plus range of UNSIGNED INT will run out, so it is best to take BIGINT for the primary key.
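The capacity argument is simple arithmetic (illustrative Python, not from the article):

```python
# MySQL UNSIGNED INT is 4 bytes; BIGINT is 8 bytes (signed by default).
unsigned_int_max = 2 ** 32 - 1   # about 4.3 billion ids; can run out
signed_bigint_max = 2 ** 63 - 1  # effectively inexhaustible for this use
```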
Now consider what a BIGINT number is: 8 bytes, each ranging from 0 to 255. At most 256 child nodes per folder is quite normal for any file system. So we take the file identifier in hexadecimal representation and split it into chunks of two characters. We use these chunks as folder names, and the last one as the name of the physical file. PROFIT!
0f/65/84/10/67/68/19/ff.file
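The id-to-path mapping is easy to sketch. An illustrative Python version of the scheme (the article's own code is PHP; the `id_to_path` name is mine):

```python
def id_to_path(file_id: int) -> str:
    """Map a 64-bit file id to a nested storage path.

    The id is rendered as 16 hex digits (8 bytes) and split into
    2-character chunks; the first 7 chunks become folders and the
    last one becomes the file name.
    """
    chunks = [f"{file_id:016x}"[i:i + 2] for i in range(0, 16, 2)]
    *folders, name = chunks
    return "/".join(folders) + "/" + name + ".file"

# id_to_path(255) -> "00/00/00/00/00/00/00/ff.file"
```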
Elegant and simple. The file extension does not matter here: the file will be served by a script, which will also send the browser the MIME type that we store in the database. In addition, storing information about a file in the database allows you to redefine its path for the browser. Say the file is physically located, relative to the project directory, at /content/files/0f/65/84/10/67/68/19/ff.file, while in the database you can assign it a URL such as /content/users/678/files/somefile. The SEO people are probably smiling now. All this means we no longer have to worry about where to place the file physically.
Table in DB
In addition to the identifier, MIME type, URL and physical location, we will store the md5 and sha1 of each file in the table, to filter out duplicate files if necessary. Of course, we also need to store relationships with entities in this table: say, the ID of the user the files belong to. And if the project is not very large, the same system can also store, say, product photos. For this we also store the class name of the entity each record belongs to.
By the way: if you close the folder to outside access with .htaccess, a file can be obtained only through the script, and in the script you can control access to the file. Getting a little ahead of myself, I will say that in my CMS (where the project described above is currently being piloted) access is determined by the basic user groups, of which I have 8: guests, users, managers, admins, non-activated, blocked, deleted, and super-admins. The super-administrator can do everything, so this group does not participate in access checks: if the user has the super-admin flag, he is a super-admin, it's that simple. That leaves seven groups to define access for. Access is simple: either serve the file or not. So a field of type TINYINT is enough.
And one more thing. Under our legislation we have to keep user pictures physically, so we need some way to mark pictures as deleted instead of actually removing them. The most convenient tool for this is a bit field. I usually use a field of type INT in such cases, with room to spare, so to speak. Besides, I have an established tradition of placing the DELETED flag in the 5th bit from the high end, but that is not fundamental.
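Marking and checking the flag is a one-line bit operation. A sketch (Python for illustration) using the same FLAG_DELETED value as the class below:

```python
FLAG_DELETED = 0x08000000  # 5th bit from the high end of a 32-bit field

def set_deleted(flags: int) -> int:
    return flags | FLAG_DELETED

def clear_deleted(flags: int) -> int:
    return flags & ~FLAG_DELETED & 0xFFFFFFFF

def is_deleted(flags: int) -> bool:
    return bool(flags & FLAG_DELETED)
```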
What we have in the end:
create table `files` (
  `id` bigint not null auto_increment,          -- Primary key
  `entity_type` char(32) not null default '',   -- Entity type
  `entity` bigint null,                         -- Entity ID
  `mime` char(32) not null default '',          -- MIME type
  `md5` char(32) not null default '',           -- MD5
  `sha1` char(40) not null default '',          -- SHA1
  `file` char(64) not null default '',          -- Physical location
  `url` varchar(250) not null default '',       -- URL
  `meta` text null,                             -- Meta-data (JSON or serialized array)
  `size` bigint not null default '0',           -- Size
  `created` datetime not null,                  -- Creation date
  `updated` datetime null,                      -- Last edit date
  `access` tinyint not null default '0',        -- Access bitmap
  `flags` int not null default '0',             -- Flags
  primary key (`id`),
  index (`entity_type`),
  index (`entity`),
  index (`mime`),
  index (`md5`),
  index (`sha1`),
  index (`url`)
) engine = InnoDB;
The dispatcher class
Now we need a class through which we will manage files. It should provide the ability to create files, replace/modify files, and delete files. Two more points are worth considering. First, the project may be moved from server to server, so the class needs a property containing the root directory of the files. Second, it would be very unpleasant if someone wrecked the table in the database, so we need to provide for data recovery. The first point is straightforward. As for backing up data, we will back up only what cannot be restored:
id - restored from the physical location of the file
entity_type - not restorable
entity - not restorable
mime - restored using the finfo extension
md5 - restored from the file itself
sha1 - restored from the file itself
file - restored from the physical location of the file
url - not restorable
meta - not restorable
size - restored from the file itself
created - can be taken from the file
updated - can be taken from the file
access - not restorable
flags - not restorable
The meta-information can be discarded right away: it is not critical for the functioning of the system. For faster recovery we will also save the MIME type. That leaves: entity type, entity ID, MIME, URL, access and flags. To increase the reliability of the system, we will store the backup information for each destination folder separately, inside the folder itself.
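The "restored from the file itself" fields really do need nothing but the file contents. An illustrative recovery helper (Python; the hypothetical `recover_fields` name is mine):

```python
import hashlib

def recover_fields(data: bytes) -> dict:
    """Recompute the columns that need no backup copy: the hashes
    and the size can always be rebuilt from the file contents."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "size": len(data),
    }
```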
<?php

class BigFiles
{
    const FLAG_DELETED = 0x08000000; // So far only the "deleted" flag

    /** @var mysqli $_db */
    protected $_db = null;
    protected $_webRoot = '';
    protected $_realRoot = '';

    function __construct(mysqli $db = null) {
        $this->_db = $db;
    }

    /**
     * Set/read the root for URLs
     * @param string $v Value
     * @return string
     */
    public function webRoot($v = null) {
        if (!is_null($v)) {
            $this->_webRoot = $v;
        }
        return $this->_webRoot;
    }

    /**
     * Set/read the root for files
     * @param string $v Value
     * @return string
     */
    public function realRoot($v = null) {
        if (!is_null($v)) {
            $this->_realRoot = $v;
        }
        return $this->_realRoot;
    }
    /**
     * Upload a file
     * @param array $data Request data
     * @param string $url URL of the virtual folder
     * @param string $eType Entity type
     * @param int $eID Entity ID
     * @param mixed $meta Meta-data
     * @param int $access Access
     * @param int $flags Flags
     * @param int $fileID ID of an existing file
     * @return int|bool
     * @throws Exception
     */
    public function upload(array $data, $url, $eType = '', $eID = null,
                           $meta = null, $access = 127, $flags = 0, $fileID = 0) {
        $meta = is_array($meta) ? serialize($meta) : $meta;
        if (empty($data['tmp_name']) || empty($data['name'])) {
            $fid = intval($fileID);
            if (empty($fid)) {
                return false;
            }
            $meta = empty($meta) ? 'null' : "'" . $this->_db->real_escape_string($meta) . "'";
            $q = "`meta` = {$meta}, `updated` = now()";
            $this->_db->query("UPDATE `files` SET {$q} WHERE (`id` = {$fid}) AND (`entity_type` = '{$eType}')");
            return $fid;
        }
        // File data
        $meta = empty($meta) ? 'null' : "'" . $this->_db->real_escape_string($meta) . "'";
        $finfo = finfo_open(FILEINFO_MIME_TYPE);
        $mime = finfo_file($finfo, $data['tmp_name']);
        finfo_close($finfo);
        // FID, file name
        if (empty($fileID)) {
            $eID = empty($eID) ? 'null' : intval($eID);
            $q = <<<sql
insert into `files` set
    `mime` = '{$mime}',
    `entity` = {$eID},
    `entity_type` = '{$eType}',
    `created` = now(),
    `access` = {$access},
    `flags` = {$flags}
sql;
            $this->_db->query($q);
            $fid = $this->_db->insert_id;
            list($ffs, $fhn) = self::fid($fid);
            $url = $this->_webRoot . $url . '/' . $fid;
            $fdir = $this->_realRoot . $ffs;
            self::validateDir($fdir);
            $index = self::getIndex($fdir);
            $index[$fhn] = array($fhn, $mime, $url, ($eID == 'null' ? 0 : $eID), $access, $flags);
            self::setIndex($fdir, $index);
            $fname = $ffs . '/' . $fhn . '.file';
        } else {
            $fid = intval($fileID);
            $fname = $this->fileName($fid);
        }
        // Move the file
        $fdir = $this->_realRoot . $fname;
        if (!move_uploaded_file($data['tmp_name'], $fdir)) {
            throw new Exception('Upload error');
        }
        $q = "`md5` = '" . md5_file($fdir) . "', `sha1` = '" . sha1_file($fdir) . "', "
           . "`size` = " . filesize($fdir) . ", `meta` = " . $meta . ", "
           . (empty($fileID) ? "`url` = '{$url}', `file` = '{$fname}'" : "`updated` = now()");
        $this->_db->query("UPDATE `files` SET {$q} WHERE (`id` = {$fid}) AND (`entity_type` = '{$eType}')");
        return $fid;
    }
    /**
     * Read (serve) a file
     * @param string $url URL
     * @param string $basicGroup The user's basic group
     * @throws Exception
     */
    public function read($url, $basicGroup = 'anonimous') {
        if (!ctype_alnum(str_replace(array('/', '.', '-', '_'), '', $url))) {
            header('HTTP/1.0 400 Bad Request');
            exit;
        }
        $url = $this->_db->real_escape_string($url);
        $q = "SELECT * FROM `files` WHERE `url` = '{$url}' ORDER BY `created` ASC";
        if ($result = $this->_db->query($q)) {
            $vars = array();
            $ints = array('id', 'entity', 'size', 'access', 'flags');
            while ($row = $result->fetch_assoc()) {
                foreach ($ints as $i) {
                    $row[$i] = intval($row[$i]);
                }
                $fid = $row['id'];
                $vars[$fid] = $row;
            }
            if (empty($vars)) {
                header('HTTP/1.0 404 Not Found');
                exit;
            }
            $deleted = false;
            $access = true;
            $found = '';
            $mime = '';
            foreach ($vars as $fdata) {
                $flags = intval($fdata['flags']);
                $deleted = ($flags & self::FLAG_DELETED) != 0;
                $access = self::granted($basicGroup, $fdata['access']);
                if (!$access || $deleted) {
                    continue;
                }
                $found = $fdata['file'];
                $mime = $fdata['mime'];
            }
            if (empty($found)) {
                if ($deleted) {
                    header('HTTP/1.0 410 Gone');
                    exit;
                } elseif (!$access) {
                    header('HTTP/1.0 403 Forbidden');
                    exit;
                }
            } else {
                header('Content-type: ' . $mime . '; charset=utf-8');
                readfile($this->_realRoot . $found);
                exit;
            }
        }
        header('HTTP/1.0 404 Not Found');
        exit;
    }
/**
 * Delete a file (or files) from the repository
 * @param mixed $fid Identifier(s)
 * @return bool
 * @throws Exception
 */
public function delete($fid) {
    $ids = is_array($fid) ? $fid : explode(',', $fid);
    $q = "DELETE FROM `files` WHERE `id` IN (" . implode(',', $ids) . ")";
    $this->_db->query($q);
    $result = true;
    foreach ($ids as $fid_i) {
        list($ffs, $fhn) = self::fid($fid_i);
        $fdir = $this->_realRoot . $ffs;
        $index = self::getIndex($fdir);
        unset($index[$fhn]);
        self::setIndex($fdir, $index);
        $result &= unlink($fdir . '/' . $fhn . '.file');
    }
    return $result;
}
/**
 * Marks the file(s) with the "deleted" flag
 * @param int $fid Identifier(s)
 * @param bool $value The value of the flag
 * @return bool
 */
public function setDeleted($fid, $value = true) {
    $fid = is_array($fid) ? implode(',', $fid) : $fid;
    $o = $value ? '| ' . self::FLAG_DELETED : '& ' . (~self::FLAG_DELETED);
    $this->_db->query("UPDATE `files` SET `flags` = `flags` {$o} WHERE `id` IN ({$fid})");
    return true;
}
/**
 * The file name
 * @param int $fid Identifier
 * @return string
 * @throws Exception
 */
public function fileName($fid) {
    list($ffs, $fhn) = self::fid($fid);
    self::validateDir($this->_realRoot . $ffs);
    return $ffs . '/' . $fhn . '.file';
}
/**
 * Processes the file identifier.
 * Returns an array with the folder path for the file and the hexadecimal
 * representation of the low-order byte.
 * @param int $fid The identifier of the file
 * @return array
 */
public static function fid($fid) {
    $ffs = str_split(str_pad(dechex($fid), 10, '0', STR_PAD_LEFT), 2);
    $fhn = array_pop($ffs);
    $ffs = implode('/', $ffs);
    return array($ffs, $fhn);
}
/**
 * Checks the directory for the file
 * @param string $f The full path to the directory
 * @return bool
 * @throws Exception
 */
public static function validateDir($f) {
    if (!is_dir($f)) {
        if (!mkdir($f, 0700, true)) {
            throw new Exception('can not make dir: ' . $f);
        }
    }
    return true;
}
/**
 * Reads the backup index
 * @param string $f The full path to the backup index file
 * @return array
 */
public static function getIndex($f) {
    $index = array();
    if (file_exists($f . '/.index')) {
        $_ = file($f . '/.index');
        foreach ($_ as $_i) {
            $row = trim($_i);
            $row = explode('|', $row);
            array_walk($row, 'trim');
            $rid = $row[0];
            $index[$rid] = $row;
        }
    }
    return $index;
}
/**
 * Writes the backup index
 * @param string $f The full path to the backup index file
 * @param array $index An array of data for the index
 * @return bool
 */
public static function setIndex($f, array $index) {
    $_ = array();
    foreach ($index as $row) {
        $_[] = implode('|', $row);
    }
    return file_put_contents($f . '/.index', implode("\r\n", $_));
}
/**
 * Access check
 * @param string $group Name of the group (see below)
 * @param int $value The access bitmask
 * @return bool
 */
public static function granted($group, $value = 0) {
    $groups = array('anonimous', 'user', 'manager', 'admin', 'inactive', 'blocked', 'deleted');
    if ($group == 'root') {
        return true;
    }
    foreach ($groups as $groupID => $groupName) {
        if ($groupName == $group) {
            return ((1 << $groupID) & $value) != 0;
        }
    }
    return false;
}
}
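To make the access check concrete, here is a Python sketch of the same bitmask logic (the `1 << group_id` shift is an assumption, as the operator is garbled in the source):

```python
GROUPS = ["anonimous", "user", "manager", "admin", "inactive", "blocked", "deleted"]

def granted(group, value=0):
    """Bit i of `value` grants access to GROUPS[i]; 'root' always passes."""
    if group == "root":
        return True
    for group_id, name in enumerate(GROUPS):
        if name == group:
            return ((1 << group_id) & value) != 0
    return False

# value 0b0110 sets bits 1 and 2, granting 'user' and 'manager' only
print(granted("user", 0b0110), granted("anonimous", 0b0110))  # True False
```

Packing the whole access list into one integer keeps the per-file access column a single small number in the database.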
A few points worth noting:
- realRoot is the full path to the storage folder, ending with a slash.
- webRoot is the path from the root of the site, without a leading slash (see below for why).
- As the DBMS I use the MySQLi extension.
- In effect, the first argument of the upload method is an entry from the $_FILES array.
- If you call the update method and pass the ID of an existing file, the file will be replaced, provided tmp_name in the input array is non-empty.
- You can delete files, or change their flags, several at a time. To do this, pass either an array of identifiers or a comma-separated string of them in place of the file identifier.
Routing
Actually, it all comes down to several lines in the htaccess at the root of the site (it is assumed that mod_rewrite is enabled):
RewriteCond %{REQUEST_URI} ^/content/(.*)$
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)$ /content/index.php?file=$1 [L,QSA]
"Content" is the folder in the root of the site in my case. Of course you can name the folder in a different way. And of course the index.php itself, stored in my case in the content folder:
<?php
$dbHost = '127.0.0.1';
$dbUser = 'user';
$dbPass = '****';
$dbName = 'database';
try {
    if (empty($_REQUEST['file'])) {
        header('HTTP/1.0 400 Bad Request');
        exit;
    }
    $userG = 'anonimous';
    // Here we determine the user group; use any solution of your choice
    $files = new BigFiles(new mysqli($dbHost, $dbUser, $dbPass, $dbName));
    $files->realRoot(dirname(__FILE__) . '/files/');
    $files->read($_REQUEST['file'], $userG);
} catch (Exception $e) {
    header('HTTP/1.0 500 Internal Server Error');
    header('Content-Type: text/plain; charset=utf-8');
    echo $e->getMessage();
    exit;
}
And of course we close the file storage itself from external access: put a .htaccess file in the content/files folder with only one line:
Deny from all
The result
This solution avoids file-system performance degradation as the number of files grows — at the very least, the trouble of thousands of files in a single folder is reliably avoided. At the same time we can organize and control access to files at human-readable addresses, plus stay compliant with our dreary legislation. I will say right away that this solution is not a full-fledged way of protecting content. Remember: if something plays in the browser, it can be downloaded for free.
Some textbooks say that:
- A pointer to a pointer is a pointer that points to another pointer
- A structure pointer is a pointer to a structure
- An array pointer is a pointer to an array
- A function pointer is a pointer to a function
Once you can read function pointers with ease, you have broken through pointer arithmetic.
A discussion of these claims:
If the definitions are written this way, we can ask: must a pointer to a pointer point to a pointer, and must a structure pointer point to a structure?
1. First, the pointer to a pointer — that is, the second-level pointer.
A pointer is a data type in its own right, and every data type has a width. On a 32-bit system any pointer type, whatever it points to, is 4 bytes; keep that in mind and don't worry about it further. Let's first look at how a second-level pointer is written:
int a = 1;      // declare a variable a
int *p = &a;    // declare a pointer p holding the address of a
int **pp = &p;  // declare a second-level pointer holding the address of p;
                // printing **pp now yields the value 1
Now the arithmetic between pointers. Arithmetic is only possible once the types are known. For two pointers a and b of the same type, the value of a - b equals the byte distance between them divided by the size of the type obtained by removing one * from the pointer's type.
Pointer plus digit / pointer minus digit
char *p = (char *)10;
printf("%d", p + 1);   // 11: char has size 1

char *q = (char *)10;
printf("%d", q - 1);   // 9
If the type of p were int * instead, p + 1 would print 14, because the step is sizeof(int) = 4. Subtraction works the same way.
Pointer minus pointer:
int arr[10] = { 0 };
int *p = &arr[1];        // address x + 4
int *q = &arr[6];        // address x + 24
printf("%d\n", p - q);   // -5
To compute p - q, first take the byte distance between the two addresses, then divide it by the size of the type obtained by dropping one * — so the result here is -5. Can pointers be added as well as subtracted? Addresses are assigned by the compiler, and there is no guarantee that the sum of two pointers lands in valid memory; you can force it with casts, but the result is meaningless. (A small aside: at the assembly level a cast produces no actual computation instructions — worth studying if you are interested.)
Now to raise the difficulty, feel the fear of being dominated by the pointer;
char *a1;
char **a2;
char ***a3;
printf("%d\n", *(a1 + 2));  // offset +2
printf("%d\n", a1[2]);      // offset +2 — the same thing
The disassembly of the two print statements is identical — you can verify this yourself — which means array indexing and pointer arithmetic are the same operation; both add two. Now replace a1 with a2: removing one * from char ** leaves char *, a type of size 4, so a2 + 2 adds two elements of size 4, that is, +8.
printf("%d\n", *(*(a2 + 2) + 3));
The difficulty has gone up one dimension. a2 + 2 adds 8, as the previous paragraph explained. Now the + 3: the type of a2 is char **, so the type of *(a2 + 2) is char *; adding 3 to a char * value steps by the size of char * with one * removed — char, size 1 — so the + 3 adds 3 bytes, and the final offset is +8 + 3.
printf("%d\n", *(*(*(a3 + 3) + 4) + 5));
The difficulty seems to have increased — has it really? The successive steps are just +12, then +16, then +5.
Believe me, once you are no longer afraid of these expressions, the bottleneck in your pointer skills is broken; once you master the method, it does not matter how many brackets there are.
Now look back at the code above, in particular the two printfs in the first step: their assembly is identical. By the same token, the two expressions just shown are simply a2[2][3] and a3[3][4][5]. So far you have met one-, two- and three-dimensional arrays — could you still do the math in higher dimensions? In practice nobody declares arrays beyond three dimensions. The reason every dimension is computed the same way is that the computer implements multidimensional arrays as one-dimensional storage underneath; the dimensions exist purely for the developer's convenience and mean nothing to the machine.
2. Structure pointer
struct str {
    int a;
    int b;
};

int main() {
    struct str *p = 0;  // structure pointer
    p++;                // advances by sizeof(struct str)
    return 0;
}
There are several different ways to declare structs. This article only covers arithmetic on struct-pointer types, not memory alignment or the declaration forms.
Adding or subtracting a number to a structure pointer works like any other pointer: first determine the size of the structure type. struct str is 8 bytes, so the arithmetic steps in units of 8.
3. Array pointer
The declared type is int (*p)[5]. Distinguish this from a pointer array, declared int *p[5]: because the [] operator binds tighter than *, the latter is first of all an array (of pointers), while the parentheses in the former bind tighter than [], making it first of all a pointer (to an array).
int main() {
    int (*px)[5];
    printf("%d\n", sizeof(px));
    px = 10;        // type mismatch: does not compile as written
    px++;
    printf("%d", px);
}
What will sizeof(px) print? To answer, we need px's data type — the most important property of any variable. px is of type int (*)[5]: the pointer itself is pointer-sized (4 bytes here), while the array it points to, int[5], is 20 bytes, and it is that 20 that drives the arithmetic below. Now look at the assignment on the next line: it will not compile, because 10 and px have different types. A cast is required:
px = (int (*)[5])10; — only with the cast does it compile. The next line, px++, is px + 1: removing one * from int (*)[5] leaves int[5], whose size is 20, so the final printed value of px is 30. A char variant works the same way, just with an element size of 1.
Now a harder case: a multidimensional-array pointer, written int (*p)[5][5]. If you already understand multi-level pointers and array pointers, this is just a combination of the two, and the arithmetic follows the same rules. Straight to an exercise:
char str1[] = {0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,
               0x19,0x11,0x21,0x34,0x23,0x75,0x21,0x33,0x12,0x23};

int main() {
    int (*p1)[5];                  // one-dimensional array pointer
    p1 = (int (*)[5])str1;         // assign str1 to p1
    printf("%x\n", *(*(p1 + 2) + 3));

    char (*p2)[2][3];              // declare a two-dimensional array pointer
    p2 = (char (*)[2][3])str1;     // assign str1 to p2
    printf("%x\n", *(*(*(p2 + 2) + 3) + 4));
}
The first answer directly: the offset is 2*5*4 + 3*4 = 52 bytes — sorry, that is out of bounds of the 20-byte array. The 4 in the arithmetic is sizeof(int): even though the underlying data is char, p1 is an int-array pointer, and the pointer's type is what counts. One more piece of knowledge here: most Windows machines today use little-endian storage, meaning the low byte sits at the low address, so when four chars such as 0x21 0x33 0x12 0x23 are read as a single int, the printed value comes out as 0x23123321.
Now p2: the calculation is 2*(1*2*3) + 3*(1*3) + 4*1 = 12 + 9 + 4 = 25 — again rather large (past the 20 bytes). In the arithmetic, 1 is sizeof(char), 2*3 comes from the array's [2][3] shape, the leading factors 2, 3 and 4 are the three added offsets, each scaled by the size of what remains after one dereference;
It can also be seen from the code that a one-dimensional array pointer needs two * dereferences to reach a value, and a two-dimensional one needs three;
4. Function pointer
First the notation for a function pointer: int (*pFunction)(int, int); — and, most importantly, its width is 4 bytes.
int main() {
    int (*pFunction)(int, int);
    pFunction = (int (__cdecl *)(int, int))10;  // assignment via cast
}
The cast target is simply pFunction's own type. A small aside: __cdecl is the compiler's default calling convention, with parameters passed right to left; since it is the default, it may be omitted.
It can also be assigned as follows:
int func(int a, int b) {
    return a + b;
}

int main() {
    int (*pFunction)(int, int);
    pFunction = func;  // assignment; no cast needed
}
This should be easy to follow: func's type is exactly int (*)(int, int) after the usual function-to-pointer decay, so no cast is needed;
Function pointer operation
Addition: to step a pointer we must first remove one * and take the width of what remains — but what is the width of a function? Functions vary in size, so function pointers cannot be added to.
Subtraction: emmmmm... sorry to disappoint — they cannot do subtraction either, for the same reason. Another small piece of knowledge: a function name behaves like a global symbol, located the same way global variables are;
Still, looking at this function pointer, you may wonder when you would ever use it —
#include <stdio.h>

// Calculate performs numeric integration. It takes three parameters:
// the function pointer func, pointing at the function to integrate,
// and the lower and upper limits a and b.
double Calculate(double (*func)(double x), double a, double b) {
    double dx = 0.0001;  // width of each sub-interval
    double sum = 0;
    for (double xi = a + dx; xi <= b; xi += dx) {
        double area = func(xi) * dx;
        sum += area;
    }
    return sum;
}

double func_1(double x) {
    return x * x;
}

double func_2(double x) {
    return x * x * x;
}

int main() {
    printf("%lf\n", Calculate(func_1, 0, 1));
    printf("%lf\n", Calculate(func_2, 0, 1));
    return 0;
}
This example I borrowed from someone else's writing.
It really is a classic. One more use: when loading a DLL and calling its functions dynamically, a function pointer lets you keep your call consistent with the declaration of the function inside the DLL;
Project Help and Ideas » Tempsensor+Clock+Scrolling Message???
So I've got this idea to incorporate the tempsensor with the clock and then write a scrolling message on the bottom line. Would this work, and any ideas on how to incorporate all the programming together?
I've found that the desire to do something is generally a good driving factor to learn how. With that said, I'll confirm that it is absolutely possible to do what you ask. I might even suggest looking at the temperature program, and the real time clock tutorial. Maybe even go a step farther to say I have seen code in the forums posted for scrolling text on the LCD. Lastly, I would venture a guess that if one were to take pieces from all three, adjust the code a wee bit as needed, that a working project like that could be accomplished.
:D
PS, I'm not trying to be a smart #ss, just giving you some ideas on how to accomplish your task. Once you have some code that you've put together, if you can't figure out how to get it working to your satisfaction, post the code. Then, if I can help, I'll be more than happy to work with you with your program to get it doing what you want. Otherwise, I'd just be doing your project for you...
Rick
So, I was wondering: to make this work (I'm not saying that anyone else has to go out and write the code to find out), would we put all the codes into one flash, or would we need to somehow combine all the codes into one? This would make a big code either way.
Sir Hobbes3
Hi SirHobbes3,
I would say the best way to go about it is to start a new project for yourself from scratch and then just copy over the parts that you need to make what you want happen. This way you avoid having a lot of code that you don't need in your program, and by putting it together from the beginning you will learn a lot more!
Humberto
SirHobbes3, I have to admit that Humberto's suggestion is basically what I've done for my main project. It has bits and pieces of virtually every program that came with the initial NerdKits CD. The Traffic Light project, the Tempsensor project, and the dip_arithmetic codes just to name a few. I found the trickiest part was making all the pieces of code work together, but that's when I learned the most. I'm just getting started on branching out into the world beyond my kit and trying to interface a DS1307 RTC via the TWI (I2C) protocol.
The best part? When I've gotten stuck, I've asked questions here and the community has been a great help!!!
Chris B.
Not to hijack the topic... my apologies in advance... Speaking of RTC chips, Chris, I just got in a couple of sample DS3232SNs from Maxim. I'm still waiting on my SOIC-to-DIP adapter boards. The DS3232 takes the RTC one step farther in that it has a +/- 2 ppm crystal with on-chip temperature compensation, making it EXTREMELY accurate. It also gives you access to the onboard temperature unit it uses for its compensation, giving you a great RTC with a decent temp reading as well, all on one chip. I can't wait to fire it up... Patience, Rick... :D
Hijack off
SirHobbes3, Humberto restated (in a much nicer manner :D) pretty much what I was trying to say. Once you start understanding what the bits and pieces of the code in the examples and tutorials do, you'll see how to piece together a project like what you want. I've often found the best way to learn what a given program does is try to think like the computer. Go line by line, write down any change in a variable, jump to any function, and follow the code. After studying a program for a while the lights will brighten. Also a VERY important step when doing that with a microcontroller is to have the datasheet open and when you get to the portion of the program, for example, that is setting up the ADC for the temperature reading, read that section of the datasheet so you know what the registers are being set to and why. Before you know it, you'll be one of the many here helping out someone else who is just starting.
Ok, thanks guys, i'll try and get started on this,
post back soon!!
Ok, so I've got the stuff put together and all (will post that soon), and I need to figure out how to create a makefile so I can send the program to the microcontroller. So where would I even start?????
The tutorial on the realtime clock notes that you can start from one of the other makefiles in the nerdkits code.
Your best bet is to take a Makefile that already exists like the initialload Makefile and copy it over to the directory with your program in it. Then do a replace all from initialload to whatever it is you named your projects .c file. For example if I made a project called foo, I would copy over the initialload Makefile to the foo directory (where foo.c exists) and then replace-all initialload with foo. Please take a moment to take a look at what changed, and how making that change makes the new Makefile work.
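In shell terms, the copy-and-replace step looks like this (a stand-in Makefile is generated here so the example is self-contained; foo is the hypothetical project name):

```shell
# Stand-in for the initialload Makefile
printf 'TARGET = initialload\ninitialload.hex: initialload.c\n' > Makefile

# The actual step: rename every occurrence of the old project name in place
sed -i 's/initialload/foo/g' Makefile

cat Makefile
# TARGET = foo
# foo.hex: foo.c
```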
ok, thanks Humberto!
i'll try this.
Ok, so my makefile is good, but I'm working out the chunks of my code — scroll, temp, clock, combine. So I'll have it in the next post.
#define F_CPU 14745600
#include <stdio.h>
#include <avr/io.h>
#include <inttypes.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <avr/io.h>
//#include <avr/delay.h>
#include <util/delay.h>
#include "../libnerdkits/io_328p.h"
#include "../libnerdkits/delay.h"
#include "../libnerdkits/lcd.h"
int main() {
  // LED as output
  DDRC |= (1<<PC5);
while(1)
{
// fire up the LCD
lcd_init();
lcd_home();
// print message to screen
lcd_write_string(PSTR("Page1Page1Page1"));
lcd_line_two();
lcd_write_string(PSTR("Page1Page1Page1"));
lcd_line_three();
lcd_write_string(PSTR("Page1Page1Page1"));
lcd_line_four();
lcd_write_string(PSTR("********************"));
delay_ms(1000);
lcd_line_one();
lcd_write_string(PSTR("Hello!"));
lcd_line_two();
lcd_write_string(PSTR("It's going to be a "));
lcd_line_three();
lcd_write_string(PSTR(" beautiful day"));
lcd_line_four();
lcd_write_string(PSTR("today!")); //space is needed or first line next page is truncated
delay_ms(1000);
lcd_line_one();
lcd_write_string(PSTR(""));
lcd_line_two();
lcd_write_string(PSTR(" Page3Page3Page3"));
lcd_line_three();
lcd_write_string(PSTR(" Page3Page3Page3"));
lcd_line_four();
lcd_write_string(PSTR("LLLLLLLLLLLLLLLLLLLL"));
delay_ms(1000);
lcd_line_one();
lcd_write_string(PSTR(" Page4Page4Page4"));
lcd_line_two();
lcd_write_string(PSTR(" Page4Page4Page4"));
lcd_line_three();
lcd_write_string(PSTR(" Page4Page4Page4"));
lcd_line_four();
lcd_write_string(PSTR("WWWWWWWWWWWWWWWWWWWW ")); //space is needed or first line next page is truncated
delay_ms(1000);
lcd_line_one();
lcd_write_string(PSTR("Page5Page5Page5"));
lcd_line_two();
lcd_write_string(PSTR(" Page5Page5Page5"));
lcd_line_three();
lcd_write_string(PSTR(" Page5Page5Page5"));
lcd_line_four();
lcd_write_string(PSTR("uuuuuuuuuuuuuuuuuuuu"));
delay_ms(1000);
// write message to serial port
// printf_P(PSTR("%.2f degrees F\r\n"), temp_avg);
}
  return 0;
}
// tempsensor.c
// for NerdKits with ATmega168
// mrobbins@mit.edu
//part 2, tempsensor
// take 100 samples and average them
temp_avg = 0.0;
for (i = 0; i < 100; i++) {
last_sample = adc_read();
this_temp = sampleToFahrenheit(last_sample);
// add this contribution to the average
temp_avg = temp_avg + this_temp/100.0;
}
//part 3, realtime clock
void realtimeclock_setup() {
  // set up Timer0 in CTC mode with a /1024 prescaler
  TCCR0A |= (1<<WGM01);
  TCCR0B |= (1<<CS02) | (1<<CS00);
  // 14745600 / 1024 / 144 = 100 compare matches per second
  OCR0A = 143;
  // enable the compare-match interrupt
  TIMSK0 |= (1<<OCIE0A);
}
// the_time will store the elapsed time
// in hundredths of a second.
// (100 = 1 second)
//
// note that this will overflow in approximately 248 days!
//
// This variable is marked "volatile" because it is modified
// by an interrupt handler. Without the "volatile" marking,
// the compiler might just assume that it doesn't change in
// the flow of any given function (if the compiler doesn't
// see any code in that function modifying it -- sounds
// reasonable, normally!).
//
// But with "volatile", it will always read it from memory
// instead of making that assumption.
volatile int32_t the_time;
SIGNAL(SIG_OUTPUT_COMPARE0A) {
// when Timer0 gets to its Output Compare value,
// one one-hundredth of a second has elapsed (0.01 seconds).
the_time++;
}
int main() {
realtimeclock_setup();
  // set up the LCD and a stream to print to it
  lcd_init();
  FILE lcd_stream = FDEV_SETUP_STREAM(lcd_putchar, 0, _FDEV_SETUP_WRITE);
// turn on interrupt handler
sei();
while(1) {
lcd_home();
fprintf_P(&lcd_stream, PSTR("%16.2f sec"), (double) the_time / 100.0);
}
return 0;
}
I know, my code is probaly pretty long and drawn out, there are porbaly large chunks (sections) that need some work, im just a beginner at this stuff, but it's fun.
i may have to reorder some stuff, or combine, note that this is my first attempt. Like a rough draft almost.
Also, if my code works, im getting a message like this in command prompt.
I very much suggest you take a step back and try to go a little slower as you move forward. When developing new projects it is always a good idea to get one thing working at a time and build your features up one by one. This lets you separate the problems in your head and solve one at a time, reducing the complexity of your project.
In your project I see three completely different main() method. As you remember from the NerdKits Guide the main() method is where your program starts, so giving it three main() methods from three different programs will not work.
ok, i kinda forgot to remember that,sorry for taking sooooooo long to post back, my computer has had probs installing vista sp1, so i fixed that and got sp2 and everything is good.
So now back to business, right no, i am going to try and do each thing INDIVIDUALLY, then get them combined and the makefile ready to go, so i'll touch in with you guys this weekend maybe, Saturday the 26?
SirHobbes3
that "no" is actually "now" in the phrase :"right now"
sorry
hey guys
I'm still working on the scrolling message; I might have it done today. I'll check back in here soon.
Ok, hello guys! I have successfully figured out the scrolling message AND tempsensor (which i did when i first got the nerdkits) code!! So i will soon have the realtime clock code. I will begin "combining" them soon...................
Sincerely,
SirHobbes3
YAY, ok so i've got all the codes working now (YAY!!) and im gonna begin working on combining them sometime soon.
I will check back in soon.
Can not wait to see what you have done!!
Ralph
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1271/ | CC-MAIN-2018-39 | refinedweb | 1,725 | 70.84 |
- A page type defines a set of properties, for example, page name and publish date.
- A page is an instance of the .NET class defining the page type. When a page is created in edit view, values are assigned to the properties, and stored in the database.
- The controller and view fetches the stored property values and renders the output.
- A template can be associated with the page type, to render the output in a certain context.
In the following you will find examples of how to work with page types and associated controllers, views and templates for rendering the content.
Page types
In Episerver, a page type is defined by a .NET class inheriting PageData. During initialization, the bin folder is scanned for .NET classes inheriting PageData. For each of the classes found, a page type is created, and for every public property on the class a corresponding property is created on the page type.
Creating a page type
Using the Episerver Visual Studio integration, you create a page type by adding the Episerver Page type item to the Pages subfolder under Models in your project. See Get started with Episerver CMS for more information.
To be able to add properties that we want to have for all page types, we added an abstract SitePageData base class, which inherits from EPiServer.Core.PageData. This base class has an SEO property which we want to be used on all pages inheriting from the class.
Example: A simple article page type with a "MainBody" editor area of type XhtmlString, inheriting from the SitePageData base class.
using System;
using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAbstraction;
using EPiServer.DataAnnotations;
using EPiServer.SpecializedProperties;

namespace MyEpiserverSite.Models.Pages
{
    [ContentType(DisplayName = "ArticlePage",
        GUID = "b8fe8485-587d-4880-b485-a52430ea55de",
        Description = "Basic page type for creating articles.")]
    public class ArticlePage : SitePageData
    {
        [CultureSpecific]
        [Display(
            Name = "Main body",
            Description = "The main body editor area lets you insert text and images into a page.",
            GroupName = SystemTabNames.Content,
            Order = 10)]
        public virtual XhtmlString MainBody { get; set; }
    }
}
Example: The SitePageData base class, with the SEO property.
namespace MyEpiserverSite.Models.Pages
{
    public abstract class SitePageData : EPiServer.Core.PageData
    {
        [Display(GroupName = "SEO", Order = 200, Name = "Search keywords")]
        public virtual String MetaKeywords { get; set; }
    }
}
When creating a page type using the Episerver Visual Studio extensions, a unique GUID for the page type is automatically generated. Also note that page types implicitly contain a set of built-in properties which are available for all pages, regardless of the page type instance. See PageData.Property for built-in properties.
Since the rendering has not yet been created, pages based on this page type cannot be edited from the On-Page edit view, only from the All Properties edit view.
Note: Why are the properties declared as virtual here? In the background, a proxy class is created for the page type, and data is loaded from the database into a property carrier (Property). Through Castle (an inversion-of-control tool), the properties in the proxy page type are set, and this only works if the properties are declared virtual. If they are not declared virtual, you need to implement get/set yourself so that they read/write data to the underlying property collection instead.
Page controllers and views
Models, controllers and views in MVC, provide a clear separation of data, business logic and presentation. The controller contains the business logic, handles URL requests and selects the view, which is a visual presentation of the data model. In MVC, the controller is the class that is registered to support specific page types, and can be regarded as the template. The template defines which content types it can render in a specific context.
Creating a controller and a view
Using the Episerver Visual Studio integration, you create a controller by adding a new Episerver item of type Page Controller (MVC) to the Controllers folder in your project. Your controller should inherit from EPiServer.Web.Mvc.PageController<T>, where T is your page type. This controller is called for the page type, if it is chosen as the renderer for the page type.
To add the corresponding view for the controller, create a subfolder under Views and add an Episerver item of type Page View (MVC). Or, click inside the controller to add the view. Ensure that you follow the naming conventions in MVC for your model, controllers and views.
To allow for reuse of logic that we want multiple pages to use, we have implemented a page controller base class, which in this case holds logic for a logout action, and inherits from PageController. You can also add a view model, to be able to add more than just page objects to the views. For simpler examples, we have not used a view model here.
To render properties you can use HTML helpers in MVC, for example Html.PropertyFor, which will render property values based on their property type. HTML helpers are described more below.
Example: The controller for displaying the Article page type, inheriting from PageControllerBase.
Creating a video chat application with OpenTok
I was lucky enough to attend an OpenTok training session run by Manik Sachdeva of TokBox. I thought I would capture a few technical notes on it here, as that is why this website was created in the first place! I will include the code I wrote (not pretty, but functional) and bit of technical detail.
Source code
The source code for this project is available via GitHub under the MIT license.
What is OpenTok?
OpenTok is a platform and API for developing your own audio-visual and messaging web and mobile applications. There are three main components you need:
- Dashboard
- Server SDK
- Client SDK
You create an account with TokBox, the company behind OpenTok; this gives you access to the Dashboard where you can create and manage your applications. You can, of course, access all this functionality using the APIs.
The Server SDK is available for a variety of programming languages. I used the Python Server SDK. It was simple to install:

pip install opentok
I used the JavaScript Client SDK (also known as the Web Client SDK). There are also Client SDKs for Android, iOS, and various other platforms.
First thoughts on the OpenTok ecosystem
The OpenTok developer centre is very informative, with various sets of documentation, building blocks (called developer guides), references, and very useful example apps and tutorials. I basically built a working application by copying the basic web client example and modifying the code. I also built the server-side component from scratch using the Python SDK documentation. There is a useful PHP server which you can deploy to Heroku with a single click. I found this useful initially to get my head around what kind of API the server might need to implement.
Architecture overview
Although you can get something up and running using just the Client SDK, you should not create Sessions (think of Sessions as rooms) in the client for production systems, as you would be exposing your API key and API secret. There are also other functions that are perhaps best carried out on the server side, such as archiving control (you can save your video chats to disk very easily).
So the basic functional split I came up with was as follows:
Server side provides a little REST API:
- Session creation
- Archiving control
- Session monitoring (via callback)
- Broadcast message (via signal)
Client side:
- Create a session by calling the server (see above); this returns the API key, session ID, and token.
- Connect to Session
- Publish to Session
- When a Stream is created, subscribe to it
- UI for session creation, archive control, screensharing control etc.
Some key concepts
As mentioned, the main "room" for video chats is the Session. You can't get far without creating a Session. In production systems this should be done server-side, not client-side. Sessions are identified by a Session ID. The client also needs to get a Token, so that subsequent calls can be authenticated. While the Session ID identifies the room, the Token identifies (really, authenticates) the User.
The client then establishes a Connection to the Session. The client can then Publish an audio-visual stream to the session, and listen for any events such as other Users joining the Session (and publishing streams).
Once a client is connected to a Session it can subscribe to other clients' Streams.
Example
- Client 1 creates a Session ("room") - via server API
- Client 1 publishes its video stream
- Client 2 connects to the Session
- Client 2 subscribes to Client 1's video stream
- Client 1 subscribes to Client 2's video stream
Summary of concepts
These are the key concepts:
- Session - The "room" for the video chat.
- Token - Identifies and authenticates a valid User.
- Connection - The client (User) joins a Session via a Connection.
- Stream - The media stream that is published or subscribed to.
Server code
For the server-side code I used the OpenTok Python Server SDK and Flask. Flask makes it easy to create a nice little REST API server in next to no time. I already had Flask installed so there was no extra set up required. With the OpenTok Server SDK being a single pip install, I was writing useful code within seconds.
The server code:
#!/usr/bin/env python3
import requests
import json
from requests.auth import HTTPBasicAuth
from flask import Flask, request, jsonify, render_template
from pprint import pprint

# OpenTok Server SDK
from opentok import OpenTok
from opentok import MediaModes
from opentok import ArchiveModes

api_key = "API_KEY"
api_secret = "API_SECRET"
opentok = OpenTok(api_key, api_secret)

# create session
session = opentok.create_session(media_mode=MediaModes.routed)
session_id = session.session_id

# generate token
token = opentok.generate_token(session_id)

# archive ID for this session
archive_id = ""

app = Flask(__name__)

@app.route("/session")
def session_get():
    obj = {}
    obj['apiKey'] = api_key
    obj['sessionId'] = session_id
    obj['token'] = token
    j = json.dumps(obj)
    return (j)

@app.route("/monitoring", methods=['POST'])
def monitoring():
    print ("Monitoring:")
    data = request.get_json()
    pprint(data)
    return ("200")

@app.route("/archive/get")
def archive_get():
    archive = opentok.get_archive(archive_id)
    j = json.dumps(archive)
    return (j)

@app.route("/archive/start")
def archive_start():
    global archive_id
    archive = opentok.start_archive(session_id, name=u'Important Presentation')
    archive_id = archive.id
    print ("Started archive: %s" % archive_id)
    j = json.dumps(archive_id)
    return (j)

@app.route("/archive/stop")
def archive_stop():
    opentok.stop_archive(archive_id)
    print ("Stop archive: %s" % archive_id)
    j = json.dumps(archive_id)
    return (j)

@app.route("/archive/delete")
def archive_delete():
    opentok.delete_archive(archive_id)
    print ("Delete archive: %s" % archive_id)
    j = json.dumps(archive_id)
    return (j)

@app.route("/archive/list")
def archive_list():
    archive_list = []
    archives = opentok.list_archives()
    for archive in archives:
        print(archive.id)
        archive_list.append(archive.id)
    j = json.dumps(archive_list)
    return (j)

@app.route("/broadcast/msg")
def broadcast_msg():
    payload = {'data': "This is a broadcast message from the server!"}
    opentok.signal(session_id, payload)
    j = json.dumps(payload)
    return (j)

if __name__ == '__main__':
    app.run(host="localhost", port=9000)
Session monitoring
In your TokBox Dashboard it is possible to set a callback URL for receiving Events that can be used for Session monitoring by the server. This is not only very simple to set up, but very useful, as the Event objects contain important information:
Monitoring:
{'event': 'stream', 'stream': {'connection': {'createdAt': 1553097719716, 'data': None, 'id': '0ad658ba-b71c-4473-976f-ca99fe4cf490'}, 'createdAt': 1553097719777, 'id': 'b78d4d54-e5bc-4583-9f31-f582910bf18c', 'name': '', 'videoType': 'camera'}, 'timestamp': 1553098058663}
127.0.0.1 - - [20/Mar/2019 16:07:39] "POST /monitoring HTTP/1.1" 200 -
Monitoring:
{'connection': {'createdAt': 1553097719716, 'data': '', 'id': '0ad658ba-b71c-4473-976f-ca99fe4cf490'}, 'event': 'connection', 'timestamp': 1553098058664}
As I was testing everything locally I simply set up Ngrok (see my Intro to Ngrok) to receive the callback POSTs from TokBox and redirect these to my locally running server. I could then trace out the events received in the terminal, or perform other processing as required in the server.
The server code was simply run locally with:
$ python3 server.py
Client code
The client code was as follows:
var apiKey, sessionId, token;
var session;
var publisher;
var SERVER_BASE_URL = '';

fetch('/session').then(function (res) {
    return res.json()
}).then(function (res) {
    console.log(res);
    apiKey = res.apiKey;
    sessionId = res.sessionId;
    token = res.token;
    initializeSession();
}).catch(handleError);

// Handling all of our errors here by alerting them
function handleError(error) {
    if (error) {
        alert(error.message);
    }
}

function initializeSession() {
    session = OT.initSession(apiKey, sessionId);

    // Subscribe to a signal event
    session.on('signal', function (event) {
        console.log("Event data: ", event);
        console.log("From: ", event.from.id);
        console.log("Signal data: " + event.data);
    });

    // Subscribe to a newly created stream
    session.on('streamCreated', function (event) {
        session.subscribe(event.stream, 'subscriber', {
            insertMode: 'append',
            width: '100%',
            height: '100%'
        }, handleError);
    });

    session.on('sessionConnected', function (event) {
        console.log("Session Connected: Event data: ", event)
    });

    // Create a publisher
    publisher = OT.initPublisher('publisher', {
        insertMode: 'append',
        width: '100%',
        height: '100%'
    }, handleError);

    // Connect to the session
    session.connect(token, function (error) {
        // If the connection is successful, publish to the session
        if (error) {
            handleError(error);
        } else {
            session.publish(publisher, handleError);
        }
    });
}

function listArchives() {
    //alert('List archives');
    fetch('/archive/list').then(function (res) {
        return res.json()
    }).then(function (res) {
        console.log(res);
    }).catch(handleError);
}

function startArchive() {
    //alert('Start archive');
    fetch('/archive/start').then(function (res) {
        return res.json()
    }).then(function (res) {
        console.log(res);
    }).catch(handleError);
}

function stopArchive() {
    //alert('Stop archive');
    fetch('/archive/stop').then(function (res) {
        return res.json()
    }).then(function (res) {
        console.log(res);
    }).catch(handleError);
}

function broadcastMsg() {
    //alert('Broadcast Msg');
    fetch('/broadcast/msg').then(function (res) {
        return res.json()
    }).then(function (res) {
        console.log(res);
    }).catch(handleError);
}

function screenShare() {
    OT.checkScreenSharingCapability(function (response) {
        if (!response.supported || response.extensionRegistered === false) {
            // This browser does not support screen sharing.
        } else if (response.extensionInstalled === false) {
            // Prompt to install the extension.
        } else {
            // Screen sharing is available. Publish the screen.
            var publisher = OT.initPublisher('screen-preview', {
                videoSource: 'screen',
                width: '100%',
                height: '100%',
                insertMode: 'append'
            }, function (error) {
                if (error) {
                    // Look at error.message to see what went wrong.
                } else {
                    session.publish(publisher, function (error) {
                        if (error) {
                            // Look at error.message to see what went wrong.
                        }
                    });
                }
            });
        }
    });
}
There was also some CSS to provide some very basic layout:
body, html {
    background-color: gray;
    height: 100%;
}

#videos {
    position: relative;
    width: 100%;
    height: 100%;
    margin-left: auto;
    margin-right: auto;
}

#subscriber {
    position: absolute;
    left: 10;
    top: 10;
    width: 80%;
    height: 80%;
    z-index: 10;
    border: 6px solid blue;
    border-radius: 6px;
}

#publisher {
    position: absolute;
    width: 360px;
    height: 240px;
    top: 60px;
    left: 30px;
    z-index: 100;
    border: 6px solid red;
    border-radius: 6px;
}

#screen-preview {
    position: absolute;
    width: 720px;
    height: 480px;
    top: 60px;
    right: 30px;
    z-index: 120;
    border: 6px solid greenyellow;
    border-radius: 6px;
}

.topnavbar li {
    display: inline;
}
And finally some HTML to hold it all together:
<html>
<head>
    <title> OpenTok Getting Started </title>
    <link rel="shortcut icon" href="favicon.ico" type="image/x-icon">
    <link href="css/app.css" rel="stylesheet" type="text/css">
    <script src=""></script>
</head>
<body>
    <div class="topnavbar">
        <ul>
            <li> <button onclick="listArchives()">List Archives</button> </li>
            <li> <button onclick="startArchive()">Start Archive</button> </li>
            <li> <button onclick="stopArchive()">Stop Archive</button> </li>
            <li> <button onclick="broadcastMsg()">Broadcast Message</button> </li>
            <li> <button onclick="screenShare()">Screen Share</button> </li>
        </ul>
    </div>
    <div id="videos">
        <div id="subscriber"></div>
        <div id="publisher"></div>
        <div id="screen-preview"></div>
    </div>
    <script type="text/javascript" src="js/app.js"></script>
</body>
</html>
Note this was a static HTML file served by Flask. The file needs to be located in the static folder in the web server root, and could then be loaded from localhost:9000/static/index.html.
Yes, this needs to be improved!
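As a sketch of one such improvement (my own suggestion, not part of the original project), Flask can serve index.html at the site root instead of the /static/ path. The route below assumes the same static folder layout described above:

```python
from flask import Flask

# Assumes the SPA files live in ./static, as in the post
app = Flask(__name__, static_folder='static')

@app.route('/')
def index():
    # Serve the SPA entry point at the site root
    # instead of requiring /static/index.html in the URL
    return app.send_static_file('index.html')
```

The existing app.run(host="localhost", port=9000) call would then serve the page at localhost:9000/.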
Screensharing
I'm running Chrome 73 so adding support for screensharing was easy - no browser extensions needed to be installed, or worse, written. The screenshare stream is highlighted using CSS in greenyellow, just to distinguish it from the publisher (red) and subscriber (blue) areas.
Future work
I do plan to continue working on this when hacking time allows. At Nexmo we usually have one hack day per month to work on any project we like, although I will probably squeeze in some extra time to work on this. Here are some of the features I plan to add:
- First, improve the UI. It should have a Create Room or Join Room facility.
- Tidy up the Python code.
- Improve layout for multiple clients connected (some kind of CSS tiling to organize the streams).
- Get it hosted on the Net. Most likely I will use Heroku. In the past I would have used Digital Ocean.
- Add database support.
- Multiple session support.
- Add a mobile client (Android).
Summary
Hacking on OpenTok was a thoroughly enjoyable experience (thank you Manik). Getting a video chat going relatively quickly, without much coding, is instant gratification. You even get to like (or ignore) those audio feedback squeaks and whistles - it means it's working!
> > > FYI to the group. XSLT has been promoted to a "Recommendation."
> > >
> > >
>
>Does anyone here know if there were any changes from the 10-08-99
>Proposed Recomendation?
Yes, a couple of minor changes:
First, a literal result element that is used as the root element of a
stylesheet must have an xsl:version attribute. This should help to
avoid some common and confusing problems where xsl:stylesheet
elements with the wrong namespace are unintentionally interpreted as
literal result elements. See
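To illustrate (this example is mine, not from the original message): in the simplified, "literal result element as stylesheet" form, the root element must now carry the version attribute:

```xml
<html xsl:
  <head><title>Example</title></head>
  <body>
    <p>Root element of the source: <xsl:value-</p>
  </body>
</html>
```

Leaving xsl:version off is now an error, so a stylesheet whose xsl:stylesheet element has a mistyped namespace fails loudly instead of being silently treated as a literal result element.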
Second, the data-type attribute of xsl:sort can use a prefixed name
to specify a data type not defined by XSLT. According to a note, "The
XSL Working Group plans that future versions of XSLT will leverage
XML Schemas to define further values for this attribute." See
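A sketch of the second change (the my prefix and my:date type are invented here for illustration; XSLT itself defines only text and number):

```xml
<xsl:apply-templates select="entry" xmlns:
  <xsl:sort select="@posted" data-
</xsl:apply-templates>
```

The prefix is expanded using the in-scope namespace declarations, so a processor can map my:date to an implementation- or schema-defined data type.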
+-----------------------+------------------------+-------------------+
| Elliotte Rusty Harold | elharo@metalab.unc.edu | Writer/Programmer |
+-----------------------+------------------------+-------------------+
| The XML Bible (IDG Books, 1999) |
| |
| |
+----------------------------------+---------------------------------+
| Read Cafe au Lait for Java News: |
| Read Cafe con Leche for XML News: |
+----------------------------------+---------------------------------+ | http://mail-archives.apache.org/mod_mbox/xml-general/199911.mbox/%3Cv04210104b4585f2fcb85@%5B168.100.203.234%5D%3E | CC-MAIN-2016-36 | refinedweb | 157 | 63.39 |
Python provides different HTTP and related modules in builtin and 3rd party modules. Python also provides some basic HTTP server modules native. In this tutorial we will learn how to run HTTP server in Python2 and Python3.
SimpleHTTPServer In Python2 From Commandline
We will use
SimpleHTTPServer module for Python2. We will just provide the module name the port number we want to run HTTP server from commandline. In this example we will run from
8000 .
$ python2 -m SimpleHTTPServer 8000
This screenshot shows that the web server is listening on all network interfaces on TCP port 8000 for our HTTP web server.
SimpleHTTPServer In Python2 As Code
A more complete way to run an HTTP server is to run a web server script. We will use the following code, which is saved as webserver.py.
import SimpleHTTPServer
import SocketServer

PORT = 8000

Handler = SimpleHTTPServer.SimpleHTTPRequestHandler

httpd = SocketServer.TCPServer(("", PORT), Handler)

print "serving at port", PORT
httpd.serve_forever()
Then run it as shown below.
$ python2 webserver.py
SimpleHTTPServer In Python3 From Commandline
As of Python 3, the module is renamed to http.server. So we need to run the following command from the command line.
$ python3 -m http.server 8000
We can see from the output that the server is listening on TCP port 8000 on all network interfaces for HTTP requests.
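The scripted version for Python 3 looks almost the same as the Python 2 one. Here is a sketch that also exercises the server with a request so that the script terminates; a real deployment would simply call serve_forever() on a fixed port such as 8000:

```python
import http.server
import socketserver
import threading
import urllib.request

# Port 0 lets the OS pick a free port; use a fixed port (e.g. 8000) in practice
handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = httpd.server_address[1]
print("serving at port", port)

# serve_forever() blocks, so run it on a background thread for this demo
thread = threading.Thread(target=httpd.serve_forever, daemon=True)
thread.start()

# Fetch the directory listing from our own server to prove it is up
with urllib.request.urlopen("http://127.0.0.1:%d/" % port) as response:
    status = response.status
print("HTTP status:", status)

httpd.shutdown()
```

The only functional differences from the Python 2 script are the module names (http.server and socketserver) and the print() function.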
cigi_client_00
This article describes the data/samples/plugins/cigi_client_00.cpp sample.
The cigi_client_00 sample demonstrates how to:
- Initialize connection via the CIGI protocol in the script
- Respond to packets received from the CIGI host
Notice
Start the CIGI Host before running the sample.
Before running the sample, change the values of the following variables in the cigi_client_00.cpp file:
- host should contain an IP address of the CIGI Host (by default it is 192.168.0.20)
- recv should contain a port, which is used by the CIGI Host for sending packets over the network (default value is 33333)
- send should contain a port, which is used by the CIGI Host for getting a response from the master (default value is 44444)
The variables listed above are specified in the cigi_client_00.cpp as follows:
Source code (UnigineScript)
#include <samples/plugins/common/plugins.h>
#include <samples/common/info.h>

#ifdef HAS_CIGI_CLIENT

/*
 */
string host = "192.168.0.20";
int send = 33333;
int recv = 44444;
int size = 1472;

/*
 */
Info info;

/*
 */
string reflect_packet(CigiHostPacket packet) {
...
Data sent by the CIGI host to the image generator should appear as follows:
See Also
- Article on the CigiClient plugin
Last update: 2017-07-03
Microsoft has just open sourced the Azure Mobile Apps SDK for Node. One of the promises of Azure App Service was the ability to throw together apps that serve the web and mobile community simultaneously, and to do it quickly. This capability has been available to the .NET crowd for a while. With the release of the Azure Mobile Apps SDK for Node, it's available to Node programmers too.
The SDK is built on top of ExpressJS – a very well-known and respected web service. This means you can develop your web application using all the power of the ExpressJS platform, then add the Mobile Apps piece. I’m writing a Single page application, so my entire app is in a static area. Let’s take a look pre-Mobile Apps work (written in ES6):
///<summary> /// A basic web server based on ExpressJS that serves up static content ///</summary> import express from 'express'; import http from 'http'; import morgan from 'morgan'; import staticFiles from 'serve-static'; import {extractAuthSettings} from './utils'; var service = express(); // HTTP Port is set by PORT in Azure var httpPort = process.env.PORT || 3000; // Morgan is a logging service - // NODE_ENV === 'production' on Azure if (!process.env.NODE_ENV || process.env.NODE_ENV === 'development') { service.use(morgan('dev')); } // Serve up files from a static area - we are a SPA service.use(staticFiles('./wwwroot')); service.get('/api/settings', (req, res) => { try { let settings = { auth: extractAuthSettings() }; res.set('Content-Type', 'application/json'); res.status(200).send(settings); } catch (err) { // If there was an error, then return 404 res.status(404).send(err.message); } }); // This code actually listens on the specified port. Uses the PORT // environment variable to determine the TCP port to listen on. http.createServer(service).listen(httpPort, err => { if (err) { console.error(`Error listening to port ${httpPort}: ${err.code}`); } else { console.info(`Listening on port ${httpPort}`); } });
This code does two things for me – first, it serves up the contents of ./wwwroot as static files. I have a whole build process for handling this, since my application is ES6, React and Flux based. I’ve also got a single API called /api/settings that returns any settings I need. Since I store my authentication settings in the App Settings section of Azure (and environment variables when I’m developing locally), I have a function for creating the object.
Now, let’s add an API for serving up database tables. We’ll use the Azure Mobile Apps SDK for Node for this. First off, load up the API:
npm install --save Azure/azure-mobile-apps-node
Yes, it’s so new it doesn’t have an npm entry yet. That will come in due course. The SDK is only in preview right now, so it could materially change before release.
Now that I have the SDK installed in my project, the code needed to add a simple table controller is just four lines. At the top of the file, I import the SDK:
import express from 'express';
import http from 'http';
import morgan from 'morgan';
import staticFiles from 'serve-static';
import {extractAuthSettings} from './utils';
import azureMobileApp from 'azure-mobile-apps';
Right before I start listening for connections, I do the following:
// Create a new Azure Mobile App
var mobileApp = azureMobileApp();

// Define the tables we are going to expose
mobileApp.tables.add('GWCharacters');

// Attach the app to the Express instance
mobileApp.attach(service);
This provides a simple API at /tables/GWCharacters. A GET on the table returns a list of all the entries, and you also get individual record CRUD operations:
- C = Create = POST a new entry
- R = Retrieve = GET an ID
- U = Update = PATCH an existing entry
- D = Delete = DELETE an existing entry
These operations are designed to be mobile offline-sync friendly. That means you can use one of the client SDKs for Azure Mobile Apps and utilize this same API.
Databases
By default the SDK uses the Azure SQL Database. It will pick up on the appropriate Connection String when you create the Azure SQL Database – I’ve blogged about that before. However, during development you will likely want to use a local database. Even though Visual Studio comes with SQL Express, it’s generally best to install from the install media so that you can get access to the SQL Server tools like Configuration Manager. This will ease your administrative burden in getting it set up. You can download the SQL Server 2014 Express Edition from Microsoft. Make sure you download SQL Server 2014 Express with Tools.
If you are going to be testing and developing by deploying to Azure, then you don’t need to do this.
Once installed, you will want to configure the following:
- TCP Port Enabled
- A new database
- Username and Password for access
These pieces will also be used to formulate the Connection String I need for the local version of the database. Once you have the SQL Server 2014 Express installed, start SQL Server Configuration Manager. Expand the SQL Server Network Configuration and then double-click on the Protocols for SQLEXPRESS:
Right click on the TCP/IP and select Properties, then select the IP Addresses tab:
For each entry, set the TCP Port to 1433. If you don’t want to set them all, then make sure you set the entry for 127.0.0.1 and IPAll. Once this is done, click on OK. Then right-click on the TCP/IP protocol again and select Enable. It will prompt you to say that you need to restart the service. To do this, select SQL Server Services in the left hand tree. Highlight the SQL Server. Right-click and select Restart.
The new user and database are handled in a different tool – the SQL Server Management Studio. Connect to the local instance (it’s the default connection most of the time). Right-click on Databases and select New Database.... Enter a name (mine is grumpy-wizards), then click on OK.
To create a username and password, expand Security, then Logins – you will note that a bunch of logins are created by default. Right-click on the Logins node and select New Login.... Enter a suitable login name (I used gw_access), select SQL Server Authentication and enter a password. It’s a good idea to uncheck the User must change password at next login and Enforce password expiration boxes – this is a test scenario, after all. Select the database you created as the default database. Then click on OK.
This doesn’t provide the user with access to the grumpy-wizards database. Right-click on the login ID (in the Security -> Logins node) and select Properties. Go into the User Mapping area and map the grumpy-wizards database. Select dbo as the Default Schema and ensure the user has db_owner and db_datawriter role membership checked:
You need the ability to create tables within the database. Under normal circumstances, you would use fine grained controls, granting the CREATE TABLE and ALTER SCHEMA permissions only within grumpy-wizards. However, this is a test database – not the production database. I generally give more permissions in development.
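The clicking above can also be scripted. Here is a rough T-SQL equivalent of the whole setup (same names as above; adjust the password). This is my approximation of the steps, not taken from the original post:

```sql
-- Create the database and a SQL login (test settings: no password policy)
CREATE DATABASE [grumpy-wizards];
GO
CREATE LOGIN gw_access
    WITH PASSWORD = 'YourPassword',
         CHECK_POLICY = OFF,
         CHECK_EXPIRATION = OFF,
         DEFAULT_DATABASE = [grumpy-wizards];
GO

-- Map the login into the database with the roles used in this post
USE [grumpy-wizards];
CREATE USER gw_access FOR LOGIN gw_access WITH DEFAULT_SCHEMA = dbo;
ALTER ROLE db_owner ADD MEMBER gw_access;
ALTER ROLE db_datawriter ADD MEMBER gw_access;

-- Fine-grained alternative to db_owner for table creation
GRANT CREATE TABLE TO gw_access;
GRANT ALTER ON SCHEMA::dbo TO gw_access;
GO
```

Running this in SQL Server Management Studio gives the same result as the GUI steps and is easy to repeat on a fresh machine.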
A final piece needs to be done. Right click on the server and select Properties, then Security. Under Server Authentication, check that the mode is SQL Server and Windows Authentication mode. If you don’t have this selected, then your user won’t be able to log in. If you change it, click on OK, then right-click on the server again and select Restart.
Once all this is done, you can construct a connection string for test purposes. Here is mine:
$env:SQLCONNSTR_MS_TableConnectionString = "Data Source=127.0.0.1; Database=grumpy-wizards; User Id=gw_access; Password=YourPassword; MultipleActiveResultSets=true;"
Phew! Fortunately, this is a one time activity for your app. For testing, I recommend downloading Postman – you can send the URI localhost:3000/tables/GWCharacters:
This is the GET ALL RECORDS request. We currently don't have any records. Let's do a POST of some new data:
When sending the body, change the application type to JSON (application/json). If you don’t, the table controller won’t realize it has to decode the text before applying it to the database. It will create the record but without content.
I can now GET localhost:3000/tables/GWCharacters/e094bf05-ea52-484d-b5f5-c25542e00562 to get the record details. This URI is based on the id of the record. Also note the fields with a double underscore: these are added by the SDK to support Offline Sync and Soft Delete. If you flip over to the Headers, you will notice an ETag. This allows you to implement caching and return only entries that have changed.
This isn’t ideal. There are a number of things I want to do here. Firstly, GWCharacters isn’t the only table I want to produce. Adding additional tables is trivial – just add more tables to the mobileApp variable. I also want to define the acceptable fields in configuration and link in authentication by utilizing the social authentication from the client that I have already implemented.
However, first pass – I have a Mobile API that I can use from my client code. You can download the Azure Mobile Apps SDK from their GitHub repository. | https://shellmonger.com/tag/ecmascript2015/ | CC-MAIN-2017-13 | refinedweb | 1,540 | 65.52 |
Fedora Kiosk
Owner(s)
- Name: Daniel Walsh <dwalsh>
Detailed Description
The Fedora Kiosk is a Fedora based live operating system that takes advantage of SELinux and namespacing to setup a secure kiosk environment.
When you use a kiosk system you need to worry about the person that used the kiosk before you and after you. The person who used it before you could have left a process running on the system that can watch your keystrokes. The person who uses the kiosk after you can search through your home directory for data stored by firefox, including history, potentially credit card data, vpn access codes, etc.
The Fedora kiosk uses the xguest package which sets up a limited priviledged SELinux xguest user. This user is allowed to login to the box without a password if SELinux is enabeled and enforcing, and there are no processes running with the same UID. The user account is locked down so it can not execute any setuid/setgid applications. The only network ports it can connect to are web ports. It can not execute any content in its home directory. The home directory/tmp directory is created when the user logs in and destroyed when the user logs out. If the account attempts to leave a process around after logout the system will attempt to kill the process and no other kiosk users will be allowed to login until the processes with this uid, are killed.
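The per-login private /tmp and home directories come from pam_namespace. A sketch of the relevant /etc/security/namespace.conf entries (illustrative only; the exact lines shipped by the xguest package may differ):

```
# polydir        instance-prefix            method   exempt users
/tmp             /tmp-inst/                 level    root,adm
/var/tmp         /var/tmp/tmp-inst/         level    root,adm
$HOME            $HOME/$USER.inst/          level
```

With the level method, each SELinux level (and thus each kiosk login) gets its own instance directory, which is simply discarded when the session ends.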
Root account is disabled.
It is also a live operating system, so rebooting the kiosk will reset it to a known good state.
Benefit to Fedora
Fedora and its adoption of SELinux make it an ideal platform for building a kiosk. Fedora's support for pam_namespace, SELinux, and xguest makes it ideally suited for this type of environment.
Kickstart File
ISO Name / FS Label
Fedora-13-x86_64-kiosk
Dependencies
Scope / Testing
Additional security checks and usability testing needs to be done. As people come up with ideas of how they can break the security model of the kiosk, we need to react.
Also need to make sure there is enough functionality to use the kiosk in, say, a library setting. Closed-source applications, such as flashplugin, might be needed.
Spins Page
Slogan. | http://fedoraproject.org/wiki/Fedora_Kiosk | CC-MAIN-2015-22 | refinedweb | 374 | 62.17 |
Quality LED street light housing manufacturers & exporter: buy outdoor module LED light fixtures, 120lm/w efficiency solar LED aluminum empty housing complete lamp, from a China manufacturer.
with the assistance of our deft team of professionals, we are able to introduce a broad array of 20w led street light. the offered street light has an excellent thermal management. it delivers optimal heat dissipation, toughened glass cover, and life span of led over 50000 hours.
import china 20w led flood light from various high quality chinese 20w led flood light suppliers & manufacturers on globalsources .. garden light 50w ac85 265v waterproof die cast aluminum housing led flood light led tunnel light. distributor of 20w led flood light from china (mainland) dusk to dawn 25w 40w 60w 100w 200w solar flood
the anodized die cast aluminum house with good heat disspiation design and the reflector treated by anodic oxidation, combined the lamp tightly to make a real high luminous efficiency, powered by low voltage constant current driver, it s safety, energy saving and long lifetime.led flood light is a dependable economical and convenient
wholesale bridgelux street lamp products from bridgelux street lamp wholesalers, you can wholesale led lamp, wholesale led u lamp and more on made in china . outdoor all in one integrated solar garden street lighting, find details about china outdoor solar lighting, all in one solar lighting from outdoor all in one integrated solar garden street lighting shenzhen aibite new energy co., ltd.
home >> solar street application >> high bright solar led road light, solar led garden lamp 90w grid off high power high lumen all in one solar 90w grid off high power high lumen all in one solar led light is a reliable choice for outdoor lighting. it produces super bright light output due to its high power. despite this feature, it is
Brand: RAB Lighting
Part No: HID 110 H EX39 840 BYP SB
Case Quantity: 6
Weight (lbs): 3.6
Length: 11.1"
UL Listed: Yes
Warranty: 5 years
UPC: 019813749051
Life Hours: 50,000 hours
LED low bay lighting is an energy-efficient, low-maintenance alternative to traditional linear fluorescent in a variety of retail, commercial, and industrial low bay lighting applications. LED low bay lighting is designed to provide full-spectrum, crisp, white light for illumination at lower mounting heights.
manufacturer of led street light economy economy ac led street light, ac led street light (30w) economy, economy ac led street light ( 20w ) and ac led street light (24w) economy offered by p & h group, ahmedabad, gujarat..
choose from our downlights philips you are now visiting our global professional lighting website, visit your local website by going to the usa website more brands from
4ft led tubeglass of 18w available in 5000k color temperature and having a cri of >80 can deliver lumens up to 2500.it is an excellent substitute for t8, t10 & t12 fluorescent tubes. this 18w can safely and easily replace a fluorescent tube of 40w and bring an energy savings of 22w.
this 28w led flood light replaces 100w mh lights and emits 3,300 lumens of 5000k illumination. experience a wide beam spread for lighting large outdoor areas. the knuckle mount led light is weatherproof and has an adjustable bracket for directing light where it's needed.
led solar series from zhongshan pacific lamps co., ltd.. search high quality led solar series manufacturing and exporting supplier on .
business listings of led street light, light emitting diode street light manufacturers, suppliers and exporters in sonipat, , , haryana along with their contact details & address. find here led street light, light emitting diode street light suppliers, manufacturers, wholesalers, traders with led
features supply voltage range 90vac 270vac,waterproof (ip65) aluminum pressure die cast enclosures with fitment,high powered long life branded led s,higher input voltage protection,maintenance cost is low etc.
hunan kontak technology co., ltd., experts in manufacturing and exporting led light, led outdoor light and 1024 more products. a verified cn gold supplier on .
solar lamp 10w 15w 20w 30w 50w led street light outdoor wall lamp waterproof spotlight super bright solar led street light high power e40 e27 led street light 28w 30w 40w 50w 60w 80w 100w led corn lights bulbs garden road lighting lamp 30w leaf shape led street light lamp led road light waterproof ip65 30w ac85v 265v die cast aluminum
led street light, watts 40, housing finish gray, housing material die cast aluminum, light distribution symmetrical, 3437 lumens, 86 lumens per watt, wall, surface, arm mounting, gasket fully gaskete
warm white aluminum 35w osram led street light rs 1,000/ piece get latest price our firm is a foremost name, involved in providing a diverse range of 35w osram led street light..
solar outdoor lights illuminate your garden, courtyard, aisle, porch, courtyard or driveway, etc. 100 lamp cob split solar wall lamp built in battery waterproof induction lamp. solar outdoor lamp, using the energy of the sun for your night lighting.
led street light ( 393 ) new led street light ( 190 ) sword series led street light ( 40 ) driverless led street light ( 14 ) 2017 led street light ( 18 ) solar led street light ( 131 ) led high bay light ( 170 ).
applicable for led luminaires / led lamps l mrp indicates maximum retail price. product can't be sold above mrp. taxes and local levies as applicable is inclusive. l prices are in indian rupees. l prices of all led luminaires in this price list are for standard luminaire. l prices are inclusive of excise duty as per prevailing rate at the time of supply. any statutory variation will be to
10.the lithium battery is in a die cast aluminum alloy box, which are more waterproof, the lifetime of battery getting longer. 11.the whole light is lighter, so the transport cost is less. 12.wireless application integrated solar panel, led, lithium battery, controller and
die casting aluminum led solar street light 10w with pole for outdoor lighting. smart led street lights back cover 40w led parking lot pole lights die casting aluminum corrosion resistant. led street light lamp warm white 200w led high mast light aluminum housing 2700 6500k long lifetime. led flood light housing | http://arllen.cz/1990/die-cast-aluminum-pole-high-lumen-cool-white-20w-40w-60w-led-manufacture-solar-street-lamp.html | CC-MAIN-2020-45 | refinedweb | 1,040 | 57.81 |
Importing and exporting files
You can import HTML or XML error objects, XML schemas, DTDs, and WSDLs to the App Firewall by using the GUI or the command line. You can edit any of these files in a web-based text area after importing them, to make small changes directly on the NetScaler instead of having to make them on your computer and then reimport them. Finally, you can export any of these files to your computer, or delete any of these files, by using the GUI.
Note: You cannot delete or export an imported file by using the command line.
To import a file by using the command line interface
At the command prompt, type the following commands:
import appfw htmlerrorpage <src> <name>
save ns config
Example
The following example imports an HTML error object from a file named error.html and assigns it the name HTMLError.
import appfw htmlerrorpage error.html HTMLError
save ns config
To import a file by using the GUI
Before you attempt to import an XML schema, DTD, or WSDL file, or an HTML or XML error object from a network location, verify that the NetScaler can connect to the Internet or LAN computer where the file is located. Otherwise, you cannot import the file or object.
Navigate to Security > Application Firewall > Imports.
In the Application Firewall Imports pane, select the tab for the type of file you want to import, and then click Add.
The tabs are HTML Error Page, XML Error Page, XML Schema, and WSDL. The upload process is identical on all four tabs from the user's point of view.
Fill in the dialog fields.
Name—A name for the imported object.
Import From—Choose the location of the HTML file, XML file, XML schema or WSDL that you want to import in the drop-down list:
- URL: A web URL on a website accessible to the appliance.
- File: A file on a local or networked hard disk or other storage device.
- Text: Type or paste the text of the custom response directly into a text field in the GUI.
The third text box changes to the appropriate value. The three possible values are provided below.
URL—Type the URL into the text box.
File—Type the path and filename to the HTML file directly, or click Browse and browse to the HTML file.
Text—The third field is removed, leaving a blank space.
Click Continue. The File Contents dialog is displayed. If you chose URL or File, the File Contents text box contains the HTML file that you specified. If you chose Text, the File Contents text box is empty.
If you chose Text, type or copy and paste the custom response HTML that you want to import.
Click Done.
To delete an object, select the object, and then click Delete.
To export a file by using the GUI
Before you attempt to export an XML schema, DTD, or WSDL file, or an HTML or XML error object, verify that the App Firewall appliance can access the computer where the file is to be saved. Otherwise, you cannot export the file.
Navigate to Security > App Firewall > Imports.
In the App Firewall Imports pane, select the tab for the type of file you want to export.
The export process is identical on all four tabs from the user point of view.
Select the file that you want to export.
Expand the Action drop-down list, and select Export.
In the dialog box, choose Save File and click OK.
In the Browse dialog box, navigate to the local file system and directory where you want to save the exported file, and click Save.
To edit an HTML or XML Error Object in the GUI
You can edit the text of HTML and XML error objects in the GUI without exporting and then reimporting them.
Navigate to Security > Application Firewall > Imports, and then select the tab for the type of file that you want to modify.
Select the file that you want to modify, and then click Edit.
The text of the HTML or XML error object is displayed in a browser text area. You can modify the text by using the standard browser-based editing tools and methods for your browser.
Note: The edit window is designed to allow you to make minor changes to your HTML or XML error object. To make extensive changes, you may prefer to export the error object to your local computer and use standard HTML or XML web page editing tools.
Click OK, and then click Close. | https://docs.citrix.com/en-us/netscaler/12/application-firewall/imports/import-export-files.html | CC-MAIN-2018-47 | refinedweb | 817 | 71.04 |
Prerequisite : Pointers in C/C++
Consider int arr[100]. The answer lies in the fact how the compiler interprets arr[i] ( 0<=i<100).
arr[i] is interpreted as *(arr + i). Here, arr is the address of the array, that is, the address of its first element. Because array elements are stored in consecutive memory locations, the address of the next element is arr + 1, the address of the one after that is arr + 2, and so on. In general, arr + i is the address i elements away from the start of the array. By this definition, i must be zero for the first element, because it is zero elements away from the start. To fit this interpretation of arr[i], array indexing starts from 0.
The conclusion is that we need random access into the array, and to provide it, the compiler uses pointer arithmetic to reach the i-th element.
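The offset arithmetic behind this can be sketched in Java. Note that the base address and element size below are hypothetical illustrative numbers, not real machine addresses:

```java
public class ZeroIndex {
    // returns the hypothetical byte offset of arr[i] from the array's base address
    static int offsetOf(int i, int elemSize) {
        return i * elemSize;
    }

    public static void main(String[] args) {
        int elemSize = 4; // size of an int in bytes on most platforms
        for (int i = 0; i < 5; i++) {
            // arr[0] sits at offset 0 from the base, which is exactly why indexing starts at zero
            System.out.println("arr[" + i + "] is at base + " + offsetOf(i, elemSize));
        }
    }
}
```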
Before talking about some lessons (var-args, enums, etc.), I think it is better to talk about Object Oriented Programming (OOP) in Java. This part is the core of Java, or of any other object-oriented programming language. In this section we talk about three things.
- Encapsulation
- Inheritance
- Polymorphism
Encapsulation
Encapsulation is the technique of binding data and its corresponding methods into a single unit. This is the combination of data hiding and abstraction.
- This can be seen as protecting data in programming.
- When you declare a field with private access, it can't be accessed by anyone from outside the class. Encapsulation provides access to these private fields via public methods.
- These public methods act as a protective gate.
- get and set methods are used to access the protected fields.
The following example describes encapsulation.
public class Student {
    private int id;
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }
}

public class Test {
    public static void main(String args[]) {
        Student obj = new Student();
        obj.setId(1);
        obj.setName("John");
        System.out.println("ID : " + obj.getId());
        System.out.println("Name : " + obj.getName());
    }
}
- In this example you can see there are two classes called Student and Test.
- You can see two variables called name and id with the private access modifier, meaning these variables are visible only within the class.
- But in this program we have accessed them from another class called Test.
- This is done via get( ) and set( ) methods: the set method is used to assign a value and the get method is used to retrieve it.
- In the set method, the keyword 'this' refers to the current object's field, distinguishing the instance variable name from the parameter name.
Advantages of Encapsulation.
- Encapsulated code is more flexible, maintainable and extensible.
- The field of a class can be made read only or write only.
- Allows admin to control who can access what.
- It allows to change one part of the code without having any effects to the other parts.
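As a sketch of the "read only or write only" point above (the Account class and its member names are hypothetical, chosen just for illustration): exposing only a getter makes a field read-only from outside, while exposing only a setter makes it write-only:

```java
public class Account {
    private final String id;   // read-only: getter but no setter
    private String pin;        // write-only: setter but no getter

    public Account(String id) {
        this.id = id;
    }

    public String getId() {          // callers can read the id...
        return id;
    }

    public void setPin(String pin) { // ...but can only write the pin
        this.pin = pin;
    }

    // the class itself can still use the hidden pin internally
    public boolean checkPin(String candidate) {
        return pin != null && pin.equals(candidate);
    }
}
```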
Inheritance
- This concept can be seen in the real world; it is like the relationship between parents and children.
- Inheritance allows us to create the bond between super classes (parents) and subclasses (children).
- We can inherit from a class or implement an interface. If we inherit from a class, we use the extends keyword; if it is an interface, we use the implements keyword.
Inheritance in classes
- This can be understood using the following example.
- Suppose we have several types of cars.
- You can create a super class called "Car" and then you can create other sub-classes "Micro_car", "Hatchback" and "Sedan".
- These subclasses inherit from the Car class.
class Car { }

class Micro_car extends Car { }

class Hatchback extends Car { }

class Sedan extends Car { }
Inheritance in interfaces
- We will talk more about this in later posts, but for now here is a simple example.
- We use the implements keyword to do this.
public interface DriveCar {
    void accelerate();
    void stop();
}

public class Car implements DriveCar {
    public void accelerate() {
        /* implementation */
    }

    public void stop() {
        /* implementation */
    }
}
IS-A relationship
- IS-A refers to inheritance or implementation.
- Let's take a real-life example.
- Mitsubishi Pajero 2014 is a latest SUV product of Mitsubishi motors.
- It is SUV range vehicle.
- There are 4 types in 2014 model.
- GLX
- GLX-R
- VRX
- EXCEED
- Now you can say GLX is a type of Mitsubishi Pajero.
- Now you can say GLX-R is a type of Mitsubishi Pajero.
- Now you can say VRX is a type of Mitsubishi Pajero.
- Now you can say EXCEED is a type of Mitsubishi Pajero.
- Mitsubishi Pajero is a SUV.
If you want to create classes for this example, it can be done like this.
public class Suv { }

public class Mitsubishi_Pajero extends Suv { }

public class Glx extends Mitsubishi_Pajero { }

public class Glx_R extends Mitsubishi_Pajero { }

public class Vrx extends Mitsubishi_Pajero { }

public class Exceed extends Mitsubishi_Pajero { }
Generalization
Generalization is the process of treating subclass objects as their superclass type. Think about the above example.
Specialization
Specialization is the reverse of generalization: moving from the general superclass down to more specific subclasses.
Has-A Relationship
- A HAS-A relationship is a form of association, also known as composition and aggregation; it is distinct from inheritance.
- It means an instance of one class holds a reference to an instance of another class (or of the same class).
- Let's take a real-life example.
Car IS-A vehicle and it HAS-A Engine.
class Vehicle { }

class Car extends Vehicle {     // Car IS-A Vehicle
    Engine obj = new Engine();  // Car HAS-A Engine
}

class Engine { }
Aggregation
- This means a directional association between objects.
- When one object has another object, we can define a direction between them to specify which object contains the other.
- This approach is useful for code reusability.
For example, a car HAS-A airbag, but a car can exist without airbags.
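A minimal sketch of aggregation, assuming illustrative Car and Airbag classes: the Airbag is created outside the Car and merely referenced by it, so it can exist without any car:

```java
class Airbag {
    private final String model;

    Airbag(String model) {
        this.model = model;
    }

    String getModel() {
        return model;
    }
}

class Car {
    // aggregation: the Airbag is created elsewhere and only referenced here,
    // so its lifetime is independent of the Car
    private final Airbag airbag;

    Car(Airbag airbag) {
        this.airbag = airbag;
    }

    String airbagModel() {
        return airbag.getModel();
    }
}
```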
Composition
- A restricted form of aggregation is called composition.
- It means the objects are strongly associated.
- Composition exists when one object contains another object and the contained object can't exist without its container.
For example, a student cannot exist without a class.
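Composition can be sketched like this (SchoolClass and Student are illustrative names): the containing class creates and owns its parts, so a Student in this model cannot exist without the SchoolClass that enrolled it:

```java
import java.util.ArrayList;
import java.util.List;

class SchoolClass {
    // composition: Student is an inner concept created and owned by SchoolClass;
    // its constructor is private, so only SchoolClass can create one
    class Student {
        private final String name;

        private Student(String name) {
            this.name = name;
        }

        String getName() {
            return name;
        }
    }

    private final List<Student> roster = new ArrayList<>();

    Student enroll(String name) {
        Student s = new Student(name); // the part is born inside the whole
        roster.add(s);
        return s;
    }

    int size() {
        return roster.size();
    }
}
```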
Why use Inheritance ?
- Promote code reuse
- To use with Polymorphism
Polymorphism
You may wonder why an actress comes up here. Think of Vanessa Hudgens dressed in different costumes: she looks different in each one, but it is the same person underneath.
- Polymorphism means many forms of one object.
- When programming, you may have to use the same method name several times in different classes with different parameters.
- You might think this cannot be done because the method name is already in use.
- But with OOP, Java allows you to declare the same method name multiple times, in different classes and with different parameter lists.
- The method name is like Vanessa Hudgens; the costumes (the parameter lists and implementations) can change.
- As mentioned, polymorphism is the concept that lets you use the same method name again and again.
- This can be done in two ways.
- Method overriding
- Method overloading
Reference variables

Before talking about these two things, it is necessary to talk about reference variables. You need to understand this concept before discussing overloading or overriding. A variable that refers to an object of a class type is known as a reference variable. Reference variables are the only way to access objects. You may remember how to create an object:
Apple obj = new Apple( );
You can divide that statement into two steps.
Apple obj;
obj = new Apple( );
The first line shows how to create an Apple-type reference variable. The second line creates the object and assigns it to that reference variable.
Apple( ) is a constructor (you will learn about constructors later). Generally a constructor does not return anything, yet notice that we don't use the void keyword. The thing is, the expression new Apple( ) really does produce something: an object of the class. That object has no name, so it is called an "anonymous object". This anonymous object is then assigned to the reference variable (obj).
There are some rules to define reference variables.
- A reference variable has exactly one declared type, and that type never changes.
- A reference variable can refer to an object of its own type or of any of its subtypes.
- The reference variable's type determines which methods can be called.
- When creating an object, the left side declares the reference type (the class or a supertype) and the right side determines the object type.
- When calling a method, the object type matters; when accessing an instance variable, the reference type matters.
- The object type determines which overridden method is used at run-time.
- The reference type determines which overloaded method is used at compile time.
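The rules above can be illustrated with a short sketch reusing the Fruit and Mango names from the examples in this post: the reference type decides which calls compile, while the object type decides which override runs:

```java
class Fruit {
    String name() { return "fruit"; }
}

class Mango extends Fruit {
    @Override
    String name() { return "mango"; }   // overrides Fruit.name()

    String ripen() { return "ripe"; }   // only reachable through a Mango reference
}

public class ReferenceDemo {
    public static void main(String[] args) {
        Fruit f = new Mango();          // reference type Fruit, object type Mango
        System.out.println(f.name());   // object type wins at run-time: prints "mango"
        // f.ripen();                   // would NOT compile: the reference type is Fruit
        Mango m = (Mango) f;            // downcasting restores access to Mango's methods
        System.out.println(m.ripen());
    }
}
```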
Reference variable casting
- Upward casting
- Downward casting
Upward casting (Automatic type conversions)
class Fruit { }

class Mango extends Fruit { }

public class Apple {
    public static void main(String args[]) {
        Mango mango = new Mango();
        Fruit fruit = mango;          // upcast with no explicit cast
        Fruit fruit2 = (Fruit) mango; // upcast with explicit cast
    }
}
- Two types should be compatible (int/float, int/long).
- Destination type should be larger than the source type.
Downward casting (Explicit type conversions)
This type of conversion is useful if you want to convert between incompatible types. What should be done if you want to convert a long value to a byte value? We talked about data type casting in the Data types in Java post. In this section we are going to talk about object casting. This is not like upward casting: you must specify the target type.
If you downcast a reference to a type that the underlying object is not actually an instance of, the code compiles but throws a ClassCastException when you run the program.
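Here is a small sketch of a downcast that fails: the object's actual type is Fruit, so casting the reference to Mango compiles but throws a ClassCastException at run-time:

```java
class Fruit { }

class Mango extends Fruit { }

public class DowncastDemo {
    public static void main(String[] args) {
        Fruit fruit = new Fruit();       // the object really is just a Fruit
        try {
            Mango mango = (Mango) fruit; // compiles, but fails at run-time
            System.out.println("cast succeeded");
        } catch (ClassCastException e) {
            System.out.println("ClassCastException caught");
        }
    }
}
```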
Method overriding
- Overriding lets a subclass redefine an inherited method.
- When you override a method, the method name, argument list, and return type must be the same as the overridden method's.
- final, static and private methods cannot be overridden.
- Only inherited methods may be overridden.
- The overriding method cannot have a more restrictive access modifier than the overridden method.
class Apple {
    void eat() {
        System.out.println("Apple");
    }
}

class RedApple extends Apple {
    void eat() {
        System.out.println("Red Apple");
    }
}

public class Fruits {
    public static void main(String args[]) {
        Apple apple = new Apple();
        Apple apple2 = new RedApple();
        apple.eat();
        apple2.eat();
    }
}
Method overloading
- This is a little bit different from overriding.
- Overloaded methods have the same name but must have different argument lists. Arguments may differ in type, number, or both.
- The return type of overloaded methods can be the same or different, but a different return type alone is not enough to overload a method.
- They may have different access modifiers.
- They may throw different exceptions (later on we talk about exceptions).
class Processor {
    void showType() {
        System.out.println("Intel and AMD are the popular.");
    }
}

class Intel extends Processor {
    void showType() {
        System.out.println("Intel is leading vendor.");
    }

    void showType(String gen) {
        System.out.println("Intel has " + gen + " generations.");
    }
}

public class MotherBoard {
    public static void main(String args[]) {
        Processor obj1 = new Processor();
        Processor obj2 = new Intel();
        Intel obj3 = new Intel();
        obj1.showType();
        obj2.showType();
        obj3.showType("four");
    }
}
OOP concepts
Reviewed by Ravi Yasas at 12:07 AM
Date and time localization is quite an important topic when it comes to developing applications and software systems in multiple languages. Date and time are written and read in different ways in different countries. Furthermore, in software development, there are numerous technologies, languages, and frameworks that we can use to implement localization. In this article, we are going to discuss some of the methods that make the date and time localization process easier.
- Introduction
- Date and time localization methods in Python
- Date and time localization in Java
- Date and time localization in JavaScript
- Date and time localization in PHP
- Date and time localization in Ruby on Rails
- Summary
Introduction
It's no secret that people in different countries write the date and time in certain ways. For instance, in the US, they specify the month first, followed by the day, and then the year. Each component is also separated with a slash, which gives us mm/dd/yyyy, for example 09/01/2021, which would be September 1, 2021. However, if you show this date to a person from, say, Russia, s/he would think that 09 is the day and 01 is the month. That's because in CIS countries, the day goes before the month and usually each component is separated with dots: dd.mm.yyyy. Residents of some Asian countries, as well as Iran, write the year first, followed by the month, and then the day: yyyy-mm-dd.

Moreover, we have different time formats; in some countries people tend to use a 12-hour format and append am/pm, whereas in others the standard is the 24-hour format. Therefore, it's very important to properly localize date and time based on the currently set locale.
Date and time localization methods in Python
First of all, let's talk about how we can convert one date-time format into another. As you can see, there are many date and time formats; for example, you could have 1997/01/27 18:30 or 20/01/1997 18:30. Read the following code snippet carefully to understand how to convert date and time between different formats in Python:
from datetime import datetime

dateTime = datetime.now()
datetime_str_1 = dateTime.strftime("%B %d, %Y %A, %H:%M:%S")
datetime_str_2 = dateTime.strftime("%Y/%m/%d, %H:%M:%S")
print(datetime_str_1)
print(datetime_str_2)
Here, we have used the Python module called datetime. Its datetime class has a function called now() which returns the current date and time. So in the above code, the same time is written in two different formats. When you execute this code, you will get the following result:
July 18, 2021 Sunday, 23:29:19
2021/07/18, 23:29:19
Using this strftime function, you can convert a given time to most of the formats used in the world.

However, you need to keep in mind that time is always relative. For instance, the current time in the UK is different from the current time in Japan. To overcome this problem, developers have come up with the concept of the Coordinated Universal Time (UTC) timestamp, which is the number of seconds since the UNIX epoch. But obviously, in software systems, we don't keep time in UTC; we convert it into local time. In order to convert UTC to local time, we need to add the time offset to UTC.
E.g.: UTC -> 2021-08-13T10:23:40
Local time -> 2021-08-13T10:23:40+07:00
The format for parsing the time with the offset is %Y-%m-%dT%H:%M:%S%z, where %z represents the offset. To get a better understanding of this, take a look at this code snippet:
import dateutil.parser

date_to_parse = '2021-07-21T20:40:21-06:00'
# dateutil infers the format (including the %z offset) automatically
dt = dateutil.parser.parse(date_to_parse)
print(dt)
Now you know what the offset is and how we can parse a date with an offset. However, since local time is always relative to a location, we cannot use it directly for location-related calculations. This is where the time zone comes in. Time zones are represented by continent name plus location name. Now, let's see how we can use these time zones in the code:
import datetime
import pytz

naiveDate = datetime.datetime(2021, 10, 21, 13, 21, 25)
utc = pytz.UTC
time_zone = pytz.timezone('America/Los_Angeles')
localizedDate = utc.localize(naiveDate)
increasedDate = localizedDate + datetime.timedelta(days=10)
d = increasedDate.astimezone(time_zone)
print(d)
Here we have used the pytz Python module to handle time zone details. Make sure that you use UTC for all time-related calculations and only then convert into local time because, as already discussed, local time is always relative. In the example above, we have used the America/Los_Angeles time zone.
Another important date and time localization method that you should understand is using the locale. The locale module is a framework for switching between multiple languages. Take a look at the below code snippet for a better idea of this:
import time
import locale

locale.setlocale(locale.LC_TIME, 'en_US.UTF-8')
d1 = time.strftime("%A, %d. %B %Y %I:%M%p")

locale.setlocale(locale.LC_ALL, 'si_LK.UTF-8')
d2 = time.strftime("%A, %d. %B %Y %I:%M%p")

print(d1)
print(d2)
As you can see, we have imported two Python modules: time and locale. You also need to provide the actual locale, or you can use the default locale as well. In this instance, en_US is the locale for the US and si_LK indicates the locale for Sri Lanka. So the above code will output the following result:
Monday, 19. July 2021 02:20AM
සඳුදා, 19. ජූලි 2021 02:19පෙ.ව
The first result shows my current local time in the en_US locale. The next result gives my current time in the si_LK locale; si indicates the Sinhala language, which is used in Sri Lanka.
Now we know the basic concepts of date and time localization such as offset, time zone, and locales in Python. Next, we’ll take a look at how we can enable date and time localization in Java.
Date and time localization in Java
First of all, let's see how we can format the time and date in Java. To do this, we can use the Java class DateFormat. Let's discover how we can show the date and time in a short format. Read the following code snippet:
DateFormat shortDateFormat = DateFormat.getDateInstance(DateFormat.SHORT, Locale.US);
String s = "01/27/1997";
try {
    Date date = shortDateFormat.parse(s);
    System.out.println(date);
} catch (ParseException e) {
    System.out.println("Exception!!!");
}
Here we have used a date instance in the DateFormat.SHORT format. Also, we can provide the required locale in the same function; in this example, we have used the US locale, whose short pattern puts the month before the day. Here's the result:
Mon Jan 27 00:00:00 IST 1997
Similarly, we can have several other formats, such as LONG, FULL, etc. Other than that, you can customize the date and time parsing format by using the DateFormat and SimpleDateFormat Java classes. Take a look at the example below:
DateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy");
System.out.println(dateFormat.parse("01/02/1997"));
Make sure that you have imported SimpleDateFormat as shown here:
import java.text.SimpleDateFormat;
When we execute the above code snippet, we get the following output:
Sat Feb 01 00:00:00 IST 1997
Simple, right? Now we know one method of parsing different date formats. Next, we are going to check out how to output date and time in different patterns. We can use the same SimpleDateFormat Java class for this. Let's take a look at this example:
String date_pattern = "MM-dd-yyyy";
SimpleDateFormat simple_date_Format = new SimpleDateFormat(date_pattern);
String date = simple_date_Format.format(new Date());
System.out.println(date);
Here, we have provided the date pattern inside the date_pattern variable.
Next, let's see how we can deal with time zones in Java. In Java 8, time zone details are handled by the ZonedDateTime class, which combines a date-time with a zone offset and a zone ID. You can get the current date-time in the system's time zone by using the now() function of the ZonedDateTime class as follows:
System.out.println(ZonedDateTime.now());
Here’s the output:
2021-07-20T00:02:03.232139+05:30[Asia/Colombo]
You can make a ZonedDateTime instance using the of() method, supplying the year, month, day, hour, minute, second, nanosecond, and time zone details. Take a look at the following code snippet for an idea of how to make a ZonedDateTime instance:
ZonedDateTime.of(
    2021, 11, 4, 10, 25, 20, 30000,
    ZoneId.systemDefault()
);
Here we have set the time zone with the ZoneId.systemDefault() method. Java 8's date-time API also introduces related types such as ZoneId, ZoneOffset, OffsetDateTime, and OffsetTime.
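As a quick sketch of ZoneOffset and OffsetDateTime from that list (the instant chosen here is arbitrary):

```java
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class OffsetDemo {
    public static void main(String[] args) {
        ZoneOffset offset = ZoneOffset.of("+05:30");
        LocalDateTime local = LocalDateTime.of(2021, 7, 21, 20, 40, 21);
        // attach the offset to the local date-time
        OffsetDateTime odt = OffsetDateTime.of(local, offset);
        System.out.println(odt); // 2021-07-21T20:40:21+05:30
        // the same instant expressed in UTC
        System.out.println(odt.withOffsetSameInstant(ZoneOffset.UTC));
    }
}
```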
TimeZone is another class for handling time zones in Java. Let's see how to set and get time zones with the TimeZone class. First, you need to import it as follows:
import java.util.TimeZone;
Now let’s move on to the implementation:
Date now = new Date();

TimeZone.setDefault(TimeZone.getTimeZone("Europe/London"));
System.out.println(now);

TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
System.out.println(now);

TimeZone.setDefault(TimeZone.getTimeZone("GMT"));
System.out.println(now);
Here, we have:
- Obtained the current date and time with time zone.
- Got the current time in different time zones by providing the actual time zone.
The output prints the same instant three times, formatted for the London, UTC, and GMT default time zones in turn.
LocalDate class in Java 8
LocalDate is another important class introduced in Java 8. This class is immutable and thread-safe. Furthermore, LocalDate represents a date in ISO format, for example 2021-01-01. So let's see how to perform localization with the Java LocalDate class.
You can get the current date with the now() function in the LocalDate class:
LocalDate currentDate = LocalDate.now();
You can also use the parse() method to get an instance of the LocalDate class. You'll need to provide a parseable string as input, and the parse function will return the corresponding local date:
LocalDate.parse("2020-01-21");
A special feature of this class is that it can easily perform several date-related calculations with built-in functions such as plusDays() and minus(). Let's see some examples.

With the plusDays() function, you can get a future date by adding a certain number of days to a given date. Check out the following example for a better understanding:
LocalDate twoDaysFromToday = LocalDate.now().plusDays(2);
System.out.println("Two days from today: " + twoDaysFromToday);
Output:
Two days from today: 2021-08-05
To subtract time from a given date, use the
minus() function. In the below example, it will return the date of two months prior to today:
LocalDate somePreviousDay = LocalDate.now().minus(2, ChronoUnit.MONTHS);
Moreover, we can easily get a specific day of the week, the first day of the month, and check whether a given year is a leap year:
import java.time.*;
import java.time.temporal.TemporalAdjusters;

public class Main {
    public static void main(String[] args) {
        DayOfWeek day = LocalDate.parse("2021-02-11").getDayOfWeek();
        LocalDate firstDayOfMonth = LocalDate.parse("2021-03-30").with(TemporalAdjusters.firstDayOfMonth());
        boolean leapYearCheck = LocalDate.parse("2021-02-11").isLeapYear();
        System.out.println("day : " + day);
        System.out.println("firstDayOfMonth : " + firstDayOfMonth);
        System.out.println("leapYearCheck : " + leapYearCheck);
    }
}
Here's the output:
day : THURSDAY
firstDayOfMonth : 2021-03-01
leapYearCheck : false
So as you see, there are several methods for localizing date and time in Java. Next, we are going to see how we can handle date and time localization in JavaScript.
Date and time localization in JavaScript
In JavaScript, there are numerous methods for localizing dates and times. Let’s see how we can handle date and time localization in vanilla JavaScript — you’ll use more or less the same methods in JS frameworks as well.
In JavaScript, there are basically three different date input formats:
- ISO Date
- Long Date
- Short Date
Regardless of the input date format, JavaScript outputs date as a full string text. You can see this in the following code snippet:
const date = new Date() console.log(date)
Here’s the output:
Tue Jul 20 2021 22:19:18 GMT+0530 (India Standard Time)
As you can see, it returns the date as a string including the time zone.
In JavaScript, the ISO date format follows the syntax YYYY-MM-DD. For instance, the date 2020-02-01 in the snippet below is written in the ISO date format:
const date = new Date("2020-02-01"); console.log(date);
Output:
Sat Feb 01 2020 05:30:00 GMT+0530 (India Standard Time)
In the ISO date format, we can omit some components. For example, providing only the year and month:
const date = new Date("2020-02"); console.log(date)
Output:
Sat Feb 01 2020 05:30:00 GMT+0530 (India Standard Time)
Only the year:
const date = new Date("2020"); console.log(date)
Output:
Wed Jan 01 2020 05:30:00 GMT+0530 (India Standard Time)
Providing the date and time:
const date = new Date("2020-02-20T12:00:00Z"); console.log(date)
Output:
Thu Feb 20 2020 17:30:00 GMT+0530 (India Standard Time)
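Since the printed form depends on the machine's time zone, it can help to check the underlying instant directly. Here's a small sketch of how the time designator and the UTC suffix behave:

```javascript
// A trailing "Z" marks the timestamp as UTC...
const d1 = new Date("2020-02-20T12:00:00Z");
// ...and an explicit +00:00 offset denotes the same instant.
const d2 = new Date("2020-02-20T12:00:00+00:00");

console.log(d1.getTime() === d2.getTime()); // true

// toISOString() always prints the instant back in UTC,
// regardless of the machine's local time zone.
console.log(d1.toISOString()); // "2020-02-20T12:00:00.000Z"
```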
You may have noticed that if we give only the year, the output assumes the month is the first month of the year and the day is the first day of the month. For example, if you give 2020, it is interpreted as 2020-01-01. By the same rule, 2020-02 is interpreted as 2020-02-01.
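That filling-in behavior is easy to verify in code:

```javascript
const yearOnly = new Date("2020");
const yearMonth = new Date("2020-02");
const fullDate = new Date("2020-02-01");

// "2020" is completed to January 1st, midnight UTC:
console.log(yearOnly.toISOString()); // "2020-01-01T00:00:00.000Z"

// "2020-02" is completed to the first day of that month:
console.log(yearMonth.getTime() === fullDate.getTime()); // true
```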
More JavaScript date formats
Let’s briefly discuss the other JavaScript date formats as well. The syntax of the short date format is
MM/DD/YYYY. We can write it as:
const date = new Date("02/21/2020"); console.log(date);
In the long date format, the syntax of the date is
MMM DD YYYY. Here, the month and date can be provided in any order. For example, both
Jan 21 2020 and
21 Jan 2020 are valid formats. Other than that, you can use any of the long or short names of the month.
Another important aspect of the long date format is that the month's name is case insensitive: 'January' and 'JANUARY' are both interpreted in the same way.
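Both of these points can be checked directly:

```javascript
const a = new Date("Jan 21 2020");
const b = new Date("21 Jan 2020");      // month and day may be swapped
const c = new Date("JANUARY 21 2020");  // month names are case insensitive

console.log(a.getTime() === b.getTime()); // true
console.log(a.getTime() === c.getTime()); // true
```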
In the next part of this section, we are going to talk about how we can handle date and time according to locale. For this, several functions were introduced in ES6 — let’s briefly discuss those.
Handling date and time according to locale in JavaScript
Date.prototype.toLocaleDateString() — this method returns the date according to a given locale. It also explicitly changes the language based on the given locale. Take a look at this example:
const event = new Date(Date.UTC(2021, 07, 08, 2, 0, 0));
const options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };
console.log(event.toLocaleDateString('de-DE', options));
console.log(event.toLocaleDateString('ar-EG', options));
console.log(event.toLocaleDateString(undefined, options));
On the first line, we construct the date with Date.UTC(), which returns the number of milliseconds elapsed since the start of the UNIX epoch. For instance, Date.UTC(2021, 07, 08, 2, 0, 0) returns 1628388000000.
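Keep in mind that the month argument of Date.UTC() is zero-indexed, so 07 above means August. The returned value can be checked directly:

```javascript
// Month 7 is August because Date.UTC months are zero-indexed.
const ms = Date.UTC(2021, 7, 8, 2, 0, 0);
console.log(ms); // 1628388000000

// Wrapping the millisecond value in a Date gives back the same UTC instant:
console.log(new Date(ms).toISOString()); // "2021-08-08T02:00:00.000Z"
```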
Then we convert it to a
Date object. The value held in the
event variable is:
Sun Aug 08 2021 07:30:00 GMT+0530 (India Standard Time)
Next, we have created an object called
options to indicate the return values of each date and time in a given locale.
Then we use the
toLocaleDateString method. First, we change the locale to
de-DE (German language). The output is:
Sonntag, 8. August 2021
It returns the weekday, day, month, and year in German.
The next locale is
ar-EG (Arabic language).
The output is:
الأحد، ٨ أغسطس ٢٠٢١.
If you don’t provide any locale to the
toLocaleDateString() function, the default locale will be used. Since my current default locale is
en-US, I’ll get the following output:
Sunday, August 8, 2021
Intl.DateTimeFormat() — this is also an important function that was introduced in ES6 for date and time formatting according to the specified locale. Let’s see another example:
const date = new Date(Date.UTC(2021, 12, 20, 3, 23, 16, 750));
console.log(new Intl.DateTimeFormat('en-US').format(date));
console.log(new Intl.DateTimeFormat(['ban', 'id']).format(date));
console.log(new Intl.DateTimeFormat('en-GB', { dateStyle: 'full', timeStyle: 'long' }).format(date));
Similarly to the previous example, we have created a Date object (note that the zero-indexed month 12 overflows into January of the following year). The first locale we have provided is en-US; the date is formatted with that locale's default format. The output is:
1/20/2022
In the second console.log call, we have requested Balinese (ban) with Indonesian (id) as the fallback language; since Balinese is generally not supported, the Indonesian date format is used. The output is:
20/1/2022
Finally, we have specified the date and time format using “style” options. Some of the style options you can use are
full,
long,
medium, and so on. Since the style is
full and the language is English, we get the following output:
Thursday, 20 January 2022 at 08:53:16 GMT+5:30
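The outputs above depend on the machine's locale and time zone settings. One way to make them deterministic, for tests or server-side rendering, is to pin the timeZone option:

```javascript
const utcDate = new Date(Date.UTC(2021, 7, 8, 2, 0, 0));

// With timeZone pinned to UTC, the result no longer depends on the host's zone.
const enUS = new Intl.DateTimeFormat('en-US', { timeZone: 'UTC' }).format(utcDate);
console.log(enUS); // "8/8/2021"

const enGB = new Intl.DateTimeFormat('en-GB', { timeZone: 'UTC', dateStyle: 'full' }).format(utcDate);
console.log(enGB); // e.g. "Sunday, 8 August 2021"
```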
Date and time localization in PHP
Next, we will discuss how to deal with date and time localization in PHP. First, let’s discover how we can format date and time. To do this, we can use the
date() function. It converts the timestamp to a more readable date and time. The syntax of the
date() function is as follows.
date(format, timestamp)
If we don’t provide the timestamp, it returns the current date and time. Take a look at the below example:
<?php
$today_date = date("d/m/Y");
echo $today_date;
?>
As you can see, we haven't provided a timestamp, so it returns the current date in the given format. Here's the output:
20/07/2021
Now let’s see another example that contains time as well:
<?php
echo date("h:i:s");
echo date("F d, Y h:i:s A");
?>
The output is:
08:27:08
July 20, 2021 08:27:08 PM
When you are using date and time formatting in PHP, it is very important to know the formatting characters for different units such as date, time, long month name, seconds, etc. You can find these characters in this article.
Now, let's see how we can get the date and time based on the locale. To achieve this, we can use the strftime() function (note that strftime() is deprecated as of PHP 8.1 in favor of IntlDateFormatter). You can change the current locale with the setlocale() function. Here's an example:
<?php
setlocale(LC_TIME, "en_US");
echo strftime("English- %c");
?>
Here’s the output:
English- Tue Jul 20 20:46:54 2021
These are the basics of date and time localization in PHP. You can refer to this documentation article to get a more complete understanding of this function.
Date and time localization in Ruby on Rails
To properly localize date and time in Ruby on Rails, use the
localize method aliased as
l. It's also recommended to add the rails-i18n gem to the Gemfile, as it provides locale data for different languages. For example, it has translated names of the months, pluralization rules, and other useful stuff.
The
l method is very similar to Ruby’s
strftime. You can provide a predefined format or construct your own.
For instance, here’s how to use the
:long format:
l post.created_at, format: :long
Depending on the locale, this format will be a bit different but in general it displays your date and time in the following way: “full month name – day – year – hours – minutes.” For example:
January 23, 2021 23:05
You can also specify your very own format:
l post.created_at, format: "%B %Y"
The placeholders are similar to the ones listed in the
strftime docs.
On top of that, you can create custom formats in your translation files. To achieve this, open a translation file inside the
config/locales folder and add something like this:
en:
  time:
    formats:
      very_short: "%H:%M"
Now you have a
very_short format available for the English locale. It’s important to define the new format for all other locales as well.
To learn more about the Rails internationalization and localization process in general and how to add a locale switcher to your app, you can check out our step-by-step tutorial.
Summary
So, in this article, we’ve discussed how to handle date and time localization in several programming languages such as Python, PHP, Java, JavaScript, and Ruby. If you’d like to learn more about i18n and l10n processes in various technologies, you might be interested in checking out our collection of tutorials which lists how-tos on many popular languages and technologies. Thank you for your attention and happy coding!
Breaking Down UUIDs
Nick Steele, June 21st, 2019 (Last Updated: June 21st, 2019)
01. Overview
UUIDs are generally used for identifying information that needs to be unique within a system or network thereof. Their uniqueness and low probability of being repeated make them useful as associative keys in databases and as identifiers for physical hardware within an organization. One of the benefits of UUIDs is that they don’t need to be issued by a central authority; they can be generated independently and then used across a given system without suspicion that a duplicate, or colliding, UUID has been generated elsewhere. Apple, Microsoft, Samsung, and others use UUIDs, either defined by the IETF spec or a proprietary variant, to identify and track hardware both internally and sold to consumers.
There are 5 different versions of UUIDs, excluding the Nil UUID, a special-case UUID where all its bytes are set to 0, and most contain some variants that allow for special cases specific to vendors like Microsoft. Versions 1 and 2 use a time-based source (a 60-bit timestamp sourced from the system clock) for their uniqueness. Versions 1 and 2 are effectively the same, except that in the latter version the least significant bits of the clock sequence are replaced with an identifier specific to the system. Because of this and other reasons, most implementations omit version 2. Version 1 is the most commonly used of the UUID versions.
Versions 3 and 5 are generated by hashing a name or namespace identifier and using the resultant hash, MD5 or SHA-1 respectively, as the source of uniqueness instead of the time-based sources used in versions 1 and 2. They are deterministic: hashing the same name in the same namespace always yields the same UUID.
Version 4 uses random or pseudo-random sources rather than time or namespace-derived sources for its uniqueness. Versions 3, 4, and 5 use their respective sources to generate 60 bits of unique output that is used in lieu of the timestamp bits used in versions 1 and 2.
02. UUID Generation
To show how a UUID is derived, we’ll go through how a version 1 UUID is created. For versions 3 through 5, the clock-sequence sources mentioned below should be replaced with their respective sources. The v1 UUID string is derived from an ordered sequence of 6 fields that give the ID a great (but not guaranteed) chance of being completely unique. UUID records are derived from the following sources, in big-endian fashion, as follows:
TimeLow: 4 Bytes (8 hex chars) from the integer value of the low 32 bits of current UTC timestamp
TimeMid: 2 Bytes (4 hex chars) from the integer value of the middle 16 bits of current UTC time
TimeHighAndVersion: 2 Bytes (4 hex chars) contain the 4 bit UUID version (most significant bits) and the integer value of the high remaining 12 bits of current UTC time (timestamp is comprised of 60 bits)
ClockSequenceHiAndRes & ClockSequenceLow: 2 Bytes (4 hex chars) where the 1 to 3 most significant bits contain the “variant” of the UUID version being used, and the remaining bits contain the clock sequence. The clock sequence is used to help avoid collisions if there are multiple UUID generators within the system, or if a generator’s system clock was set backwards or doesn’t advance fast enough. For additional information around changing Node IDs and other collision considerations, see section 4.1.5 of the IETF RFC
Node: 6 bytes (12 hex chars) that represent the 48-bit “node id”, which is usually the MAC address of the host hardware that generated it.
This yields a string that would look like this for example:
123e4567-e89b-12d3-a456-426655440000
03. What makes UUIDs Versions unique?
To further break down what gives us faith in why a given UUID is most likely unique, let’s look at the sources of UUID data for the different UUID versions a bit more:
Versions 1 & 2
For these versions we have 74 bits of time data: 60 bits from the timestamp and 14 from the clock sequence. Along with that we have the 48-bit Node ID, which could be the MAC or, in cases where we may not want to expose it or the node does not have a MAC, 48 random or pseudo-random bits. Ideally (for the UUID version, not for us) we have the MAC, and in combination with the timestamp and clock sequence we get an ID that correlates to a single point in space (the node MAC) and time (timestamp and clock sequence). If the ideal case holds true, a node is capable of generating 2^74 (about 18 sextillion) UUIDs, but if no MAC is given we have an additional 48 bits of uniqueness, yielding 2^122 (about 5.3 undecillion) possible UUIDs for a node.
Versions 3 & 5
If you want to have unique identifiers for “name-able” information and data within a namespace of your system stored in a UUID format and make sure that duplicate resource names do not occur across your system, this is the version to use. According to the spec, Version 5 is preferred, since it uses SHA-1.
Version 4
The most “unique” of the versions. As with the other versions, 4 bits are used to indicate the version, and 2 or 3 bits, depending on the variant, are used to indicate the variant of the UUID. This leaves either 2^122 (about 5.3 × 10^36) possible unique IDs or, for the v4 variant-2 layout, half as many, since there is one less random bit. Still, the chances of collision are extremely small.
04. Chances of Collision
The chance of a collision occurring where two identical UUIDs are generated at the same time on the same node is incredibly small, and the probability of collision can be calculated using the Birthday Problem. For example, if we have 68,719,476,736 UUIDs with 74 random bits, the probability of a duplicate would be 0.1175030974154045, but if we have 122 random bits the probability would be 0.0000000000000004. If a user is generating all UUIDs for a system using a single node, they may want to consider using UUID version 4 rather than 1, since with version 1's 74 time-derived bits the chance of collision is much greater.
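These probabilities come from the birthday-bound approximation p ≈ 1 − exp(−n² / 2^(bits+1)), which reproduces the figures quoted above:

```javascript
// Approximate probability that n random b-bit values contain a duplicate.
function collisionProbability(n, bits) {
  return 1 - Math.exp(-(n * n) / (2 * 2 ** bits));
}

const n = 68719476736; // 2 ** 36 UUIDs

console.log(collisionProbability(n, 74));  // ≈ 0.1175030974154045
console.log(collisionProbability(n, 122)); // ≈ 4.4e-16
```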
I just wrote an entry about a hard-to-find bug on the blog of my employer, Index Data. It turned out to hinge on the different API expectations of programmers in low- and high-level languages — specifically C and Ruby — and I think it’ll be of interest to most Reinvigorated Programmer readers. Enjoy!
Mike–
I feel a little in the dark. There are both layers and history going on, would you consider laying enough of that out to understand how it worked before, and not later?
As it is, I’m imagining someone took a C routine that was like
Resultset *search_in_zoom_db( Conn *conn, char *query ) {
…
return resultset;
}
and “translated” it to the Ruby function you quote:
def search_in_zoom_db(query)
conn = ZOOM::Connection.open(Host, Port)
return conn.search(query)
end
This looks like a plain bug to me. You say, “the Ruby ZOOM binding follows the semantics of the underlying ZOOM-C library,” but the Ruby function doesn’t have what I assume would be a necessary parameter to the equivalent C function. So, not only does the interface not imply that the caller is in charge of connections, but if the caller did have a connection to hold onto, the Ruby function doesn’t give the caller a way to perform a query on that connection!
Sure, C and Ruby programmers might naturally expect different conventions to handle this issue. But when translating a library from one to the other, somebody was responsible for either keeping the old convention, or translating to the new, and it seems like there would have been clear local clues here…. unless I’m missing something you didn’t explain?
(It’s a little scary that you have a class for connections, but you also have the Ruby function calling …open(Host, Port) where Host and Port seem to be globals or constants. This seems to imply that there’s only one thing to connect to, and a library that requires you to continuously pretend that someday we might have more than one connection when we all know there’s only one, sounds like an invitation to slip.)
Just a test:
Besides using angle brackets in that earlier post, I didn’t make explicit:
When a function (in any language) has a parameter for an open file or connection, it gives a pretty strong clue that the caller is responsible for thinking about the lifetime and scope of the file or connection. When not, it implies the caller is not responsible.
Also, the situation you describe where the result object caches results as they come in, but the caller is responsible for knowing when to close the connection, seems a confused division of responsibilities, again, in any language.
Hi, Steve, thanks for your thoughts on this. Evidently I didn’t lay out enough of the background — always a fine line to walk when describing a problem with a specific system.
No, there was no C routine search_in_zoom_db() — the Ruby function of that name was not translated from a C function, but written from scratch as part of a program that exists only in that language. So the bug wasn’t that application code was ported blindly from C to Ruby, but that the Ruby-level API to the underlying C code was put together in a way that, while it matches the C code that it makes available, isn’t a good match for how such things are generally done in high-level languages. It’s like a too-literal translation of a French phrase into English that doesn’t read well.
(The C function doesn’t have an additional ‘conn’ parameter, no. The point is that in Ruby/Python/whatever, you naturally assume that the Result Set object hangs on to a reference to the Connection that it was created for; but in fact it doesn’t.)
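To illustrate that expectation, here is a minimal JavaScript sketch (hypothetical Connection and ResultSet classes, not the actual ZOOM binding) in which the result set keeps a reference to the connection it came from, so the connection stays usable for as long as results are in use:

```javascript
class Connection {
  constructor(host) { this.host = host; this.open = true; }
  close() { this.open = false; }
  search(query) { return new ResultSet(this, query); }
  fetch(query, i) {
    if (!this.open) throw new Error('connection lost');
    return `record ${i} for ${query}`;
  }
}

// The result set holds a reference to its connection, so the connection
// cannot be garbage-collected (or silently dropped) while results are in use.
class ResultSet {
  constructor(conn, query) { this.conn = conn; this.query = query; }
  get(i) { return this.conn.fetch(this.query, i); }
}

function searchInZoomDb(query) {
  const conn = new Connection('example-host');
  return conn.search(query); // conn stays reachable via the result set
}

const rs = searchInZoomDb('dinosaurs');
console.log(rs.get(0)); // "record 0 for dinosaurs"
```

This is the behavior a high-level-language programmer naturally assumes; the bug described above arises exactly because the binding did not hold that reference.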
BTW., ignore the use of globals for the host and port: I just did that to simplify the code. In the real system, it reads these parameters, and others, from a configuration file. That’s not the issue here. It is indeed possible to have multiple ZOOM connections open at once, as in the Perl application IRSpy, which every night does a big run that keeps fifty connections running at any one time.
Finally: no, the C code doesn’t. But the Ruby code should (and also doesn’t). And that is the bug.
Creating an email form with ASP.NET Core Razor Pages
Jürgen Gutsch - 26 July, 2017
In the comments of my last post, I got asked to write about, how to create a email form using ASP.NET Core Razor Pages. The reader also asked about a tutorial about authentication and authorization. I'll write about this in one of the next posts. This post is just about creating a form and sending an email with the form values.
Creating a new project
To try this out, you need to have the latest Preview of Visual Studio 2017 installed. (I use 15.3.0 preview 3) And you need .NET Core 2.0 Preview installed (2.0.0-preview2-006497 in my case)
In Visual Studio 2017, use "File... New Project" to create a new project. Navigate to ".NET Core", chose the "ASP.NET Core Web Application (.NET Core)" project and choose a name and a location for that new project.
In the next dialogue, you probably need to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) Select the "Web Application (Razor Pages)" and pressed "OK".
That's it. The new ASP.NET Core Razor Pages project is created.
Creating the form
It makes sense to use the contact.cshtml page to add the new contact form. The contact.cshtml.cs is the PageModel to work with. Inside this file, I added a small class called ContactFormModel. This class will contain the form values after the post request was sent.
public class ContactFormModel
{
    [Required]
    public string Name { get; set; }
    [Required]
    public string LastName { get; set; }
    [Required]
    public string Email { get; set; }
    [Required]
    public string Message { get; set; }
}
To use this class, we need to add a property of this type to the ContactModel:
[BindProperty] public ContactFormModel Contact { get; set; }
This attribute does some magic. It automatically binds the ContactFormModel to the view and contains the data after the post was sent back to the server. It is actually the MVC model binding, but provided in a different way. If we have the regular model binding, we should also have a ModelState. And we actually do:
public async Task<IActionResult> OnPostAsync()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }

    // create and send the mail here
    return RedirectToPage("Index");
}
This is an async OnPost method, which looks pretty much the same as a controller action. This returns a Task of IActionResult, checks the ModelState and so on.
Let's create the HTML form for this code in the
contact.cshtml. I use bootstrap (just because it's available) to format the form, so the HTML code contains some overhead:
<div class="row"> <div class="col-md-12"> <h3>Contact us</h3> </div> </div> <form class="form form-horizontal" method="post"> <div asp-</div> <div class="row"> <div class="col-md-12"> <div class="form-group"> <label asp-Name:</label> <div class="col-md-9"> <input asp- <span asp-</span> </div> </div> </div> </div> <div class="row"> <div class="col-md-12"> <div class="form-group"> <label asp-Last name:</label> <div class="col-md-9"> <input asp- <span asp-</span> </div> </div> </div> </div> <div class="row"> <div class="col-md-12"> <div class="form-group"> <label asp-Email:</label> <div class="col-md-9"> <input asp- <span asp-</span> </div> </div> </div> </div> <div class="row"> <div class="col-md-12"> <div class="form-group"> <label asp-Your Message:</label> <div class="col-md-9"> <textarea asp-</textarea> <span asp-</span> </div> </div> </div> </div> <div class="row"> <div class="col-md-12"> <button type="submit">Send</button> </div> </div> </form>
This also looks pretty much the same as in common ASP.NET Core MVC views. There's no difference.
BTW: I'm still impressed by the tag helpers. This guys even makes writing and formatting code snippets a lot easier.
Accessing the form data
As I wrote some lines above, there is a model binding working for you. This fills up the property Contact with data and makes it available in the OnPostAsync() method, if the attribute BindProperty is set.
[BindProperty]
public ContactFormModel Contact { get; set; }
Actually, I expected to have a model, passed as argument to the OnPost, as I saw it the first time. But you are able to use the property directly, without any other action to do:
var mailbody = $@"Hallo website owner,

This is a new contact request from your website:

Name: {Contact.Name}
LastName: {Contact.LastName}
Email: {Contact.Email}
Message: ""{Contact.Message}""

Cheers,
The websites contact form";

SendMail(mailbody);
That's nice, isn't it?
Sending the emails
Thanks to the pretty awesome .NET Standard 2.0 and the new APIs available in .NET Core 2.0, it gets even nicer:
// irony on
Finally, in .NET Core 2.0 it is now possible to send emails directly to an SMTP server using the famous and pretty well-known System.Net.Mail.SmtpClient():
private void SendMail(string mailbody)
{
    using (var message = new MailMessage(Contact.Email, "[email protected]"))
    {
        message.To.Add(new MailAddress("[email protected]"));
        message.From = new MailAddress(Contact.Email);
        message.Subject = "New E-Mail from my website";
        message.Body = mailbody;
        using (var smtpClient = new SmtpClient("mail.mydomain.com"))
        {
            smtpClient.Send(message);
        }
    }
}
Isn't that cool?
// irony off
It definitely works and this is actually a good thing.
In previous .NET Core versions it was recommended to use an external mail delivery service like SendGrid. This kind of service usually provides a REST-based API, which can be used to communicate with that specific service. Some of them also provide client libraries for the different platforms and languages to wrap those APIs and make them easier to use.
I'm anyway a huge fan of such services, because they are easier to use and I don't need to handle message details like encoding. I don't need to care about SMTP hosts and ports, because it is all HTTPS. I don't really need to care as much about spam handling, because this is done by the service. Using such services I just need to configure the sender mail address, and maybe a domain, but the DNS settings are done by them.
SendGrid can be bought via the Azure marketplace and includes a huge number of free emails to send. I would propose using such services whenever possible. The SmtpClient is good in enterprise environments where you don't need to go through the internet to send mails. But maybe the Exchange API is another, or better, option in enterprise environments.
Conclusion
The email form is working and it is actually not much code written by myself. That's awesome. For such scenarios the razor pages are pretty cool and easy to use. There's no Controller to set-up, the views and the PageModels are pretty close and the code to generate one page is not distributed over three different folders as in MVC. To create bigger applications, MVC is for sure the best choice, but I really like the possibility to keep small apps as simple as possible.
This C program illustrates pass by value: because function arguments are copied into the called function, changes made inside swap() are not visible to the caller.
Here is source code of the C Program to illustrate pass by value. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C Program to Illustrate Pass by Value.
*/
#include <stdio.h>
void swap(int a, int b)
{
int temp;
temp = a;
a = b;
b = temp;
}
int main()
{
int num1 = 10, num2 = 20;
printf("Before swapping num1 = %d num2 = %d\n", num1, num2);
swap(num1, num2);
/* swap() received copies, so num1 and num2 are unchanged here */
printf("After swapping num1 = %d num2 = %d\n", num1, num2);
return 0;
}
Output: $ cc pgm43.c $ a.out Before swapping num1 = 10 num2 = 20 After swapping num1 = 10 num2 = 20
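The same rule holds in many higher-level languages. In JavaScript, for example, arguments are likewise passed by value (for objects, the value passed is a reference), so an analogous swap leaves the caller's variables unchanged:

```javascript
function swap(a, b) {
  const temp = a;
  a = b;
  b = temp; // only the local copies are exchanged
}

let num1 = 10, num2 = 20;
swap(num1, num2);
console.log(num1, num2); // 10 20 (unchanged)
```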
On 28 February 2012 09:54, Stefan Behnel <stefan_ml at behnel.de>:
> -------- Original-Message --------
> Betreff: Re: [cython-users] What's up with PyEval_InitThreads() in python 2.7?
>
> Mike Cui, 28.02.2012 10:18:
>>> Thanks for the test code, you hadn't mentioned that you use a "with gil"
>>> block. Could you try the latest github version of Cython?
>>
>> Ahh, much better!
>>
>> #if CYTHON_REFNANNY
>> #ifdef WITH_THREAD
>> __pyx_gilstate_save = PyGILState_Ensure();
>> #endif
>> #endif /* CYTHON_REFNANNY */
>> __Pyx_RefNannySetupContext("callback");
>> #if CYTHON_REFNANNY
>> #ifdef WITH_THREAD
>> PyGILState_Release(__pyx_gilstate_save);
>> #endif
>> #endif /* CYTHON_REFNANNY */
>
> Hmm, thanks for posting this - it can be further improved. There's no
> reason for code bloat here, it should all just go into the
> __Pyx_RefNannySetupContext() macro.
>
> Stefan
> _______________________________________________
> cython-devel mailing list
> cython-devel at python.org